diff --git a/.github/workflows/lock.yaml b/.github/workflows/lock.yaml
new file mode 100644
index 0000000000000..ed67648c78872
--- /dev/null
+++ b/.github/workflows/lock.yaml
@@ -0,0 +1,23 @@
+name: 'Lock Threads'
+
+on:
+ schedule:
+ - cron: '50 1 * * *'
+
+jobs:
+ lock:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: dessant/lock-threads@v2
+ with:
+ github-token: ${{ github.token }}
+ issue-lock-comment: >
+ I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
+
+ If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
+ issue-lock-inactive-days: '30'
+ pr-lock-comment: >
+ I'm going to lock this pull request because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active contributions.
+
+ If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
+ pr-lock-inactive-days: '30'
diff --git a/.hashibot.hcl b/.hashibot.hcl
index ff3069ff6b3a0..28b50e4bc2573 100644
--- a/.hashibot.hcl
+++ b/.hashibot.hcl
@@ -11,16 +11,3 @@ queued_behavior "release_commenter" "releases" {
```
EOF
}
-
-poll "closed_issue_locker" "locker" {
- schedule = "0 50 14 * * *"
- closed_for = "720h" # 30 days
- max_issues = 500
- sleep_between_issues = "5s"
-
- message = <<-EOF
- I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
-
- If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
- EOF
-}
diff --git a/.teamcity/components/generated/services.kt b/.teamcity/components/generated/services.kt
index 117e23f0105d8..f552260490e95 100644
--- a/.teamcity/components/generated/services.kt
+++ b/.teamcity/components/generated/services.kt
@@ -18,6 +18,7 @@ var services = mapOf(
"cognitive" to "Cognitive Services",
"communication" to "Communication",
"compute" to "Compute",
+ "consumption" to "Consumption",
"containers" to "Container Services",
"cosmos" to "CosmosDB",
"costmanagement" to "Cost Management",
diff --git a/CHANGELOG-v2.md b/CHANGELOG-v2.md
index 1112c09685d79..f09dd76633704 100644
--- a/CHANGELOG-v2.md
+++ b/CHANGELOG-v2.md
@@ -1,3 +1,399 @@
+## 2.59.0 (May 14, 2021)
+
+FEATURES:
+
+* **New Resource:** `azurerm_consumption_budget_resource_group` ([#9201](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9201))
+* **New Resource:** `azurerm_consumption_budget_subscription` ([#9201](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9201))
+* **New Resource:** `azurerm_monitor_aad_diagnostic_setting` ([#11660](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11660))
+* **New Resource:** `azurerm_sentinel_alert_rule_machine_learning_behavior_analytics` ([#11552](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11552))
+* **New Resource:** `azurerm_servicebus_namespace_disaster_recovery_config` ([#11638](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11638))
+
+ENHANCEMENTS:
+
+* dependencies: updating to `v54.4.0` of `github.com/Azure/azure-sdk-for-go` ([#11593](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11593))
+* dependencies: updating `databox` to API version `2020-12-01` ([#11626](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11626))
+* dependencies: updating `maps` to API version `2021-02-01` ([#11676](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11676))
+* Data Source: `azurerm_kubernetes_cluster` - export the `ingress_application_gateway_identity` attribute for the `ingress_application_gateway` add-on ([#11622](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11622))
+* `azurerm_cosmosdb_account` - support for the `identity` and `cors_rule` blocks ([#11653](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11653))
+* `azurerm_cosmosdb_account` - support for the `backup` property ([#11597](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11597))
+* `azurerm_cosmosdb_sql_container` - support for the `analytical_storage_ttl` property ([#11655](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11655))
+* `azurerm_container_registry` - support for the `identity` and `encryption` blocks ([#11661](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11661))
+* `azurerm_frontdoor_custom_https_configuration` - Add support for resource import. ([#11642](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11642))
+* `azurerm_kubernetes_cluster` - export the `ingress_application_gateway_identity` attribute for the `ingress_application_gateway` add-on ([#11622](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11622))
+* `azurerm_managed_disk` - support for the `tier` property ([#11634](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11634))
+* `azurerm_storage_account` - support for the `azure_files_identity_based_authentication` and `routing_preference` blocks ([#11485](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11485))
+* `azurerm_storage_account` - support for the `private_link_access` property ([#11629](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11629))
+* `azurerm_storage_account` - support for the `change_feed_enabled` property ([#11695](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11695))
+
+BUG FIXES:
+
+* Data Source: `azurerm_container_registry_token` - updating the validation for the `name` field ([#11607](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11607))
+* `azurerm_bastion_host` - updating the `ip_configuration` block properties now forces a new resource ([#11700](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11700))
+* `azurerm_container_registry_token` - updating the validation for the `name` field ([#11607](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11607))
+* `azurerm_mssql_database` - will now correctly import the `creation_source_database_id` property for Secondary databases ([#11703](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11703))
+* `azurerm_storage_account` - allow empty/blank values for the `allowed_headers` and `exposed_headers` properties ([#11692](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11692))
+
+## 2.58.0 (May 07, 2021)
+
+UPGRADE NOTES
+
+* `azurerm_frontdoor` - The `custom_https_provisioning_enabled` field and the `custom_https_configuration` block have been deprecated and removed as they are no longer supported. ([#11456](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11456))
+* `azurerm_frontdoor_custom_https_configuration` - The `resource_group_name` field has been deprecated and removed as it is no longer supported. ([#11456](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11456))
+
+FEATURES:
+
+* **New Data Source:** `azurerm_storage_table_entity` ([#11562](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11562))
+* **New Resource:** `azurerm_app_service_environment_v3` ([#11174](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11174))
+* **New Resource:** `azurerm_cosmosdb_notebook_workspace` ([#11536](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11536))
+* **New Resource:** `azurerm_cosmosdb_sql_trigger` ([#11535](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11535))
+* **New Resource:** `azurerm_cosmosdb_sql_user_defined_function` ([#11537](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11537))
+* **New Resource:** `azurerm_iot_time_series_insights_event_source_iothub` ([#11484](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11484))
+* **New Resource:** `azurerm_storage_blob_inventory_policy` ([#11533](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11533))
+
+ENHANCEMENTS:
+
+* dependencies: updating `network-db` to API version `2020-07-01` ([#10767](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10767))
+* `azurerm_cosmosdb_account` - support for the `access_key_metadata_writes_enabled`, `mongo_server_version`, and `network_acl_bypass` properties ([#11486](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11486))
+* `azurerm_data_factory` - support for the `customer_managed_key_id` property ([#10502](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10502))
+* `azurerm_data_factory_pipeline` - support for the `folder` property ([#11575](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11575))
+* `azurerm_frontdoor` - Fix for Frontdoor resource elements being returned out of order. ([#11456](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11456))
+* `azurerm_hdinsight_*_cluster` - support for autoscale ([#8104](https://github.com/terraform-providers/terraform-provider-azurerm/issues/8104)) ([#11547](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11547))
+* `azurerm_network_security_rule` - support for the protocols `Ah` and `Esp` ([#11581](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11581))
+* `azurerm_network_connection_monitor` - support for the `coverage_level`, `excluded_ip_addresses`, `included_ip_addresses`, `target_resource_id`, and `resource_type` properties ([#11540](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11540))
+
+## 2.57.0 (April 30, 2021)
+
+UPGRADE NOTES
+
+* `azurerm_api_management_authorization_server` - due to a bug in the `2020-12-01` version of the API Management API, changes to `resource_owner_username` and `resource_owner_password` in Azure will not be noticed by Terraform ([#11146](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11146))
+* `azurerm_cosmosdb_account` - the `2021-02-01` version of the cosmos API defaults new MongoDB accounts to `v3.6` rather than `v3.2` ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
+* `azurerm_cosmosdb_mongo_collection` - the `_id` index is now required by the new API/MongoDB version ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
+* `azurerm_cosmosdb_gremlin_graph` and `azurerm_cosmosdb_sql_container` - the `partition_key_path` property is now required ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
+
+FEATURES:
+
+* **Data Source:** `azurerm_container_registry_scope_map` ([#11350](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11350))
+* **Data Source:** `azurerm_container_registry_token` ([#11350](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11350))
+* **Data Source:** `azurerm_postgresql_flexible_server` ([#11081](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11081))
+* **Data Source:** `azurerm_key_vault_managed_hardware_security_module` ([#10873](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10873))
+* **New Resource:** `azurerm_container_registry_scope_map` ([#11350](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11350))
+* **New Resource:** `azurerm_container_registry_token` ([#11350](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11350))
+* **New Resource:** `azurerm_data_factory_dataset_snowflake` ([#11116](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11116))
+* **New Resource:** `azurerm_healthbot` ([#11002](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11002))
+* **New Resource:** `azurerm_key_vault_managed_hardware_security_module` ([#10873](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10873))
+* **New Resource:** `azurerm_media_asset_filter` ([#11110](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11110))
+* **New Resource:** `azurerm_mssql_job_agent` ([#11248](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11248))
+* **New Resource:** `azurerm_mssql_job_credential` ([#11363](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11363))
+* **New Resource:** `azurerm_mssql_transparent_data_encryption` ([#11148](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11148))
+* **New Resource:** `azurerm_postgresql_flexible_server` ([#11081](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11081))
+* **New Resource:** `azurerm_spring_cloud_app_cosmosdb_association` ([#11307](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11307))
+* **New Resource:** `azurerm_sentinel_data_connector_microsoft_defender_advanced_threat_protection` ([#10669](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10669))
+* **New Resource:** `azurerm_virtual_machine_configuration_policy_assignment` ([#11334](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11334))
+* **New Resource:** `azurerm_vmware_cluster` ([#10848](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10848))
+
+ENHANCEMENTS:
+
+* dependencies: updating to `v53.4.0` of `github.com/Azure/azure-sdk-for-go` ([#11439](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11439))
+* dependencies: updating to `v1.17.2` of `github.com/hashicorp/terraform-plugin-sdk` ([#11431](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11431))
+* dependencies: updating `cosmos-db` to API version `2021-02-01` ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
+* dependencies: updating `keyvault` to API version `v7.1` ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
+* Data Source: `azurerm_healthcare_service` - export the `cosmosdb_key_vault_key_versionless_id` attribute ([#11481](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11481))
+* Data Source: `azurerm_key_vault_certificate` - export the `curve` attribute in the `key_properties` block ([#10867](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10867))
+* Data Source: `azurerm_virtual_machine_scale_set` - now exports the `network_interfaces` attribute ([#10585](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10585))
+* `azurerm_app_service` - support for the `site_config.ip_restrictions.headers` and `site_config.scm_ip_restrictions.headers` properties ([#11209](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11209))
+* `azurerm_app_service_slot` - support for the `site_config.ip_restrictions.headers` and `site_config.scm_ip_restrictions.headers` properties ([#11209](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11209))
+* `azurerm_backup_policy_file_share` - support for the `retention_weekly`, `retention_monthly`, and `retention_yearly` blocks ([#10733](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10733))
+* `azurerm_cosmosdb_sql_container` - support for the `conflict_resolution_policy` block ([#11517](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11517))
+* `azurerm_container_group` - support for the `exposed_port` block ([#10491](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10491))
+* `azurerm_container_registry` - deprecating the `georeplication_locations` property in favour of the `georeplications` property ([#11200](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11200))
+* `azurerm_database_migration` - switching to using an ID Formatter ([#11378](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11378))
+* `azurerm_database_migration_project` - switching to using an ID Formatter ([#11378](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11378))
+* `azurerm_databricks_workspace` - switching to using an ID Formatter ([#11378](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11378))
+* `azurerm_databricks_workspace` - fixes propagation of tags to connected resources ([#11405](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11405))
+* `azurerm_data_factory_linked_service_azure_file_storage` - support for the `key_vault_password` property ([#11436](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11436))
+* `azurerm_dedicated_host_group` - support for the `automatic_placement_enabled` property ([#11428](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11428))
+* `azurerm_frontdoor` - sync `MaxItems` on various attributes to match the Azure docs ([#11421](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11421))
+* `azurerm_frontdoor_custom_https_configuration` - removing secret version validation when using azure key vault as the certificate source ([#11310](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11310))
+* `azurerm_function_app` - support for the `site_config.ip_restrictions.headers` and `site_config.scm_ip_restrictions.headers` properties ([#11209](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11209))
+* `azurerm_function_app` - support the `java_version` property ([#10495](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10495))
+* `azurerm_hdinsight_interactive_query_cluster` - add support for private link endpoint ([#11300](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11300))
+* `azurerm_hdinsight_hadoop_cluster` - add support for private link endpoint ([#11300](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11300))
+* `azurerm_hdinsight_spark_cluster` - add support for private link endpoint ([#11300](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11300))
+* `azurerm_healthcare_service` - support for the `cosmosdb_key_vault_key_versionless_id` property ([#11481](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11481))
+* `azurerm_kubernetes_cluster` - support for the `ingress_application_gateway` addon ([#11376](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11376))
+* `azurerm_kubernetes_cluster` - support for the `azure_rbac_enabled` property ([#10441](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10441))
+* `azurerm_hpc_cache` - support for the `directory_active_directory`, `directory_flat_file`, and `directory_ldap` blocks ([#11332](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11332))
+* `azurerm_key_vault_certificate` - support additional values for the `key_size` property in the `key_properties` block ([#10867](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10867))
+* `azurerm_key_vault_certificate` - support the `curve` property in the `key_properties` block ([#10867](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10867))
+* `azurerm_key_vault_certificate` - the `key_size` property in the `key_properties` block is now optional ([#10867](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10867))
+* `azurerm_kubernetes_cluster` - support for the `dns_prefix_private_cluster` property ([#11321](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11321))
+* `azurerm_kubernetes_cluster` - support for the `max_node_provisioning_time`, `max_unready_percentage`, and `max_unready_nodes` properties ([#11406](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11406))
+* `azurerm_storage_encryption_scope` - support for the `infrastructure_encryption_required` property ([#11462](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11462))
+* `azurerm_kubernetes_cluster` - support for the `empty_bulk_delete_max` property in the `auto_scaler_profile` block ([#11060](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11060))
+* `azurerm_lighthouse_definition` - support for the `delegated_role_definition_ids` property ([#11269](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11269))
+* `azurerm_managed_application` - support for the `parameter_values` property ([#8632](https://github.com/terraform-providers/terraform-provider-azurerm/issues/8632))
+* `azurerm_managed_disk` - support for the `network_access_policy` and `disk_access_id` properties ([#9862](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9862))
+* `azurerm_postgresql_server` - wait for replica restarts when needed ([#11458](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11458))
+* `azurerm_redis_enterprise_cluster` - support for the `minimum_tls_version` and `hostname` properties ([#11203](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11203))
+* `azurerm_storage_account` - support for the `versioning_enabled`, `default_service_version`, and `last_access_time_enabled` properties within the `blob_properties` block ([#11301](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11301))
+* `azurerm_storage_account` - support for the `nfsv3_enabled` property ([#11387](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11387))
+* `azurerm_storage_management_policy` - support for the `version` block ([#11163](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11163))
+* `azurerm_synapse_workspace` - support for the `customer_managed_key_versionless_id` property ([#11328](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11328))
+
+BUG FIXES:
+
+* `azurerm_api_management` - will no longer panic with an empty `hostname_configuration` ([#11426](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11426))
+* `azurerm_api_management_diagnostic` - fix a crash with the `frontend_request`, `frontend_response`, `backend_request`, `backend_response` blocks ([#11402](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11402))
+* `azurerm_eventgrid_system_topic` - remove strict validation on `topic_type` ([#11352](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11352))
+* `azurerm_iothub` - change `filter_rule` from TypeSet to TypeList to resolve an ordering issue ([#10341](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10341))
+* `azurerm_linux_virtual_machine_scale_set` - the default value for the `priority` property will no longer force a replacement of the resource ([#11362](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11362))
+* `azurerm_monitor_activity_log_alert` - fix a persistent diff for the `service_health` block ([#11383](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11383))
+* `azurerm_mssql_database` - return an error when secondary database uses `max_size_gb` ([#11401](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11401))
+* `azurerm_mssql_database` - correctly import the `create_mode` property ([#11026](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11026))
+* `azurerm_netapp_volume` - correctly set the `replication_frequency` attribute in the `data_protection_replication` block ([#11530](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11530))
+* `azurerm_postgresql_server` - ensure `public_network_access_enabled` is correctly set for replicas ([#11465](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11465))
+* `azurerm_postgresql_server` - can now correctly disable replication if required when `create_mode` is changed ([#11467](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11467))
+* `azurerm_virtual_network_gateway` - updating the `custom_route` block no longer forces a new resource to be created ([#11433](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11433))
+
+## 2.56.0 (April 15, 2021)
+
+FEATURES:
+
+* **New Resource:** `azurerm_data_factory_linked_service_azure_databricks` ([#10962](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10962))
+* **New Resource:** `azurerm_data_lake_store_virtual_network_rule` ([#10430](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10430))
+* **New Resource:** `azurerm_media_live_event_output` ([#10917](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10917))
+* **New Resource:** `azurerm_spring_cloud_app_mysql_association` ([#11229](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11229))
+
+ENHANCEMENTS:
+
+* dependencies: updating `github.com/Azure/azure-sdk-for-go` to `v53.0.0` ([#11302](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11302))
+* dependencies: updating `containerservice` to API version `2021-02-01` ([#10972](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10972))
+* `azurerm_app_service` - fix broken `ip_restrictions` and `scm_ip_restrictions` ([#11170](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11170))
+* `azurerm_application_gateway` - support for configuring `firewall_policy_id` within the `path_rule` block ([#11239](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11239))
+* `azurerm_firewall_policy_rule_collection_group` - allow `*` for the `network_rule_collection.destination_ports` property ([#11326](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11326))
+* `azurerm_function_app` - fix broken `ip_restrictions` and `scm_ip_restrictions` ([#11170](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11170))
+* `azurerm_data_factory_linked_service_sql_database` - support managed identity and service principal auth and add the `keyvault_password` property ([#10735](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10735))
+* `azurerm_hpc_cache` - support for `tags` ([#11268](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11268))
+* `azurerm_linux_virtual_machine_scale_set` - support for the health extension in rolling upgrade mode ([#9136](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9136))
+* `azurerm_monitor_activity_log_alert` - support for `service_health` ([#10978](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10978))
+* `azurerm_mssql_database` - support for the `geo_backup_enabled` property ([#11177](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11177))
+* `azurerm_public_ip` - support for `ip_tags` ([#11270](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11270))
+* `azurerm_windows_virtual_machine_scale_set` - support for the health extension in rolling upgrade mode ([#9136](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9136))
+
+BUG FIXES:
+
+* `azurerm_app_service_slot` - fix crash bug when given empty `http_logs` ([#11267](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11267))
+
+## 2.55.0 (April 08, 2021)
+
+FEATURES:
+
+* **New Resource:** `azurerm_api_management_email_template` ([#10914](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10914))
+* **New Resource:** `azurerm_communication_service` ([#11066](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11066))
+* **New Resource:** `azurerm_express_route_port` ([#10074](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10074))
+* **New Resource:** `azurerm_spring_cloud_app_redis_association` ([#11154](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11154))
+
+ENHANCEMENTS:
+
+* Data Source: `azurerm_user_assigned_identity` - exporting `tenant_id` ([#11253](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11253))
+* Data Source: `azurerm_function_app` - exporting `client_cert_mode` ([#11161](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11161))
+* `azurerm_eventgrid_data_connection` - support for the `table_name`, `mapping_rule_name`, and `data_format` properties ([#11157](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11157))
+* `azurerm_hpc_cache` - support for configuring `dns` ([#11236](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11236))
+* `azurerm_hpc_cache` - support for configuring `ntp_server` ([#11236](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11236))
+* `azurerm_hpc_cache_nfs_target` - support for the `access_policy_name` property ([#11186](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11186))
+* `azurerm_hpc_cache_nfs_target` - `usage_model` can now be set to `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS` ([#11247](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11247))
+* `azurerm_function_app` - support for configuring `client_cert_mode` ([#11161](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11161))
+* `azurerm_netapp_volume` - adding `root_access_enabled` to the `export_policy_rule` block ([#11105](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11105))
+* `azurerm_private_endpoint` - allows for an alias to be specified ([#10779](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10779))
+* `azurerm_user_assigned_identity` - exporting `tenant_id` ([#11253](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11253))
+* `azurerm_web_application_firewall_policy` - `version` within the `managed_rule_set` block can now be set to (OWASP) `3.2` ([#11244](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11244))
+
+BUG FIXES:
+
+* Data Source: `azurerm_dns_zone` - fixing a bug where the Resource ID wouldn't contain the Resource Group name when looking this up ([#11221](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11221))
+* `azurerm_media_service_account` - `storage_authentication_type` correctly accepts both `ManagedIdentity` and `System` ([#11222](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11222))
+* `azurerm_web_application_firewall_policy` - `http_listener_ids` and `path_based_rule_ids` are now Computed only ([#11196](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11196))
+
+## 2.54.0 (April 02, 2021)
+
+FEATURES:
+
+* **New Resource:** `azurerm_hpc_cache_access_policy` ([#11083](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11083))
+* **New Resource:** `azurerm_management_group_subscription_association` ([#11069](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11069))
+* **New Resource:** `azurerm_media_live_event` ([#10724](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10724))
+
+ENHANCEMENTS:
+
+* dependencies: updating to `v52.6.0` of `github.com/Azure/azure-sdk-for-go` ([#11108](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11108))
+* dependencies: updating `storage` to API version `2021-01-01` ([#11094](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11094))
+* dependencies: updating `storagecache` (a.k.a `hpc`) to API version `2021-03-01` ([#11083](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11083))
+* `azurerm_application_gateway` - support for rewriting URLs with the `url` block ([#10950](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10950))
+* `azurerm_cognitive_account` - support for the `network_acls` block ([#11164](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11164))
+* `azurerm_container_registry` - support for the `quarantine_policy_enabled` property ([#11011](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11011))
+* `azurerm_firewall` - support for the `private_ip_ranges` property ([#10627](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10627))
+* `azurerm_log_analytics_workspace` - fix for an issue where `-1` couldn't be specified for `daily_quota_gb` ([#11182](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11182))
+* `azurerm_spring_cloud_service` - support for the `sample_rate` property ([#11106](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11106))
+* `azurerm_storage_account` - support for the `container_delete_retention_policy` property ([#11131](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11131))
+* `azurerm_virtual_desktop_host_pool` - support for the `custom_rdp_properties` property ([#11160](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11160))
+* `azurerm_web_application_firewall_policy` - support for the `http_listener_ids` and `path_based_rule_ids` properties ([#10860](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10860))
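+
+A minimal sketch of the new `container_delete_retention_policy` support on `azurerm_storage_account` (names are illustrative; the block sits inside `blob_properties`):
+
+```hcl
+resource "azurerm_storage_account" "example" {
+  name                     = "examplestoracct"
+  resource_group_name      = azurerm_resource_group.example.name
+  location                 = azurerm_resource_group.example.location
+  account_tier             = "Standard"
+  account_replication_type = "LRS"
+
+  blob_properties {
+    # Keep soft-deleted containers recoverable for 7 days
+    container_delete_retention_policy {
+      days = 7
+    }
+  }
+}
+```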
+
+BUG FIXES:
+
+* `azurerm_api_management` - the `certificate_password` property is now optional ([#11139](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11139))
+* `azurerm_data_factory_linked_service_azure_blob_storage` - correct managed identity implementation by implementing the `service_endpoint` property ([#10830](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10830))
+* `azurerm_machine_learning_workspace` - deprecate the `Enterprise` sku as it has been deprecated by Azure ([#11063](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11063))
+* `azurerm_machine_learning_workspace` - support container registries in other subscriptions ([#11065](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11065))
+* `azurerm_site_recovery_fabric` - fixes an error when checking for an existing resource ([#11130](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11130))
+* `azurerm_spring_cloud_custom_domain` - `thumbprint` is required when specifying `certificate_name` ([#11145](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11145))
+* `azurerm_subscription` - fixes broken timeout on destroy ([#11124](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11124))
+
+## 2.53.0 (March 26, 2021)
+
+FEATURES:
+
+* **New Resource:** `azurerm_management_group_template_deployment` ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
+* **New Resource:** `azurerm_tenant_template_deployment` ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
+* **New Data Source:** `azurerm_template_spec_version` ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
+
+ENHANCEMENTS:
+
+* dependencies: updating to `v52.5.0` of `github.com/Azure/azure-sdk-for-go` ([#11015](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11015))
+* Data Source: `azurerm_key_vault_secret` - support for the `versionless_id` attribute ([#11091](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11091))
+* `azurerm_container_registry` - support for the `public_network_access_enabled` property ([#10969](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10969))
+* `azurerm_kusto_eventhub_data_connection` - support for the `event_system_properties` block ([#11006](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11006))
+* `azurerm_logic_app_trigger_recurrence` - support for the `schedule` block ([#11055](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11055))
+* `azurerm_resource_group_template_deployment` - add support for `template_spec_version_id` property ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
+* `azurerm_role_definition` - the `permissions` block is now optional ([#9850](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9850))
+* `azurerm_subscription_template_deployment` - add support for `template_spec_version_id` property ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
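+
+A hedged sketch of consuming the new `versionless_id` attribute from the `azurerm_key_vault_secret` data source (names are illustrative):
+
+```hcl
+data "azurerm_key_vault_secret" "example" {
+  name         = "example-secret"
+  key_vault_id = azurerm_key_vault.example.id
+}
+
+# versionless_id omits the version segment, so downstream references
+# always track the latest value of the secret
+output "secret_versionless_id" {
+  value = data.azurerm_key_vault_secret.example.versionless_id
+}
+```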
+
+BUG FIXES:
+
+* `azurerm_frontdoor_custom_https_configuration` - fixing a crash during update ([#11046](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11046))
+* `azurerm_resource_group_template_deployment` - always sending `parameters_content` during an update ([#11001](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11001))
+* `azurerm_role_definition` - fixing crash when permissions are empty ([#9850](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9850))
+* `azurerm_subscription_template_deployment` - always sending `parameters_content` during an update ([#11001](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11001))
+* `azurerm_spring_cloud_app` - support for the `tls_enabled` property ([#11064](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11064))
+
+## 2.52.0 (March 18, 2021)
+
+FEATURES:
+
+* **New Resource:** `azurerm_mssql_firewall_rule` ([#10954](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10954))
+* **New Resource:** `azurerm_mssql_virtual_network_rule` ([#10954](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10954))
+
+ENHANCEMENTS:
+
+* dependencies: updating to `v52.4.0` of `github.com/Azure/azure-sdk-for-go` ([#10982](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10982))
+* `azurerm_api_management_subscription` - making the `user_id` property optional ([#10638](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10638))
+
+BUG FIXES:
+
+* `azurerm_cosmosdb_account` - marking `connection_string` as sensitive ([#10942](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10942))
+* `azurerm_eventhub_namespace_disaster_recovery_config` - deprecating the `alternate_name` property due to a service side API bug ([#11013](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11013))
+* `azurerm_local_network_gateway` - making the `address_space` property optional ([#10983](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10983))
+* `azurerm_management_group` - validation for `subscription_id` list property entries ([#10948](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10948))
+
+## 2.51.0 (March 12, 2021)
+
+FEATURES:
+
+* **New Resource:** `azurerm_purview_account` ([#10395](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10395))
+* **New Resource:** `azurerm_data_factory_dataset_parquet` ([#10852](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10852))
+* **New Resource:** `azurerm_security_center_server_vulnerability_assessment` ([#10030](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10030))
+* **New Resource:** `azurerm_security_center_assessment` ([#10694](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10694))
+* **New Resource:** `azurerm_security_center_assessment_policy` ([#10694](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10694))
+* **New Resource:** `azurerm_sentinel_data_connector_azure_advanced_threat_protection` ([#10666](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10666))
+* **New Resource:** `azurerm_sentinel_data_connector_azure_security_center` ([#10667](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10667))
+* **New Resource:** `azurerm_sentinel_data_connector_microsoft_cloud_app_security` ([#10668](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10668))
+
+ENHANCEMENTS:
+
+* dependencies: updating to v52.3.0 of `github.com/Azure/azure-sdk-for-go` ([#10829](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10829))
+* `azurerm_role_assignment` - support enrollment ids in `scope` argument ([#10890](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10890))
+* `azurerm_kubernetes_cluster` - support `None` for the `private_dns_zone_id` property ([#10774](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10774))
+* `azurerm_kubernetes_cluster` - support for `expander` in the `auto_scaler_profile` block ([#10777](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10777))
+* `azurerm_linux_virtual_machine` - support for configuring `platform_fault_domain` ([#10803](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10803))
+* `azurerm_linux_virtual_machine_scale_set` - will no longer recreate the resource when `rolling_upgrade_policy` or `health_probe_id` is updated ([#10856](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10856))
+* `azurerm_netapp_volume` - support creating from a snapshot via the `create_from_snapshot_resource_id` property ([#10906](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10906))
+* `azurerm_role_assignment` - support for the `description`, `condition`, and `condition_version` ([#10804](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10804))
+* `azurerm_windows_virtual_machine` - support for configuring `platform_fault_domain` ([#10803](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10803))
+* `azurerm_windows_virtual_machine_scale_set` - will no longer recreate the resource when `rolling_upgrade_policy` or `health_probe_id` is updated ([#10856](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10856))
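+
+A hedged sketch of the new `description`, `condition`, and `condition_version` arguments on `azurerm_role_assignment` (the scope, principal, role, and condition expression are all illustrative):
+
+```hcl
+resource "azurerm_role_assignment" "example" {
+  scope                = azurerm_resource_group.example.id
+  role_definition_name = "Storage Blob Data Reader"
+  principal_id         = data.azurerm_client_config.current.object_id
+
+  # New in 2.51.0: attribute-based access control conditions
+  description       = "Read access limited to a single container"
+  condition_version = "2.0"
+  condition         = "@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'example'"
+}
+```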
+
+BUG FIXES:
+
+* Data Source: `azurerm_function_app_host_keys` - retrying reading the keys to work around a broken API ([#10894](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10894))
+* Data Source: `azurerm_log_analytics_workspace` - ensure the `id` is returned with the correct casing ([#10892](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10892))
+* Data Source: `azurerm_monitor_action_group` - add support for `aad_auth` attribute ([#10876](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10876))
+* `azurerm_api_management_custom_domain` - prevent a perpetual diff ([#10636](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10636))
+* `azurerm_eventhub_consumer_group` - detecting as removed when deleted in Azure ([#10900](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10900))
+* `azurerm_key_vault_access_policy` - fixing destroy when the casing of permissions on the service doesn't match the config/state ([#10931](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10931))
+* `azurerm_key_vault_secret` - setting the value of the secret after recovering it ([#10920](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10920))
+* `azurerm_kusto_eventhub_data_connection` - make `table_name` and `data_format` optional ([#10913](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10913))
+* `azurerm_mssql_virtual_machine` - workaround for inconsistent API value for `log_backup_frequency_in_minutes` in the `manual_schedule` block ([#10899](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10899))
+* `azurerm_postgresql_server` - support for scaling replica servers ([#10754](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10754))
+* `azurerm_postgresql_aad_administrator` - prevent invalid usernames for the `login` property ([#10757](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10757))
+
+## 2.50.0 (March 05, 2021)
+
+FEATURES:
+
+* **New Data Source:** `azurerm_vmware_private_cloud` ([#9284](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9284))
+* **New Resource:** `azurerm_kusto_eventgrid_data_connection` ([#10712](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10712))
+* **New Resource:** `azurerm_sentinel_data_connector_aws_cloud_trail` ([#10664](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10664))
+* **New Resource:** `azurerm_sentinel_data_connector_azure_active_directory` ([#10665](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10665))
+* **New Resource:** `azurerm_sentinel_data_connector_office_365` ([#10671](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10671))
+* **New Resource:** `azurerm_sentinel_data_connector_threat_intelligence` ([#10670](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10670))
+* **New Resource:** `azurerm_subscription` ([#10718](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10718))
+* **New Resource:** `azurerm_vmware_private_cloud` ([#9284](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9284))
+
+ENHANCEMENTS:
+
+* dependencies: updating to `v52.0.0` of `github.com/Azure/azure-sdk-for-go` ([#10787](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10787))
+* dependencies: updating `compute` to API version `2020-12-01` ([#10650](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10650))
+* Data Source: `azurerm_dns_zone` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_a_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_aaaa_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_caa_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_cname_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_mx_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_ns_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_ptr_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_srv_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_txt_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_dns_zone` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
+* `azurerm_function_app_host_keys` - support for `event_grid_extension_config_key` ([#10823](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10823))
+* `azurerm_key_vault_secret` - support for the `versionless_id` property ([#10738](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10738))
+* `azurerm_kubernetes_cluster` - support `private_dns_zone_id` when using a `service_principal` ([#10737](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10737))
+* `azurerm_kusto_cluster` - support for the `double_encryption_enabled` property ([#10264](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10264))
+* `azurerm_linux_virtual_machine` - support for configuring `license_type` ([#10776](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10776))
+* `azurerm_log_analytics_workspace` - support permanent deletion of workspaces with the `permanently_delete_on_destroy` feature flag ([#10235](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10235))
+* `azurerm_monitor_action_group` - support for secure webhooks via the `aad_auth` block ([#10509](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10509))
+* `azurerm_mssql_database` - support for the `log_monitoring_enabled` property within the `extended_auditing_policy` block ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
+* `azurerm_mssql_database_extended_auditing_policy` - support for the `log_monitoring_enabled` property ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
+* `azurerm_mssql_server` - support for the `log_monitoring_enabled` property within the `extended_auditing_policy` block ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
+* `azurerm_mssql_server_extended_auditing_policy` - support for the `log_monitoring_enabled` property ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
+* `azurerm_signalr_service` - support for the `upstream_endpoint` block ([#10459](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10459))
+* `azurerm_sql_server` - support for the `log_monitoring_enabled` property within the `extended_auditing_policy` block ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
+* `azurerm_sql_database` - support for the `log_monitoring_enabled` property within the `extended_auditing_policy` block ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
+* `azurerm_spring_cloud_java_deployment` - supporting delta updates ([#10729](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10729))
+* `azurerm_virtual_network_gateway` - deprecate `peering_address` in favour of `peering_addresses` ([#10381](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10381))
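+
+A minimal sketch of the new `permanently_delete_on_destroy` feature flag for Log Analytics workspaces, set in the provider-level `features` block:
+
+```hcl
+provider "azurerm" {
+  features {
+    log_analytics_workspace {
+      # Hard-delete the workspace on destroy instead of the default
+      # soft-delete (which retains it for 14 days)
+      permanently_delete_on_destroy = true
+    }
+  }
+}
+```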
+
+BUG FIXES:
+
+* Data Source: `azurerm_netapp_volume` - fixing a crash when setting `data_protection_replication` ([#10795](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10795))
+* `azurerm_api_management` - changing the `sku_name` property no longer forces a new resource to be created ([#10747](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10747))
+* `azurerm_api_management` - the field `tenant_access` can only be configured when not using a Consumption SKU ([#10766](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10766))
+* `azurerm_frontdoor` - removed the MaxItems validation from the Backend Pools ([#10828](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10828))
+* `azurerm_kubernetes_cluster` - allow Windows passwords as short as `8` characters long ([#10816](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10816))
+* `azurerm_cosmosdb_mongo_collection` - ignore `throughput` when the Cosmos DB account is provisioned in `serverless` capacity mode ([#10389](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10389))
+* `azurerm_linux_virtual_machine` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
+* `azurerm_linux_virtual_machine_scale_set` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
+* `azurerm_netapp_volume` - fixing a crash when setting `data_protection_replication` ([#10795](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10795))
+* `azurerm_virtual_machine` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
+* `azurerm_virtual_machine_scale_set` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
+* `azurerm_windows_virtual_machine` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
+* `azurerm_windows_virtual_machine_scale_set` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
+
## 2.49.0 (February 26, 2021)
FEATURES:
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 73e07cc6a2624..d76b705c70820 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,387 +1,45 @@
-## 2.59.0 (Unreleased)
+## 2.61.0 (Unreleased)
FEATURES:
-* **New Resource:** `azurerm_sentinel_alert_rule_machine_learning_behavior_analytics` [GH-11552]
-
-ENHANCEMENTS:
-
-* dependencies: updating to `v54.4.0` of `github.com/Azure/azure-sdk-for-go` [GH-11593]
-* Data Source: `azurerm_kubernetes_cluster` - Add `ingress_application_gateway_identity` export for add-on `ingress_application_gateway` [GH-11622]
-* `azurerm_cosmosdb_account` - support for the `backup` property [GH-11597]
-* `azurerm_databox_edge_device` - upgrade to databox edge API version 2020-12-01 and fix test cases [GH-11626]
-* `azurerm_databox_edge_order` - upgrade to databox edge API version 2020-12-01 and fix test cases [GH-11626]
-* `azurerm_kubernetes_cluster` - Add `ingress_application_gateway_identity` export for add-on `ingress_application_gateway` [GH-11622]
-* `azurerm_storage_account` - support for the `azure_files_identity_based_authentication` and `routing_preference` blocks [GH-11485]
-
-BUG FIXES
-
-* Data Source: `azurerm_container_registry_token` - updating the validation for the `name` field [GH-11607]
-* `azurerm_container_registry_token` - updating the validation for the `name` field [GH-11607]
-
-## 2.58.0 (May 07, 2021)
-
-UPGRADE NOTES
-
-* `azurerm_frontdoor` - The `custom_https_provisioning_enabled` field and the `custom_https_configuration` block have been deprecated and has been removed as they are no longer supported. ([#11456](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11456))
-* `azurerm_frontdoor_custom_https_configuration` - The `resource_group_name` has been deprecated and has been removed as it is no longer supported. ([#11456](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11456))
-
-FEATURES:
-
-* **New Data Source:** `azurerm_storage_table_entity` ([#11562](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11562))
-* **New Resource:** `azurerm_app_service_environment_v3` ([#11174](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11174))
-* **New Resource:** `azurerm_cosmosdb_notebook_workspace` ([#11536](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11536))
-* **New Resource:** `azurerm_cosmosdb_sql_trigger` ([#11535](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11535))
-* **New Resource:** `azurerm_cosmosdb_sql_user_defined_function` ([#11537](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11537))
-* **New Resource:** `azurerm_iot_time_series_insights_event_source_iothub` ([#11484](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11484))
-* **New Resource:** `azurerm_storage_blob_inventory_policy` ([#11533](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11533))
-
-ENHANCEMENTS:
-
-* dependencies: updating `network-db` to API version `2020-07-01` ([#10767](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10767))
-* `azurerm_cosmosdb_account` - support for the `access_key_metadata_writes_enabled`, `mongo_server_version`, and `network_acl_bypass` properties ([#11486](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11486))
-* `azurerm_data_factory` - support for the `customer_managed_key_id` property ([#10502](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10502))
-* `azurerm_data_factory_pipeline` - support for the `folder` property ([#11575](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11575))
-* `azurerm_frontdoor` - Fix for Frontdoor resource elements being returned out of order. ([#11456](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11456))
-* `azurerm_hdinsight_*_cluster` - support for autoscale #8104 ([#11547](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11547))
-* `azurerm_network_security_rule` - support for the protocols `Ah` and `Esp` ([#11581](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11581))
-* `azurerm_network_connection_monitor` - support for the `coverage_level`, `excluded_ip_addresses`, `included_ip_addresses`, `target_resource_id`, and `resource_type` propeties ([#11540](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11540))
-
-## 2.57.0 (April 30, 2021)
-
-UPGRADE NOTES
-
-* `azurerm_api_management_authorization_server` - due to a bug in the `2020-12-01` version of the API Management API, changes to `resource_owner_username` and `resource_owner_password` in Azure will not be noticed by Terraform ([#11146](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11146))
-* `azurerm_cosmosdb_account` - the `2021-02-01` version of the cosmos API defaults new MongoDB accounts to `v3.6` rather then `v3.2` ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
-* `azurerm_cosmosdb_mongo_collection` - the `_id` index is now required by the new API/MongoDB version ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
-* `azurerm_cosmosdb_gremlin_graph` and `azurerm_cosmosdb_sql_container` - the `patition_key_path` property is now required ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
-
-FEATURES:
-
-* **Data Source:** `azurerm_container_registry_scope_map` ([#11350](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11350))
-* **Data Source:** `azurerm_container_registry_token` ([#11350](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11350))
-* **Data Source:** `azurerm_postgresql_flexible_server` ([#11081](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11081))
-* **Data Source:** `azurerm_key_vault_managed_hardware_security_module` ([#10873](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10873))
-* **New Resource:** `azurerm_container_registry_scope_map` ([#11350](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11350))
-* **New Resource:** `azurerm_container_registry_token` ([#11350](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11350))
-* **New Resource:** `azurerm_data_factory_dataset_snowflake ` ([#11116](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11116))
-* **New Resource:** `azurerm_healthbot` ([#11002](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11002))
-* **New Resource:** `azurerm_key_vault_managed_hardware_security_module ` ([#10873](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10873))
-* **New Resource:** `azurerm_media_asset_filter` ([#11110](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11110))
-* **New Resource:** `azurerm_mssql_job_agent` ([#11248](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11248))
-* **New Resource:** `azurerm_mssql_job_credential` ([#11363](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11363))
-* **New Resource:** `azurerm_mssql_transparent_data_encryption` ([#11148](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11148))
-* **New Resource:** `azurerm_postgresql_flexible_server` ([#11081](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11081))
-* **New Resource:** `azurerm_spring_cloud_app_cosmosdb_association` ([#11307](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11307))
-* **New Resource:** `azurerm_sentinel_data_connector_microsoft_defender_advanced_threat_protection` ([#10669](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10669))
-* **New Resource:** `azurerm_virtual_machine_configuration_policy_assignment` ([#11334](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11334))
-* **New Resource:** `azurerm_vmware_cluster` ([#10848](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10848))
-
-ENHANCEMENTS:
-
-* dependencies: updating to `v53.4.0` of `github.com/Azure/azure-sdk-for-go` ([#11439](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11439))
-* dependencies: updating to `v1.17.2` of `github.com/hashicorp/terraform-plugin-sdk` ([#11431](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11431))
-* dependencies: updating `cosmos-db` to API version `2021-02-01` ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
-* dependencies: updating `keyvault` to API version `v7.1` ([#10926](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10926))
-* Data Source: `azurerm_healthcare_service` - export the `cosmosdb_key_vault_key_versionless_id` attribute ([#11481](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11481))
-* Data Source: `azurerm_key_vault_certificate` - export the `curve` attribute in the `key_properties` block ([#10867](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10867))
-* Data Source: `azurerm_virtual_machine_scale_set` - now exports the `network_interfaces` ([#10585](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10585))
-* `azurerm_app_service` - support for the `site_config.ip_restrictions.headers` and `site_config.scm_ip_restrictions.headers` properties ([#11209](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11209))
-* `azurerm_app_service_slot` - support for the `site_config.ip_restrictions.headers` and `site_config.scm_ip_restrictions.headers` properties ([#11209](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11209))
-* `azurerm_backup_policy_file_share` - support for the `retention_weekly`, `retention_monthly`, and `retention_yearly` blocks ([#10733](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10733))
-* `azurerm_cosmosdb_sql_container` - support for the `conflict_resolution_policy` block ([#11517](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11517))
-* `azurerm_container_group` - support for the `exposed_port` block ([#10491](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10491))
-* `azurerm_container_registry` - deprecating the `georeplication_locations` property in favour of the `georeplications` property ([#11200](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11200))
-* `azurerm_database_migration` - switching to using an ID Formatter ([#11378](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11378))
-* `azurerm_database_migration_project` - switching to using an ID Formatter ([#11378](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11378))
-* `azurerm_databricks_workspace` - switching to using an ID Formatter ([#11378](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11378))
-* `azurerm_databricks_workspace` - fixes propagation of tags to connected resources ([#11405](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11405))
-* `azurerm_data_factory_linked_service_azure_file_storage` - support for the `key_vault_password` property ([#11436](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11436))
-* `azurerm_dedicated_host_group` - support for the `automatic_placement_enabled` property ([#11428](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11428))
-* `azurerm_frontdoor` - sync `MaxItems` on various attributes to match the Azure docs ([#11421](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11421))
-* `azurerm_frontdoor_custom_https_configuration` - removing secret version validation when using azure key vault as the certificate source ([#11310](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11310))
-* `azurerm_function_app` - support for the `site_config.ip_restrictions.headers` and `site_config.scm_ip_restrictions.headers` properties ([#11209](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11209))
-* `azurerm_function_app` - support the `java_version` property ([#10495](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10495))
-* `azurerm_hdinsight_interactive_query_cluster` - add support for private link endpoint ([#11300](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11300))
-* `azurerm_hdinsight_hadoop_cluster` - add support for private link endpoint ([#11300](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11300))
-* `azurerm_hdinsight_spark_cluster` - add support for private link endpoint ([#11300](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11300))
-* `azurerm_healthcare_service` - support for the `cosmosdb_key_vault_key_versionless_id` property ([#11481](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11481))
-* `azurerm_kubernetes_cluster` - support for the `ingress_application_gateway` addon ([#11376](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11376))
-* `azurerm_kubernetes_cluster` - support for the `azure_rbac_enabled` property ([#10441](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10441))
-* `azurerm_hpc_cache` - support for the `directory_active_directory`, `directory_flat_file`, and `directory_ldap` blocks ([#11332](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11332))
-* `azurerm_key_vault_certificate` - support additional values for the `key_size` property in the `key_properties` block ([#10867](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10867))
-* `azurerm_key_vault_certificate` - support the `curve` property in the `key_properties` block ([#10867](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10867))
-* `azurerm_key_vault_certificate` - the `key_size` property in the `key_properties` block is now optional ([#10867](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10867))
-* `azurerm_kubernetes_cluster` - support for the `dns_prefix_private_cluster` property ([#11321](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11321))
-* `azurerm_kubernetes_cluster` - support for the `max_node_provisioning_time`, `max_unready_percentage`, and `max_unready_nodes` properties ([#11406](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11406))
-* `azurerm_storage_encryption_scope` - support for the `infrastructure_encryption_required` property ([#11462](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11462))
-* `azurerm_kubernetes_cluster` - support for the `empty_bulk_delete_max` in the `auto_scaler_profile` block ([#11060](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11060))
-* `azurerm_lighthouse_definition` - support for the `delegated_role_definition_ids` property ([#11269](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11269))
-* `azurerm_managed_application` - support for the `parameter_values` property ([#8632](https://github.com/terraform-providers/terraform-provider-azurerm/issues/8632))
-* `azurerm_managed_disk` - support for the `network_access_policy` and `disk_access_id` properties ([#9862](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9862))
-* `azurerm_postgresql_server` - wait for replica restarts when needed ([#11458](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11458))
-* `azurerm_redis_enterprise_cluster` - support for the `minimum_tls_version` and `hostname` properties ([#11203](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11203))
-* `azurerm_storage_account` - support for the `versioning_enabled`, `default_service_version`, and `last_access_time_enabled` properties within the `blob_properties` block ([#11301](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11301))
-* `azurerm_storage_account` - support for the `nfsv3_enabled` property ([#11387](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11387))
-* `azurerm_storage_management_policy` - support for the `version` block ([#11163](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11163))
-* `azurerm_synapse_workspace` - support for the `customer_managed_key_versionless_id` property ([#11328](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11328))
-
-BUG FIXES:
-
-* `azurerm_api_management` - will no longer panic with an empty `hostname_configuration` ([#11426](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11426))
-* `azurerm_api_management_diagnostic` - fix a crash with the `frontend_request`, `frontend_response`, `backend_request`, `backend_response` blocks ([#11402](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11402))
-* `azurerm_eventgrid_system_topic` - remove strict validation on `topic_type` ([#11352](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11352))
-* `azurerm_iothub` - change `filter_rule` from TypeSet to TypeList to resolve an ordering issue ([#10341](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10341))
-* `azurerm_linux_virtual_machine_scale_set` - the default value for the `priority` property will no longer force a replacement of the resource ([#11362](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11362))
-* `azurerm_monitor_activity_log_alert` - fix a persistent diff for the `service_health` block ([#11383](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11383))
-* `azurerm_mssql_database` - return an error when a secondary database uses `max_size_gb` ([#11401](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11401))
-* `azurerm_mssql_database` - correctly import the `create_mode` property ([#11026](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11026))
-* `azurerm_netapp_volume` - correctly set the `replication_frequency` attribute in the `data_protection_replication` block ([#11530](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11530))
-* `azurerm_postgresql_server` - ensure `public_network_access_enabled` is correctly set for replicas ([#11465](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11465))
-* `azurerm_postgresql_server` - can now correctly disable replication if required when `create_mode` is changed ([#11467](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11467))
-* `azurerm_virtual_network_gateway` - updating the `custom_route` block no longer forces a new resource to be created ([#11433](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11433))
-
-## 2.56.0 (April 15, 2021)
-
-FEATURES:
-
-* **New Resource:** `azurerm_data_factory_linked_service_azure_databricks` ([#10962](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10962))
-* **New Resource:** `azurerm_data_lake_store_virtual_network_rule` ([#10430](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10430))
-* **New Resource:** `azurerm_media_live_event_output` ([#10917](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10917))
-* **New Resource:** `azurerm_spring_cloud_app_mysql_association` ([#11229](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11229))
-
-ENHANCEMENTS:
-
-* dependencies: updating `github.com/Azure/azure-sdk-for-go` to `v53.0.0` ([#11302](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11302))
-* dependencies: updating `containerservice` to API version `2021-02-01` ([#10972](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10972))
-* `azurerm_app_service` - fix broken `ip_restrictions` and `scm_ip_restrictions` ([#11170](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11170))
-* `azurerm_application_gateway` - support for configuring `firewall_policy_id` within the `path_rule` block ([#11239](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11239))
-* `azurerm_firewall_policy_rule_collection_group` - allow `*` for the `network_rule_collection.destination_ports` property ([#11326](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11326))
-* `azurerm_function_app` - fix broken `ip_restrictions` and `scm_ip_restrictions` ([#11170](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11170))
-* `azurerm_data_factory_linked_service_sql_database` - support managed identity and service principal auth and add the `keyvault_password` property ([#10735](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10735))
-* `azurerm_hpc_cache` - support for `tags` ([#11268](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11268))
-* `azurerm_linux_virtual_machine_scale_set` - support for the health extension in rolling upgrade mode ([#9136](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9136))
-* `azurerm_monitor_activity_log_alert` - support for `service_health` ([#10978](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10978))
-* `azurerm_mssql_database` - support for the `geo_backup_enabled` property ([#11177](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11177))
-* `azurerm_public_ip` - support for `ip_tags` ([#11270](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11270))
-* `azurerm_windows_virtual_machine_scale_set` - support for the health extension in rolling upgrade mode ([#9136](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9136))
-
-BUG FIXES:
-
-* `azurerm_app_service_slot` - fix crash bug when given empty `http_logs` ([#11267](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11267))
-
-## 2.55.0 (April 08, 2021)
-
-FEATURES:
-
-* **New Resource:** `azurerm_api_management_email_template` ([#10914](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10914))
-* **New Resource:** `azurerm_communication_service` ([#11066](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11066))
-* **New Resource:** `azurerm_express_route_port` ([#10074](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10074))
-* **New Resource:** `azurerm_spring_cloud_app_redis_association` ([#11154](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11154))
-
-ENHANCEMENTS:
-
-* Data Source: `azurerm_user_assigned_identity` - exporting `tenant_id` ([#11253](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11253))
-* Data Source: `azurerm_function_app` - exporting `client_cert_mode` ([#11161](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11161))
-* `azurerm_eventgrid_data_connection` - support for the `table_name`, `mapping_rule_name`, and `data_format` properties ([#11157](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11157))
-* `azurerm_hpc_cache` - support for configuring `dns` ([#11236](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11236))
-* `azurerm_hpc_cache` - support for configuring `ntp_server` ([#11236](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11236))
-* `azurerm_hpc_cache_nfs_target` - support for the `access_policy_name` property ([#11186](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11186))
-* `azurerm_hpc_cache_nfs_target` - `usage_model` can now be set to `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS` ([#11247](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11247))
-* `azurerm_function_app` - support for configuring `client_cert_mode` ([#11161](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11161))
-* `azurerm_netapp_volume` - adding `root_access_enabled` to the `export_policy_rule` block ([#11105](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11105))
-* `azurerm_private_endpoint` - allows for an alias to be specified ([#10779](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10779))
-* `azurerm_user_assigned_identity` - exporting `tenant_id` ([#11253](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11253))
-* `azurerm_web_application_firewall_policy` - `version` within the `managed_rule_set` block can now be set to (OWASP) `3.2` ([#11244](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11244))
-
-BUG FIXES:
-
-* Data Source: `azurerm_dns_zone` - fixing a bug where the Resource ID wouldn't contain the Resource Group name when looking this up ([#11221](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11221))
-* `azurerm_media_service_account` - `storage_authentication_type` correctly accepts both `ManagedIdentity` and `System` ([#11222](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11222))
-* `azurerm_web_application_firewall_policy` - `http_listener_ids` and `path_based_rule_ids` are now Computed only ([#11196](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11196))
-
-## 2.54.0 (April 02, 2021)
-
-FEATURES:
-
-* **New Resource:** `azurerm_hpc_cache_access_policy` ([#11083](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11083))
-* **New Resource:** `azurerm_management_group_subscription_association` ([#11069](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11069))
-* **New Resource:** `azurerm_media_live_event` ([#10724](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10724))
-
ENHANCEMENTS:
-* dependencies: updating to `v52.6.0` of `github.com/Azure/azure-sdk-for-go` ([#11108](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11108))
-* dependencies: updating `storage` to API version `2021-01-01` ([#11094](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11094))
-* dependencies: updating `storagecache` (a.k.a `hpc`) to API version `2021-03-01` ([#11083](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11083))
-* `azurerm_application_gateway` - support for rewriting urls with the `url` block ([#10950](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10950))
-* `azurerm_cognitive_account` - Add support for `network_acls` ([#11164](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11164))
-* `azurerm_container_registry` - support for the `quarantine_policy_enabled` property ([#11011](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11011))
-* `azurerm_firewall` - support for the `private_ip_ranges` property ([#10627](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10627))
-* `azurerm_log_analytics_workspace` - fix an issue where `-1` couldn't be specified for `daily_quota_gb` ([#11182](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11182))
-* `azurerm_spring_cloud_service` - support for the `sample_rate` property ([#11106](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11106))
-* `azurerm_storage_account` - support for the `container_delete_retention_policy` property ([#11131](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11131))
-* `azurerm_virtual_desktop_host_pool` - support for the `custom_rdp_properties` property ([#11160](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11160))
-* `azurerm_web_application_firewall_policy` - support for the `http_listener_ids` and `path_based_rule_ids` properties ([#10860](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10860))
-
BUG FIXES:
-* `azurerm_api_management` - the `certificate_password` property is now optional ([#11139](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11139))
-* `azurerm_data_factory_linked_service_azure_blob_storage` - correct the managed identity implementation by adding the `service_endpoint` property ([#10830](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10830))
-* `azurerm_machine_learning_workspace` - deprecate the `Enterprise` sku as it has been deprecated by Azure ([#11063](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11063))
-* `azurerm_machine_learning_workspace` - support container registries in other subscriptions ([#11065](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11065))
-* `azurerm_site_recovery_fabric` - fix an error when checking for an existing resource ([#11130](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11130))
-* `azurerm_spring_cloud_custom_domain` - `thumbprint` is required when specifying `certificate_name` ([#11145](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11145))
-* `azurerm_subscription` - fixes broken timeout on destroy ([#11124](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11124))
-
-## 2.53.0 (March 26, 2021)
+
+## 2.60.0 (May 20, 2021)
+
FEATURES:
-* **New Resource:** `azurerm_management_group_template_deployment` ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
-* **New Resource:** `azurerm_tenant_template_deployment` ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
-* **New Data Source:** `azurerm_template_spec_version` ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
+
+* **New Data Source:** `azurerm_eventhub_cluster` ([#11763](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11763))
+* **New Data Source:** `azurerm_redis_enterprise_database` ([#11734](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11734))
+* **New Resource:** `azurerm_static_site` ([#7150](https://github.com/terraform-providers/terraform-provider-azurerm/issues/7150))
+* **New Resource:** `azurerm_machine_learning_inference_cluster` ([#11550](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11550))
+
ENHANCEMENTS:
-* dependencies: updating to `v52.5.0` of `github.com/Azure/azure-sdk-for-go` ([#11015](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11015))
-* Data Source: `azurerm_key_vault_secret` - support for the `versionless_id` attribute ([#11091](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11091))
-* `azurerm_container_registry` - support for the `public_network_access_enabled` property ([#10969](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10969))
-* `azurerm_kusto_eventhub_data_connection` - support for the `event_system_properties` block ([#11006](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11006))
-* `azurerm_logic_app_trigger_recurrence` - Add support for `schedule` ([#11055](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11055))
-* `azurerm_resource_group_template_deployment` - add support for `template_spec_version_id` property ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
-* `azurerm_role_definition` - the `permissions` block is now optional ([#9850](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9850))
-* `azurerm_subscription_template_deployment` - add support for `template_spec_version_id` property ([#10603](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10603))
-
-BUG FIXES:
-
-* `azurerm_frontdoor_custom_https_configuration` - fixing a crash during update ([#11046](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11046))
-* `azurerm_resource_group_template_deployment` - always sending `parameters_content` during an update ([#11001](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11001))
-* `azurerm_role_definition` - fixing crash when permissions are empty ([#9850](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9850))
-* `azurerm_subscription_template_deployment` - always sending `parameters_content` during an update ([#11001](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11001))
-* `azurerm_spring_cloud_app` - support for the `tls_enabled` property ([#11064](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11064))
-
-## 2.52.0 (March 18, 2021)
-
-FEATURES:
-
-* **New Resource:** `azurerm_mssql_firewall_rule` ([#10954](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10954))
-* **New Resource:** `azurerm_mssql_virtual_network_rule` ([#10954](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10954))
-
-ENHANCEMENTS:
-
-* dependencies: updating to `v52.4.0` of `github.com/Azure/azure-sdk-for-go` ([#10982](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10982))
-* `azurerm_api_management_subscription` - making the `user_id` property optional ([#10638](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10638))
-
-BUG FIXES:
-
-* `azurerm_cosmosdb_account_resource` - marking `connection_string` as sensitive ([#10942](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10942))
-* `azurerm_eventhub_namespace_disaster_recovery_config` - deprecating the `alternate_name` property due to a service side API bug ([#11013](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11013))
-* `azurerm_local_network_gateway` - making the `address_space` property optional ([#10983](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10983))
-* `azurerm_management_group` - validation for `subscription_id` list property entries ([#10948](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10948))
-
-## 2.51.0 (March 12, 2021)
-
-FEATURES:
-
-* **New Resource:** `azurerm_purview_account` ([#10395](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10395))
-* **New Resource:** `azurerm_data_factory_dataset_parquet` ([#10852](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10852))
-* **New Resource:** `azurerm_security_center_server_vulnerability_assessment` ([#10030](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10030))
-* **New Resource:** `azurerm_security_center_assessment` ([#10694](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10694))
-* **New Resource:** `azurerm_security_center_assessment_policy` ([#10694](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10694))
-* **New Resource:** `azurerm_sentinel_data_connector_azure_advanced_threat_protection` ([#10666](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10666))
-* **New Resource:** `azurerm_sentinel_data_connector_azure_security_center` ([#10667](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10667))
-* **New Resource:** `azurerm_sentinel_data_connector_microsoft_cloud_app_security` ([#10668](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10668))
-
-ENHANCEMENTS:
-
-* dependencies: updating to `v52.3.0` of `github.com/Azure/azure-sdk-for-go` ([#10829](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10829))
-* `azurerm_role_assignment` - support enrollment ids in `scope` argument ([#10890](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10890))
-* `azurerm_kubernetes_cluster` - support `None` for the `private_dns_zone_id` property ([#10774](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10774))
-* `azurerm_kubernetes_cluster` - support for `expander` in the `auto_scaler_profile` block ([#10777](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10777))
-* `azurerm_linux_virtual_machine` - support for configuring `platform_fault_domain` ([#10803](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10803))
-* `azurerm_linux_virtual_machine_scale_set` - will no longer recreate the resource when `rolling_upgrade_policy` or `health_probe_id` is updated ([#10856](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10856))
-* `azurerm_netapp_volume` - support creating from a snapshot via the `create_from_snapshot_resource_id` property ([#10906](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10906))
-* `azurerm_role_assignment` - support for the `description`, `condition`, and `condition_version` ([#10804](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10804))
-* `azurerm_windows_virtual_machine` - support for configuring `platform_fault_domain` ([#10803](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10803))
-* `azurerm_windows_virtual_machine_scale_set` - will no longer recreate the resource when `rolling_upgrade_policy` or `health_probe_id` is updated ([#10856](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10856))
-
-BUG FIXES:
-
-* Data Source: `azurerm_function_app_host_keys` - retrying reading the keys to work around a broken API ([#10894](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10894))
-* Data Source: `azurerm_log_analytics_workspace` - ensure the `id` is returned with the correct casing ([#10892](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10892))
-* Data Source: `azurerm_monitor_action_group` - add support for `aad_auth` attribute ([#10876](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10876))
-* `azurerm_api_management_custom_domain` - prevent a perpetual diff ([#10636](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10636))
-* `azurerm_eventhub_consumer_group` - detecting as removed when deleted in Azure ([#10900](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10900))
-* `azurerm_key_vault_access_policy` - fix destroy when the casing of permissions on the service does not match the config/state ([#10931](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10931))
-* `azurerm_key_vault_secret` - setting the value of the secret after recovering it ([#10920](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10920))
-* `azurerm_kusto_eventhub_data_connection` - make `table_name` and `data_format` optional ([#10913](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10913))
-* `azurerm_mssql_virtual_machine` - workaround for inconsistent API value for `log_backup_frequency_in_minutes` in the `manual_schedule` block ([#10899](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10899))
-* `azurerm_postgresql_server` - support for replica scaling ([#10754](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10754))
-* `azurerm_postgresql_aad_administrator` - prevent invalid usernames for the `login` property ([#10757](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10757))
-
-## 2.50.0 (March 05, 2021)
-
-FEATURES:
-
-* **New Data Source:** `azurerm_vmware_private_cloud` ([#9284](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9284))
-* **New Resource:** `azurerm_kusto_eventgrid_data_connection` ([#10712](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10712))
-* **New Resource:** `azurerm_sentinel_data_connector_aws_cloud_trail` ([#10664](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10664))
-* **New Resource:** `azurerm_sentinel_data_connector_azure_active_directory` ([#10665](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10665))
-* **New Resource:** `azurerm_sentinel_data_connector_office_365` ([#10671](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10671))
-* **New Resource:** `azurerm_sentinel_data_connector_threat_intelligence` ([#10670](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10670))
-* **New Resource:** `azurerm_subscription` ([#10718](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10718))
-* **New Resource:** `azurerm_vmware_private_cloud` ([#9284](https://github.com/terraform-providers/terraform-provider-azurerm/issues/9284))
-
-ENHANCEMENTS:
-* dependencies: updating to `v52.0.0` of `github.com/Azure/azure-sdk-for-go` ([#10787](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10787))
-* dependencies: updating `compute` to API version `2020-12-01` ([#10650](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10650))
-* Data Source: `azurerm_dns_zone` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_a_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_aaaa_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_caa_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_cname_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_mx_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_ns_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_ptr_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_srv_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_txt_record` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_dns_zone` - updating to use a consistent Terraform Resource ID to avoid API issues ([#10786](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10786))
-* `azurerm_function_app_host_keys` - support for `event_grid_extension_config_key` ([#10823](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10823))
-* `azurerm_keyvault_secret` - support for the `versionless_id` property ([#10738](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10738))
-* `azurerm_kubernetes_cluster` - support `private_dns_zone_id` when using a `service_principal` ([#10737](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10737))
-* `azurerm_kusto_cluster` - supports for the `double_encryption_enabled` property ([#10264](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10264))
-* `azurerm_linux_virtual_machine` - support for configuring `license_type` ([#10776](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10776))
-* `azurerm_log_analytics_workspace_resource` - support permanent deletion of workspaces with the `permanently_delete_on_destroy` feature flag ([#10235](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10235))
-* `azurerm_monitor_action_group` - support for secure webhooks via the `aad_auth` block ([#10509](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10509))
-* `azurerm_mssql_database` - support for the `log_monitoring_enabled` property within the `extended_auditing_policy` block ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
-* `azurerm_mssql_database_extended_auditing_policy ` - support for the `log_monitoring_enabled` property ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
-* `azurerm_mssql_server` - support for the `log_monitoring_enabled` property within the `extended_auditing_policy` block ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
-* `azurerm_mssql_server_extended_auditing_policy ` - support for the `log_monitoring_enabled` property [[#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324)]
-* `azurerm_signalr_service` - support for the `upstream_endpoint` block ([#10459](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10459))
-* `azurerm_sql_server` - support for the `log_monitoring_enabled` property within the `extended_auditing_policy` block ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
-* `azurerm_sql_database` - support for the `log_monitoring_enabled` property within the `extended_auditing_policy` block ([#10324](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10324))
-* `azurerm_spring_cloud_java_deployment` - supporting delta updates ([#10729](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10729))
-* `azurerm_virtual_network_gateway` - deprecate `peering_address` in favour of `peering_addresses` ([#10381](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10381))
+* dependencies: updating `aks` to use API Version `2021-03-01` ([#11708](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11708))
+* dependencies: updating `eventgrid` to use API Version `2020-10-15-preview` ([#11746](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11746))
+* `azurerm_cosmosdb_mongo_collection` - support for the `analytical_storage_ttl` property ([#11735](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11735))
+* `azurerm_cosmosdb_cassandra_table` - support for the `analytical_storage_ttl` property ([#11755](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11755))
+* `azurerm_healthcare_service` - support for the `public_network_access_enabled` property ([#11736](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11736))
+* `azurerm_hdinsight_kafka_cluster` - support for the `encryption_in_transit_enabled` property ([#11737](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11737))
+* `azurerm_media_services_account` - support for the `key_delivery_access_control` block ([#11726](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11726))
+* `azurerm_monitor_activity_log_alert` - support for `Security` event type for Azure Service Health alerts ([#11802](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11802))
+* `azurerm_netapp_volume` - support for the `security_style` property ([#11684](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11684))
+* `azurerm_redis_cache` - support for the `replicas_per_master` property ([#11714](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11714))
+* `azurerm_spring_cloud_service` - support for the `required_network_traffic_rules` block ([#11633](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11633))
+* `azurerm_storage_account_management_policy` - the `name` property can now contain `-` ([#11792](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11792))
BUG FIXES:
-* Data Source: `azurerm_netapp_volume` - fixing a crash when setting `data_protection_replication` ([#10795](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10795))
-* `azurerm_api_management` - changing the `sku_name` property no longer forces a new resouce to be created ([#10747](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10747))
-* `azurerm_api_management` - the field `tenant_access` can only be configured when not using a Consumption SKU ([#10766](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10766))
-* `azurerum_frontdoor` - removed the MaxItems validation from the Backend Pools ([#10828](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10828))
-* `azurerm_kubernetes_cluster_resource` - allow windows passwords as short as `8` charaters long ([#10816](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10816))
-* `azurerm_cosmosdb_mongo_collection` - ignore throughput if Cosmos DB provisioned in 'serverless' capacity mode ([#10389](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10389))
-* `azurerm_linux_virtual_machine` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
-* `azurerm_linux_virtual_machine_scale_set` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
-* `azurerm_netapp_volume` - fixing a crash when setting `data_protection_replication` ([#10795](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10795))
-* `azurerm_virtual_machine` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
-* `azurerm_virtual_machine_scale_set` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
-* `azurerm_windows_virtual_machine` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
-* `azurerm_windows_virtual_machine_scale_set` - parsing the User Assigned Identity ID case-insensitively to work around an Azure API issue ([#10722](https://github.com/terraform-providers/terraform-provider-azurerm/issues/10722))
+* `azurerm_frontdoor` - added a check for `nil` to avoid panic on destroy ([#11720](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11720))
+* `azurerm_linux_virtual_machine_scale_set` - the `extension` blocks are now a set ([#11425](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11425))
+* `azurerm_virtual_network_gateway_connection` - fix a bug where `shared_key` was not being updated ([#11742](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11742))
+* `azurerm_windows_virtual_machine_scale_set` - the `extension` blocks are now a set ([#11425](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11425))
+* `azurerm_windows_virtual_machine_scale_set` - changing the `license_type` will no longer create a new resource ([#11731](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11731))
---
-For information on changes between the v2.49.0 and v2.0.0 releases, please see [the previous v2.x changelog entries](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/CHANGELOG-v2.md).
+For information on changes between the v2.59.0 and v2.0.0 releases, please see [the previous v2.x changelog entries](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/CHANGELOG-v2.md).
For information on changes in version v1.44.0 and prior releases, please see [the v1.x changelog](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/CHANGELOG-v1.md).
diff --git a/azurerm/internal/clients/client.go b/azurerm/internal/clients/client.go
index 3938092f09428..5b87a0ead072a 100644
--- a/azurerm/internal/clients/client.go
+++ b/azurerm/internal/clients/client.go
@@ -23,6 +23,7 @@ import (
cognitiveServices "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/cognitive/client"
communication "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/communication/client"
compute "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/compute/client"
+ consumption "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption/client"
containerServices "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/containers/client"
cosmosdb "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/cosmos/client"
costmanagement "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/costmanagement/client"
@@ -123,6 +124,7 @@ type Client struct {
Cognitive *cognitiveServices.Client
Communication *communication.Client
Compute *compute.Client
+ Consumption *consumption.Client
Containers *containerServices.Client
Cosmos *cosmosdb.Client
CostManagement *costmanagement.Client
@@ -225,6 +227,7 @@ func (client *Client) Build(ctx context.Context, o *common.ClientOptions) error
client.Cognitive = cognitiveServices.NewClient(o)
client.Communication = communication.NewClient(o)
client.Compute = compute.NewClient(o)
+ client.Consumption = consumption.NewClient(o)
client.Containers = containerServices.NewClient(o)
client.Cosmos = cosmosdb.NewClient(o)
client.CostManagement = costmanagement.NewClient(o)
diff --git a/azurerm/internal/provider/services.go b/azurerm/internal/provider/services.go
index 522e88180cbf0..f1c5fe4e86891 100644
--- a/azurerm/internal/provider/services.go
+++ b/azurerm/internal/provider/services.go
@@ -19,6 +19,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/cognitive"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/communication"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/compute"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/containers"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/cosmos"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/costmanagement"
@@ -127,6 +128,7 @@ func SupportedUntypedServices() []sdk.UntypedServiceRegistration {
communication.Registration{},
compute.Registration{},
containers.Registration{},
+ consumption.Registration{},
cosmos.Registration{},
costmanagement.Registration{},
customproviders.Registration{},
diff --git a/azurerm/internal/services/batch/batch_account_data_source_test.go b/azurerm/internal/services/batch/batch_account_data_source_test.go
index faaee68b809fb..9e64eeb436b9e 100644
--- a/azurerm/internal/services/batch/batch_account_data_source_test.go
+++ b/azurerm/internal/services/batch/batch_account_data_source_test.go
@@ -164,7 +164,8 @@ resource "azurerm_key_vault" "test" {
"get",
"list",
"set",
- "delete"
+ "delete",
+ "recover"
]
}
diff --git a/azurerm/internal/services/batch/batch_account_resource_test.go b/azurerm/internal/services/batch/batch_account_resource_test.go
index 67752f9f43e92..24b28575c1d28 100644
--- a/azurerm/internal/services/batch/batch_account_resource_test.go
+++ b/azurerm/internal/services/batch/batch_account_resource_test.go
@@ -273,7 +273,8 @@ resource "azurerm_key_vault" "test" {
"get",
"list",
"set",
- "delete"
+ "delete",
+ "recover"
]
}
diff --git a/azurerm/internal/services/bot/bot_healthbot_resource.go b/azurerm/internal/services/bot/bot_healthbot_resource.go
index cea37626b99a3..5b4c6c0785e63 100644
--- a/azurerm/internal/services/bot/bot_healthbot_resource.go
+++ b/azurerm/internal/services/bot/bot_healthbot_resource.go
@@ -144,7 +144,7 @@ func resourceHealthbotServiceRead(d *schema.ResourceData, meta interface{}) erro
}
if props := resp.Properties; props != nil {
- d.Set("bot_management_portal_link", props.BotManagementPortalLink)
+ d.Set("bot_management_portal_url", props.BotManagementPortalLink)
}
return tags.FlattenAndSet(d, resp.Tags)
}
diff --git a/azurerm/internal/services/compute/linux_virtual_machine_resource_network_test.go b/azurerm/internal/services/compute/linux_virtual_machine_resource_network_test.go
index d553ffd10f649..2d51e3613ef52 100644
--- a/azurerm/internal/services/compute/linux_virtual_machine_resource_network_test.go
+++ b/azurerm/internal/services/compute/linux_virtual_machine_resource_network_test.go
@@ -188,7 +188,7 @@ func TestAccLinuxVirtualMachine_networkPublicDynamicPrivateDynamicIP(t *testing.
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("private_ip_address").Exists(),
- check.That(data.ResourceName).Key("public_ip_address").Exists(),
+ check.That(data.ResourceName).Key("public_ip_address").IsEmpty(),
),
},
data.ImportStep(),
@@ -205,7 +205,7 @@ func TestAccLinuxVirtualMachine_networkPublicDynamicPrivateStaticIP(t *testing.T
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("private_ip_address").Exists(),
- check.That(data.ResourceName).Key("public_ip_address").Exists(),
+ check.That(data.ResourceName).Key("public_ip_address").IsEmpty(),
),
},
data.ImportStep(),
@@ -222,7 +222,7 @@ func TestAccLinuxVirtualMachine_networkPublicDynamicUpdate(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("private_ip_address").Exists(),
- check.That(data.ResourceName).Key("public_ip_address").Exists(),
+ check.That(data.ResourceName).Key("public_ip_address").IsEmpty(),
),
},
data.ImportStep(),
@@ -231,7 +231,7 @@ func TestAccLinuxVirtualMachine_networkPublicDynamicUpdate(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("private_ip_address").Exists(),
- check.That(data.ResourceName).Key("public_ip_address").Exists(),
+ check.That(data.ResourceName).Key("public_ip_address").IsEmpty(),
),
},
data.ImportStep(),
diff --git a/azurerm/internal/services/compute/linux_virtual_machine_resource_other_test.go b/azurerm/internal/services/compute/linux_virtual_machine_resource_other_test.go
index 762369613ef14..6c84a99f15914 100644
--- a/azurerm/internal/services/compute/linux_virtual_machine_resource_other_test.go
+++ b/azurerm/internal/services/compute/linux_virtual_machine_resource_other_test.go
@@ -1623,7 +1623,7 @@ resource "azurerm_linux_virtual_machine" "test" {
name = "acctestVM-%d"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
- size = "Standard_DS3_V2"
+ size = "Standard_DS3_v2"
admin_username = "adminuser"
network_interface_ids = [
azurerm_network_interface.test.id,
diff --git a/azurerm/internal/services/compute/linux_virtual_machine_resource_scaling_test.go b/azurerm/internal/services/compute/linux_virtual_machine_resource_scaling_test.go
index a1b2cebe70132..c185b0fd4cb33 100644
--- a/azurerm/internal/services/compute/linux_virtual_machine_resource_scaling_test.go
+++ b/azurerm/internal/services/compute/linux_virtual_machine_resource_scaling_test.go
@@ -158,7 +158,7 @@ resource "azurerm_linux_virtual_machine" "test" {
name = "acctestVM-%d"
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
- size = "Standard_D2S_V3"
+ size = "Standard_D2s_v3"
admin_username = "adminuser"
network_interface_ids = [
azurerm_network_interface.test.id,
diff --git a/azurerm/internal/services/compute/linux_virtual_machine_scale_set_disk_os_resource_test.go b/azurerm/internal/services/compute/linux_virtual_machine_scale_set_disk_os_resource_test.go
index 3134b897e6d24..c071b8a70ebad 100644
--- a/azurerm/internal/services/compute/linux_virtual_machine_scale_set_disk_os_resource_test.go
+++ b/azurerm/internal/services/compute/linux_virtual_machine_scale_set_disk_os_resource_test.go
@@ -424,7 +424,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "test" {
"azurerm_key_vault_access_policy.disk-encryption",
]
}
-`, r.disksOSDisk_diskEncryptionSetDependencies(data), data.RandomInteger)
+`, r.disksOSDisk_diskEncryptionSetResource(data), data.RandomInteger)
}
func (r LinuxVirtualMachineScaleSetResource) disksOSDiskEphemeral(data acceptance.TestData) string {
diff --git a/azurerm/internal/services/compute/linux_virtual_machine_scale_set_images_resource_test.go b/azurerm/internal/services/compute/linux_virtual_machine_scale_set_images_resource_test.go
index 782b55894e2b2..54642ce4df612 100644
--- a/azurerm/internal/services/compute/linux_virtual_machine_scale_set_images_resource_test.go
+++ b/azurerm/internal/services/compute/linux_virtual_machine_scale_set_images_resource_test.go
@@ -35,6 +35,32 @@ func TestAccLinuxVirtualMachineScaleSet_imagesAutomaticUpdate(t *testing.T) {
})
}
+func TestAccLinuxVirtualMachineScaleSet_imagesDisableAutomaticUpdate(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_linux_virtual_machine_scale_set", "test")
+ r := LinuxVirtualMachineScaleSetResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.imagesDisableAutomaticUpdate(data, "16.04-LTS"),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(
+ "admin_password",
+ ),
+ {
+ Config: r.imagesDisableAutomaticUpdate(data, "18.04-LTS"),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(
+ "admin_password",
+ ),
+ })
+}
+
func TestAccLinuxVirtualMachineScaleSet_imagesFromCapturedVirtualMachineImage(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_linux_virtual_machine_scale_set", "test")
r := LinuxVirtualMachineScaleSetResource{}
@@ -287,6 +313,59 @@ resource "azurerm_linux_virtual_machine_scale_set" "test" {
`, r.template(data), data.RandomInteger, data.RandomInteger, data.RandomInteger, version)
}
+func (r LinuxVirtualMachineScaleSetResource) imagesDisableAutomaticUpdate(data acceptance.TestData, version string) string {
+ return fmt.Sprintf(`
+%s
+resource "azurerm_linux_virtual_machine_scale_set" "test" {
+ name = "acctestvmss-%d"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ sku = "Standard_F2"
+ instances = 1
+ admin_username = "adminuser"
+ admin_password = "P@ssword1234!"
+ upgrade_mode = "Automatic"
+
+ disable_password_authentication = false
+
+ source_image_reference {
+ publisher = "Canonical"
+ offer = "UbuntuServer"
+ sku = "%s"
+ version = "latest"
+ }
+
+ os_disk {
+ storage_account_type = "Standard_LRS"
+ caching = "ReadWrite"
+ }
+
+ network_interface {
+ name = "example"
+ primary = true
+
+ ip_configuration {
+ name = "internal"
+ primary = true
+ subnet_id = azurerm_subnet.test.id
+ }
+ }
+
+ automatic_os_upgrade_policy {
+ disable_automatic_rollback = false
+ enable_automatic_os_upgrade = false
+ }
+
+ rolling_upgrade_policy {
+ max_batch_instance_percent = 100
+ max_unhealthy_instance_percent = 100
+ max_unhealthy_upgraded_instance_percent = 100
+ pause_time_between_batches = "PT30S"
+ }
+}
+`, r.template(data), data.RandomInteger, version)
+}
+
func (r LinuxVirtualMachineScaleSetResource) imagesFromVirtualMachinePrerequisites(data acceptance.TestData) string {
return fmt.Sprintf(`
%s
diff --git a/azurerm/internal/services/compute/linux_virtual_machine_scale_set_other_resource_test.go b/azurerm/internal/services/compute/linux_virtual_machine_scale_set_other_resource_test.go
index 26acff68582c2..ea214aa131936 100644
--- a/azurerm/internal/services/compute/linux_virtual_machine_scale_set_other_resource_test.go
+++ b/azurerm/internal/services/compute/linux_virtual_machine_scale_set_other_resource_test.go
@@ -507,8 +507,7 @@ func TestAccLinuxVirtualMachineScaleSet_otherEncryptionAtHost(t *testing.T) {
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
})
}
@@ -523,24 +522,21 @@ func TestAccLinuxVirtualMachineScaleSet_otherEncryptionAtHostUpdate(t *testing.T
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
{
Config: r.otherEncryptionAtHost(data, false),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
{
Config: r.otherEncryptionAtHost(data, true),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
})
}
@@ -555,8 +551,7 @@ func TestAccLinuxVirtualMachineScaleSet_otherEncryptionAtHostWithCMK(t *testing.
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
})
}
@@ -571,8 +566,7 @@ func TestAccLinuxVirtualMachineScaleSet_otherPlatformFaultDomainCount(t *testing
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
})
}
diff --git a/azurerm/internal/services/compute/linux_virtual_machine_scale_set_resource.go b/azurerm/internal/services/compute/linux_virtual_machine_scale_set_resource.go
index 0b2eda1e626fa..c3df5166b1d3e 100644
--- a/azurerm/internal/services/compute/linux_virtual_machine_scale_set_resource.go
+++ b/azurerm/internal/services/compute/linux_virtual_machine_scale_set_resource.go
@@ -417,7 +417,7 @@ func resourceLinuxVirtualMachineScaleSetCreate(d *schema.ResourceData, meta inte
hasHealthExtension := false
if vmExtensionsRaw, ok := d.GetOk("extension"); ok {
- virtualMachineProfile.ExtensionProfile, hasHealthExtension, err = expandVirtualMachineScaleSetExtensions(vmExtensionsRaw.([]interface{}))
+ virtualMachineProfile.ExtensionProfile, hasHealthExtension, err = expandVirtualMachineScaleSetExtensions(vmExtensionsRaw.(*schema.Set).List())
if err != nil {
return err
}
@@ -432,8 +432,10 @@ func resourceLinuxVirtualMachineScaleSetCreate(d *schema.ResourceData, meta inte
// otherwise the service return the error:
// Automatic OS Upgrade is not supported for this Virtual Machine Scale Set because a health probe or health extension was not specified.
- if upgradeMode == compute.Automatic && len(automaticOSUpgradePolicyRaw) > 0 && (healthProbeId == "" && !hasHealthExtension) {
- return fmt.Errorf("`health_probe_id` must be set or a health extension must be specified when `upgrade_mode` is set to %q and `automatic_os_upgrade_policy` block exists", string(upgradeMode))
+ if upgradeMode == compute.Automatic && len(automaticOSUpgradePolicyRaw) > 0 {
+ if *automaticOSUpgradePolicy.EnableAutomaticOSUpgrade && (healthProbeId == "" && !hasHealthExtension) {
+ return fmt.Errorf("`health_probe_id` must be set or a health extension must be specified when `upgrade_mode` is set to %q and `automatic_os_upgrade_policy` block exists", string(upgradeMode))
+ }
}
// otherwise the service return the error:
@@ -822,7 +824,7 @@ func resourceLinuxVirtualMachineScaleSetUpdate(d *schema.ResourceData, meta inte
if d.HasChanges("extension", "extensions_time_budget") {
updateInstances = true
- extensionProfile, _, err := expandVirtualMachineScaleSetExtensions(d.Get("extension").([]interface{}))
+ extensionProfile, _, err := expandVirtualMachineScaleSetExtensions(d.Get("extension").(*schema.Set).List())
if err != nil {
return err
}
diff --git a/azurerm/internal/services/compute/managed_disk_resource.go b/azurerm/internal/services/compute/managed_disk_resource.go
index 7647280f18ccc..6cc30a17eff98 100644
--- a/azurerm/internal/services/compute/managed_disk_resource.go
+++ b/azurerm/internal/services/compute/managed_disk_resource.go
@@ -25,7 +25,7 @@ import (
func resourceManagedDisk() *schema.Resource {
return &schema.Resource{
- Create: resourceManagedDiskCreateUpdate,
+ Create: resourceManagedDiskCreate,
Read: resourceManagedDiskRead,
Update: resourceManagedDiskUpdate,
Delete: resourceManagedDiskDelete,
@@ -163,15 +163,21 @@ func resourceManagedDisk() *schema.Resource {
ValidateFunc: azure.ValidateResourceID,
},
+ "tier": {
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ },
+
"tags": tags.Schema(),
},
}
}
-func resourceManagedDiskCreateUpdate(d *schema.ResourceData, meta interface{}) error {
+func resourceManagedDiskCreate(d *schema.ResourceData, meta interface{}) error {
subscriptionId := meta.(*clients.Client).Account.SubscriptionId
client := meta.(*clients.Client).Compute.DisksClient
- ctx, cancel := timeouts.ForCreateUpdate(meta.(*clients.Client).StopContext, d)
+ ctx, cancel := timeouts.ForCreate(meta.(*clients.Client).StopContext, d)
defer cancel()
log.Printf("[INFO] preparing arguments for Azure ARM Managed Disk creation.")
@@ -197,6 +203,7 @@ func resourceManagedDiskCreateUpdate(d *schema.ResourceData, meta interface{}) e
createOption := compute.DiskCreateOption(d.Get("create_option").(string))
storageAccountType := d.Get("storage_account_type").(string)
osType := d.Get("os_type").(string)
+
t := d.Get("tags").(map[string]interface{})
zones := azure.ExpandZones(d.Get("zones").([]interface{}))
skuName := compute.DiskStorageAccountTypes(storageAccountType)
@@ -295,6 +302,13 @@ func resourceManagedDiskCreateUpdate(d *schema.ResourceData, meta interface{}) e
}
}
+ if tier := d.Get("tier").(string); tier != "" {
+ if storageAccountType != string(compute.PremiumZRS) && storageAccountType != string(compute.PremiumLRS) {
+ return fmt.Errorf("`tier` can only be specified when `storage_account_type` is set to `Premium_LRS` or `Premium_ZRS`")
+ }
+ props.Tier = &tier
+ }
+
createDisk := compute.Disk{
Name: &name,
Location: &location,
@@ -353,6 +367,15 @@ func resourceManagedDiskUpdate(d *schema.ResourceData, meta interface{}) error {
DiskUpdateProperties: &compute.DiskUpdateProperties{},
}
+ if d.HasChange("tier") {
+ if storageAccountType != string(compute.PremiumZRS) && storageAccountType != string(compute.PremiumLRS) {
+ return fmt.Errorf("`tier` can only be specified when `storage_account_type` is set to `Premium_LRS` or `Premium_ZRS`")
+ }
+ shouldShutDown = true
+ tier := d.Get("tier").(string)
+ diskUpdate.Tier = &tier
+ }
+
if d.HasChange("tags") {
t := d.Get("tags").(map[string]interface{})
diskUpdate.Tags = tags.Expand(t)
@@ -600,6 +623,7 @@ func resourceManagedDiskRead(d *schema.ResourceData, meta interface{}) error {
d.Set("disk_iops_read_write", props.DiskIOPSReadWrite)
d.Set("disk_mbps_read_write", props.DiskMBpsReadWrite)
d.Set("os_type", props.OsType)
+ d.Set("tier", props.Tier)
if networkAccessPolicy := props.NetworkAccessPolicy; networkAccessPolicy != compute.AllowAll {
d.Set("network_access_policy", props.NetworkAccessPolicy)
diff --git a/azurerm/internal/services/compute/managed_disk_resource_test.go b/azurerm/internal/services/compute/managed_disk_resource_test.go
index 1aa4878b3dfda..dc0c206714f80 100644
--- a/azurerm/internal/services/compute/managed_disk_resource_test.go
+++ b/azurerm/internal/services/compute/managed_disk_resource_test.go
@@ -322,6 +322,30 @@ func TestAccManagedDisk_attachedStorageTypeUpdate(t *testing.T) {
})
}
+func TestAccManagedDisk_attachedTierUpdate(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_managed_disk", "test")
+ r := ManagedDiskResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.tierUpdateWhileAttached(data, "P10"),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("tier").HasValue("P10"),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.tierUpdateWhileAttached(data, "P20"),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("tier").HasValue("P20"),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func TestAccAzureRMManagedDisk_networkPolicy(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_managed_disk", "test")
r := ManagedDiskResource{}
@@ -1023,6 +1047,33 @@ resource "azurerm_virtual_machine_data_disk_attachment" "test" {
`, r.templateAttached(data), data.RandomInteger, diskSize)
}
+func (r ManagedDiskResource) tierUpdateWhileAttached(data acceptance.TestData, tier string) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+%s
+
+resource "azurerm_managed_disk" "test" {
+ name = "%d-disk1"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ storage_account_type = "Premium_LRS"
+ create_option = "Empty"
+ disk_size_gb = 10
+ tier = "%s"
+}
+
+resource "azurerm_virtual_machine_data_disk_attachment" "test" {
+ managed_disk_id = azurerm_managed_disk.test.id
+ virtual_machine_id = azurerm_linux_virtual_machine.test.id
+ lun = "0"
+ caching = "None"
+}
+`, r.templateAttached(data), data.RandomInteger, tier)
+}
+
func (r ManagedDiskResource) storageTypeUpdateWhilstAttached(data acceptance.TestData, storageAccountType string) string {
return fmt.Sprintf(`
provider "azurerm" {
diff --git a/azurerm/internal/services/compute/virtual_machine_resource_test.go b/azurerm/internal/services/compute/virtual_machine_resource_test.go
index 1e095797d4184..bc48b4a0183ad 100644
--- a/azurerm/internal/services/compute/virtual_machine_resource_test.go
+++ b/azurerm/internal/services/compute/virtual_machine_resource_test.go
@@ -29,7 +29,6 @@ func TestAccVirtualMachine_winTimeZone(t *testing.T) {
Config: r.winTimeZone(data),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
- check.That(data.ResourceName).Key("os_profile_windows_config.59207889.timezone").HasValue("Pacific Standard Time"),
),
},
})
diff --git a/azurerm/internal/services/compute/virtual_machine_scale_set.go b/azurerm/internal/services/compute/virtual_machine_scale_set.go
index a33a54e507eb4..4c4a4177da4d3 100644
--- a/azurerm/internal/services/compute/virtual_machine_scale_set.go
+++ b/azurerm/internal/services/compute/virtual_machine_scale_set.go
@@ -324,7 +324,7 @@ func virtualMachineScaleSetIPConfigurationSchema() *schema.Schema {
func virtualMachineScaleSetIPConfigurationSchemaForDataSource() *schema.Schema {
return &schema.Schema{
Type: schema.TypeList,
- Required: true,
+ Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"name": {
@@ -448,7 +448,7 @@ func virtualMachineScaleSetPublicIPAddressSchema() *schema.Schema {
func virtualMachineScaleSetPublicIPAddressSchemaForDataSource() *schema.Schema {
return &schema.Schema{
Type: schema.TypeList,
- Optional: true,
+ Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"name": {
@@ -1494,7 +1494,7 @@ func FlattenVirtualMachineScaleSetAutomaticRepairsPolicy(input *compute.Automati
func VirtualMachineScaleSetExtensionsSchema() *schema.Schema {
return &schema.Schema{
- Type: schema.TypeList,
+ Type: schema.TypeSet,
Optional: true,
Computed: true,
Elem: &schema.Resource{
diff --git a/azurerm/internal/services/compute/virtual_machine_scale_set_resource_test.go b/azurerm/internal/services/compute/virtual_machine_scale_set_resource_test.go
index 55744165b01eb..e9ccab88fc12f 100644
--- a/azurerm/internal/services/compute/virtual_machine_scale_set_resource_test.go
+++ b/azurerm/internal/services/compute/virtual_machine_scale_set_resource_test.go
@@ -575,7 +575,7 @@ func TestAccVirtualMachineScaleSet_multipleAssignedMSI(t *testing.T) {
Config: r.multipleAssignedMSI(data),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
- check.That(data.ResourceName).Key("identity.0.type").HasValue("SystemAssigned"),
+ check.That(data.ResourceName).Key("identity.0.type").HasValue("SystemAssigned, UserAssigned"),
check.That(data.ResourceName).Key("identity.0.identity_ids.#").HasValue("1"),
resource.TestMatchResourceAttr(data.ResourceName, "identity.0.principal_id", validate.UUIDRegExp),
),
diff --git a/azurerm/internal/services/compute/virtual_machine_unmanaged_disks_resource_test.go b/azurerm/internal/services/compute/virtual_machine_unmanaged_disks_resource_test.go
index 471b560890c7f..dfdc5e9ddc350 100644
--- a/azurerm/internal/services/compute/virtual_machine_unmanaged_disks_resource_test.go
+++ b/azurerm/internal/services/compute/virtual_machine_unmanaged_disks_resource_test.go
@@ -649,7 +649,7 @@ resource "azurerm_storage_blob" "test" {
storage_account_name = "${azurerm_storage_account.test.name}"
storage_container_name = "${azurerm_storage_container.test.name}"
- type = "page"
+ type = "Page"
source_uri = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/myosdisk1.vhd"
}
diff --git a/azurerm/internal/services/compute/windows_virtual_machine_resource_other_test.go b/azurerm/internal/services/compute/windows_virtual_machine_resource_other_test.go
index a9157371e19d8..3bbcfdfad2b96 100644
--- a/azurerm/internal/services/compute/windows_virtual_machine_resource_other_test.go
+++ b/azurerm/internal/services/compute/windows_virtual_machine_resource_other_test.go
@@ -705,7 +705,7 @@ func TestAccWindowsVirtualMachine_otherUltraSsdDefault(t *testing.T) {
data.ResourceTest(t, r, []resource.TestStep{
{
- Config: r.otherUltraSsdDefault(data),
+ Config: r.otherUltraSsd(data, false),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("additional_capabilities.0.ultra_ssd_enabled").HasValue("false"),
@@ -723,7 +723,7 @@ func TestAccWindowsVirtualMachine_otherUltraSsdEnabled(t *testing.T) {
data.ResourceTest(t, r, []resource.TestStep{
{
- Config: r.otherUltraSsdEnabled(data),
+ Config: r.otherUltraSsd(data, true),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("additional_capabilities.0.ultra_ssd_enabled").HasValue("true"),
@@ -741,7 +741,7 @@ func TestAccWindowsVirtualMachine_otherUltraSsdUpdated(t *testing.T) {
data.ResourceTest(t, r, []resource.TestStep{
{
- Config: r.otherUltraSsdDefault(data),
+ Config: r.otherUltraSsd(data, false),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("additional_capabilities.0.ultra_ssd_enabled").HasValue("false"),
@@ -751,7 +751,7 @@ func TestAccWindowsVirtualMachine_otherUltraSsdUpdated(t *testing.T) {
"admin_password",
),
{
- Config: r.otherUltraSsdEnabled(data),
+ Config: r.otherUltraSsd(data, true),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("additional_capabilities.0.ultra_ssd_enabled").HasValue("true"),
@@ -2081,7 +2081,7 @@ resource "azurerm_windows_virtual_machine" "test" {
`, r.template(data))
}
-func (r WindowsVirtualMachineResource) otherUltraSsdDefault(data acceptance.TestData) string {
+func (r WindowsVirtualMachineResource) otherUltraSsd(data acceptance.TestData, ultraSsdEnabled bool) string {
return fmt.Sprintf(`
%s
@@ -2089,38 +2089,7 @@ resource "azurerm_windows_virtual_machine" "test" {
name = local.vm_name
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
- size = "Standard_F2"
- admin_username = "adminuser"
- admin_password = "P@$$w0rd1234!"
- network_interface_ids = [
- azurerm_network_interface.test.id,
- ]
- zone = 1
-
- os_disk {
- caching = "ReadWrite"
- storage_account_type = "Standard_LRS"
- }
-
- source_image_reference {
- publisher = "MicrosoftWindowsServer"
- offer = "WindowsServer"
- sku = "2016-Datacenter"
- version = "latest"
- }
-}
-`, r.template(data))
-}
-
-func (r WindowsVirtualMachineResource) otherUltraSsdEnabled(data acceptance.TestData) string {
- return fmt.Sprintf(`
-%s
-
-resource "azurerm_windows_virtual_machine" "test" {
- name = local.vm_name
- resource_group_name = azurerm_resource_group.test.name
- location = azurerm_resource_group.test.location
- size = "Standard_F2"
+ size = "Standard_D2s_v3"
admin_username = "adminuser"
admin_password = "P@$$w0rd1234!"
network_interface_ids = [
@@ -2141,10 +2110,10 @@ resource "azurerm_windows_virtual_machine" "test" {
}
additional_capabilities {
- ultra_ssd_enabled = true
+ ultra_ssd_enabled = %t
}
}
-`, r.template(data))
+`, r.template(data), ultraSsdEnabled)
}
func (r WindowsVirtualMachineResource) otherWinRMHTTP(data acceptance.TestData) string {
@@ -2379,7 +2348,7 @@ resource "azurerm_windows_virtual_machine" "test" {
name = local.vm_name
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
- size = "Standard_DS3_V2"
+ size = "Standard_DS3_v2"
admin_username = "adminuser"
admin_password = "P@$$w0rd1234!"
network_interface_ids = [
diff --git a/azurerm/internal/services/compute/windows_virtual_machine_resource_scaling_test.go b/azurerm/internal/services/compute/windows_virtual_machine_resource_scaling_test.go
index ece95f8a75326..ebde5f7d7a2ca 100644
--- a/azurerm/internal/services/compute/windows_virtual_machine_resource_scaling_test.go
+++ b/azurerm/internal/services/compute/windows_virtual_machine_resource_scaling_test.go
@@ -182,7 +182,7 @@ resource "azurerm_windows_virtual_machine" "test" {
name = local.vm_name
resource_group_name = azurerm_resource_group.test.name
location = azurerm_resource_group.test.location
- size = "Standard_D2S_V3"
+ size = "Standard_D2s_v3"
admin_username = "adminuser"
admin_password = "P@$$w0rd1234!"
network_interface_ids = [
diff --git a/azurerm/internal/services/compute/windows_virtual_machine_scale_set_images_resource_test.go b/azurerm/internal/services/compute/windows_virtual_machine_scale_set_images_resource_test.go
index 9ba6f676e0dd4..a98c603dd33c0 100644
--- a/azurerm/internal/services/compute/windows_virtual_machine_scale_set_images_resource_test.go
+++ b/azurerm/internal/services/compute/windows_virtual_machine_scale_set_images_resource_test.go
@@ -37,6 +37,34 @@ func TestAccWindowsVirtualMachineScaleSet_imagesAutomaticUpdate(t *testing.T) {
})
}
+func TestAccWindowsVirtualMachineScaleSet_imagesDisableAutomaticUpdate(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_windows_virtual_machine_scale_set", "test")
+ r := WindowsVirtualMachineScaleSetResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.imagesDisableAutomaticUpdate(data, "2016-Datacenter"),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(
+ "admin_password",
+ "enable_automatic_updates",
+ ),
+ {
+ Config: r.imagesDisableAutomaticUpdate(data, "2019-Datacenter"),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(
+ "admin_password",
+ "enable_automatic_updates",
+ ),
+ })
+}
+
func TestAccWindowsVirtualMachineScaleSet_imagesFromCapturedVirtualMachineImage(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_windows_virtual_machine_scale_set", "test")
r := WindowsVirtualMachineScaleSetResource{}
@@ -287,6 +315,57 @@ resource "azurerm_windows_virtual_machine_scale_set" "test" {
`, r.template(data), data.RandomInteger, data.RandomInteger, version)
}
+func (r WindowsVirtualMachineScaleSetResource) imagesDisableAutomaticUpdate(data acceptance.TestData, version string) string {
+ return fmt.Sprintf(`
+%s
+resource "azurerm_windows_virtual_machine_scale_set" "test" {
+ name = local.vm_name
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ sku = "Standard_F2"
+ instances = 1
+ admin_username = "adminuser"
+ admin_password = "P@ssword1234!"
+ upgrade_mode = "Automatic"
+
+ source_image_reference {
+ publisher = "MicrosoftWindowsServer"
+ offer = "WindowsServer"
+ sku = "%s"
+ version = "latest"
+ }
+
+ os_disk {
+ storage_account_type = "Standard_LRS"
+ caching = "ReadWrite"
+ }
+
+ network_interface {
+ name = "example"
+ primary = true
+
+ ip_configuration {
+ name = "internal"
+ primary = true
+ subnet_id = azurerm_subnet.test.id
+ }
+ }
+
+ automatic_os_upgrade_policy {
+ disable_automatic_rollback = false
+ enable_automatic_os_upgrade = false
+ }
+
+ rolling_upgrade_policy {
+ max_batch_instance_percent = 100
+ max_unhealthy_instance_percent = 100
+ max_unhealthy_upgraded_instance_percent = 100
+ pause_time_between_batches = "PT30S"
+ }
+}
+`, r.template(data), version)
+}
+
func (r WindowsVirtualMachineScaleSetResource) imagesFromVirtualMachinePrerequisites(data acceptance.TestData) string {
return fmt.Sprintf(`
%s
diff --git a/azurerm/internal/services/compute/windows_virtual_machine_scale_set_network_resource_test.go b/azurerm/internal/services/compute/windows_virtual_machine_scale_set_network_resource_test.go
index a23c94a3c8728..e2549c17c4640 100644
--- a/azurerm/internal/services/compute/windows_virtual_machine_scale_set_network_resource_test.go
+++ b/azurerm/internal/services/compute/windows_virtual_machine_scale_set_network_resource_test.go
@@ -640,13 +640,14 @@ resource "azurerm_subnet" "other" {
}
resource "azurerm_windows_virtual_machine_scale_set" "test" {
- name = local.vm_name
- resource_group_name = azurerm_resource_group.test.name
- location = azurerm_resource_group.test.location
- sku = "Standard_F2"
- instances = 1
- admin_username = "adminuser"
- admin_password = "P@ssword1234!"
+ name = local.vm_name
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ sku = "Standard_F2"
+ instances = 1
+ admin_username = "adminuser"
+ admin_password = "P@ssword1234!"
+ computer_name_prefix = "testvm"
source_image_reference {
publisher = "MicrosoftWindowsServer"
diff --git a/azurerm/internal/services/compute/windows_virtual_machine_scale_set_other_resource_test.go b/azurerm/internal/services/compute/windows_virtual_machine_scale_set_other_resource_test.go
index b76f18404c3e3..6bcd35d55bcaf 100644
--- a/azurerm/internal/services/compute/windows_virtual_machine_scale_set_other_resource_test.go
+++ b/azurerm/internal/services/compute/windows_virtual_machine_scale_set_other_resource_test.go
@@ -613,8 +613,7 @@ func TestAccWindowsVirtualMachineScaleSet_otherEncryptionAtHostEnabled(t *testin
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
})
}
@@ -629,24 +628,21 @@ func TestAccWindowsVirtualMachineScaleSet_otherEncryptionAtHostEnabledUpdate(t *
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
{
Config: r.otherEncryptionAtHostEnabled(data, false),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
{
Config: r.otherEncryptionAtHostEnabled(data, true),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
})
}
@@ -661,8 +657,7 @@ func TestAccWindowsVirtualMachineScaleSet_otherEncryptionAtHostEnabledWithCMK(t
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
})
}
@@ -677,8 +672,7 @@ func TestAccWindowsVirtualMachineScaleSet_otherPlatformFaultDomainCount(t *testi
check.That(data.ResourceName).ExistsInAzure(r),
),
},
- // TODO - extension should be changed to extension.0.protected_settings when either binary testing is available or this feature is promoted from beta
- data.ImportStep("admin_password", "extension"),
+ data.ImportStep("admin_password", "extension.0.protected_settings"),
})
}
@@ -736,6 +730,42 @@ func TestAccWindowsVirtualMachineScaleSet_otherHealthProbeUpdate(t *testing.T) {
})
}
+func TestAccWindowsVirtualMachineScaleSet_otherLicenseTypeUpdated(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_windows_virtual_machine_scale_set", "test")
+ r := WindowsVirtualMachineScaleSetResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.otherLicenseTypeDefault(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(
+ "admin_password",
+ ),
+ {
+ Config: r.otherLicenseType(data, "Windows_Client"),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("license_type").HasValue("Windows_Client"),
+ ),
+ },
+ data.ImportStep(
+ "admin_password",
+ ),
+ {
+ Config: r.otherLicenseTypeDefault(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(
+ "admin_password",
+ ),
+ })
+}
+
func (WindowsVirtualMachineScaleSetResource) otherAdditionalUnattendContent(data acceptance.TestData) string {
template := WindowsVirtualMachineScaleSetResource{}.template(data)
return fmt.Sprintf(`
@@ -2794,3 +2824,82 @@ resource "azurerm_windows_virtual_machine_scale_set" "test" {
}
`, r.template(data), data.RandomInteger)
}
+
+func (r WindowsVirtualMachineScaleSetResource) otherLicenseTypeDefault(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_windows_virtual_machine_scale_set" "test" {
+ name = local.vm_name
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ sku = "Standard_F2"
+ instances = 1
+ admin_username = "adminuser"
+ admin_password = "P@ssword1234!"
+
+ source_image_reference {
+ publisher = "MicrosoftWindowsServer"
+ offer = "WindowsServer"
+ sku = "2019-Datacenter"
+ version = "latest"
+ }
+
+ os_disk {
+ storage_account_type = "Standard_LRS"
+ caching = "ReadWrite"
+ }
+
+ network_interface {
+ name = "example"
+ primary = true
+
+ ip_configuration {
+ name = "internal"
+ primary = true
+ subnet_id = azurerm_subnet.test.id
+ }
+ }
+}
+`, r.template(data))
+}
+
+func (r WindowsVirtualMachineScaleSetResource) otherLicenseType(data acceptance.TestData, licenseType string) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_windows_virtual_machine_scale_set" "test" {
+ name = local.vm_name
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ sku = "Standard_F2"
+ instances = 1
+ admin_username = "adminuser"
+ admin_password = "P@ssword1234!"
+ license_type = %q
+
+ source_image_reference {
+ publisher = "MicrosoftWindowsServer"
+ offer = "WindowsServer"
+ sku = "2019-Datacenter"
+ version = "latest"
+ }
+
+ os_disk {
+ storage_account_type = "Standard_LRS"
+ caching = "ReadWrite"
+ }
+
+ network_interface {
+ name = "example"
+ primary = true
+
+ ip_configuration {
+ name = "internal"
+ primary = true
+ subnet_id = azurerm_subnet.test.id
+ }
+ }
+}
+`, r.template(data), licenseType)
+}
diff --git a/azurerm/internal/services/compute/windows_virtual_machine_scale_set_resource.go b/azurerm/internal/services/compute/windows_virtual_machine_scale_set_resource.go
index 663344a2dc879..80721ec59d0df 100644
--- a/azurerm/internal/services/compute/windows_virtual_machine_scale_set_resource.go
+++ b/azurerm/internal/services/compute/windows_virtual_machine_scale_set_resource.go
@@ -163,12 +163,18 @@ func resourceWindowsVirtualMachineScaleSet() *schema.Resource {
"license_type": {
Type: schema.TypeString,
Optional: true,
- ForceNew: true,
ValidateFunc: validation.StringInSlice([]string{
"None",
"Windows_Client",
"Windows_Server",
}, false),
+ DiffSuppressFunc: func(_, old, new string, _ *schema.ResourceData) bool {
+ if (old == "None" && new == "") || (old == "" && new == "None") {
+ return true
+ }
+
+ return false
+ },
},
"max_bid_price": {
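The `DiffSuppressFunc` added to `license_type` above treats an unset value and an explicit "None" as equivalent, so removing the attribute does not produce a spurious plan diff. In isolation (a sketch, independent of the Terraform SDK):

```go
package main

import "fmt"

// suppressLicenseTypeDiff reproduces the suppression logic: a change
// between "" (unset) and "None" is not a real change, so no diff is
// shown for that transition.
func suppressLicenseTypeDiff(old, new string) bool {
	return (old == "None" && new == "") || (old == "" && new == "None")
}

func main() {
	fmt.Println(suppressLicenseTypeDiff("None", ""))               // true
	fmt.Println(suppressLicenseTypeDiff("Windows_Server", "None")) // false
}
```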
@@ -433,7 +439,7 @@ func resourceWindowsVirtualMachineScaleSetCreate(d *schema.ResourceData, meta in
hasHealthExtension := false
if vmExtensionsRaw, ok := d.GetOk("extension"); ok {
- virtualMachineProfile.ExtensionProfile, hasHealthExtension, err = expandVirtualMachineScaleSetExtensions(vmExtensionsRaw.([]interface{}))
+ virtualMachineProfile.ExtensionProfile, hasHealthExtension, err = expandVirtualMachineScaleSetExtensions(vmExtensionsRaw.(*schema.Set).List())
if err != nil {
return err
}
@@ -448,8 +454,10 @@ func resourceWindowsVirtualMachineScaleSetCreate(d *schema.ResourceData, meta in
// otherwise the service return the error:
// Automatic OS Upgrade is not supported for this Virtual Machine Scale Set because a health probe or health extension was not specified.
- if upgradeMode == compute.Automatic && len(automaticOSUpgradePolicyRaw) > 0 && (healthProbeId == "" && !hasHealthExtension) {
- return fmt.Errorf("`health_probe_id` must be set or a health extension must be specified when `upgrade_mode` is set to %q and `automatic_os_upgrade_policy` block exists", string(upgradeMode))
+ if upgradeMode == compute.Automatic && len(automaticOSUpgradePolicyRaw) > 0 {
+ if *automaticOSUpgradePolicy.EnableAutomaticOSUpgrade && (healthProbeId == "" && !hasHealthExtension) {
+ return fmt.Errorf("`health_probe_id` must be set or a health extension must be specified when `upgrade_mode` is set to %q and `automatic_os_upgrade_policy` block exists", string(upgradeMode))
+ }
}
// otherwise the service return the error:
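The tightened check above only requires a health probe or health extension when automatic OS upgrade is actually enabled, not merely when the `automatic_os_upgrade_policy` block exists. As a standalone predicate (a hypothetical sketch of that condition):

```go
package main

import "fmt"

// validateAutomaticOSUpgrade mirrors the narrowed condition in the
// diff: the health requirement applies only when upgrade_mode is
// Automatic, the policy block is present, AND EnableAutomaticOSUpgrade
// is true.
func validateAutomaticOSUpgrade(upgradeModeAutomatic, policyPresent, enableUpgrade bool, healthProbeID string, hasHealthExtension bool) error {
	if upgradeModeAutomatic && policyPresent && enableUpgrade && healthProbeID == "" && !hasHealthExtension {
		return fmt.Errorf("`health_probe_id` must be set or a health extension must be specified")
	}
	return nil
}

func main() {
	// Policy present but upgrades disabled: no health requirement.
	fmt.Println(validateAutomaticOSUpgrade(true, true, false, "", false)) // <nil>
	// Upgrades enabled with no probe and no extension: error.
	fmt.Println(validateAutomaticOSUpgrade(true, true, true, "", false))
}
```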
@@ -459,11 +467,7 @@ func resourceWindowsVirtualMachineScaleSetCreate(d *schema.ResourceData, meta in
}
enableAutomaticUpdates := d.Get("enable_automatic_updates").(bool)
- if upgradeMode != compute.Automatic {
- virtualMachineProfile.OsProfile.WindowsConfiguration.EnableAutomaticUpdates = utils.Bool(enableAutomaticUpdates)
- } else if !enableAutomaticUpdates {
- return fmt.Errorf("`enable_automatic_updates` must be set to `true` when `upgrade_mode` is set to `Automatic`")
- }
+ virtualMachineProfile.OsProfile.WindowsConfiguration.EnableAutomaticUpdates = utils.Bool(enableAutomaticUpdates)
if v, ok := d.Get("max_bid_price").(float64); ok && v > 0 {
if priority != compute.Spot {
@@ -811,6 +815,17 @@ func resourceWindowsVirtualMachineScaleSetUpdate(d *schema.ResourceData, meta in
}
}
+ if d.HasChange("license_type") {
+ license := d.Get("license_type").(string)
+ if license == "" {
+ // The API only allows the license type to be unspecified on create; an empty string is rejected on update.
+ // Removing the license_type attribute from the Terraform configuration after it was set to a value other than
+ // 'None' would therefore cause an endless diff loop on apply, so we explicitly set the value to 'None' here
+ // to allow the update to go through.
+ license = "None"
+ }
+ updateProps.VirtualMachineProfile.LicenseType = &license
+ }
+
if d.HasChange("automatic_instance_repair") {
automaticRepairsPolicyRaw := d.Get("automatic_instance_repair").([]interface{})
automaticRepairsPolicy := ExpandVirtualMachineScaleSetAutomaticRepairsPolicy(automaticRepairsPolicyRaw)
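The update path above normalizes an empty `license_type` to "None" before calling the API, since the update API rejects an empty string. A minimal sketch of that normalization (hypothetical helper for illustration):

```go
package main

import "fmt"

// normalizeLicenseType maps an unset license_type to "None" before an
// update call. This lets users remove the attribute from configuration
// without triggering an endless diff loop on apply.
func normalizeLicenseType(license string) string {
	if license == "" {
		return "None"
	}
	return license
}

func main() {
	fmt.Println(normalizeLicenseType(""))               // None
	fmt.Println(normalizeLicenseType("Windows_Client")) // Windows_Client
}
```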
@@ -853,7 +868,7 @@ func resourceWindowsVirtualMachineScaleSetUpdate(d *schema.ResourceData, meta in
if d.HasChanges("extension", "extensions_time_budget") {
updateInstances = true
- extensionProfile, _, err := expandVirtualMachineScaleSetExtensions(d.Get("extension").([]interface{}))
+ extensionProfile, _, err := expandVirtualMachineScaleSetExtensions(d.Get("extension").(*schema.Set).List())
if err != nil {
return err
}
diff --git a/azurerm/internal/services/consumption/client/client.go b/azurerm/internal/services/consumption/client/client.go
new file mode 100644
index 0000000000000..f0047cb4bbc44
--- /dev/null
+++ b/azurerm/internal/services/consumption/client/client.go
@@ -0,0 +1,19 @@
+package client
+
+import (
+ "github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/common"
+)
+
+type Client struct {
+ BudgetsClient *consumption.BudgetsClient
+}
+
+func NewClient(o *common.ClientOptions) *Client {
+ budgetsClient := consumption.NewBudgetsClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
+ o.ConfigureClient(&budgetsClient.Client, o.ResourceManagerAuthorizer)
+
+ return &Client{
+ BudgetsClient: &budgetsClient,
+ }
+}
diff --git a/azurerm/internal/services/consumption/consumption_budget_common.go b/azurerm/internal/services/consumption/consumption_budget_common.go
new file mode 100644
index 0000000000000..7636ae514005d
--- /dev/null
+++ b/azurerm/internal/services/consumption/consumption_budget_common.go
@@ -0,0 +1,110 @@
+package consumption
+
+import (
+ "fmt"
+
+ "github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/shopspring/decimal"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+func resourceArmConsumptionBudgetRead(d *schema.ResourceData, meta interface{}, scope, name string) error {
+ client := meta.(*clients.Client).Consumption.BudgetsClient
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ resp, err := client.Get(ctx, scope, name)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("error making read request on Azure Consumption Budget %q for scope %q: %+v", name, scope, err)
+ }
+
+ d.Set("name", resp.Name)
+ if resp.Amount != nil {
+ amount, _ := resp.Amount.Float64()
+ d.Set("amount", amount)
+ }
+ d.Set("time_grain", string(resp.TimeGrain))
+ d.Set("time_period", FlattenConsumptionBudgetTimePeriod(resp.TimePeriod))
+ d.Set("notification", schema.NewSet(schema.HashResource(SchemaConsumptionBudgetNotificationElement()), FlattenConsumptionBudgetNotifications(resp.Notifications)))
+ d.Set("filter", FlattenConsumptionBudgetFilter(resp.Filter))
+
+ return nil
+}
+
+func resourceArmConsumptionBudgetDelete(d *schema.ResourceData, meta interface{}, scope string) error {
+ client := meta.(*clients.Client).Consumption.BudgetsClient
+ ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ name := d.Get("name").(string)
+ resp, err := client.Delete(ctx, scope, name)
+
+ if err != nil {
+ if !utils.ResponseWasNotFound(resp) {
+ return fmt.Errorf("error issuing delete request on Azure Consumption Budget %q for scope %q: %+v", name, scope, err)
+ }
+ }
+
+ return nil
+}
+
+func resourceArmConsumptionBudgetCreateUpdate(d *schema.ResourceData, meta interface{}, resourceName, scope string) error {
+ client := meta.(*clients.Client).Consumption.BudgetsClient
+ ctx, cancel := timeouts.ForCreateUpdate(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ name := d.Get("name").(string)
+
+ if d.IsNewResource() {
+ existing, err := client.Get(ctx, scope, name)
+ if err != nil {
+ if !utils.ResponseWasNotFound(existing.Response) {
+ return fmt.Errorf("error checking for presence of existing Consumption Budget %q for scope %q: %+v", name, scope, err)
+ }
+ }
+
+ if existing.ID != nil && *existing.ID != "" {
+ return tf.ImportAsExistsError(resourceName, *existing.ID)
+ }
+ }
+
+ amount := decimal.NewFromFloat(d.Get("amount").(float64))
+ timePeriod, err := ExpandConsumptionBudgetTimePeriod(d.Get("time_period").([]interface{}))
+ if err != nil {
+ return fmt.Errorf("error expanding `time_period`: %+v", err)
+ }
+
+ // The Consumption Budget API requires the category type field to be set in a budget's properties.
+ // 'Cost' is the only valid Budget type today according to the API spec.
+ category := "Cost"
+ parameters := consumption.Budget{
+ Name: utils.String(name),
+ BudgetProperties: &consumption.BudgetProperties{
+ Amount: &amount,
+ Category: &category,
+ Filter: ExpandConsumptionBudgetFilter(d.Get("filter").([]interface{})),
+ Notifications: ExpandConsumptionBudgetNotifications(d.Get("notification").(*schema.Set).List()),
+ TimeGrain: consumption.TimeGrainType(d.Get("time_grain").(string)),
+ TimePeriod: timePeriod,
+ },
+ }
+
+ read, err := client.CreateOrUpdate(ctx, scope, name, parameters)
+ if err != nil {
+ return err
+ }
+
+ if read.ID == nil {
+ return fmt.Errorf("cannot read Azure Consumption Budget %q for scope %q", name, scope)
+ }
+
+ return nil
+}
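The create path in the new common file follows the provider's usual requires-import guard: for a new resource, read first, and fail with an import error if a budget with the same name already exists at the scope. Schematically (the ID string and error message here are hypothetical stand-ins for the SDK response):

```go
package main

import (
	"errors"
	"fmt"
)

// requiresImport mirrors the d.IsNewResource() guard in the create
// function: if this is a new resource and a budget already exists at
// the scope, the user must import it rather than re-create it.
func requiresImport(isNewResource bool, existingID string) error {
	if isNewResource && existingID != "" {
		return errors.New("a resource with this ID already exists; import it into state")
	}
	return nil
}

func main() {
	fmt.Println(requiresImport(true, "/subscriptions/xxx/resourceGroups/rg1/.../budget1")) // error
	fmt.Println(requiresImport(false, "/subscriptions/xxx/resourceGroups/rg1/.../budget1")) // <nil>
}
```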
diff --git a/azurerm/internal/services/consumption/consumption_budget_resource_group_resource.go b/azurerm/internal/services/consumption/consumption_budget_resource_group_resource.go
new file mode 100644
index 0000000000000..ae057007e2d68
--- /dev/null
+++ b/azurerm/internal/services/consumption/consumption_budget_resource_group_resource.go
@@ -0,0 +1,79 @@
+package consumption
+
+import (
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption/parse"
+ resourceParse "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/resource/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
+)
+
+func resourceArmConsumptionBudgetResourceGroup() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceArmConsumptionBudgetResourceGroupCreateUpdate,
+ Read: resourceArmConsumptionBudgetResourceGroupRead,
+ Update: resourceArmConsumptionBudgetResourceGroupCreateUpdate,
+ Delete: resourceArmConsumptionBudgetResourceGroupDelete,
+ Importer: pluginsdk.ImporterValidatingResourceId(func(id string) error {
+ _, err := parse.ConsumptionBudgetResourceGroupID(id)
+ return err
+ }),
+
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(30 * time.Minute),
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ Update: schema.DefaultTimeout(30 * time.Minute),
+ Delete: schema.DefaultTimeout(30 * time.Minute),
+ },
+
+ Schema: SchemaConsumptionBudgetResourceGroupResource(),
+ }
+}
+
+func resourceArmConsumptionBudgetResourceGroupCreateUpdate(d *schema.ResourceData, meta interface{}) error {
+ name := d.Get("name").(string)
+ resourceGroupId, err := resourceParse.ResourceGroupID(d.Get("resource_group_id").(string))
+ if err != nil {
+ return err
+ }
+
+ err = resourceArmConsumptionBudgetCreateUpdate(d, meta, consumptionBudgetResourceGroupName, resourceGroupId.ID())
+ if err != nil {
+ return err
+ }
+
+ d.SetId(parse.NewConsumptionBudgetResourceGroupID(resourceGroupId.SubscriptionId, resourceGroupId.ResourceGroup, name).ID())
+
+ return resourceArmConsumptionBudgetResourceGroupRead(d, meta)
+}
+
+func resourceArmConsumptionBudgetResourceGroupRead(d *schema.ResourceData, meta interface{}) error {
+ consumptionBudgetId, err := parse.ConsumptionBudgetResourceGroupID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ resourceGroupId := resourceParse.NewResourceGroupID(consumptionBudgetId.SubscriptionId, consumptionBudgetId.ResourceGroup)
+
+ err = resourceArmConsumptionBudgetRead(d, meta, resourceGroupId.ID(), consumptionBudgetId.BudgetName)
+ if err != nil {
+ return err
+ }
+
+ // The scope of a Resource Group consumption budget is the Resource Group ID
+ d.Set("resource_group_id", resourceGroupId.ID())
+
+ return nil
+}
+
+func resourceArmConsumptionBudgetResourceGroupDelete(d *schema.ResourceData, meta interface{}) error {
+ consumptionBudgetId, err := parse.ConsumptionBudgetResourceGroupID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ resourceGroupId := resourceParse.NewResourceGroupID(consumptionBudgetId.SubscriptionId, consumptionBudgetId.ResourceGroup)
+
+ return resourceArmConsumptionBudgetDelete(d, meta, resourceGroupId.ID())
+}
diff --git a/azurerm/internal/services/consumption/consumption_budget_resource_group_resource_test.go b/azurerm/internal/services/consumption/consumption_budget_resource_group_resource_test.go
new file mode 100644
index 0000000000000..385ad9dda0506
--- /dev/null
+++ b/azurerm/internal/services/consumption/consumption_budget_resource_group_resource_test.go
@@ -0,0 +1,443 @@
+package consumption_test
+
+import (
+ "context"
+ "fmt"
+ "testing"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/hashicorp/terraform-plugin-sdk/terraform"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption/parse"
+ resourceParse "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/resource/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+type ConsumptionBudgetResourceGroupResource struct{}
+
+func TestAccConsumptionBudgetResourceGroup_basic(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_resource_group", "test")
+ r := ConsumptionBudgetResourceGroupResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccConsumptionBudgetResourceGroup_basicUpdate(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_resource_group", "test")
+ r := ConsumptionBudgetResourceGroupResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.basicUpdate(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccConsumptionBudgetResourceGroup_requiresImport(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_resource_group", "test")
+ r := ConsumptionBudgetResourceGroupResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ {
+ Config: r.requiresImport(data),
+ ExpectError: acceptance.RequiresImportError("azurerm_consumption_budget_resource_group"),
+ },
+ })
+}
+
+func TestAccConsumptionBudgetResourceGroup_complete(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_resource_group", "test")
+ r := ConsumptionBudgetResourceGroupResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.complete(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccConsumptionBudgetResourceGroup_completeUpdate(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_resource_group", "test")
+ r := ConsumptionBudgetResourceGroupResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.complete(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.completeUpdate(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func (ConsumptionBudgetResourceGroupResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
+ id, err := parse.ConsumptionBudgetResourceGroupID(state.ID)
+ if err != nil {
+ return nil, err
+ }
+
+ resourceGroupId := resourceParse.NewResourceGroupID(id.SubscriptionId, id.ResourceGroup)
+ resp, err := clients.Consumption.BudgetsClient.Get(ctx, resourceGroupId.ID(), id.BudgetName)
+ if err != nil {
+ return nil, fmt.Errorf("retrieving %s: %v", id.String(), err)
+ }
+
+ return utils.Bool(resp.BudgetProperties != nil), nil
+}
+
+func (ConsumptionBudgetResourceGroupResource) basic(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%d"
+ location = "%s"
+}
+
+resource "azurerm_consumption_budget_resource_group" "test" {
+ name = "acctestconsumptionbudgetresourcegroup-%d"
+ resource_group_id = azurerm_resource_group.test.id
+
+ amount = 1000
+ time_grain = "Monthly"
+
+ time_period {
+ start_date = "%s"
+ }
+
+ filter {
+ tag {
+ name = "foo"
+ values = [
+ "bar",
+ ]
+ }
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, consumptionBudgetTestStartDate().Format(time.RFC3339))
+}
+
+func (ConsumptionBudgetResourceGroupResource) basicUpdate(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%d"
+ location = "%s"
+}
+
+resource "azurerm_consumption_budget_resource_group" "test" {
+ name = "acctestconsumptionbudgetresourcegroup-%d"
+ resource_group_id = azurerm_resource_group.test.id
+
+ // Changed the amount from 1000 to 3000
+ amount = 3000
+ time_grain = "Monthly"
+
+ // Added end_date
+ time_period {
+ start_date = "%s"
+ end_date = "%s"
+ }
+
+ // Removed filter
+
+ // Changed threshold and operator
+ notification {
+ enabled = true
+ threshold = 95.0
+ operator = "GreaterThan"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, consumptionBudgetTestStartDate().Format(time.RFC3339), consumptionBudgetTestStartDate().AddDate(1, 1, 0).Format(time.RFC3339))
+}
+
+func (ConsumptionBudgetResourceGroupResource) requiresImport(data acceptance.TestData) string {
+ template := ConsumptionBudgetResourceGroupResource{}.basic(data)
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_consumption_budget_resource_group" "import" {
+ name = azurerm_consumption_budget_resource_group.test.name
+ resource_group_id = azurerm_resource_group.test.id
+
+ amount = azurerm_consumption_budget_resource_group.test.amount
+ time_grain = azurerm_consumption_budget_resource_group.test.time_grain
+
+ time_period {
+ start_date = "%s"
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+`, template, consumptionBudgetTestStartDate().Format(time.RFC3339))
+}
+
+func (ConsumptionBudgetResourceGroupResource) complete(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%d"
+ location = "%s"
+}
+
+resource "azurerm_monitor_action_group" "test" {
+ name = "acctestAG-%d"
+ resource_group_name = azurerm_resource_group.test.name
+ short_name = "acctestAG"
+}
+
+resource "azurerm_consumption_budget_resource_group" "test" {
+ name = "acctestconsumptionbudgetresourcegroup-%d"
+ resource_group_id = azurerm_resource_group.test.id
+
+ amount = 1000
+ time_grain = "Monthly"
+
+ time_period {
+ start_date = "%s"
+ end_date = "%s"
+ }
+
+ filter {
+ dimension {
+ name = "ResourceGroupName"
+ values = [
+ azurerm_resource_group.test.name,
+ ]
+ }
+
+ dimension {
+ name = "ResourceId"
+ values = [
+ azurerm_monitor_action_group.test.id,
+ ]
+ }
+
+ tag {
+ name = "foo"
+ values = [
+ "bar",
+ "baz",
+ ]
+ }
+
+ not {
+ tag {
+ name = "zip"
+ values = [
+ "zap",
+ "zop"
+ ]
+ }
+ }
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+
+ contact_groups = [
+ azurerm_monitor_action_group.test.id,
+ ]
+
+ contact_roles = [
+ "Owner",
+ ]
+ }
+
+ notification {
+ enabled = false
+ threshold = 100.0
+ operator = "GreaterThan"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger, consumptionBudgetTestStartDate().Format(time.RFC3339), consumptionBudgetTestStartDate().AddDate(1, 1, 0).Format(time.RFC3339))
+}
+
+func (ConsumptionBudgetResourceGroupResource) completeUpdate(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%d"
+ location = "%s"
+}
+
+resource "azurerm_monitor_action_group" "test" {
+ name = "acctestAG-%d"
+ resource_group_name = azurerm_resource_group.test.name
+ short_name = "acctestAG"
+}
+
+resource "azurerm_consumption_budget_resource_group" "test" {
+ name = "acctestconsumptionbudgetresourcegroup-%d"
+ resource_group_id = azurerm_resource_group.test.id
+
+ // Changed the amount from 1000 to 2000
+ amount = 2000
+ time_grain = "Monthly"
+
+ // Removed end_date
+ time_period {
+ start_date = "%s"
+ }
+
+ filter {
+ dimension {
+ name = "ResourceGroupName"
+ values = [
+ azurerm_resource_group.test.name,
+ ]
+ }
+
+ tag {
+ name = "foo"
+ values = [
+ "bar",
+ "baz",
+ ]
+ }
+
+ // Added tag: zip
+ tag {
+ name = "zip"
+ values = [
+ "zap",
+ "zop",
+ ]
+ }
+
+ // Removed not block
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ // Added baz@example.com
+ "baz@example.com",
+ "foo@example.com",
+ "bar@example.com",
+ ]
+
+ contact_groups = [
+ azurerm_monitor_action_group.test.id,
+ ]
+ // Removed contact_roles
+ }
+
+ notification {
+ // Set enabled to true
+ enabled = true
+ threshold = 100.0
+ // Changed the operator from GreaterThan to GreaterThanOrEqualTo
+ operator = "GreaterThanOrEqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+
+ // Added contact_groups
+ contact_groups = [
+ azurerm_monitor_action_group.test.id,
+ ]
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger, consumptionBudgetTestStartDate().Format(time.RFC3339))
+}
diff --git a/azurerm/internal/services/consumption/consumption_budget_subscription_resource.go b/azurerm/internal/services/consumption/consumption_budget_subscription_resource.go
new file mode 100644
index 0000000000000..3d43f0c2f5ba7
--- /dev/null
+++ b/azurerm/internal/services/consumption/consumption_budget_subscription_resource.go
@@ -0,0 +1,75 @@
+package consumption
+
+import (
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption/parse"
+ subscriptionParse "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/subscription/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
+)
+
+func resourceArmConsumptionBudgetSubscription() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceArmConsumptionBudgetSubscriptionCreateUpdate,
+ Read: resourceArmConsumptionBudgetSubscriptionRead,
+ Update: resourceArmConsumptionBudgetSubscriptionCreateUpdate,
+ Delete: resourceArmConsumptionBudgetSubscriptionDelete,
+ Importer: pluginsdk.ImporterValidatingResourceId(func(id string) error {
+ _, err := parse.ConsumptionBudgetSubscriptionID(id)
+ return err
+ }),
+
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(30 * time.Minute),
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ Update: schema.DefaultTimeout(30 * time.Minute),
+ Delete: schema.DefaultTimeout(30 * time.Minute),
+ },
+
+ Schema: SchemaConsumptionBudgetSubscriptionResource(),
+ }
+}
+
+func resourceArmConsumptionBudgetSubscriptionCreateUpdate(d *schema.ResourceData, meta interface{}) error {
+ name := d.Get("name").(string)
+ subscriptionId := subscriptionParse.NewSubscriptionId(d.Get("subscription_id").(string))
+
+ err := resourceArmConsumptionBudgetCreateUpdate(d, meta, consumptionBudgetSubscriptionName, subscriptionId.ID())
+ if err != nil {
+ return err
+ }
+
+ d.SetId(parse.NewConsumptionBudgetSubscriptionID(subscriptionId.SubscriptionID, name).ID())
+
+ return resourceArmConsumptionBudgetSubscriptionRead(d, meta)
+}
+
+func resourceArmConsumptionBudgetSubscriptionRead(d *schema.ResourceData, meta interface{}) error {
+ consumptionBudgetId, err := parse.ConsumptionBudgetSubscriptionID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ subscriptionId := subscriptionParse.NewSubscriptionId(consumptionBudgetId.SubscriptionId)
+
+ err = resourceArmConsumptionBudgetRead(d, meta, subscriptionId.ID(), consumptionBudgetId.BudgetName)
+ if err != nil {
+ return err
+ }
+
+ d.Set("subscription_id", consumptionBudgetId.SubscriptionId)
+
+ return nil
+}
+
+func resourceArmConsumptionBudgetSubscriptionDelete(d *schema.ResourceData, meta interface{}) error {
+ consumptionBudgetId, err := parse.ConsumptionBudgetSubscriptionID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ subscriptionId := subscriptionParse.NewSubscriptionId(consumptionBudgetId.SubscriptionId)
+
+ return resourceArmConsumptionBudgetDelete(d, meta, subscriptionId.ID())
+}
diff --git a/azurerm/internal/services/consumption/consumption_budget_subscription_resource_test.go b/azurerm/internal/services/consumption/consumption_budget_subscription_resource_test.go
new file mode 100644
index 0000000000000..34b4e36ba6e10
--- /dev/null
+++ b/azurerm/internal/services/consumption/consumption_budget_subscription_resource_test.go
@@ -0,0 +1,439 @@
+package consumption_test
+
+import (
+ "context"
+ "fmt"
+ "testing"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/hashicorp/terraform-plugin-sdk/terraform"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+func consumptionBudgetTestStartDate() time.Time {
+ utcNow := time.Now().UTC()
+ startDate := time.Date(utcNow.Year(), utcNow.Month(), 1, 0, 0, 0, 0, utcNow.Location())
+
+ return startDate
+}
+
+type ConsumptionBudgetSubscriptionResource struct{}
+
+func TestAccConsumptionBudgetSubscription_basic(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_subscription", "test")
+ r := ConsumptionBudgetSubscriptionResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccConsumptionBudgetSubscription_basicUpdate(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_subscription", "test")
+ r := ConsumptionBudgetSubscriptionResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.basicUpdate(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccConsumptionBudgetSubscription_requiresImport(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_subscription", "test")
+ r := ConsumptionBudgetSubscriptionResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ {
+ Config: r.requiresImport(data),
+ ExpectError: acceptance.RequiresImportError("azurerm_consumption_budget_subscription"),
+ },
+ })
+}
+
+func TestAccConsumptionBudgetSubscription_complete(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_subscription", "test")
+ r := ConsumptionBudgetSubscriptionResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.complete(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccConsumptionBudgetSubscription_completeUpdate(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_consumption_budget_subscription", "test")
+ r := ConsumptionBudgetSubscriptionResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.complete(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.completeUpdate(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func (ConsumptionBudgetSubscriptionResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
+ id, err := parse.ConsumptionBudgetSubscriptionID(state.ID)
+ if err != nil {
+ return nil, err
+ }
+
+ scope := fmt.Sprintf("/subscriptions/%s", id.SubscriptionId)
+ resp, err := clients.Consumption.BudgetsClient.Get(ctx, scope, id.BudgetName)
+ if err != nil {
+ return nil, fmt.Errorf("retrieving %s: %v", id.String(), err)
+ }
+
+ return utils.Bool(resp.BudgetProperties != nil), nil
+}
+
+func (ConsumptionBudgetSubscriptionResource) basic(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_consumption_budget_subscription" "test" {
+ name = "acctestconsumptionbudgetsubscription-%d"
+ subscription_id = data.azurerm_subscription.current.subscription_id
+
+ amount = 1000
+ time_grain = "Monthly"
+
+ time_period {
+ start_date = "%s"
+ }
+
+ filter {
+ tag {
+ name = "foo"
+ values = [
+ "bar"
+ ]
+ }
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+`, data.RandomInteger, consumptionBudgetTestStartDate().Format(time.RFC3339))
+}
+
+func (ConsumptionBudgetSubscriptionResource) basicUpdate(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_consumption_budget_subscription" "test" {
+ name = "acctestconsumptionbudgetsubscription-%d"
+ subscription_id = data.azurerm_subscription.current.subscription_id
+
+ // Changed the amount from 1000 to 3000
+ amount = 3000
+ time_grain = "Monthly"
+
+ // Added end_date
+ time_period {
+ start_date = "%s"
+ end_date = "%s"
+ }
+
+ // Removed filter
+
+ // Changed threshold and operator
+ notification {
+ enabled = true
+ threshold = 95.0
+ operator = "GreaterThan"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+`, data.RandomInteger, consumptionBudgetTestStartDate().Format(time.RFC3339), consumptionBudgetTestStartDate().AddDate(1, 1, 0).Format(time.RFC3339))
+}
+
+func (ConsumptionBudgetSubscriptionResource) requiresImport(data acceptance.TestData) string {
+ template := ConsumptionBudgetSubscriptionResource{}.basic(data)
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_consumption_budget_subscription" "import" {
+ name = azurerm_consumption_budget_subscription.test.name
+ subscription_id = azurerm_consumption_budget_subscription.test.subscription_id
+
+ amount = azurerm_consumption_budget_subscription.test.amount
+ time_grain = azurerm_consumption_budget_subscription.test.time_grain
+
+ time_period {
+ start_date = "%s"
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+`, template, consumptionBudgetTestStartDate().Format(time.RFC3339))
+}
+
+func (ConsumptionBudgetSubscriptionResource) complete(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%d"
+ location = "%s"
+}
+
+resource "azurerm_monitor_action_group" "test" {
+ name = "acctestAG-%d"
+ resource_group_name = azurerm_resource_group.test.name
+ short_name = "acctestAG"
+}
+
+resource "azurerm_consumption_budget_subscription" "test" {
+ name = "acctestconsumptionbudgetsubscription-%d"
+ subscription_id = data.azurerm_subscription.current.subscription_id
+
+ amount = 1000
+ time_grain = "Monthly"
+
+ time_period {
+ start_date = "%s"
+ end_date = "%s"
+ }
+
+ filter {
+ dimension {
+ name = "ResourceGroupName"
+ values = [
+ azurerm_resource_group.test.name,
+ ]
+ }
+
+ dimension {
+ name = "ResourceId"
+ values = [
+ azurerm_monitor_action_group.test.id,
+ ]
+ }
+
+ tag {
+ name = "foo"
+ values = [
+ "bar",
+ "baz",
+ ]
+ }
+
+ not {
+ tag {
+ name = "zip"
+ values = [
+ "zap",
+ "zop"
+ ]
+ }
+ }
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+
+ contact_groups = [
+ azurerm_monitor_action_group.test.id,
+ ]
+
+ contact_roles = [
+ "Owner",
+ ]
+ }
+
+ notification {
+ enabled = false
+ threshold = 100.0
+ operator = "GreaterThan"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger, consumptionBudgetTestStartDate().Format(time.RFC3339), consumptionBudgetTestStartDate().AddDate(1, 1, 0).Format(time.RFC3339))
+}
+
+func (ConsumptionBudgetSubscriptionResource) completeUpdate(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%d"
+ location = "%s"
+}
+
+resource "azurerm_monitor_action_group" "test" {
+ name = "acctestAG-%d"
+ resource_group_name = azurerm_resource_group.test.name
+ short_name = "acctestAG"
+}
+
+resource "azurerm_consumption_budget_subscription" "test" {
+ name = "acctestconsumptionbudgetsubscription-%d"
+ subscription_id = data.azurerm_subscription.current.subscription_id
+
+ // Changed the amount from 1000 to 2000
+ amount = 2000
+ time_grain = "Monthly"
+
+ // Removed end_date
+ time_period {
+ start_date = "%s"
+ }
+
+ filter {
+ dimension {
+ name = "ResourceGroupName"
+ values = [
+ azurerm_resource_group.test.name,
+ ]
+ }
+
+ tag {
+ name = "foo"
+ values = [
+ "bar",
+ "baz",
+ ]
+ }
+
+ // Added tag: zip
+ tag {
+ name = "zip"
+ values = [
+ "zap",
+ "zop",
+ ]
+ }
+
+ // Removed not block
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ // Added baz@example.com
+ "baz@example.com",
+ "foo@example.com",
+ "bar@example.com",
+ ]
+
+ contact_groups = [
+ azurerm_monitor_action_group.test.id,
+ ]
+ // Removed contact_roles
+ }
+
+ notification {
+ // Set enabled to true
+ enabled = true
+ threshold = 100.0
+ // Changed the operator from GreaterThan to GreaterThanOrEqualTo
+ operator = "GreaterThanOrEqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+
+ // Added contact_groups
+ contact_groups = [
+ azurerm_monitor_action_group.test.id,
+ ]
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger, consumptionBudgetTestStartDate().Format(time.RFC3339))
+}
diff --git a/azurerm/internal/services/consumption/helpers.go b/azurerm/internal/services/consumption/helpers.go
new file mode 100644
index 0000000000000..e5f53209f5f98
--- /dev/null
+++ b/azurerm/internal/services/consumption/helpers.go
@@ -0,0 +1,289 @@
+package consumption
+
+import (
+ "fmt"
+ "time"
+
+ "github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption"
+ "github.com/Azure/go-autorest/autorest/date"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/shopspring/decimal"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+// expand and flatten
+func ExpandConsumptionBudgetTimePeriod(i []interface{}) (*consumption.BudgetTimePeriod, error) {
+ if len(i) == 0 || i[0] == nil {
+ return nil, nil
+ }
+
+ input := i[0].(map[string]interface{})
+ timePeriod := consumption.BudgetTimePeriod{}
+
+ if startDateInput, ok := input["start_date"].(string); ok {
+ startDate, err := date.ParseTime(time.RFC3339, startDateInput)
+ if err != nil {
+ return nil, fmt.Errorf("start_date '%s' was not in the correct format: %+v", startDateInput, err)
+ }
+
+ timePeriod.StartDate = &date.Time{
+ Time: startDate,
+ }
+ }
+
+ if endDateInput, ok := input["end_date"].(string); ok {
+ if endDateInput != "" {
+ endDate, err := date.ParseTime(time.RFC3339, endDateInput)
+ if err != nil {
+ return nil, fmt.Errorf("end_date '%s' was not in the correct format: %+v", endDateInput, err)
+ }
+
+ timePeriod.EndDate = &date.Time{
+ Time: endDate,
+ }
+ }
+ }
+
+ return &timePeriod, nil
+}
+
+func FlattenConsumptionBudgetTimePeriod(input *consumption.BudgetTimePeriod) []interface{} {
+ timePeriod := make([]interface{}, 0)
+
+ if input == nil {
+ return timePeriod
+ }
+
+ timePeriodBlock := make(map[string]interface{})
+
+ // StartDate and EndDate are pointers and EndDate in particular is optional,
+ // so guard against a nil dereference before formatting either value
+ if input.StartDate != nil {
+ timePeriodBlock["start_date"] = input.StartDate.String()
+ }
+
+ if input.EndDate != nil {
+ timePeriodBlock["end_date"] = input.EndDate.String()
+ }
+
+ return append(timePeriod, timePeriodBlock)
+}
+
+func ExpandConsumptionBudgetNotifications(input []interface{}) map[string]*consumption.Notification {
+ if len(input) == 0 {
+ return nil
+ }
+
+ notifications := make(map[string]*consumption.Notification)
+
+ for _, v := range input {
+ if v != nil {
+ notificationRaw := v.(map[string]interface{})
+ notification := consumption.Notification{}
+
+ notification.Enabled = utils.Bool(notificationRaw["enabled"].(bool))
+ notification.Operator = consumption.OperatorType(notificationRaw["operator"].(string))
+
+ thresholdDecimal := decimal.NewFromInt(int64(notificationRaw["threshold"].(int)))
+ notification.Threshold = &thresholdDecimal
+
+ notification.ContactEmails = utils.ExpandStringSlice(notificationRaw["contact_emails"].([]interface{}))
+ notification.ContactRoles = utils.ExpandStringSlice(notificationRaw["contact_roles"].([]interface{}))
+ notification.ContactGroups = utils.ExpandStringSlice(notificationRaw["contact_groups"].([]interface{}))
+
+ notificationKey := fmt.Sprintf("actual_%s_%s_Percent", string(notification.Operator), notification.Threshold.StringFixed(0))
+ notifications[notificationKey] = ¬ification
+ }
+ }
+
+ return notifications
+}
+
+func FlattenConsumptionBudgetNotifications(input map[string]*consumption.Notification) []interface{} {
+ notifications := make([]interface{}, 0)
+
+ if input == nil {
+ return notifications
+ }
+
+ for _, v := range input {
+ if v != nil {
+ notificationBlock := make(map[string]interface{})
+
+ notificationBlock["enabled"] = *v.Enabled
+ notificationBlock["operator"] = string(v.Operator)
+ threshold, _ := v.Threshold.Float64()
+ notificationBlock["threshold"] = int(threshold)
+ notificationBlock["contact_emails"] = utils.FlattenStringSlice(v.ContactEmails)
+ notificationBlock["contact_roles"] = utils.FlattenStringSlice(v.ContactRoles)
+ notificationBlock["contact_groups"] = utils.FlattenStringSlice(v.ContactGroups)
+
+ notifications = append(notifications, notificationBlock)
+ }
+ }
+
+ return notifications
+}
+
+func ExpandConsumptionBudgetComparisonExpression(input interface{}) *consumption.BudgetComparisonExpression {
+ if input == nil {
+ return nil
+ }
+
+ v := input.(map[string]interface{})
+
+ return &consumption.BudgetComparisonExpression{
+ Name: utils.String(v["name"].(string)),
+ Operator: utils.String(v["operator"].(string)),
+ Values: utils.ExpandStringSlice(v["values"].([]interface{})),
+ }
+}
+
+func FlattenConsumptionBudgetComparisonExpression(input *consumption.BudgetComparisonExpression) *map[string]interface{} {
+ consumptionBudgetComparisonExpression := make(map[string]interface{})
+
+ consumptionBudgetComparisonExpression["name"] = input.Name
+ consumptionBudgetComparisonExpression["operator"] = input.Operator
+ consumptionBudgetComparisonExpression["values"] = utils.FlattenStringSlice(input.Values)
+
+ return &consumptionBudgetComparisonExpression
+}
+
+func ExpandConsumptionBudgetFilterDimensions(input []interface{}) []consumption.BudgetFilterProperties {
+ if len(input) == 0 {
+ return nil
+ }
+
+ dimensions := make([]consumption.BudgetFilterProperties, 0)
+
+ for _, v := range input {
+ dimension := consumption.BudgetFilterProperties{
+ Dimensions: ExpandConsumptionBudgetComparisonExpression(v),
+ }
+ dimensions = append(dimensions, dimension)
+ }
+
+ return dimensions
+}
+
+func ExpandConsumptionBudgetFilterTag(input []interface{}) []consumption.BudgetFilterProperties {
+ if len(input) == 0 {
+ return nil
+ }
+
+ tags := make([]consumption.BudgetFilterProperties, 0)
+
+ for _, v := range input {
+ tag := consumption.BudgetFilterProperties{
+ Tags: ExpandConsumptionBudgetComparisonExpression(v),
+ }
+
+ tags = append(tags, tag)
+ }
+
+ return tags
+}
+
+func ExpandConsumptionBudgetFilter(i []interface{}) *consumption.BudgetFilter {
+ if len(i) == 0 || i[0] == nil {
+ return nil
+ }
+ input := i[0].(map[string]interface{})
+
+ filter := consumption.BudgetFilter{}
+
+ notBlock := input["not"].([]interface{})
+ if len(notBlock) != 0 && notBlock[0] != nil {
+ not := notBlock[0].(map[string]interface{})
+
+ tags := ExpandConsumptionBudgetFilterTag(not["tag"].([]interface{}))
+ dimensions := ExpandConsumptionBudgetFilterDimensions(not["dimension"].([]interface{}))
+
+ if len(dimensions) != 0 {
+ filter.Not = &dimensions[0]
+ } else if len(tags) != 0 {
+ filter.Not = &tags[0]
+ }
+ }
+
+ tags := ExpandConsumptionBudgetFilterTag(input["tag"].(*schema.Set).List())
+ dimensions := ExpandConsumptionBudgetFilterDimensions(input["dimension"].(*schema.Set).List())
+
+ tagsSet := len(tags) > 0
+ dimensionsSet := len(dimensions) > 0
+
+ if dimensionsSet && tagsSet {
+ and := append(dimensions, tags...)
+ filter.And = &and
+ } else {
+ if dimensionsSet {
+ if len(dimensions) > 1 {
+ filter.And = &dimensions
+ } else {
+ filter.Dimensions = dimensions[0].Dimensions
+ }
+ } else if tagsSet {
+ if len(tags) > 1 {
+ filter.And = &tags
+ } else {
+ filter.Tags = tags[0].Tags
+ }
+ }
+ }
+
+ return &filter
+}
+
+func FlattenConsumptionBudgetFilter(input *consumption.BudgetFilter) []interface{} {
+ filter := make([]interface{}, 0)
+
+ if input == nil {
+ return filter
+ }
+
+ dimensions := make([]interface{}, 0)
+ tags := make([]interface{}, 0)
+
+ filterBlock := make(map[string]interface{})
+
+ notBlock := make(map[string]interface{})
+
+ if input.Not != nil {
+ if input.Not.Dimensions != nil {
+ notBlock["dimension"] = []interface{}{FlattenConsumptionBudgetComparisonExpression(input.Not.Dimensions)}
+ }
+
+ if input.Not.Tags != nil {
+ notBlock["tag"] = []interface{}{FlattenConsumptionBudgetComparisonExpression(input.Not.Tags)}
+ }
+
+ if len(notBlock) != 0 {
+ filterBlock["not"] = []interface{}{notBlock}
+ }
+ }
+
+ if input.And != nil {
+ for _, v := range *input.And {
+ if v.Dimensions != nil {
+ dimensions = append(dimensions, FlattenConsumptionBudgetComparisonExpression(v.Dimensions))
+ } else {
+ tags = append(tags, FlattenConsumptionBudgetComparisonExpression(v.Tags))
+ }
+ }
+
+ if len(dimensions) != 0 {
+ filterBlock["dimension"] = dimensions
+ }
+
+ if len(tags) != 0 {
+ filterBlock["tag"] = tags
+ }
+ } else {
+ if input.Tags != nil {
+ filterBlock["tag"] = append(tags, FlattenConsumptionBudgetComparisonExpression(input.Tags))
+ }
+
+ if input.Dimensions != nil {
+ filterBlock["dimension"] = append(dimensions, FlattenConsumptionBudgetComparisonExpression(input.Dimensions))
+ }
+ }
+
+ if len(filterBlock) != 0 {
+ filter = append(filter, filterBlock)
+ }
+
+ return filter
+}
diff --git a/azurerm/internal/services/consumption/parse/consumption_budget_resource_group.go b/azurerm/internal/services/consumption/parse/consumption_budget_resource_group.go
new file mode 100644
index 0000000000000..52483d10278ce
--- /dev/null
+++ b/azurerm/internal/services/consumption/parse/consumption_budget_resource_group.go
@@ -0,0 +1,69 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+)
+
+type ConsumptionBudgetResourceGroupId struct {
+ SubscriptionId string
+ ResourceGroup string
+ BudgetName string
+}
+
+func NewConsumptionBudgetResourceGroupID(subscriptionId, resourceGroup, budgetName string) ConsumptionBudgetResourceGroupId {
+ return ConsumptionBudgetResourceGroupId{
+ SubscriptionId: subscriptionId,
+ ResourceGroup: resourceGroup,
+ BudgetName: budgetName,
+ }
+}
+
+func (id ConsumptionBudgetResourceGroupId) String() string {
+ segments := []string{
+ fmt.Sprintf("Budget Name %q", id.BudgetName),
+ fmt.Sprintf("Resource Group %q", id.ResourceGroup),
+ }
+ segmentsStr := strings.Join(segments, " / ")
+ return fmt.Sprintf("%s: (%s)", "Consumption Budget Resource Group", segmentsStr)
+}
+
+func (id ConsumptionBudgetResourceGroupId) ID() string {
+ fmtString := "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Consumption/budgets/%s"
+ return fmt.Sprintf(fmtString, id.SubscriptionId, id.ResourceGroup, id.BudgetName)
+}
+
+// ConsumptionBudgetResourceGroupID parses a ConsumptionBudgetResourceGroup ID into a ConsumptionBudgetResourceGroupId struct
+func ConsumptionBudgetResourceGroupID(input string) (*ConsumptionBudgetResourceGroupId, error) {
+ id, err := azure.ParseAzureResourceID(input)
+ if err != nil {
+ return nil, err
+ }
+
+ resourceId := ConsumptionBudgetResourceGroupId{
+ SubscriptionId: id.SubscriptionID,
+ ResourceGroup: id.ResourceGroup,
+ }
+
+ if resourceId.SubscriptionId == "" {
+ return nil, fmt.Errorf("ID was missing the 'subscriptions' element")
+ }
+
+ if resourceId.ResourceGroup == "" {
+ return nil, fmt.Errorf("ID was missing the 'resourceGroups' element")
+ }
+
+ if resourceId.BudgetName, err = id.PopSegment("budgets"); err != nil {
+ return nil, err
+ }
+
+ if err := id.ValidateNoEmptySegments(input); err != nil {
+ return nil, err
+ }
+
+ return &resourceId, nil
+}
diff --git a/azurerm/internal/services/consumption/parse/consumption_budget_resource_group_test.go b/azurerm/internal/services/consumption/parse/consumption_budget_resource_group_test.go
new file mode 100644
index 0000000000000..0d33577d77003
--- /dev/null
+++ b/azurerm/internal/services/consumption/parse/consumption_budget_resource_group_test.go
@@ -0,0 +1,112 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/resourceid"
+)
+
+var _ resourceid.Formatter = ConsumptionBudgetResourceGroupId{}
+
+func TestConsumptionBudgetResourceGroupIDFormatter(t *testing.T) {
+ actual := NewConsumptionBudgetResourceGroupID("12345678-1234-9876-4563-123456789012", "resGroup1", "budget1").ID()
+ expected := "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Consumption/budgets/budget1"
+ if actual != expected {
+ t.Fatalf("Expected %q but got %q", expected, actual)
+ }
+}
+
+func TestConsumptionBudgetResourceGroupID(t *testing.T) {
+ testData := []struct {
+ Input string
+ Error bool
+ Expected *ConsumptionBudgetResourceGroupId
+ }{
+
+ {
+ // empty
+ Input: "",
+ Error: true,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Error: true,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Error: true,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/",
+ Error: true,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/",
+ Error: true,
+ },
+
+ {
+ // missing BudgetName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Consumption/",
+ Error: true,
+ },
+
+ {
+ // missing value for BudgetName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Consumption/budgets/",
+ Error: true,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Consumption/budgets/budget1",
+ Expected: &ConsumptionBudgetResourceGroupId{
+ SubscriptionId: "12345678-1234-9876-4563-123456789012",
+ ResourceGroup: "resGroup1",
+ BudgetName: "budget1",
+ },
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/12345678-1234-9876-4563-123456789012/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.CONSUMPTION/BUDGETS/BUDGET1",
+ Error: true,
+ },
+ }
+
+ for _, v := range testData {
+ t.Logf("[DEBUG] Testing %q", v.Input)
+
+ actual, err := ConsumptionBudgetResourceGroupID(v.Input)
+ if err != nil {
+ if v.Error {
+ continue
+ }
+
+ t.Fatalf("Expect a value but got an error: %s", err)
+ }
+ if v.Error {
+ t.Fatal("Expect an error but didn't get one")
+ }
+
+ if actual.SubscriptionId != v.Expected.SubscriptionId {
+ t.Fatalf("Expected %q but got %q for SubscriptionId", v.Expected.SubscriptionId, actual.SubscriptionId)
+ }
+ if actual.ResourceGroup != v.Expected.ResourceGroup {
+ t.Fatalf("Expected %q but got %q for ResourceGroup", v.Expected.ResourceGroup, actual.ResourceGroup)
+ }
+ if actual.BudgetName != v.Expected.BudgetName {
+ t.Fatalf("Expected %q but got %q for BudgetName", v.Expected.BudgetName, actual.BudgetName)
+ }
+ }
+}
diff --git a/azurerm/internal/services/consumption/parse/consumption_budget_subscription.go b/azurerm/internal/services/consumption/parse/consumption_budget_subscription.go
new file mode 100644
index 0000000000000..82185968bbe03
--- /dev/null
+++ b/azurerm/internal/services/consumption/parse/consumption_budget_subscription.go
@@ -0,0 +1,61 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+)
+
+type ConsumptionBudgetSubscriptionId struct {
+ SubscriptionId string
+ BudgetName string
+}
+
+func NewConsumptionBudgetSubscriptionID(subscriptionId, budgetName string) ConsumptionBudgetSubscriptionId {
+ return ConsumptionBudgetSubscriptionId{
+ SubscriptionId: subscriptionId,
+ BudgetName: budgetName,
+ }
+}
+
+func (id ConsumptionBudgetSubscriptionId) String() string {
+ segments := []string{
+ fmt.Sprintf("Budget Name %q", id.BudgetName),
+ }
+ segmentsStr := strings.Join(segments, " / ")
+ return fmt.Sprintf("%s: (%s)", "Consumption Budget Subscription", segmentsStr)
+}
+
+func (id ConsumptionBudgetSubscriptionId) ID() string {
+ fmtString := "/subscriptions/%s/providers/Microsoft.Consumption/budgets/%s"
+ return fmt.Sprintf(fmtString, id.SubscriptionId, id.BudgetName)
+}
+
+// ConsumptionBudgetSubscriptionID parses a ConsumptionBudgetSubscription ID into a ConsumptionBudgetSubscriptionId struct
+func ConsumptionBudgetSubscriptionID(input string) (*ConsumptionBudgetSubscriptionId, error) {
+ id, err := azure.ParseAzureResourceID(input)
+ if err != nil {
+ return nil, err
+ }
+
+ resourceId := ConsumptionBudgetSubscriptionId{
+ SubscriptionId: id.SubscriptionID,
+ }
+
+ if resourceId.SubscriptionId == "" {
+ return nil, fmt.Errorf("ID was missing the 'subscriptions' element")
+ }
+
+ if resourceId.BudgetName, err = id.PopSegment("budgets"); err != nil {
+ return nil, err
+ }
+
+ if err := id.ValidateNoEmptySegments(input); err != nil {
+ return nil, err
+ }
+
+ return &resourceId, nil
+}
diff --git a/azurerm/internal/services/consumption/parse/consumption_budget_subscription_test.go b/azurerm/internal/services/consumption/parse/consumption_budget_subscription_test.go
new file mode 100644
index 0000000000000..e8549f9ee73eb
--- /dev/null
+++ b/azurerm/internal/services/consumption/parse/consumption_budget_subscription_test.go
@@ -0,0 +1,96 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/resourceid"
+)
+
+var _ resourceid.Formatter = ConsumptionBudgetSubscriptionId{}
+
+func TestConsumptionBudgetSubscriptionIDFormatter(t *testing.T) {
+ actual := NewConsumptionBudgetSubscriptionID("12345678-1234-9876-4563-123456789012", "budget1").ID()
+ expected := "/subscriptions/12345678-1234-9876-4563-123456789012/providers/Microsoft.Consumption/budgets/budget1"
+ if actual != expected {
+ t.Fatalf("Expected %q but got %q", expected, actual)
+ }
+}
+
+func TestConsumptionBudgetSubscriptionID(t *testing.T) {
+ testData := []struct {
+ Input string
+ Error bool
+ Expected *ConsumptionBudgetSubscriptionId
+ }{
+
+ {
+ // empty
+ Input: "",
+ Error: true,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Error: true,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Error: true,
+ },
+
+ {
+ // missing BudgetName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/providers/Microsoft.Consumption/",
+ Error: true,
+ },
+
+ {
+ // missing value for BudgetName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/providers/Microsoft.Consumption/budgets/",
+ Error: true,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/providers/Microsoft.Consumption/budgets/budget1",
+ Expected: &ConsumptionBudgetSubscriptionId{
+ SubscriptionId: "12345678-1234-9876-4563-123456789012",
+ BudgetName: "budget1",
+ },
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/12345678-1234-9876-4563-123456789012/PROVIDERS/MICROSOFT.CONSUMPTION/BUDGETS/BUDGET1",
+ Error: true,
+ },
+ }
+
+ for _, v := range testData {
+ t.Logf("[DEBUG] Testing %q", v.Input)
+
+ actual, err := ConsumptionBudgetSubscriptionID(v.Input)
+ if err != nil {
+ if v.Error {
+ continue
+ }
+
+ t.Fatalf("Expect a value but got an error: %s", err)
+ }
+ if v.Error {
+ t.Fatal("Expect an error but didn't get one")
+ }
+
+ if actual.SubscriptionId != v.Expected.SubscriptionId {
+ t.Fatalf("Expected %q but got %q for SubscriptionId", v.Expected.SubscriptionId, actual.SubscriptionId)
+ }
+ if actual.BudgetName != v.Expected.BudgetName {
+ t.Fatalf("Expected %q but got %q for BudgetName", v.Expected.BudgetName, actual.BudgetName)
+ }
+ }
+}
diff --git a/azurerm/internal/services/consumption/registration.go b/azurerm/internal/services/consumption/registration.go
new file mode 100644
index 0000000000000..6f25e52a42450
--- /dev/null
+++ b/azurerm/internal/services/consumption/registration.go
@@ -0,0 +1,41 @@
+package consumption
+
+import (
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+)
+
+const (
+ // The Consumption Budget resource names are defined as constants because the
+ // core logic for the Consumption Budget resources is generic and has been
+ // extracted out of the specific Consumption Budget resources. These constants
+ // are used whenever the generic Consumption Budget functions require a resource name.
+ consumptionBudgetResourceGroupName = "azurerm_consumption_budget_resource_group"
+ consumptionBudgetSubscriptionName = "azurerm_consumption_budget_subscription"
+)
+
+type Registration struct{}
+
+// Name is the name of this Service
+func (r Registration) Name() string {
+ return "Consumption"
+}
+
+// WebsiteCategories returns a list of categories which can be used for the sidebar
+func (r Registration) WebsiteCategories() []string {
+ return []string{
+ "Consumption",
+ }
+}
+
+// SupportedDataSources returns the Data Sources supported by this Service
+func (r Registration) SupportedDataSources() map[string]*schema.Resource {
+ return map[string]*schema.Resource{}
+}
+
+// SupportedResources returns the Resources supported by this Service
+func (r Registration) SupportedResources() map[string]*schema.Resource {
+ return map[string]*schema.Resource{
+ consumptionBudgetResourceGroupName: resourceArmConsumptionBudgetResourceGroup(),
+ consumptionBudgetSubscriptionName: resourceArmConsumptionBudgetSubscription(),
+ }
+}
diff --git a/azurerm/internal/services/consumption/resourceids.go b/azurerm/internal/services/consumption/resourceids.go
new file mode 100644
index 0000000000000..7628c18ee553e
--- /dev/null
+++ b/azurerm/internal/services/consumption/resourceids.go
@@ -0,0 +1,4 @@
+package consumption
+
+//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=ConsumptionBudgetResourceGroup -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Consumption/budgets/budget1
+//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=ConsumptionBudgetSubscription -id=/subscriptions/12345678-1234-9876-4563-123456789012/providers/Microsoft.Consumption/budgets/budget1
diff --git a/azurerm/internal/services/consumption/schema.go b/azurerm/internal/services/consumption/schema.go
new file mode 100644
index 0000000000000..95bdd1a4673d1
--- /dev/null
+++ b/azurerm/internal/services/consumption/schema.go
@@ -0,0 +1,280 @@
+package consumption
+
+import (
+ "github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/validation"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption/validate"
+ resourceValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/resource/validate"
+)
+
+func SchemaConsumptionBudgetResourceGroupResource() map[string]*schema.Schema {
+ resourceGroupNameSchema := map[string]*schema.Schema{
+ "resource_group_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ ValidateFunc: resourceValidate.ResourceGroupID,
+ },
+ }
+
+ return azure.MergeSchema(SchemaConsumptionBudgetCommonResource(), resourceGroupNameSchema)
+}
+
+func SchemaConsumptionBudgetSubscriptionResource() map[string]*schema.Schema {
+ subscriptionIDSchema := map[string]*schema.Schema{
+ "subscription_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ ValidateFunc: validation.IsUUID,
+ },
+ }
+
+ return azure.MergeSchema(SchemaConsumptionBudgetCommonResource(), subscriptionIDSchema)
+}
+
+func SchemaConsumptionBudgetFilterDimensionElement() *schema.Resource {
+ return &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validation.StringInSlice([]string{
+ "ChargeType",
+ "Frequency",
+ "InvoiceId",
+ "Meter",
+ "MeterCategory",
+ "MeterSubCategory",
+ "PartNumber",
+ "PricingModel",
+ "Product",
+ "ProductOrderId",
+ "ProductOrderName",
+ "PublisherType",
+ "ReservationId",
+ "ReservationName",
+ "ResourceGroupName",
+ "ResourceGuid",
+ "ResourceId",
+ "ResourceLocation",
+ "ResourceType",
+ "ServiceFamily",
+ "ServiceName",
+ "UnitOfMeasure",
+ }, false),
+ },
+ "operator": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "In",
+ ValidateFunc: validation.StringInSlice([]string{
+ "In",
+ }, false),
+ },
+ "values": {
+ Type: schema.TypeList,
+ MinItems: 1,
+ Required: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+ },
+ }
+}
+
+func SchemaConsumptionBudgetFilterTagElement() *schema.Resource {
+ return &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "operator": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "In",
+ ValidateFunc: validation.StringInSlice([]string{
+ "In",
+ }, false),
+ },
+ "values": {
+ Type: schema.TypeList,
+ Required: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+ },
+ }
+}
+
+func SchemaConsumptionBudgetNotificationElement() *schema.Resource {
+ return &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "enabled": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: true,
+ },
+ "threshold": {
+ Type: schema.TypeInt,
+ Required: true,
+ ValidateFunc: validation.IntBetween(0, 1000),
+ },
+ "operator": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validation.StringInSlice([]string{
+ string(consumption.EqualTo),
+ string(consumption.GreaterThan),
+ string(consumption.GreaterThanOrEqualTo),
+ }, false),
+ },
+
+ "contact_emails": {
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+
+ "contact_groups": {
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+
+ "contact_roles": {
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+ },
+ }
+}
+
+func SchemaConsumptionBudgetCommonResource() map[string]*schema.Schema {
+ return map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ ValidateFunc: validate.ConsumptionBudgetName(),
+ },
+
+ "amount": {
+ Type: schema.TypeFloat,
+ Required: true,
+ ValidateFunc: validation.FloatAtLeast(1.0),
+ },
+
+ "filter": {
+ Type: schema.TypeList,
+ Optional: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "dimension": {
+ Type: schema.TypeSet,
+ Optional: true,
+ Set: schema.HashResource(SchemaConsumptionBudgetFilterDimensionElement()),
+ Elem: SchemaConsumptionBudgetFilterDimensionElement(),
+ AtLeastOneOf: []string{"filter.0.dimension", "filter.0.tag", "filter.0.not"},
+ },
+ "tag": {
+ Type: schema.TypeSet,
+ Optional: true,
+ Set: schema.HashResource(SchemaConsumptionBudgetFilterTagElement()),
+ Elem: SchemaConsumptionBudgetFilterTagElement(),
+ AtLeastOneOf: []string{"filter.0.dimension", "filter.0.tag", "filter.0.not"},
+ },
+ "not": {
+ Type: schema.TypeList,
+ Optional: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "dimension": {
+ Type: schema.TypeList,
+ MaxItems: 1,
+ Optional: true,
+ ExactlyOneOf: []string{"filter.0.not.0.tag"},
+ Elem: SchemaConsumptionBudgetFilterDimensionElement(),
+ },
+ "tag": {
+ Type: schema.TypeList,
+ MaxItems: 1,
+ Optional: true,
+ ExactlyOneOf: []string{"filter.0.not.0.dimension"},
+ Elem: SchemaConsumptionBudgetFilterTagElement(),
+ },
+ },
+ },
+ AtLeastOneOf: []string{"filter.0.dimension", "filter.0.tag", "filter.0.not"},
+ },
+ },
+ },
+ },
+
+ "notification": {
+ Type: schema.TypeSet,
+ Required: true,
+ MinItems: 1,
+ MaxItems: 5,
+ Set: schema.HashResource(SchemaConsumptionBudgetNotificationElement()),
+ Elem: SchemaConsumptionBudgetNotificationElement(),
+ },
+
+ "time_grain": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: string(consumption.TimeGrainTypeMonthly),
+ ForceNew: true,
+ ValidateFunc: validation.StringInSlice([]string{
+ string(consumption.TimeGrainTypeBillingAnnual),
+ string(consumption.TimeGrainTypeBillingMonth),
+ string(consumption.TimeGrainTypeBillingQuarter),
+ string(consumption.TimeGrainTypeAnnually),
+ string(consumption.TimeGrainTypeMonthly),
+ string(consumption.TimeGrainTypeQuarterly),
+ }, false),
+ },
+
+ "time_period": {
+ Type: schema.TypeList,
+ Required: true,
+ MinItems: 1,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "start_date": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validate.ConsumptionBudgetTimePeriodStartDate,
+ ForceNew: true,
+ },
+ "end_date": {
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ValidateFunc: validation.IsRFC3339Time,
+ },
+ },
+ },
+ },
+ }
+}
diff --git a/azurerm/internal/services/consumption/validate/consumption_budget_resource_group_id.go b/azurerm/internal/services/consumption/validate/consumption_budget_resource_group_id.go
new file mode 100644
index 0000000000000..794dc495f2d18
--- /dev/null
+++ b/azurerm/internal/services/consumption/validate/consumption_budget_resource_group_id.go
@@ -0,0 +1,23 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption/parse"
+)
+
+func ConsumptionBudgetResourceGroupID(input interface{}, key string) (warnings []string, errors []error) {
+ v, ok := input.(string)
+ if !ok {
+ errors = append(errors, fmt.Errorf("expected %q to be a string", key))
+ return
+ }
+
+ if _, err := parse.ConsumptionBudgetResourceGroupID(v); err != nil {
+ errors = append(errors, err)
+ }
+
+ return
+}
diff --git a/azurerm/internal/services/consumption/validate/consumption_budget_resource_group_id_test.go b/azurerm/internal/services/consumption/validate/consumption_budget_resource_group_id_test.go
new file mode 100644
index 0000000000000..9595b0fdc466e
--- /dev/null
+++ b/azurerm/internal/services/consumption/validate/consumption_budget_resource_group_id_test.go
@@ -0,0 +1,76 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import "testing"
+
+func TestConsumptionBudgetResourceGroupID(t *testing.T) {
+ cases := []struct {
+ Input string
+ Valid bool
+ }{
+
+ {
+ // empty
+ Input: "",
+ Valid: false,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Valid: false,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Valid: false,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/",
+ Valid: false,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/",
+ Valid: false,
+ },
+
+ {
+ // missing BudgetName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Consumption/",
+ Valid: false,
+ },
+
+ {
+ // missing value for BudgetName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Consumption/budgets/",
+ Valid: false,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Consumption/budgets/budget1",
+ Valid: true,
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/12345678-1234-9876-4563-123456789012/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.CONSUMPTION/BUDGETS/BUDGET1",
+ Valid: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Logf("[DEBUG] Testing Value %s", tc.Input)
+ _, errors := ConsumptionBudgetResourceGroupID(tc.Input, "test")
+ valid := len(errors) == 0
+
+ if tc.Valid != valid {
+ t.Fatalf("Expected %t but got %t", tc.Valid, valid)
+ }
+ }
+}
diff --git a/azurerm/internal/services/consumption/validate/consumption_budget_subscription_id.go b/azurerm/internal/services/consumption/validate/consumption_budget_subscription_id.go
new file mode 100644
index 0000000000000..be37ebf8f5f2f
--- /dev/null
+++ b/azurerm/internal/services/consumption/validate/consumption_budget_subscription_id.go
@@ -0,0 +1,23 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/consumption/parse"
+)
+
+func ConsumptionBudgetSubscriptionID(input interface{}, key string) (warnings []string, errors []error) {
+ v, ok := input.(string)
+ if !ok {
+ errors = append(errors, fmt.Errorf("expected %q to be a string", key))
+ return
+ }
+
+ if _, err := parse.ConsumptionBudgetSubscriptionID(v); err != nil {
+ errors = append(errors, err)
+ }
+
+ return
+}
diff --git a/azurerm/internal/services/consumption/validate/consumption_budget_subscription_id_test.go b/azurerm/internal/services/consumption/validate/consumption_budget_subscription_id_test.go
new file mode 100644
index 0000000000000..94d51b2b635cd
--- /dev/null
+++ b/azurerm/internal/services/consumption/validate/consumption_budget_subscription_id_test.go
@@ -0,0 +1,64 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import "testing"
+
+func TestConsumptionBudgetSubscriptionID(t *testing.T) {
+ cases := []struct {
+ Input string
+ Valid bool
+ }{
+
+ {
+ // empty
+ Input: "",
+ Valid: false,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Valid: false,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Valid: false,
+ },
+
+ {
+ // missing BudgetName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/providers/Microsoft.Consumption/",
+ Valid: false,
+ },
+
+ {
+ // missing value for BudgetName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/providers/Microsoft.Consumption/budgets/",
+ Valid: false,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/providers/Microsoft.Consumption/budgets/budget1",
+ Valid: true,
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/12345678-1234-9876-4563-123456789012/PROVIDERS/MICROSOFT.CONSUMPTION/BUDGETS/BUDGET1",
+ Valid: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Logf("[DEBUG] Testing Value %s", tc.Input)
+ _, errors := ConsumptionBudgetSubscriptionID(tc.Input, "test")
+ valid := len(errors) == 0
+
+ if tc.Valid != valid {
+ t.Fatalf("Expected %t but got %t", tc.Valid, valid)
+ }
+ }
+}
diff --git a/azurerm/internal/services/consumption/validate/name.go b/azurerm/internal/services/consumption/validate/name.go
new file mode 100644
index 0000000000000..253bc5ca8c6f2
--- /dev/null
+++ b/azurerm/internal/services/consumption/validate/name.go
@@ -0,0 +1,15 @@
+package validate
+
+import (
+ "regexp"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/validation"
+)
+
+func ConsumptionBudgetName() schema.SchemaValidateFunc {
+ return validation.StringMatch(
+ regexp.MustCompile("^[-_a-zA-Z0-9]{1,63}$"),
+ "The consumption budget name can contain only letters, numbers, underscores, and hyphens, and must be between 1 and 63 characters long.",
+ )
+}
diff --git a/azurerm/internal/services/consumption/validate/time_period.go b/azurerm/internal/services/consumption/validate/time_period.go
new file mode 100644
index 0000000000000..3954dd4de704c
--- /dev/null
+++ b/azurerm/internal/services/consumption/validate/time_period.go
@@ -0,0 +1,43 @@
+package validate
+
+import (
+ "fmt"
+ "time"
+
+ "github.com/Azure/go-autorest/autorest/date"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/validation"
+)
+
+func ConsumptionBudgetTimePeriodStartDate(i interface{}, k string) (warnings []string, errors []error) {
+ validateRFC3339TimeWarnings, validateRFC3339TimeErrors := validation.IsRFC3339Time(i, k)
+ errors = append(errors, validateRFC3339TimeErrors...)
+ warnings = append(warnings, validateRFC3339TimeWarnings...)
+
+ if len(errors) != 0 || len(warnings) != 0 {
+ return warnings, errors
+ }
+
+ // Errors were already checked by validation.IsRFC3339Time
+ startDate, _ := date.ParseTime(time.RFC3339, i.(string))
+
+ // The start date must be the first day of the month
+ if startDate.Day() != 1 {
+ errors = append(errors, fmt.Errorf("%q must be the first day of the month, got day %d", k, startDate.Day()))
+ return warnings, errors
+ }
+
+ // Budget start date must be on or after June 1, 2017.
+ earliestPossibleStartDateString := "2017-06-01T00:00:00Z"
+ earliestPossibleStartDate, _ := date.ParseTime(time.RFC3339, earliestPossibleStartDateString)
+ if startDate.Before(earliestPossibleStartDate) {
+ errors = append(errors, fmt.Errorf("%q must be on or after June 1, 2017, got %q", k, i.(string)))
+ return warnings, errors
+ }
+
+ // Future start date should not be more than twelve months.
+ if startDate.After(time.Now().AddDate(0, 12, 0)) {
+ warnings = append(warnings, fmt.Sprintf("%q should not be more than twelve months in the future", k))
+ }
+
+ return warnings, errors
+}
diff --git a/azurerm/internal/services/consumption/validate/time_period_test.go b/azurerm/internal/services/consumption/validate/time_period_test.go
new file mode 100644
index 0000000000000..af008390706ab
--- /dev/null
+++ b/azurerm/internal/services/consumption/validate/time_period_test.go
@@ -0,0 +1,85 @@
+package validate
+
+import (
+ "testing"
+ "time"
+)
+
+func TestConsumptionBudgetTimePeriodStartDate(t *testing.T) {
+ // Set up time for testing
+ now := time.Now()
+ validTime := time.Date(
+ now.Year(), now.Month(), 1, 0, 0, 0, 0, time.UTC)
+
+ cases := []struct {
+ Input string
+ ExpectError bool
+ ExpectWarning bool
+ }{
+ {
+ Input: "",
+ ExpectError: true,
+ ExpectWarning: false,
+ },
+ {
+ Input: "2006-01-02",
+ ExpectError: true,
+ ExpectWarning: false,
+ },
+ {
+ // Not on the first of a month
+ Input: "2020-11-02T00:00:00Z",
+ ExpectError: true,
+ ExpectWarning: false,
+ },
+ {
+ // Before June 1, 2017
+ Input: "2000-01-01T00:00:00Z",
+ ExpectError: true,
+ ExpectWarning: false,
+ },
+ {
+ // Valid date and time
+ Input: validTime.Format(time.RFC3339),
+ ExpectError: false,
+ ExpectWarning: false,
+ },
+ {
+ // More than 12 months in the future
+ Input: validTime.AddDate(2, 0, 0).Format(time.RFC3339),
+ ExpectError: false,
+ ExpectWarning: true,
+ },
+ }
+
+ for _, tc := range cases {
+ warnings, errors := ConsumptionBudgetTimePeriodStartDate(tc.Input, "start_date")
+ if errors != nil {
+ if !tc.ExpectError {
+ t.Fatalf("Got error for input %q: %+v", tc.Input, errors)
+ }
+
+ // Move on to the next case rather than returning, which would
+ // silently skip all remaining test cases
+ continue
+ }
+
+ if warnings != nil {
+ if !tc.ExpectWarning {
+ t.Fatalf("Got warnings for input %q: %+v", tc.Input, warnings)
+ }
+
+ continue
+ }
+
+ if tc.ExpectError && len(errors) == 0 {
+ t.Fatalf("Got no errors for input %q but expected some", tc.Input)
+ } else if !tc.ExpectError && len(errors) > 0 {
+ t.Fatalf("Got %d errors for input %q when didn't expect any", len(errors), tc.Input)
+ }
+
+ if tc.ExpectWarning && len(warnings) == 0 {
+ t.Fatalf("Got no warnings for input %q but expected some", tc.Input)
+ } else if !tc.ExpectWarning && len(warnings) > 0 {
+ t.Fatalf("Got %d warnings for input %q when didn't expect any", len(warnings), tc.Input)
+ }
+ }
+}
diff --git a/azurerm/internal/services/containers/client/client.go b/azurerm/internal/services/containers/client/client.go
index be8d364b3cb97..172d499f31025 100644
--- a/azurerm/internal/services/containers/client/client.go
+++ b/azurerm/internal/services/containers/client/client.go
@@ -3,7 +3,7 @@ package client
import (
"github.com/Azure/azure-sdk-for-go/services/containerinstance/mgmt/2019-12-01/containerinstance"
legacy "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2019-08-01/containerservice"
- "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
"github.com/Azure/azure-sdk-for-go/services/preview/containerregistry/mgmt/2020-11-01-preview/containerregistry"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/common"
diff --git a/azurerm/internal/services/containers/container_registry_resource.go b/azurerm/internal/services/containers/container_registry_resource.go
index 6abd59d02989b..a31901eb01295 100644
--- a/azurerm/internal/services/containers/container_registry_resource.go
+++ b/azurerm/internal/services/containers/container_registry_resource.go
@@ -20,6 +20,9 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/location"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/containers/migration"
validate2 "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/containers/validate"
+ keyVaultValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/keyvault/validate"
+ identityParse "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/msi/parse"
+ identityValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/msi/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tags"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/suppress"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -142,6 +145,67 @@ func resourceContainerRegistry() *schema.Resource {
Sensitive: true,
},
+ "identity": {
+ Type: schema.TypeList,
+ Optional: true,
+ Computed: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "type": {
+ Type: schema.TypeString,
+ Required: true,
+ DiffSuppressFunc: suppress.CaseDifference,
+ ValidateFunc: validation.StringInSlice([]string{
+ string(containerregistry.ResourceIdentityTypeSystemAssigned),
+ string(containerregistry.ResourceIdentityTypeUserAssigned),
+ string(containerregistry.ResourceIdentityTypeSystemAssignedUserAssigned),
+ }, false),
+ },
+ "principal_id": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "identity_ids": {
+ Type: schema.TypeList,
+ Optional: true,
+ MinItems: 1,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: identityValidate.UserAssignedIdentityID,
+ },
+ },
+ },
+ },
+ },
+
+ "encryption": {
+ Type: schema.TypeList,
+ Optional: true,
+ Computed: true,
+ MaxItems: 1,
+ ConfigMode: schema.SchemaConfigModeAttr,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "enabled": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ },
+ "identity_client_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validation.IsUUID,
+ },
+ "key_vault_key_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: keyVaultValidate.NestedItemId,
+ },
+ },
+ },
+ },
+
"network_rule_set": {
Type: schema.TypeList,
Optional: true,
@@ -280,6 +344,10 @@ func resourceContainerRegistry() *schema.Resource {
return fmt.Errorf("ACR trust policy can only be applied when using the Premium Sku. If you are downgrading from a Premium SKU please set trust_policy {}")
}
+ encryptionEnabled, ok := d.GetOk("encryption.0.enabled")
+ if ok && encryptionEnabled.(bool) && !strings.EqualFold(sku, string(containerregistry.Premium)) {
+ return fmt.Errorf("ACR encryption can only be applied when using the Premium Sku")
+ }
return nil
}),
}
@@ -340,6 +408,12 @@ func resourceContainerRegistryCreate(d *schema.ResourceData, meta interface{}) e
trustPolicyRaw := d.Get("trust_policy").([]interface{})
trustPolicy := expandTrustPolicy(trustPolicyRaw)
+ encryptionRaw := d.Get("encryption").([]interface{})
+ encryption := expandEncryption(encryptionRaw)
+
+ identityRaw := d.Get("identity").([]interface{})
+ identity := expandIdentityProperties(identityRaw)
+
publicNetworkAccess := containerregistry.PublicNetworkAccessEnabled
if !d.Get("public_network_access_enabled").(bool) {
publicNetworkAccess = containerregistry.PublicNetworkAccessDisabled
@@ -350,6 +424,7 @@ func resourceContainerRegistryCreate(d *schema.ResourceData, meta interface{}) e
Name: containerregistry.SkuName(sku),
Tier: containerregistry.SkuTier(sku),
},
+ Identity: identity,
RegistryProperties: &containerregistry.RegistryProperties{
AdminUserEnabled: utils.Bool(adminUserEnabled),
NetworkRuleSet: networkRuleSet,
@@ -359,6 +434,7 @@ func resourceContainerRegistryCreate(d *schema.ResourceData, meta interface{}) e
TrustPolicy: trustPolicy,
},
PublicNetworkAccess: publicNetworkAccess,
+ Encryption: encryption,
},
Tags: tags.Expand(t),
@@ -463,6 +539,9 @@ func resourceContainerRegistryUpdate(d *schema.ResourceData, meta interface{}) e
publicNetworkAccess = containerregistry.PublicNetworkAccessDisabled
}
+ identityRaw := d.Get("identity").([]interface{})
+ identity := expandIdentityProperties(identityRaw)
+
parameters := containerregistry.RegistryUpdateParameters{
RegistryPropertiesUpdateParameters: &containerregistry.RegistryPropertiesUpdateParameters{
AdminUserEnabled: utils.Bool(adminUserEnabled),
@@ -474,7 +553,8 @@ func resourceContainerRegistryUpdate(d *schema.ResourceData, meta interface{}) e
},
PublicNetworkAccess: publicNetworkAccess,
},
- Tags: tags.Expand(t),
+ Identity: identity,
+ Tags: tags.Expand(t),
}
// geo replication is only supported by Premium Sku
@@ -630,6 +710,11 @@ func resourceContainerRegistryRead(d *schema.ResourceData, meta interface{}) err
return fmt.Errorf("Error setting `network_rule_set`: %+v", err)
}
+ identity, _ := flattenIdentityProperties(resp.Identity)
+ if err := d.Set("identity", identity); err != nil {
+ return fmt.Errorf("Error setting `identity`: %+v", err)
+ }
+
if properties := resp.RegistryProperties; properties != nil {
if err := d.Set("quarantine_policy_enabled", flattenQuarantinePolicy(properties.Policies)); err != nil {
return fmt.Errorf("Error setting `quarantine_policy`: %+v", err)
@@ -640,6 +725,9 @@ func resourceContainerRegistryRead(d *schema.ResourceData, meta interface{}) err
if err := d.Set("trust_policy", flattenTrustPolicy(properties.Policies)); err != nil {
return fmt.Errorf("Error setting `trust_policy`: %+v", err)
}
+ if err := d.Set("encryption", flattenEncryption(properties.Encryption)); err != nil {
+ return fmt.Errorf("Error setting `encryption`: %+v", err)
+ }
}
if sku := resp.Sku; sku != nil {
@@ -835,6 +923,79 @@ func expandReplications(p []interface{}) []*containerregistry.Replication {
return replications
}
+func expandIdentityProperties(e []interface{}) *containerregistry.IdentityProperties {
+ identityProperties := containerregistry.IdentityProperties{}
+ identityProperties.Type = containerregistry.ResourceIdentityTypeNone
+ if len(e) > 0 {
+ v := e[0].(map[string]interface{})
+ identityPropertyType := containerregistry.ResourceIdentityType(v["type"].(string))
+ identityProperties.Type = identityPropertyType
+ if identityPropertyType == containerregistry.ResourceIdentityTypeUserAssigned || identityPropertyType == containerregistry.ResourceIdentityTypeSystemAssignedUserAssigned {
+ identityIds := make(map[string]*containerregistry.UserIdentityProperties)
+ for _, id := range v["identity_ids"].([]interface{}) {
+ identityIds[id.(string)] = &containerregistry.UserIdentityProperties{}
+ }
+ identityProperties.UserAssignedIdentities = identityIds
+ }
+ }
+ return &identityProperties
+}
+
+func expandEncryption(e []interface{}) *containerregistry.EncryptionProperty {
+ encryptionProperty := containerregistry.EncryptionProperty{
+ Status: containerregistry.EncryptionStatusDisabled,
+ }
+ if len(e) > 0 {
+ v := e[0].(map[string]interface{})
+ enabled := v["enabled"].(bool)
+ if enabled {
+ encryptionProperty.Status = containerregistry.EncryptionStatusEnabled
+ keyId := v["key_vault_key_id"].(string)
+ identityClientId := v["identity_client_id"].(string)
+ encryptionProperty.KeyVaultProperties = &containerregistry.KeyVaultProperties{
+ KeyIdentifier: &keyId,
+ Identity: &identityClientId,
+ }
+ }
+ }
+
+ return &encryptionProperty
+}
+
+func flattenEncryption(encryptionProperty *containerregistry.EncryptionProperty) []interface{} {
+ if encryptionProperty == nil {
+ return nil
+ }
+ encryption := make(map[string]interface{})
+ encryption["enabled"] = strings.EqualFold(string(encryptionProperty.Status), string(containerregistry.EncryptionStatusEnabled))
+ if encryptionProperty.KeyVaultProperties != nil {
+ encryption["key_vault_key_id"] = encryptionProperty.KeyVaultProperties.KeyIdentifier
+ encryption["identity_client_id"] = encryptionProperty.KeyVaultProperties.Identity
+ }
+
+ return []interface{}{encryption}
+}
+
+func flattenIdentityProperties(identityProperties *containerregistry.IdentityProperties) ([]interface{}, error) {
+ if identityProperties == nil {
+ return make([]interface{}, 0), nil
+ }
+ identity := make(map[string]interface{})
+ identity["type"] = string(identityProperties.Type)
+ if identityProperties.UserAssignedIdentities != nil {
+ identityIds := make([]string, 0)
+ for key := range identityProperties.UserAssignedIdentities {
+ parsedId, err := identityParse.UserAssignedIdentityIDInsensitively(key)
+ if err != nil {
+ return nil, err
+ }
+ identityIds = append(identityIds, parsedId.ID())
+ }
+ identity["identity_ids"] = identityIds
+ }
+ return []interface{}{identity}, nil
+}
+
func flattenNetworkRuleSet(networkRuleSet *containerregistry.NetworkRuleSet) []interface{} {
if networkRuleSet == nil {
return []interface{}{}
diff --git a/azurerm/internal/services/containers/container_registry_resource_test.go b/azurerm/internal/services/containers/container_registry_resource_test.go
index f6dfabc02dd1c..17642a91fda0f 100644
--- a/azurerm/internal/services/containers/container_registry_resource_test.go
+++ b/azurerm/internal/services/containers/container_registry_resource_test.go
@@ -491,6 +491,25 @@ func TestAccContainerRegistry_policies(t *testing.T) {
})
}
+func TestAccContainerRegistry_identity(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_container_registry", "test")
+ r := ContainerRegistryResource{}
+ skuPremium := "Premium"
+ userAssigned := "userAssigned"
+ data.ResourceTest(t, r, []resource.TestStep{
+ // creates an ACR with a user assigned identity
+ {
+ Config: r.identity(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("sku").HasValue(skuPremium),
+ check.That(data.ResourceName).Key("identity.0.type").HasValue(userAssigned),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func (t ContainerRegistryResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
id, err := azure.ParseAzureResourceID(state.ID)
if err != nil {
@@ -985,3 +1004,36 @@ resource "azurerm_container_registry" "test" {
}
`, data.RandomInteger, data.Locations.Primary, data.RandomInteger)
}
+
+func (ContainerRegistryResource) identity(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-acr-%d"
+ location = "%s"
+}
+
+resource "azurerm_container_registry" "test" {
+ name = "testacccr%d"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ sku = "Premium"
+ identity {
+ type = "UserAssigned"
+ identity_ids = [
+ azurerm_user_assigned_identity.test.id
+ ]
+ }
+}
+
+resource "azurerm_user_assigned_identity" "test" {
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+
+ name = "testaccuai%d"
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger)
+}
diff --git a/azurerm/internal/services/containers/container_registry_scope_map_resource.go b/azurerm/internal/services/containers/container_registry_scope_map_resource.go
index 3de73d874b139..9442aa4590f73 100644
--- a/azurerm/internal/services/containers/container_registry_scope_map_resource.go
+++ b/azurerm/internal/services/containers/container_registry_scope_map_resource.go
@@ -40,7 +40,7 @@ func resourceContainerRegistryScopeMap() *schema.Resource {
Type: schema.TypeString,
Required: true,
ForceNew: true,
- ValidateFunc: validate.ContainerRegistryName,
+ ValidateFunc: validate.ContainerRegistryScopeMapName,
},
"description": {
diff --git a/azurerm/internal/services/containers/kubernetes_addons.go b/azurerm/internal/services/containers/kubernetes_addons.go
index fd4670432cd5c..e438fac431ae0 100644
--- a/azurerm/internal/services/containers/kubernetes_addons.go
+++ b/azurerm/internal/services/containers/kubernetes_addons.go
@@ -4,7 +4,7 @@ import (
"fmt"
"strings"
- "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/containers/kubernetes_cluster_data_source.go b/azurerm/internal/services/containers/kubernetes_cluster_data_source.go
index f75a977575d37..2e3ab0163363a 100644
--- a/azurerm/internal/services/containers/kubernetes_cluster_data_source.go
+++ b/azurerm/internal/services/containers/kubernetes_cluster_data_source.go
@@ -5,7 +5,7 @@ import (
"strings"
"time"
- "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
@@ -1010,11 +1010,6 @@ func flattenKubernetesClusterDataSourceAgentPoolProfiles(input *[]containerservi
osDiskSizeGb = int(*profile.OsDiskSizeGB)
}
- vnetSubnetId := ""
- if profile.VnetSubnetID != nil {
- vnetSubnetId = *profile.VnetSubnetID
- }
-
orchestratorVersion := ""
if profile.OrchestratorVersion != nil && *profile.OrchestratorVersion != "" {
orchestratorVersion = *profile.OrchestratorVersion
@@ -1041,6 +1036,16 @@ func flattenKubernetesClusterDataSourceAgentPoolProfiles(input *[]containerservi
nodeTaints = *profile.NodeTaints
}
+ vmSize := ""
+ if profile.VMSize != nil {
+ vmSize = *profile.VMSize
+ }
+
+ vnetSubnetId := ""
+ if profile.VnetSubnetID != nil {
+ vnetSubnetId = *profile.VnetSubnetID
+ }
+
agentPoolProfiles = append(agentPoolProfiles, map[string]interface{}{
"availability_zones": utils.FlattenStringSlice(profile.AvailabilityZones),
"count": count,
@@ -1059,7 +1064,7 @@ func flattenKubernetesClusterDataSourceAgentPoolProfiles(input *[]containerservi
"tags": tags.Flatten(profile.Tags),
"type": string(profile.Type),
"upgrade_settings": flattenUpgradeSettings(profile.UpgradeSettings),
- "vm_size": string(profile.VMSize),
+ "vm_size": vmSize,
"vnet_subnet_id": vnetSubnetId,
})
}
diff --git a/azurerm/internal/services/containers/kubernetes_cluster_node_pool_data_source.go b/azurerm/internal/services/containers/kubernetes_cluster_node_pool_data_source.go
index 7a2cb0ff7314d..9d8dfeae6a4d8 100644
--- a/azurerm/internal/services/containers/kubernetes_cluster_node_pool_data_source.go
+++ b/azurerm/internal/services/containers/kubernetes_cluster_node_pool_data_source.go
@@ -4,7 +4,7 @@ import (
"fmt"
"time"
- "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
@@ -230,7 +230,7 @@ func dataSourceKubernetesClusterNodePoolRead(d *schema.ResourceData, meta interf
}
d.Set("min_count", minCount)
- mode := string(containerservice.User)
+ mode := string(containerservice.AgentPoolModeUser)
if props.Mode != "" {
mode = string(props.Mode)
}
@@ -259,7 +259,7 @@ func dataSourceKubernetesClusterNodePoolRead(d *schema.ResourceData, meta interf
}
d.Set("os_disk_size_gb", osDiskSizeGB)
- osDiskType := containerservice.Managed
+ osDiskType := containerservice.OSDiskTypeManaged
if props.OsDiskType != "" {
osDiskType = props.OsDiskType
}
@@ -267,7 +267,7 @@ func dataSourceKubernetesClusterNodePoolRead(d *schema.ResourceData, meta interf
d.Set("os_type", string(props.OsType))
// not returned from the API if not Spot
- priority := string(containerservice.Regular)
+ priority := string(containerservice.ScaleSetPriorityRegular)
if props.ScaleSetPriority != "" {
priority = string(props.ScaleSetPriority)
}
@@ -290,7 +290,7 @@ func dataSourceKubernetesClusterNodePoolRead(d *schema.ResourceData, meta interf
}
d.Set("vnet_subnet_id", props.VnetSubnetID)
- d.Set("vm_size", string(props.VMSize))
+ d.Set("vm_size", props.VMSize)
}
return tags.FlattenAndSet(d, resp.Tags)
diff --git a/azurerm/internal/services/containers/kubernetes_cluster_node_pool_resource.go b/azurerm/internal/services/containers/kubernetes_cluster_node_pool_resource.go
index e764ce79d87e8..ab01f5da04d0a 100644
--- a/azurerm/internal/services/containers/kubernetes_cluster_node_pool_resource.go
+++ b/azurerm/internal/services/containers/kubernetes_cluster_node_pool_resource.go
@@ -6,7 +6,7 @@ import (
"strings"
"time"
- "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
@@ -104,8 +104,8 @@ func resourceKubernetesClusterNodePool() *schema.Resource {
Optional: true,
ForceNew: true,
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.Delete),
- string(containerservice.Deallocate),
+ string(containerservice.ScaleSetEvictionPolicyDelete),
+ string(containerservice.ScaleSetEvictionPolicyDeallocate),
}, false),
},
@@ -125,10 +125,10 @@ func resourceKubernetesClusterNodePool() *schema.Resource {
"mode": {
Type: schema.TypeString,
Optional: true,
- Default: string(containerservice.User),
+ Default: string(containerservice.AgentPoolModeUser),
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.System),
- string(containerservice.User),
+ string(containerservice.AgentPoolModeSystem),
+ string(containerservice.AgentPoolModeUser),
}, false),
},
@@ -182,10 +182,10 @@ func resourceKubernetesClusterNodePool() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Default: containerservice.Managed,
+ Default: containerservice.OSDiskTypeManaged,
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.Ephemeral),
- string(containerservice.Managed),
+ string(containerservice.OSDiskTypeEphemeral),
+ string(containerservice.OSDiskTypeManaged),
}, false),
},
@@ -193,10 +193,10 @@ func resourceKubernetesClusterNodePool() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Default: string(containerservice.Linux),
+ Default: string(containerservice.OSTypeLinux),
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.Linux),
- string(containerservice.Windows),
+ string(containerservice.OSTypeLinux),
+ string(containerservice.OSTypeWindows),
}, false),
},
@@ -204,10 +204,10 @@ func resourceKubernetesClusterNodePool() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Default: string(containerservice.Regular),
+ Default: string(containerservice.ScaleSetPriorityRegular),
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.Regular),
- string(containerservice.Spot),
+ string(containerservice.ScaleSetPriorityRegular),
+ string(containerservice.ScaleSetPrioritySpot),
}, false),
},
@@ -269,7 +269,7 @@ func resourceKubernetesClusterNodePoolCreate(d *schema.ResourceData, meta interf
if props := cluster.ManagedClusterProperties; props != nil {
if pools := props.AgentPoolProfiles; pools != nil {
for _, p := range *pools {
- if p.Type == containerservice.VirtualMachineScaleSets {
+ if p.Type == containerservice.AgentPoolTypeVirtualMachineScaleSets {
defaultPoolIsVMSS = true
break
}
@@ -310,8 +310,8 @@ func resourceKubernetesClusterNodePoolCreate(d *schema.ResourceData, meta interf
NodePublicIPPrefixID: utils.String(d.Get("node_public_ip_prefix_id").(string)),
ScaleSetPriority: containerservice.ScaleSetPriority(priority),
Tags: tags.Expand(t),
- Type: containerservice.VirtualMachineScaleSets,
- VMSize: containerservice.VMSizeTypes(vmSize),
+ Type: containerservice.AgentPoolTypeVirtualMachineScaleSets,
+ VMSize: utils.String(vmSize),
EnableEncryptionAtHost: utils.Bool(enableHostEncryption),
UpgradeSettings: expandUpgradeSettings(d.Get("upgrade_settings").([]interface{})),
@@ -319,7 +319,7 @@ func resourceKubernetesClusterNodePoolCreate(d *schema.ResourceData, meta interf
Count: utils.Int32(int32(count)),
}
- if priority == string(containerservice.Spot) {
+ if priority == string(containerservice.ScaleSetPrioritySpot) {
profile.ScaleSetEvictionPolicy = containerservice.ScaleSetEvictionPolicy(evictionPolicy)
profile.SpotMaxPrice = utils.Float(spotMaxPrice)
} else {
@@ -515,7 +515,7 @@ func resourceKubernetesClusterNodePoolUpdate(d *schema.ResourceData, meta interf
// > You must replace your existing spot node pool with a new one to do operations such as upgrading
// > the Kubernetes version. To replace a spot node pool, create a new spot node pool with a different
// > version of Kubernetes, wait until its status is Ready, then remove the old node pool.
- if strings.EqualFold(string(props.ScaleSetPriority), string(containerservice.Spot)) {
+ if strings.EqualFold(string(props.ScaleSetPriority), string(containerservice.ScaleSetPrioritySpot)) {
// ^ the Scale Set Priority isn't returned when Regular
return fmt.Errorf("the Orchestrator Version cannot be updated when using a Spot Node Pool")
}
@@ -651,7 +651,7 @@ func resourceKubernetesClusterNodePoolRead(d *schema.ResourceData, meta interfac
}
d.Set("min_count", minCount)
- mode := string(containerservice.User)
+ mode := string(containerservice.AgentPoolModeUser)
if props.Mode != "" {
mode = string(props.Mode)
}
@@ -680,7 +680,7 @@ func resourceKubernetesClusterNodePoolRead(d *schema.ResourceData, meta interfac
}
d.Set("os_disk_size_gb", osDiskSizeGB)
- osDiskType := containerservice.Managed
+ osDiskType := containerservice.OSDiskTypeManaged
if props.OsDiskType != "" {
osDiskType = props.OsDiskType
}
@@ -688,7 +688,7 @@ func resourceKubernetesClusterNodePoolRead(d *schema.ResourceData, meta interfac
d.Set("os_type", string(props.OsType))
// not returned from the API if not Spot
- priority := string(containerservice.Regular)
+ priority := string(containerservice.ScaleSetPriorityRegular)
if props.ScaleSetPriority != "" {
priority = string(props.ScaleSetPriority)
}
@@ -703,7 +703,7 @@ func resourceKubernetesClusterNodePoolRead(d *schema.ResourceData, meta interfac
d.Set("spot_max_price", spotMaxPrice)
d.Set("vnet_subnet_id", props.VnetSubnetID)
- d.Set("vm_size", string(props.VMSize))
+ d.Set("vm_size", props.VMSize)
if err := d.Set("upgrade_settings", flattenUpgradeSettings(props.UpgradeSettings)); err != nil {
return fmt.Errorf("setting `upgrade_settings`: %+v", err)
diff --git a/azurerm/internal/services/containers/kubernetes_cluster_resource.go b/azurerm/internal/services/containers/kubernetes_cluster_resource.go
index 152b1b448efa0..949855b0c27ae 100644
--- a/azurerm/internal/services/containers/kubernetes_cluster_resource.go
+++ b/azurerm/internal/services/containers/kubernetes_cluster_resource.go
@@ -8,7 +8,7 @@ import (
"strings"
"time"
- "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
@@ -120,10 +120,10 @@ func resourceKubernetesCluster() *schema.Resource {
Optional: true,
Computed: true,
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.LeastWaste),
- string(containerservice.MostPods),
- string(containerservice.Priority),
- string(containerservice.Random),
+ string(containerservice.ExpanderLeastWaste),
+ string(containerservice.ExpanderMostPods),
+ string(containerservice.ExpanderPriority),
+ string(containerservice.ExpanderRandom),
}, false),
},
"max_graceful_termination_sec": {
@@ -327,8 +327,8 @@ func resourceKubernetesCluster() *schema.Resource {
Required: true,
ForceNew: true,
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.Azure),
- string(containerservice.Kubenet),
+ string(containerservice.NetworkPluginAzure),
+ string(containerservice.NetworkPluginKubenet),
}, false),
},
@@ -341,8 +341,8 @@ func resourceKubernetesCluster() *schema.Resource {
// https://github.com/Azure/AKS/issues/1954#issuecomment-759306712
// Transparent is already the default and only option for CNI
// Bridge is only kept for backward compatibility
- string(containerservice.Bridge),
- string(containerservice.Transparent),
+ string(containerservice.NetworkModeBridge),
+ string(containerservice.NetworkModeTransparent),
}, false),
},
@@ -392,12 +392,12 @@ func resourceKubernetesCluster() *schema.Resource {
"load_balancer_sku": {
Type: schema.TypeString,
Optional: true,
- Default: string(containerservice.Standard),
+ Default: string(containerservice.LoadBalancerSkuStandard),
ForceNew: true,
// TODO: fix the casing in the Swagger
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.Basic),
- string(containerservice.Standard),
+ string(containerservice.LoadBalancerSkuBasic),
+ string(containerservice.LoadBalancerSkuStandard),
}, true),
DiffSuppressFunc: suppress.CaseDifference,
},
@@ -406,10 +406,10 @@ func resourceKubernetesCluster() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Default: string(containerservice.LoadBalancer),
+ Default: string(containerservice.OutboundTypeLoadBalancer),
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.LoadBalancer),
- string(containerservice.UserDefinedRouting),
+ string(containerservice.OutboundTypeLoadBalancer),
+ string(containerservice.OutboundTypeUserDefinedRouting),
}, false),
},
@@ -648,10 +648,10 @@ func resourceKubernetesCluster() *schema.Resource {
// * Private clusters aren't currently supported.
// @jackofallops (2020-07-21) - Update:
// * sku_tier can now be upgraded in place, downgrade requires rebuild
- Default: string(containerservice.Free),
+ Default: string(containerservice.ManagedClusterSKUTierFree),
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.Free),
- string(containerservice.Paid),
+ string(containerservice.ManagedClusterSKUTierFree),
+ string(containerservice.ManagedClusterSKUTierPaid),
}, false),
},
@@ -1343,7 +1343,7 @@ func resourceKubernetesClusterRead(d *schema.ResourceData, meta interface{}) err
d.Set("location", azure.NormalizeLocation(*location))
}
- skuTier := string(containerservice.Free)
+ skuTier := string(containerservice.ManagedClusterSKUTierFree)
if resp.Sku != nil && resp.Sku.Tier != "" {
skuTier = string(resp.Sku.Tier)
}
diff --git a/azurerm/internal/services/containers/kubernetes_cluster_validate.go b/azurerm/internal/services/containers/kubernetes_cluster_validate.go
index e391e5188c85e..ec4b5c2ebd230 100644
--- a/azurerm/internal/services/containers/kubernetes_cluster_validate.go
+++ b/azurerm/internal/services/containers/kubernetes_cluster_validate.go
@@ -6,7 +6,7 @@ import (
"net/http"
"strings"
- "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/containers/client"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
diff --git a/azurerm/internal/services/containers/kubernetes_nodepool.go b/azurerm/internal/services/containers/kubernetes_nodepool.go
index 63c7896d3ec80..658bd74d2d092 100644
--- a/azurerm/internal/services/containers/kubernetes_nodepool.go
+++ b/azurerm/internal/services/containers/kubernetes_nodepool.go
@@ -7,7 +7,7 @@ import (
computeValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/compute/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/containers/validate"
- "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
@@ -34,10 +34,10 @@ func SchemaDefaultNodePool() *schema.Schema {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Default: string(containerservice.VirtualMachineScaleSets),
+ Default: string(containerservice.AgentPoolTypeVirtualMachineScaleSets),
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.AvailabilitySet),
- string(containerservice.VirtualMachineScaleSets),
+ string(containerservice.AgentPoolTypeAvailabilitySet),
+ string(containerservice.AgentPoolTypeVirtualMachineScaleSets),
}, false),
},
@@ -143,10 +143,10 @@ func SchemaDefaultNodePool() *schema.Schema {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
- Default: containerservice.Managed,
+ Default: containerservice.OSDiskTypeManaged,
ValidateFunc: validation.StringInSlice([]string{
- string(containerservice.Ephemeral),
- string(containerservice.Managed),
+ string(containerservice.OSDiskTypeEphemeral),
+ string(containerservice.OSDiskTypeManaged),
}, false),
},
@@ -243,17 +243,17 @@ func ExpandDefaultNodePool(d *schema.ResourceData) (*[]containerservice.ManagedC
NodeTaints: nodeTaints,
Tags: tags.Expand(t),
Type: containerservice.AgentPoolType(raw["type"].(string)),
- VMSize: containerservice.VMSizeTypes(raw["vm_size"].(string)),
+ VMSize: utils.String(raw["vm_size"].(string)),
// at this time the default node pool has to be Linux or the AKS cluster fails to provision with:
// Pods not in Running status: coredns-7fc597cc45-v5z7x,coredns-autoscaler-7ccc76bfbd-djl7j,metrics-server-cbd95f966-5rl97,tunnelfront-7d9884977b-wpbvn
// Windows agents can be configured via the separate node pool resource
- OsType: containerservice.Linux,
+ OsType: containerservice.OSTypeLinux,
// without this set the API returns:
// Code="MustDefineAtLeastOneSystemPool" Message="Must define at least one system pool."
// since this is the "default" node pool we can assume this is a system node pool
- Mode: containerservice.System,
+ Mode: containerservice.AgentPoolModeSystem,
UpgradeSettings: expandUpgradeSettings(raw["upgrade_settings"].([]interface{})),
@@ -282,7 +282,7 @@ func ExpandDefaultNodePool(d *schema.ResourceData) (*[]containerservice.ManagedC
profile.OsDiskSizeGB = utils.Int32(osDiskSizeGB)
}
- profile.OsDiskType = containerservice.Managed
+ profile.OsDiskType = containerservice.OSDiskTypeManaged
if osDiskType := raw["os_disk_type"].(string); osDiskType != "" {
profile.OsDiskType = containerservice.OSDiskType(raw["os_disk_type"].(string))
}
@@ -433,7 +433,7 @@ func FlattenDefaultNodePool(input *[]containerservice.ManagedClusterAgentPoolPro
osDiskSizeGB = int(*agentPool.OsDiskSizeGB)
}
- osDiskType := containerservice.Managed
+ osDiskType := containerservice.OSDiskTypeManaged
if agentPool.OsDiskType != "" {
osDiskType = agentPool.OsDiskType
}
@@ -453,6 +453,11 @@ func FlattenDefaultNodePool(input *[]containerservice.ManagedClusterAgentPoolPro
proximityPlacementGroupId = *agentPool.ProximityPlacementGroupID
}
+ vmSize := ""
+ if agentPool.VMSize != nil {
+ vmSize = *agentPool.VMSize
+ }
+
upgradeSettings := flattenUpgradeSettings(agentPool.UpgradeSettings)
return &[]interface{}{
@@ -473,7 +478,7 @@ func FlattenDefaultNodePool(input *[]containerservice.ManagedClusterAgentPoolPro
"os_disk_type": string(osDiskType),
"tags": tags.Flatten(agentPool.Tags),
"type": string(agentPool.Type),
- "vm_size": string(agentPool.VMSize),
+ "vm_size": vmSize,
"orchestrator_version": orchestratorVersion,
"proximity_placement_group_id": proximityPlacementGroupId,
"upgrade_settings": upgradeSettings,
@@ -504,7 +509,7 @@ func findDefaultNodePool(input *[]containerservice.ManagedClusterAgentPoolProfil
if v.Name == nil {
continue
}
- if v.Mode != containerservice.System {
+ if v.Mode != containerservice.AgentPoolModeSystem {
continue
}
diff --git a/azurerm/internal/services/containers/validate/container_registry_scope_map_name.go b/azurerm/internal/services/containers/validate/container_registry_scope_map_name.go
new file mode 100644
index 0000000000000..2a35b0c2989df
--- /dev/null
+++ b/azurerm/internal/services/containers/validate/container_registry_scope_map_name.go
@@ -0,0 +1,24 @@
+package validate
+
+import (
+ "fmt"
+ "regexp"
+)
+
+func ContainerRegistryScopeMapName(v interface{}, k string) (warnings []string, errors []error) {
+ value := v.(string)
+ if !regexp.MustCompile(`^[a-zA-Z0-9\-]+$`).MatchString(value) {
+ errors = append(errors, fmt.Errorf(
+ "%q may only contain alphanumeric characters and hyphens, got %q", k, value))
+ }
+
+ if len(value) < 5 {
+ errors = append(errors, fmt.Errorf("%q must be at least 5 characters: %q", k, value))
+ }
+
+ if len(value) > 50 {
+ errors = append(errors, fmt.Errorf("%q cannot be longer than 50 characters: %q (length %d)", k, value, len(value)))
+ }
+
+ return warnings, errors
+}
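The validator above enforces a 5–50 character, alphanumeric-and-hyphen rule, and the upper bound is boundary-sensitive: a `>=` comparison would reject an exactly-50-character name even though the error message only forbids names *longer* than 50. A minimal standalone sketch of the intended rule (the helper name `scopeMapNameErrors` is hypothetical; no Terraform SDK involved):

```go
package main

import (
	"fmt"
	"regexp"
)

// scopeMapNameErrors mirrors the 5–50 character, alphanumeric-and-hyphen
// rule from the validator above. The helper name is hypothetical.
func scopeMapNameErrors(name string) []string {
	var errs []string
	if !regexp.MustCompile(`^[a-zA-Z0-9\-]+$`).MatchString(name) {
		errs = append(errs, "only alphanumeric characters and hyphens are allowed")
	}
	if len(name) < 5 {
		errs = append(errs, "must be at least 5 characters")
	}
	if len(name) > 50 { // > not >=, so a 50-character name is still valid
		errs = append(errs, "cannot be longer than 50 characters")
	}
	return errs
}

func main() {
	for _, n := range []string{"four", "hello-world", "hello_world"} {
		fmt.Printf("%q -> %d error(s)\n", n, len(scopeMapNameErrors(n)))
	}
}
```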
diff --git a/azurerm/internal/services/containers/validate/container_registry_scope_map_name_test.go b/azurerm/internal/services/containers/validate/container_registry_scope_map_name_test.go
new file mode 100644
index 0000000000000..234364a1f67aa
--- /dev/null
+++ b/azurerm/internal/services/containers/validate/container_registry_scope_map_name_test.go
@@ -0,0 +1,68 @@
+package validate_test
+
+import (
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/containers/validate"
+)
+
+func TestContainerRegistryScopeMapName(t *testing.T) {
+ cases := []struct {
+ Value string
+ ErrCount int
+ }{
+ {
+ Value: "four",
+ ErrCount: 1,
+ },
+ {
+ Value: "5five",
+ ErrCount: 0,
+ },
+ {
+ Value: "five-123",
+ ErrCount: 0,
+ },
+ {
+ Value: "hello-world",
+ ErrCount: 0,
+ },
+ {
+ Value: "hello_world",
+ ErrCount: 1,
+ },
+ {
+ Value: "helloWorld",
+ ErrCount: 0,
+ },
+ {
+ Value: "helloworld12",
+ ErrCount: 0,
+ },
+ {
+ Value: "hello@world",
+ ErrCount: 1,
+ },
+ {
+ Value: "qfvbdsbvipqdbwsbddbdcwqffewsqwcdw21ddwqwd3324120",
+ ErrCount: 0,
+ },
+ {
+ Value: "qfvbdsbvipqdbwsbddbdcwqffewsqwcdw21ddwqwd33241202",
+ ErrCount: 0,
+ },
+ {
+ Value: "qfvbdsbvipqdbwsbddbdcwqfjjfewsqwcdw21ddwqwd3324120fadfadf",
+ ErrCount: 1,
+ },
+ }
+
+ for _, tc := range cases {
+ _, errors := validate.ContainerRegistryScopeMapName(tc.Value, "azurerm_container_registry_scope_map")
+
+ if len(errors) != tc.ErrCount {
+ t.Fatalf("Expected %d validation error(s) for Container Registry Scope Map Name %q, got %d: %v", tc.ErrCount, tc.Value, len(errors), errors)
+ }
+ }
+}
diff --git a/azurerm/internal/services/cosmos/common/cors_rule.go b/azurerm/internal/services/cosmos/common/cors_rule.go
new file mode 100644
index 0000000000000..94dae87d09419
--- /dev/null
+++ b/azurerm/internal/services/cosmos/common/cors_rule.go
@@ -0,0 +1,140 @@
+package common
+
+import (
+ "strings"
+
+ "github.com/Azure/azure-sdk-for-go/services/cosmos-db/mgmt/2021-01-15/documentdb"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/validation"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+func SchemaCorsRule() *schema.Schema {
+ allowedMethods := []string{
+ "DELETE",
+ "GET",
+ "HEAD",
+ "MERGE",
+ "POST",
+ "OPTIONS",
+ "PUT",
+ "PATCH",
+ }
+
+ return &schema.Schema{
+ Type: schema.TypeList,
+ Optional: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "allowed_origins": {
+ Type: schema.TypeList,
+ Required: true,
+ MaxItems: 64,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+
+ "exposed_headers": {
+ Type: schema.TypeList,
+ Required: true,
+ MaxItems: 64,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+
+ "allowed_headers": {
+ Type: schema.TypeList,
+ Required: true,
+ MaxItems: 64,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+
+ "allowed_methods": {
+ Type: schema.TypeList,
+ Required: true,
+ MaxItems: 64,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringInSlice(allowedMethods, false),
+ },
+ },
+
+ "max_age_in_seconds": {
+ Type: schema.TypeInt,
+ Required: true,
+ ValidateFunc: validation.IntBetween(1, 2000000000),
+ },
+ },
+ },
+ }
+}
+
+func ExpandCosmosCorsRule(input []interface{}) *[]documentdb.CorsPolicy {
+ if len(input) == 0 || input[0] == nil {
+ return nil
+ }
+
+ corsRules := make([]documentdb.CorsPolicy, 0)
+
+ for _, attr := range input {
+ corsRuleAttr := attr.(map[string]interface{})
+ corsRule := documentdb.CorsPolicy{}
+ corsRule.AllowedOrigins = utils.String(strings.Join(*utils.ExpandStringSlice(corsRuleAttr["allowed_origins"].([]interface{})), ","))
+ corsRule.ExposedHeaders = utils.String(strings.Join(*utils.ExpandStringSlice(corsRuleAttr["exposed_headers"].([]interface{})), ","))
+ corsRule.AllowedHeaders = utils.String(strings.Join(*utils.ExpandStringSlice(corsRuleAttr["allowed_headers"].([]interface{})), ","))
+ corsRule.AllowedMethods = utils.String(strings.Join(*utils.ExpandStringSlice(corsRuleAttr["allowed_methods"].([]interface{})), ","))
+ corsRule.MaxAgeInSeconds = utils.Int64(int64(corsRuleAttr["max_age_in_seconds"].(int)))
+
+ corsRules = append(corsRules, corsRule)
+ }
+
+ return &corsRules
+}
+
+func FlattenCosmosCorsRule(input *[]documentdb.CorsPolicy) []interface{} {
+ corsRules := make([]interface{}, 0)
+
+ if input == nil || len(*input) == 0 {
+ return corsRules
+ }
+
+ for _, corsRule := range *input {
+ var maxAgeInSeconds int
+
+ if corsRule.MaxAgeInSeconds != nil {
+ maxAgeInSeconds = int(*corsRule.MaxAgeInSeconds)
+ }
+
+ corsRules = append(corsRules, map[string]interface{}{
+ "allowed_headers": flattenCorsProperty(corsRule.AllowedHeaders),
+ "allowed_origins": flattenCorsProperty(corsRule.AllowedOrigins),
+ "allowed_methods": flattenCorsProperty(corsRule.AllowedMethods),
+ "exposed_headers": flattenCorsProperty(corsRule.ExposedHeaders),
+ "max_age_in_seconds": maxAgeInSeconds,
+ })
+ }
+
+ return corsRules
+}
+
+func flattenCorsProperty(input *string) []interface{} {
+ results := make([]interface{}, 0)
+ if input == nil {
+ return results
+ }
+
+ for _, origin := range strings.Split(*input, ",") {
+ results = append(results, origin)
+ }
+
+ return results
+}
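Cosmos DB stores each CORS list (`allowedOrigins`, `allowedMethods`, and so on) as a single comma-joined string, which is why the expand helper joins the Terraform list into a `*string` and the flatten helper splits it back out. A standalone sketch of that round trip using plain strings (no SDK types; `joinCors`/`splitCors` are illustrative stand-ins):

```go
package main

import (
	"fmt"
	"strings"
)

// joinCors collapses a list of values into the comma-joined form the
// Cosmos DB API stores; splitCors reverses it.
func joinCors(values []string) string {
	return strings.Join(values, ",")
}

func splitCors(joined string) []string {
	if joined == "" {
		return nil
	}
	return strings.Split(joined, ",")
}

func main() {
	origins := []string{"http://www.example.com", "http://www.test.com"}
	joined := joinCors(origins)
	fmt.Println(joined)            // http://www.example.com,http://www.test.com
	fmt.Println(splitCors(joined)) // [http://www.example.com http://www.test.com]
}
```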
diff --git a/azurerm/internal/services/cosmos/cosmosdb_account_resource.go b/azurerm/internal/services/cosmos/cosmosdb_account_resource.go
index 5c778fa2592dc..fee2f293b8793 100644
--- a/azurerm/internal/services/cosmos/cosmosdb_account_resource.go
+++ b/azurerm/internal/services/cosmos/cosmosdb_account_resource.go
@@ -330,20 +330,50 @@ func resourceCosmosDbAccount() *schema.Resource {
"interval_in_minutes": {
Type: schema.TypeInt,
Optional: true,
- Default: 240,
+ Computed: true,
ValidateFunc: validation.IntBetween(60, 1440),
},
"retention_in_hours": {
Type: schema.TypeInt,
Optional: true,
- Default: 8,
+ Computed: true,
ValidateFunc: validation.IntBetween(8, 720),
},
},
},
},
+ "identity": {
+ Type: schema.TypeList,
+ Optional: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ // only system assigned identity is supported
+ "type": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validation.StringInSlice([]string{
+ string(documentdb.ResourceIdentityTypeSystemAssigned),
+ }, false),
+ },
+
+ "principal_id": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+
+ "tenant_id": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ },
+ },
+ },
+
+ "cors_rule": common.SchemaCorsRule(),
+
// computed
"endpoint": {
Type: schema.TypeString,
@@ -495,6 +525,7 @@ func resourceCosmosDbAccountCreate(d *schema.ResourceData, meta interface{}) err
account := documentdb.DatabaseAccountCreateUpdateParameters{
Location: utils.String(location),
Kind: documentdb.DatabaseAccountKind(kind),
+ Identity: expandCosmosdbAccountIdentity(d.Get("identity").([]interface{})),
DatabaseAccountCreateUpdateProperties: &documentdb.DatabaseAccountCreateUpdateProperties{
DatabaseAccountOfferType: utils.String(offerType),
IPRules: common.CosmosDBIpRangeFilterToIpRules(ipRangeFilter),
@@ -508,6 +539,7 @@ func resourceCosmosDbAccountCreate(d *schema.ResourceData, meta interface{}) err
EnableMultipleWriteLocations: utils.Bool(enableMultipleWriteLocations),
PublicNetworkAccess: publicNetworkAccess,
EnableAnalyticalStorage: utils.Bool(enableAnalyticalStorage),
+ Cors: common.ExpandCosmosCorsRule(d.Get("cors_rule").([]interface{})),
DisableKeyBasedMetadataWriteAccess: utils.Bool(!d.Get("access_key_metadata_writes_enabled").(bool)),
NetworkACLBypass: networkByPass,
NetworkACLBypassResourceIds: utils.ExpandStringSlice(d.Get("network_acl_bypass_ids").([]interface{})),
@@ -625,6 +657,7 @@ func resourceCosmosDbAccountUpdate(d *schema.ResourceData, meta interface{}) err
account := documentdb.DatabaseAccountCreateUpdateParameters{
Location: utils.String(location),
Kind: documentdb.DatabaseAccountKind(kind),
+ Identity: expandCosmosdbAccountIdentity(d.Get("identity").([]interface{})),
DatabaseAccountCreateUpdateProperties: &documentdb.DatabaseAccountCreateUpdateProperties{
DatabaseAccountOfferType: utils.String(offerType),
IPRules: common.CosmosDBIpRangeFilterToIpRules(ipRangeFilter),
@@ -638,6 +671,7 @@ func resourceCosmosDbAccountUpdate(d *schema.ResourceData, meta interface{}) err
EnableMultipleWriteLocations: resp.EnableMultipleWriteLocations,
PublicNetworkAccess: publicNetworkAccess,
EnableAnalyticalStorage: utils.Bool(enableAnalyticalStorage),
+ Cors: common.ExpandCosmosCorsRule(d.Get("cors_rule").([]interface{})),
DisableKeyBasedMetadataWriteAccess: utils.Bool(!d.Get("access_key_metadata_writes_enabled").(bool)),
NetworkACLBypass: networkByPass,
NetworkACLBypassResourceIds: utils.ExpandStringSlice(d.Get("network_acl_bypass_ids").([]interface{})),
@@ -743,6 +777,12 @@ func resourceCosmosDbAccountRead(d *schema.ResourceData, meta interface{}) error
d.Set("kind", string(resp.Kind))
+ if v := resp.Identity; v != nil {
+ if err := d.Set("identity", flattenAzureRmdocumentdbMachineIdentity(v)); err != nil {
+ return fmt.Errorf("setting `identity`: %+v", err)
+ }
+ }
+
if props := resp.DatabaseAccountGetProperties; props != nil {
d.Set("offer_type", string(props.DatabaseAccountOfferType))
d.Set("ip_range_filter", common.CosmosDBIpRulesToIpRangeFilter(props.IPRules))
@@ -799,6 +839,8 @@ func resourceCosmosDbAccountRead(d *schema.ResourceData, meta interface{}) error
if err = d.Set("backup", policy); err != nil {
return fmt.Errorf("setting `backup`: %+v", err)
}
+
+ d.Set("cors_rule", common.FlattenCosmosCorsRule(props.Cors))
}
readEndpoints := make([]string, 0)
@@ -1281,3 +1323,39 @@ func flattenCosmosdbAccountBackup(input documentdb.BasicBackupPolicy) ([]interfa
return nil, fmt.Errorf("unknown `type` in `backup`: %+v", input)
}
}
+
+func expandCosmosdbAccountIdentity(vs []interface{}) *documentdb.ManagedServiceIdentity {
+ if len(vs) == 0 || vs[0] == nil {
+ return &documentdb.ManagedServiceIdentity{
+ Type: documentdb.ResourceIdentityTypeNone,
+ }
+ }
+
+ v := vs[0].(map[string]interface{})
+
+ return &documentdb.ManagedServiceIdentity{
+ Type: documentdb.ResourceIdentityType(v["type"].(string)),
+ }
+}
+
+func flattenAzureRmdocumentdbMachineIdentity(identity *documentdb.ManagedServiceIdentity) []interface{} {
+ if identity == nil || identity.Type == documentdb.ResourceIdentityTypeNone {
+ return make([]interface{}, 0)
+ }
+
+ var principalID, tenantID string
+ if identity.PrincipalID != nil {
+ principalID = *identity.PrincipalID
+ }
+
+ if identity.TenantID != nil {
+ tenantID = *identity.TenantID
+ }
+
+ return []interface{}{map[string]interface{}{
+ "type": string(identity.Type),
+ "principal_id": principalID,
+ "tenant_id": tenantID,
+ },
+ }
+}
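The identity flatten helper above guards each pointer field before dereferencing and collapses an absent or `None` identity to an empty list, so Terraform state never shows a phantom `identity` block. A standalone sketch of that shape (a local `identity` struct stands in for `documentdb.ManagedServiceIdentity`; this is an illustration, not the SDK type):

```go
package main

import "fmt"

// identity is a local stand-in for documentdb.ManagedServiceIdentity.
type identity struct {
	Type        string
	PrincipalID *string
	TenantID    *string
}

// flattenIdentity mirrors the nil-guarded flatten above: a nil or
// "None" identity becomes an empty list, and nil pointer fields
// flatten to empty strings.
func flattenIdentity(id *identity) []map[string]string {
	if id == nil || id.Type == "None" {
		return []map[string]string{}
	}
	principalID, tenantID := "", ""
	if id.PrincipalID != nil {
		principalID = *id.PrincipalID
	}
	if id.TenantID != nil {
		tenantID = *id.TenantID
	}
	return []map[string]string{{
		"type":         id.Type,
		"principal_id": principalID,
		"tenant_id":    tenantID,
	}}
}

func main() {
	p := "00000000-0000-0000-0000-000000000000"
	fmt.Println(flattenIdentity(&identity{Type: "SystemAssigned", PrincipalID: &p}))
	fmt.Println(len(flattenIdentity(nil))) // 0
}
```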
diff --git a/azurerm/internal/services/cosmos/cosmosdb_account_resource_test.go b/azurerm/internal/services/cosmos/cosmosdb_account_resource_test.go
index 926c36f548aa0..440d82ae6b38a 100644
--- a/azurerm/internal/services/cosmos/cosmosdb_account_resource_test.go
+++ b/azurerm/internal/services/cosmos/cosmosdb_account_resource_test.go
@@ -619,6 +619,37 @@ func TestAccCosmosDBAccount_vNetFilters(t *testing.T) {
})
}
+func TestAccCosmosDBAccount_identity(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_cosmosdb_account", "test")
+ r := CosmosDBAccountResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basicMongoDB(data, documentdb.Session),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.systemAssignedIdentity(data, documentdb.Session),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("identity.0.principal_id").Exists(),
+ check.That(data.ResourceName).Key("identity.0.tenant_id").Exists(),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.basicMongoDB(data, documentdb.Session),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func TestAccCosmosDBAccount_backup(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_cosmosdb_account", "test")
r := CosmosDBAccountResource{}
@@ -646,8 +677,6 @@ func TestAccCosmosDBAccount_backup(t *testing.T) {
Check: resource.ComposeAggregateTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("backup.0.type").HasValue("Periodic"),
- check.That(data.ResourceName).Key("backup.0.interval_in_minutes").HasValue("240"),
- check.That(data.ResourceName).Key("backup.0.retention_in_hours").HasValue("8"),
),
},
data.ImportStep(),
@@ -977,6 +1006,14 @@ resource "azurerm_cosmosdb_account" "test" {
failover_priority = 2
}
+ cors_rule {
+ allowed_origins = ["http://www.example.com"]
+ exposed_headers = ["x-tempo-*"]
+ allowed_headers = ["x-tempo-*"]
+ allowed_methods = ["GET", "PUT"]
+ max_age_in_seconds = 500
+ }
+
access_key_metadata_writes_enabled = false
network_acl_bypass_for_azure_services = true
}
@@ -1031,6 +1068,14 @@ resource "azurerm_cosmosdb_account" "test" {
failover_priority = 2
}
+ cors_rule {
+ allowed_origins = ["http://www.example.com"]
+ exposed_headers = ["x-tempo-*"]
+ allowed_headers = ["x-tempo-*"]
+ allowed_methods = ["GET", "PUT"]
+ max_age_in_seconds = 500
+ }
+
access_key_metadata_writes_enabled = false
network_acl_bypass_for_azure_services = true
}
@@ -1160,6 +1205,15 @@ resource "azurerm_cosmosdb_account" "test" {
location = "%[6]s"
failover_priority = 2
}
+
+ cors_rule {
+ allowed_origins = ["http://www.example.com", "http://www.test.com"]
+ exposed_headers = ["x-tempo-*", "x-method-*"]
+ allowed_headers = ["*"]
+ allowed_methods = ["GET"]
+ max_age_in_seconds = 2000000000
+ }
+
access_key_metadata_writes_enabled = true
}
`, r.completePreReqs(data), data.RandomInteger, string(kind), string(consistency), data.Locations.Secondary, data.Locations.Ternary)
@@ -1208,6 +1262,14 @@ resource "azurerm_cosmosdb_account" "test" {
location = "%[5]s"
failover_priority = 2
}
+
+ cors_rule {
+ allowed_origins = ["http://www.example.com", "http://www.test.com"]
+ exposed_headers = ["x-tempo-*", "x-method-*"]
+ allowed_headers = ["*"]
+ allowed_methods = ["GET"]
+ max_age_in_seconds = 2000000000
+ }
access_key_metadata_writes_enabled = true
}
`, r.completePreReqs(data), data.RandomInteger, string(consistency), data.Locations.Secondary, data.Locations.Ternary)
@@ -1541,6 +1603,42 @@ resource "azurerm_cosmosdb_account" "test" {
`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, string(kind), string(consistency))
}
+func (CosmosDBAccountResource) mongoAnalyticalStorage(data acceptance.TestData, consistency documentdb.DefaultConsistencyLevel) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-cosmos-%d"
+ location = "%s"
+}
+
+resource "azurerm_cosmosdb_account" "test" {
+ name = "acctest-ca-%d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ offer_type = "Standard"
+ kind = "MongoDB"
+
+ analytical_storage_enabled = true
+
+ consistency_policy {
+ consistency_level = "%s"
+ }
+
+ capabilities {
+ name = "EnableMongo"
+ }
+
+ geo_location {
+ location = azurerm_resource_group.test.location
+ failover_priority = 0
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, string(consistency))
+}
+
func checkAccCosmosDBAccount_basic(data acceptance.TestData, consistency documentdb.DefaultConsistencyLevel, locationCount int) resource.TestCheckFunc {
return resource.ComposeTestCheckFunc(
check.That(data.ResourceName).Key("name").Exists(),
@@ -1710,6 +1808,44 @@ resource "azurerm_cosmosdb_account" "test" {
`, data.RandomInteger, data.Locations.Primary, data.RandomString, data.RandomString, data.RandomInteger, string(kind), string(consistency))
}
+func (CosmosDBAccountResource) systemAssignedIdentity(data acceptance.TestData, consistency documentdb.DefaultConsistencyLevel) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-cosmos-%d"
+ location = "%s"
+}
+
+resource "azurerm_cosmosdb_account" "test" {
+ name = "acctest-ca-%d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ offer_type = "Standard"
+ kind = "MongoDB"
+
+ capabilities {
+ name = "EnableMongo"
+ }
+
+ consistency_policy {
+ consistency_level = "%s"
+ }
+
+ geo_location {
+ location = azurerm_resource_group.test.location
+ failover_priority = 0
+ }
+
+ identity {
+ type = "SystemAssigned"
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, string(consistency))
+}
+
func (CosmosDBAccountResource) basicWithBackupPeriodic(data acceptance.TestData, kind documentdb.DatabaseAccountKind, consistency documentdb.DefaultConsistencyLevel) string {
return fmt.Sprintf(`
provider "azurerm" {
@@ -1774,7 +1910,9 @@ resource "azurerm_cosmosdb_account" "test" {
}
backup {
- type = "Periodic"
+ type = "Periodic"
+ interval_in_minutes = 60
+ retention_in_hours = 8
}
}
`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, string(kind), string(consistency))
diff --git a/azurerm/internal/services/cosmos/cosmosdb_cassandra_table_resource.go b/azurerm/internal/services/cosmos/cosmosdb_cassandra_table_resource.go
index 92b61d01ef5aa..23982be700820 100644
--- a/azurerm/internal/services/cosmos/cosmosdb_cassandra_table_resource.go
+++ b/azurerm/internal/services/cosmos/cosmosdb_cassandra_table_resource.go
@@ -59,6 +59,14 @@ func resourceCosmosDbCassandraTable() *schema.Resource {
ValidateFunc: validation.IntAtLeast(-1),
},
+ "analytical_storage_ttl": {
+ Type: schema.TypeInt,
+ Optional: true,
+ ForceNew: true,
+ Default: -2,
+ ValidateFunc: validation.IntAtLeast(-1),
+ },
+
"schema": common.CassandraTableSchemaPropertySchema(),
"throughput": {
@@ -115,6 +123,10 @@ func resourceCosmosDbCassandraTableCreate(d *schema.ResourceData, meta interface
table.CassandraTableCreateUpdateProperties.Resource.DefaultTTL = utils.Int32(int32(defaultTTL.(int)))
}
+ if analyticalTTL := d.Get("analytical_storage_ttl").(int); analyticalTTL != -2 {
+ table.CassandraTableCreateUpdateProperties.Resource.AnalyticalStorageTTL = utils.Int32(int32(analyticalTTL))
+ }
+
if throughput, hasThroughput := d.GetOk("throughput"); hasThroughput {
if throughput != 0 {
table.CassandraTableCreateUpdateProperties.Options.Throughput = common.ConvertThroughputFromResourceData(throughput)
@@ -184,7 +196,7 @@ func resourceCosmosDbCassandraTableUpdate(d *schema.ResourceData, meta interface
if err != nil {
if response.WasNotFound(throughputFuture.Response()) {
return fmt.Errorf("setting Throughput for %s: %+v - "+
- "If the collection has not been created with an initial throughput, you cannot configure it later.", *id, err)
+ "If the collection has not been created with an initial throughput, you cannot configure it later", *id, err)
}
}
@@ -229,6 +241,12 @@ func resourceCosmosDbCassandraTableRead(d *schema.ResourceData, meta interface{}
d.Set("default_ttl", defaultTTL)
}
+ analyticalTTL := -2
+ if res.AnalyticalStorageTTL != nil {
+ analyticalTTL = int(*res.AnalyticalStorageTTL)
+ }
+ d.Set("analytical_storage_ttl", analyticalTTL)
+
if schema := res.Schema; schema != nil {
d.Set("schema", flattenTableSchema(schema))
}
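Because `-1` (and `0`) are meaningful API values for `analytical_storage_ttl`, the schema above uses `-2` as its "unset" sentinel: expand skips the property when the config holds the sentinel, and read maps a missing API value back to it. A standalone sketch of that sentinel round trip (local helpers for illustration, no SDK):

```go
package main

import "fmt"

const ttlUnset = -2 // sentinel: -1 and 0 are meaningful API values

// expandTTL returns nil when the user left the field unset, so the
// property is omitted from the create request.
func expandTTL(configured int) *int32 {
	if configured == ttlUnset {
		return nil
	}
	v := int32(configured)
	return &v
}

// flattenTTL maps a missing API value back to the sentinel.
func flattenTTL(apiValue *int32) int {
	if apiValue == nil {
		return ttlUnset
	}
	return int(*apiValue)
}

func main() {
	fmt.Println(expandTTL(ttlUnset) == nil) // true
	fmt.Println(flattenTTL(expandTTL(0)))   // 0
	fmt.Println(flattenTTL(nil))            // -2
}
```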
diff --git a/azurerm/internal/services/cosmos/cosmosdb_cassandra_table_resource_test.go b/azurerm/internal/services/cosmos/cosmosdb_cassandra_table_resource_test.go
index 663d5579a471d..d206da66dd33f 100644
--- a/azurerm/internal/services/cosmos/cosmosdb_cassandra_table_resource_test.go
+++ b/azurerm/internal/services/cosmos/cosmosdb_cassandra_table_resource_test.go
@@ -47,6 +47,22 @@ func TestAccCosmosDbCassandraTable_basic(t *testing.T) {
})
}
+func TestAccCosmosDbCassandraTable_analyticalStorageTTL(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_cosmosdb_cassandra_table", "test")
+ r := CosmosDBCassandraTableResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.analyticalStorageTTL(data),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func (CosmosDBCassandraTableResource) basic(data acceptance.TestData) string {
return fmt.Sprintf(`
%[1]s
@@ -73,3 +89,72 @@ resource "azurerm_cosmosdb_cassandra_table" "test" {
}
`, CosmosDbCassandraKeyspaceResource{}.basic(data), data.RandomInteger)
}
+
+func (CosmosDBCassandraTableResource) analyticalStorageTTLTemplate(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-cosmos-%[1]d"
+ location = "%[2]s"
+}
+
+resource "azurerm_cosmosdb_account" "test" {
+ name = "acctest-ca-%[1]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ offer_type = "Standard"
+ kind = "GlobalDocumentDB"
+ analytical_storage_enabled = true
+
+ consistency_policy {
+ consistency_level = "Strong"
+ }
+
+ capabilities {
+ name = "EnableCassandra"
+ }
+
+ geo_location {
+ location = azurerm_resource_group.test.location
+ failover_priority = 0
+ }
+}
+
+resource "azurerm_cosmosdb_cassandra_keyspace" "test" {
+ name = "acctest-%[1]d"
+ resource_group_name = azurerm_cosmosdb_account.test.resource_group_name
+ account_name = azurerm_cosmosdb_account.test.name
+}
+`, data.RandomInteger, data.Locations.Primary)
+}
+
+func (r CosmosDBCassandraTableResource) analyticalStorageTTL(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%[1]s
+
+resource "azurerm_cosmosdb_cassandra_table" "test" {
+ name = "acctest-CCASST-%[2]d"
+ cassandra_keyspace_id = azurerm_cosmosdb_cassandra_keyspace.test.id
+ analytical_storage_ttl = 0
+
+ schema {
+ column {
+ name = "test1"
+ type = "ascii"
+ }
+
+ column {
+ name = "test2"
+ type = "int"
+ }
+
+ partition_key {
+ name = "test1"
+ }
+ }
+}
+`, r.analyticalStorageTTLTemplate(data), data.RandomInteger)
+}
diff --git a/azurerm/internal/services/cosmos/cosmosdb_mongo_collection_resource.go b/azurerm/internal/services/cosmos/cosmosdb_mongo_collection_resource.go
index a99ce5a4643ff..e0786a511b907 100644
--- a/azurerm/internal/services/cosmos/cosmosdb_mongo_collection_resource.go
+++ b/azurerm/internal/services/cosmos/cosmosdb_mongo_collection_resource.go
@@ -83,6 +83,12 @@ func resourceCosmosDbMongoCollection() *schema.Resource {
ValidateFunc: validation.IntAtLeast(-1),
},
+ "analytical_storage_ttl": {
+ Type: schema.TypeInt,
+ Optional: true,
+ ValidateFunc: validation.IntAtLeast(-1),
+ },
+
"throughput": {
Type: schema.TypeInt,
Optional: true,
@@ -172,6 +178,10 @@ func resourceCosmosDbMongoCollectionCreate(d *schema.ResourceData, meta interfac
},
}
+ if analyticalStorageTTL, ok := d.GetOk("analytical_storage_ttl"); ok {
+ db.MongoDBCollectionCreateUpdateProperties.Resource.AnalyticalStorageTTL = utils.Int32(int32(analyticalStorageTTL.(int)))
+ }
+
if throughput, hasThroughput := d.GetOk("throughput"); hasThroughput {
if throughput != 0 {
db.MongoDBCollectionCreateUpdateProperties.Options.Throughput = common.ConvertThroughputFromResourceData(throughput)
@@ -241,6 +251,10 @@ func resourceCosmosDbMongoCollectionUpdate(d *schema.ResourceData, meta interfac
},
}
+ if analyticalStorageTTL, ok := d.GetOk("analytical_storage_ttl"); ok {
+ db.MongoDBCollectionCreateUpdateProperties.Resource.AnalyticalStorageTTL = utils.Int32(int32(analyticalStorageTTL.(int)))
+ }
+
if shardKey := d.Get("shard_key").(string); shardKey != "" {
db.MongoDBCollectionCreateUpdateProperties.Resource.ShardKey = map[string]*string{
shardKey: utils.String("Hash"), // looks like only hash is supported for now
@@ -337,6 +351,8 @@ func resourceCosmosDbMongoCollectionRead(d *schema.ResourceData, meta interface{
if err := d.Set("system_indexes", systemIndexes); err != nil {
return fmt.Errorf("failed to set `system_indexes`: %+v", err)
}
+
+ d.Set("analytical_storage_ttl", res.AnalyticalStorageTTL)
}
}
diff --git a/azurerm/internal/services/cosmos/cosmosdb_mongo_collection_resource_test.go b/azurerm/internal/services/cosmos/cosmosdb_mongo_collection_resource_test.go
index 21a5c3c775e56..eca61c321792f 100644
--- a/azurerm/internal/services/cosmos/cosmosdb_mongo_collection_resource_test.go
+++ b/azurerm/internal/services/cosmos/cosmosdb_mongo_collection_resource_test.go
@@ -129,6 +129,22 @@ func TestAccCosmosDbMongoCollection_withIndex(t *testing.T) {
})
}
+func TestAccCosmosDbMongoCollection_analyticalStorageTTL(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_cosmosdb_mongo_collection", "test")
+ r := CosmosMongoCollectionResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.analyticalStorageTTL(data),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("analytical_storage_ttl").HasValue("600"),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func TestAccCosmosDbMongoCollection_autoscale(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_cosmosdb_mongo_collection", "test")
r := CosmosMongoCollectionResource{}
@@ -389,3 +405,29 @@ resource "azurerm_cosmosdb_mongo_collection" "test" {
}
`, CosmosDBAccountResource{}.capabilities(data, documentdb.MongoDB, []string{"EnableMongo", "EnableServerless"}), data.RandomInteger)
}
+
+func (CosmosMongoCollectionResource) analyticalStorageTTL(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%[1]s
+
+resource "azurerm_cosmosdb_mongo_database" "test" {
+ name = "acctest-%[2]d"
+ resource_group_name = azurerm_cosmosdb_account.test.resource_group_name
+ account_name = azurerm_cosmosdb_account.test.name
+}
+
+resource "azurerm_cosmosdb_mongo_collection" "test" {
+ name = "acctest-%[2]d"
+ resource_group_name = azurerm_cosmosdb_mongo_database.test.resource_group_name
+ account_name = azurerm_cosmosdb_mongo_database.test.account_name
+ database_name = azurerm_cosmosdb_mongo_database.test.name
+
+ index {
+ keys = ["_id"]
+ unique = true
+ }
+
+ analytical_storage_ttl = 600
+}
+`, CosmosDBAccountResource{}.mongoAnalyticalStorage(data, documentdb.Eventual), data.RandomInteger)
+}
diff --git a/azurerm/internal/services/cosmos/cosmosdb_sql_container_resource.go b/azurerm/internal/services/cosmos/cosmosdb_sql_container_resource.go
index 96fac23c42c84..82c5fb14902d7 100644
--- a/azurerm/internal/services/cosmos/cosmosdb_sql_container_resource.go
+++ b/azurerm/internal/services/cosmos/cosmosdb_sql_container_resource.go
@@ -93,6 +93,12 @@ func resourceCosmosDbSQLContainer() *schema.Resource {
"autoscale_settings": common.DatabaseAutoscaleSettingsSchema(),
+ "analytical_storage_ttl": {
+ Type: schema.TypeInt,
+ Optional: true,
+ ValidateFunc: validation.IntAtLeast(-1),
+ },
+
"default_ttl": {
Type: schema.TypeInt,
Optional: true,
@@ -156,9 +162,8 @@ func resourceCosmosDbSQLContainerCreate(d *schema.ResourceData, meta interface{}
db := documentdb.SQLContainerCreateUpdateParameters{
SQLContainerCreateUpdateProperties: &documentdb.SQLContainerCreateUpdateProperties{
Resource: &documentdb.SQLContainerResource{
- ID: &name,
- IndexingPolicy: indexingPolicy,
-
+ ID: &name,
+ IndexingPolicy: indexingPolicy,
ConflictResolutionPolicy: common.ExpandCosmosDbConflicResolutionPolicy(d.Get("conflict_resolution_policy").([]interface{})),
},
Options: &documentdb.CreateUpdateOptions{},
@@ -182,6 +187,10 @@ func resourceCosmosDbSQLContainerCreate(d *schema.ResourceData, meta interface{}
}
}
+ if analyticalStorageTTL, ok := d.GetOk("analytical_storage_ttl"); ok {
+ db.SQLContainerCreateUpdateProperties.Resource.AnalyticalStorageTTL = utils.Int64(int64(analyticalStorageTTL.(int)))
+ }
+
if defaultTTL, hasTTL := d.GetOk("default_ttl"); hasTTL {
db.SQLContainerCreateUpdateProperties.Resource.DefaultTTL = utils.Int32(int32(defaultTTL.(int)))
}
@@ -269,6 +278,10 @@ func resourceCosmosDbSQLContainerUpdate(d *schema.ResourceData, meta interface{}
}
}
+ if analyticalStorageTTL, ok := d.GetOk("analytical_storage_ttl"); ok {
+ db.SQLContainerCreateUpdateProperties.Resource.AnalyticalStorageTTL = utils.Int64(int64(analyticalStorageTTL.(int)))
+ }
+
if defaultTTL, hasTTL := d.GetOk("default_ttl"); hasTTL {
db.SQLContainerCreateUpdateProperties.Resource.DefaultTTL = utils.Int32(int32(defaultTTL.(int)))
}
@@ -348,6 +361,10 @@ func resourceCosmosDbSQLContainerRead(d *schema.ResourceData, meta interface{})
}
}
+ if analyticalStorageTTL := res.AnalyticalStorageTTL; analyticalStorageTTL != nil {
+ d.Set("analytical_storage_ttl", analyticalStorageTTL)
+ }
+
if defaultTTL := res.DefaultTTL; defaultTTL != nil {
d.Set("default_ttl", defaultTTL)
}
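
The create/update hunks above only set `AnalyticalStorageTTL` when the attribute is present in the configuration, via `d.GetOk`. A minimal standalone sketch of that pattern (the `resourceData` stub and `int64Ptr` helper are simplified stand-ins for `schema.ResourceData` and `utils.Int64`, not the provider's actual types):

```go
package main

import "fmt"

// int64Ptr mimics utils.Int64: return a pointer to an int64 value.
func int64Ptr(i int64) *int64 { return &i }

// resourceData mimics the subset of schema.ResourceData used above:
// GetOk reports whether the key was set in the configuration.
type resourceData struct{ values map[string]interface{} }

func (d resourceData) GetOk(key string) (interface{}, bool) {
	v, ok := d.values[key]
	return v, ok
}

// containerResource mirrors the two optional TTL fields on the SDK resource.
type containerResource struct {
	AnalyticalStorageTTL *int64
	DefaultTTL           *int32
}

func main() {
	d := resourceData{values: map[string]interface{}{"analytical_storage_ttl": 600}}
	res := &containerResource{}

	// Mirror of the create/update logic: only populate a field when it
	// was configured, leaving the pointer nil otherwise.
	if v, ok := d.GetOk("analytical_storage_ttl"); ok {
		res.AnalyticalStorageTTL = int64Ptr(int64(v.(int)))
	}
	if v, ok := d.GetOk("default_ttl"); ok {
		ttl := int32(v.(int))
		res.DefaultTTL = &ttl
	}

	fmt.Println(*res.AnalyticalStorageTTL, res.DefaultTTL == nil)
}
```

One caveat of this pattern: `GetOk` treats the type's zero value as "unset", which is why the schema accepts `-1` (no expiry) rather than `0` as the sentinel.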
diff --git a/azurerm/internal/services/cosmos/cosmosdb_sql_container_resource_test.go b/azurerm/internal/services/cosmos/cosmosdb_sql_container_resource_test.go
index 03ab317d26a44..78ed182d7af73 100644
--- a/azurerm/internal/services/cosmos/cosmosdb_sql_container_resource_test.go
+++ b/azurerm/internal/services/cosmos/cosmosdb_sql_container_resource_test.go
@@ -5,6 +5,7 @@ import (
"fmt"
"testing"
+ "github.com/Azure/azure-sdk-for-go/services/cosmos-db/mgmt/2021-01-15/documentdb"
"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
"github.com/hashicorp/terraform-plugin-sdk/terraform"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
@@ -65,6 +66,22 @@ func TestAccCosmosDbSqlContainer_complete(t *testing.T) {
})
}
+func TestAccCosmosDbSqlContainer_analyticalStorageTTL(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_cosmosdb_sql_container", "test")
+ r := CosmosSqlContainerResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.analyticalStorageTTL(data),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func TestAccCosmosDbSqlContainer_update(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_cosmosdb_sql_container", "test")
r := CosmosSqlContainerResource{}
@@ -289,6 +306,27 @@ resource "azurerm_cosmosdb_sql_container" "test" {
`, CosmosSqlDatabaseResource{}.basic(data), data.RandomInteger)
}
+func (CosmosSqlContainerResource) analyticalStorageTTL(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%[1]s
+
+resource "azurerm_cosmosdb_sql_database" "test" {
+ name = "acctest-%[2]d"
+ resource_group_name = azurerm_cosmosdb_account.test.resource_group_name
+ account_name = azurerm_cosmosdb_account.test.name
+}
+
+resource "azurerm_cosmosdb_sql_container" "test" {
+ name = "acctest-CSQLC-%[2]d"
+ resource_group_name = azurerm_cosmosdb_account.test.resource_group_name
+ account_name = azurerm_cosmosdb_account.test.name
+ database_name = azurerm_cosmosdb_sql_database.test.name
+ partition_key_path = "/definition/id"
+ analytical_storage_ttl = 600
+}
+`, CosmosDBAccountResource{}.analyticalStorage(data, "GlobalDocumentDB", documentdb.Eventual), data.RandomInteger, data.RandomInteger)
+}
+
func (CosmosSqlContainerResource) update(data acceptance.TestData) string {
return fmt.Sprintf(`
%[1]s
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_blob_storage_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_blob_storage_resource.go
index cd9d2e09d5df9..92f14eb39b566 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_blob_storage_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_blob_storage_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -251,31 +252,28 @@ func resourceDataFactoryLinkedServiceBlobStorageRead(d *schema.ResourceData, met
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service BlobStorage %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service BlobStorage %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
blobStorage, ok := resp.Properties.AsAzureBlobStorageLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service BlobStorage %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureBlobStorage, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service BlobStorage %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureBlobStorage, *resp.Type)
}
if blobStorage != nil {
@@ -297,7 +295,7 @@ func resourceDataFactoryLinkedServiceBlobStorageRead(d *schema.ResourceData, met
annotations := flattenDataFactoryAnnotations(blobStorage.Annotations)
if err := d.Set("annotations", annotations); err != nil {
- return fmt.Errorf("Error setting `annotations` for Data Factory Linked Service Azure Blob Storage %q (Data Factory %q) / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error setting `annotations` for Data Factory Linked Service Azure Blob Storage %q (Data Factory %q) / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
parameters := flattenDataFactoryParameters(blobStorage.Parameters)
@@ -319,18 +317,15 @@ func resourceDataFactoryLinkedServiceBlobStorageDelete(d *schema.ResourceData, m
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service BlobStorage %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service BlobStorage %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
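
The refactor across these files swaps `azure.ParseAzureResourceID` plus `id.Path["..."]` map lookups for the typed `parse.LinkedServiceID` parser. A minimal sketch of what such a parser does (field names assumed from the `id.ResourceGroup`/`id.FactoryName`/`id.Name` calls above; the real parser is code-generated and stricter about casing and segment order):

```go
package main

import (
	"fmt"
	"strings"
)

// LinkedServiceId mirrors the fields referenced in the diff; the
// generated parser carries the same data as typed struct fields
// instead of a generic Path map.
type LinkedServiceId struct {
	SubscriptionId string
	ResourceGroup  string
	FactoryName    string
	Name           string
}

// LinkedServiceID parses an ID of the form:
// /subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.DataFactory/factories/{factory}/linkedservices/{name}
func LinkedServiceID(input string) (*LinkedServiceId, error) {
	parts := strings.Split(strings.Trim(input, "/"), "/")
	// Fold alternating key/value segments into a lookup map.
	kv := map[string]string{}
	for i := 0; i+1 < len(parts); i += 2 {
		kv[strings.ToLower(parts[i])] = parts[i+1]
	}
	id := &LinkedServiceId{
		SubscriptionId: kv["subscriptions"],
		ResourceGroup:  kv["resourcegroups"],
		FactoryName:    kv["factories"],
		Name:           kv["linkedservices"],
	}
	if id.ResourceGroup == "" || id.FactoryName == "" || id.Name == "" {
		return nil, fmt.Errorf("ID %q is missing a required segment", input)
	}
	return id, nil
}

func main() {
	id, err := LinkedServiceID("/subscriptions/0000/resourceGroups/rg1/providers/Microsoft.DataFactory/factories/df1/linkedservices/ls1")
	if err != nil {
		panic(err)
	}
	fmt.Println(id.ResourceGroup, id.FactoryName, id.Name)
}
```

The payoff is visible in each Read/Delete function: a malformed import ID now fails at parse time with a clear error, instead of producing empty strings from a missing `Path` key.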
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_databricks_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_databricks_resource.go
index 7acc29dc36dbb..9562954e231d1 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_databricks_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_databricks_resource.go
@@ -13,6 +13,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
databricksValidator "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/databricks/validate"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -432,31 +433,28 @@ func resourceDataFactoryLinkedServiceDatabricksRead(d *schema.ResourceData, meta
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service Databricks %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service Databricks %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
databricks, ok := resp.Properties.AsAzureDatabricksLinkedService()
if !ok {
- return fmt.Errorf("classifiying Data Factory Linked Service Databricks %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureDatabricks, *resp.Type)
+ return fmt.Errorf("classifying Data Factory Linked Service Databricks %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureDatabricks, *resp.Type)
}
// Check the properties and verify if authentication is set to MSI
@@ -581,18 +579,15 @@ func resourceDataFactoryLinkedServiceDatabricksDelete(d *schema.ResourceData, me
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service Databricks %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service Databricks %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
return nil
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_file_storage_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_file_storage_resource.go
index 6fa413141e39a..8b61a5c026b6b 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_file_storage_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_file_storage_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -239,31 +240,28 @@ func resourceDataFactoryLinkedServiceAzureFileStorageRead(d *schema.ResourceData
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service Azure File Storage %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service Azure File Storage %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
fileStorage, ok := resp.Properties.AsAzureFileStorageLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service Azure File Storage %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureFileStorage, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service Azure File Storage %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureFileStorage, *resp.Type)
}
d.Set("additional_properties", fileStorage.AdditionalProperties)
@@ -279,7 +277,7 @@ func resourceDataFactoryLinkedServiceAzureFileStorageRead(d *schema.ResourceData
annotations := flattenDataFactoryAnnotations(fileStorage.Annotations)
if err := d.Set("annotations", annotations); err != nil {
- return fmt.Errorf("Error setting `annotations` for Data Factory Linked Service Azure File Storage %q (Data Factory %q) / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error setting `annotations` for Data Factory Linked Service Azure File Storage %q (Data Factory %q) / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
parameters := flattenDataFactoryParameters(fileStorage.Parameters)
@@ -307,18 +305,15 @@ func resourceDataFactoryLinkedServiceAzureFileStorageDelete(d *schema.ResourceDa
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service Azure File Storage %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service Azure File Storage %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_function_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_function_resource.go
index edcb4004344a6..68f47564889eb 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_function_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_function_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -182,31 +183,28 @@ func resourceDataFactoryLinkedServiceAzureFunctionRead(d *schema.ResourceData, m
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service Azure Function %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service Azure Function %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
azureFunction, ok := resp.Properties.AsAzureFunctionLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service Azure Function %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureFunction, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service Azure Function %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureFunction, *resp.Type)
}
d.Set("url", azureFunction.AzureFunctionLinkedServiceTypeProperties.FunctionAppURL)
@@ -216,7 +214,7 @@ func resourceDataFactoryLinkedServiceAzureFunctionRead(d *schema.ResourceData, m
annotations := flattenDataFactoryAnnotations(azureFunction.Annotations)
if err := d.Set("annotations", annotations); err != nil {
- return fmt.Errorf("Error setting `annotations` for Data Factory Linked Service Azure Function %q (Data Factory %q) / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error setting `annotations` for Data Factory Linked Service Azure Function %q (Data Factory %q) / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
parameters := flattenDataFactoryParameters(azureFunction.Parameters)
@@ -238,18 +236,15 @@ func resourceDataFactoryLinkedServiceAzureFunctionDelete(d *schema.ResourceData,
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service Azure Function %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service Azure Function %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_table_storage_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_table_storage_resource.go
index c3e2846a05c58..d6f77a74ce819 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_azure_table_storage_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_azure_table_storage_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -175,31 +176,28 @@ func resourceDataFactoryLinkedServiceTableStorageRead(d *schema.ResourceData, me
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service TableStorage %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service TableStorage %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
tableStorage, ok := resp.Properties.AsAzureTableStorageLinkedService()
if !ok {
- return fmt.Errorf("Error classifying Data Factory Linked Service TableStorage %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureTableStorage, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service TableStorage %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureTableStorage, *resp.Type)
}
d.Set("additional_properties", tableStorage.AdditionalProperties)
@@ -207,7 +205,7 @@ func resourceDataFactoryLinkedServiceTableStorageRead(d *schema.ResourceData, me
annotations := flattenDataFactoryAnnotations(tableStorage.Annotations)
if err := d.Set("annotations", annotations); err != nil {
- return fmt.Errorf("Error setting `annotations` for Data Factory Linked Service Azure Table Storage %q (Data Factory %q) / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error setting `annotations` for Data Factory Linked Service Azure Table Storage %q (Data Factory %q) / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
parameters := flattenDataFactoryParameters(tableStorage.Parameters)
@@ -229,18 +227,15 @@ func resourceDataFactoryLinkedServiceTableStorageDelete(d *schema.ResourceData,
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service TableStorage %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service TableStorage %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_cosmosdb_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_cosmosdb_resource.go
index 28d54a3d74938..fbf8c55ea99b6 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_cosmosdb_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_cosmosdb_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -221,31 +222,28 @@ func resourceDataFactoryLinkedServiceCosmosDbRead(d *schema.ResourceData, meta i
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service CosmosDB %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service CosmosDB %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
cosmosdb, ok := resp.Properties.AsCosmosDbLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service CosmosDb %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeCosmosDb, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service CosmosDb %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeCosmosDb, *resp.Type)
}
d.Set("additional_properties", cosmosdb.AdditionalProperties)
@@ -283,18 +281,15 @@ func resourceDataFactoryLinkedServiceCosmosDbDelete(d *schema.ResourceData, meta
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service CosmosDb %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service CosmosDb %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_data_lake_storage_gen2_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_data_lake_storage_gen2_resource.go
index d42280d25c016..8890ec4360055 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_data_lake_storage_gen2_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_data_lake_storage_gen2_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -222,32 +223,29 @@ func resourceDataFactoryLinkedServiceDataLakeStorageGen2Read(d *schema.ResourceD
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service Data Lake Storage Gen2 %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service Data Lake Storage Gen2 %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
dataLakeStorageGen2, ok := resp.Properties.AsAzureBlobFSLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service Data Lake Storage Gen2 %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureBlobFS, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service Data Lake Storage Gen2 %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureBlobFS, *resp.Type)
}
if dataLakeStorageGen2.Tenant != nil {
@@ -295,18 +293,15 @@ func resourceDataFactoryLinkedServiceDataLakeStorageGen2Delete(d *schema.Resourc
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service Data Lake Storage Gen2 %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service Data Lake Storage Gen2 %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_key_vault_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_key_vault_resource.go
index f85ffb66e3cda..887a60d279d90 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_key_vault_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_key_vault_resource.go
@@ -11,6 +11,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
keyVaultParse "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/keyvault/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
@@ -187,31 +188,28 @@ func resourceDataFactoryLinkedServiceKeyVaultRead(d *schema.ResourceData, meta i
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service Key Vault %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service Key Vault %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
keyVault, ok := resp.Properties.AsAzureKeyVaultLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service Key Vault %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureKeyVault, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service Key Vault %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeAzureKeyVault, *resp.Type)
}
d.Set("additional_properties", keyVault.AdditionalProperties)
@@ -262,18 +260,15 @@ func resourceDataFactoryLinkedServiceKeyVaultDelete(d *schema.ResourceData, meta
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service Key Vault %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service Key Vault %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_mysql_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_mysql_resource.go
index 3160d8b7f86c0..9f804d9b61a1d 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_mysql_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_mysql_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -182,31 +183,28 @@ func resourceDataFactoryLinkedServiceMySQLRead(d *schema.ResourceData, meta inte
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service MySQL %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service MySQL %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
mysql, ok := resp.Properties.AsMySQLLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service MySQL %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeMySQL, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service MySQL %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeMySQL, *resp.Type)
}
d.Set("additional_properties", mysql.AdditionalProperties)
@@ -236,18 +234,15 @@ func resourceDataFactoryLinkedServiceMySQLDelete(d *schema.ResourceData, meta in
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service MySQL %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service MySQL %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_postgresql_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_postgresql_resource.go
index 7459e1a2dbbe1..8ae82bc5d8b9d 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_postgresql_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_postgresql_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -182,31 +183,28 @@ func resourceDataFactoryLinkedServicePostgreSQLRead(d *schema.ResourceData, meta
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service PostgreSQL %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service PostgreSQL %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
postgresql, ok := resp.Properties.AsPostgreSQLLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service PostgreSQL %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypePostgreSQL, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service PostgreSQL %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypePostgreSQL, *resp.Type)
}
d.Set("additional_properties", postgresql.AdditionalProperties)
@@ -236,18 +234,15 @@ func resourceDataFactoryLinkedServicePostgreSQLDelete(d *schema.ResourceData, me
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service PostgreSQL %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service PostgreSQL %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_sftp_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_sftp_resource.go
index 943039a840408..b3096a06024eb 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_sftp_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_sftp_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -215,31 +216,28 @@ func resourceDataFactoryLinkedServiceSFTPRead(d *schema.ResourceData, meta inter
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service SFTP %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service SFTP %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
sftp, ok := resp.Properties.AsSftpServerLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service SFTP %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeSftp, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service SFTP %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeSftp, *resp.Type)
}
d.Set("authentication_type", sftp.AuthenticationType)
@@ -274,18 +272,15 @@ func resourceDataFactoryLinkedServiceSFTPDelete(d *schema.ResourceData, meta int
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service SFTP %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service SFTP %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/datafactory/data_factory_linked_service_web_resource.go b/azurerm/internal/services/datafactory/data_factory_linked_service_web_resource.go
index 35bec7d8b5406..159d7b761d78d 100644
--- a/azurerm/internal/services/datafactory/data_factory_linked_service_web_resource.go
+++ b/azurerm/internal/services/datafactory/data_factory_linked_service_web_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/datafactory/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
@@ -216,31 +217,28 @@ func resourceDataFactoryLinkedServiceWebRead(d *schema.ResourceData, meta interf
ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- resp, err := client.Get(ctx, resourceGroup, dataFactoryName, name, "")
+ resp, err := client.Get(ctx, id.ResourceGroup, id.FactoryName, id.Name, "")
if err != nil {
if utils.ResponseWasNotFound(resp.Response) {
d.SetId("")
return nil
}
- return fmt.Errorf("Error retrieving Data Factory Linked Service Web %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error retrieving Data Factory Linked Service Web %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
d.Set("name", resp.Name)
- d.Set("resource_group_name", resourceGroup)
- d.Set("data_factory_name", dataFactoryName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("data_factory_name", id.FactoryName)
web, ok := resp.Properties.AsWebLinkedService()
if !ok {
- return fmt.Errorf("Error classifiying Data Factory Linked Service Web %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", name, dataFactoryName, resourceGroup, datafactory.TypeBasicLinkedServiceTypeWeb, *resp.Type)
+ return fmt.Errorf("Error classifying Data Factory Linked Service Web %q (Data Factory %q / Resource Group %q): Expected: %q Received: %q", id.Name, id.FactoryName, id.ResourceGroup, datafactory.TypeBasicLinkedServiceTypeWeb, *resp.Type)
}
isWebPropertiesLoaded := false
@@ -289,18 +287,15 @@ func resourceDataFactoryLinkedServiceWebDelete(d *schema.ResourceData, meta inte
ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
defer cancel()
- id, err := azure.ParseAzureResourceID(d.Id())
+ id, err := parse.LinkedServiceID(d.Id())
if err != nil {
return err
}
- resourceGroup := id.ResourceGroup
- dataFactoryName := id.Path["factories"]
- name := id.Path["linkedservices"]
- response, err := client.Delete(ctx, resourceGroup, dataFactoryName, name)
+ response, err := client.Delete(ctx, id.ResourceGroup, id.FactoryName, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(response) {
- return fmt.Errorf("Error deleting Data Factory Linked Service Web %q (Data Factory %q / Resource Group %q): %+v", name, dataFactoryName, resourceGroup, err)
+ return fmt.Errorf("Error deleting Data Factory Linked Service Web %q (Data Factory %q / Resource Group %q): %+v", id.Name, id.FactoryName, id.ResourceGroup, err)
}
}
diff --git a/azurerm/internal/services/eventgrid/client/client.go b/azurerm/internal/services/eventgrid/client/client.go
index de3f324ffbd43..e430b4a6359f1 100644
--- a/azurerm/internal/services/eventgrid/client/client.go
+++ b/azurerm/internal/services/eventgrid/client/client.go
@@ -1,7 +1,7 @@
package client
import (
- "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+ "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/common"
)
diff --git a/azurerm/internal/services/eventgrid/event_subscription.go b/azurerm/internal/services/eventgrid/event_subscription.go
index 43f28e8847e72..8c1967ab56240 100644
--- a/azurerm/internal/services/eventgrid/event_subscription.go
+++ b/azurerm/internal/services/eventgrid/event_subscription.go
@@ -5,7 +5,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+ "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
"github.com/Azure/go-autorest/autorest/date"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/eventgrid/eventgrid.go b/azurerm/internal/services/eventgrid/eventgrid.go
index bbe28fee9ff08..431dfbd37a064 100644
--- a/azurerm/internal/services/eventgrid/eventgrid.go
+++ b/azurerm/internal/services/eventgrid/eventgrid.go
@@ -1,7 +1,7 @@
package eventgrid
import (
- "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+ "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
diff --git a/azurerm/internal/services/eventgrid/eventgrid_domain_resource.go b/azurerm/internal/services/eventgrid/eventgrid_domain_resource.go
index d0c38f35c4181..2b6f939f2b876 100644
--- a/azurerm/internal/services/eventgrid/eventgrid_domain_resource.go
+++ b/azurerm/internal/services/eventgrid/eventgrid_domain_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+ "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/eventgrid/eventgrid_event_subscription_resource.go b/azurerm/internal/services/eventgrid/eventgrid_event_subscription_resource.go
index 99d2f0d3d78e9..903d7a46a298d 100644
--- a/azurerm/internal/services/eventgrid/eventgrid_event_subscription_resource.go
+++ b/azurerm/internal/services/eventgrid/eventgrid_event_subscription_resource.go
@@ -5,7 +5,7 @@ import (
"log"
"time"
- "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+ "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/eventgrid/eventgrid_system_topic_event_subscription_resource.go b/azurerm/internal/services/eventgrid/eventgrid_system_topic_event_subscription_resource.go
index beec11663eefc..504343c57b848 100644
--- a/azurerm/internal/services/eventgrid/eventgrid_system_topic_event_subscription_resource.go
+++ b/azurerm/internal/services/eventgrid/eventgrid_system_topic_event_subscription_resource.go
@@ -5,7 +5,7 @@ import (
"log"
"time"
- "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+ "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/eventgrid/eventgrid_system_topic_resource.go b/azurerm/internal/services/eventgrid/eventgrid_system_topic_resource.go
index 16408a62a671e..cc8aaeea7b8db 100644
--- a/azurerm/internal/services/eventgrid/eventgrid_system_topic_resource.go
+++ b/azurerm/internal/services/eventgrid/eventgrid_system_topic_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+ "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/eventgrid/eventgrid_system_topic_resource_test.go b/azurerm/internal/services/eventgrid/eventgrid_system_topic_resource_test.go
index d3f3929484a10..d3795d1d32220 100644
--- a/azurerm/internal/services/eventgrid/eventgrid_system_topic_resource_test.go
+++ b/azurerm/internal/services/eventgrid/eventgrid_system_topic_resource_test.go
@@ -35,6 +35,24 @@ func TestAccEventGridSystemTopic_basic(t *testing.T) {
})
}
+func TestAccEventGridSystemTopic_policyStates(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_eventgrid_system_topic", "test")
+ r := EventGridSystemTopicResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.policyStates(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("source_arm_resource_id").Exists(),
+ check.That(data.ResourceName).Key("topic_type").Exists(),
+ check.That(data.ResourceName).Key("metric_arm_resource_id").Exists(),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func TestAccEventGridSystemTopic_requiresImport(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_eventgrid_system_topic", "test")
r := EventGridSystemTopicResource{}
@@ -162,3 +180,30 @@ resource "azurerm_eventgrid_system_topic" "test" {
}
`, data.RandomInteger, data.Locations.Primary, data.RandomIntOfLength(12), data.RandomIntOfLength(10))
}
+
+func (EventGridSystemTopicResource) policyStates(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-eg-%d"
+ location = "%s"
+}
+
+resource "azurerm_eventgrid_system_topic" "test" {
+ name = "acctestEGST%d"
+ location = "Global"
+ resource_group_name = azurerm_resource_group.test.name
+ source_arm_resource_id = format("/subscriptions/%%s", data.azurerm_subscription.current.subscription_id)
+ topic_type = "Microsoft.PolicyInsights.PolicyStates"
+
+ tags = {
+ "Foo" = "Bar"
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomIntOfLength(10))
+}
diff --git a/azurerm/internal/services/eventgrid/eventgrid_topic_resource.go b/azurerm/internal/services/eventgrid/eventgrid_topic_resource.go
index a81420bb97453..13fcde583c80f 100644
--- a/azurerm/internal/services/eventgrid/eventgrid_topic_resource.go
+++ b/azurerm/internal/services/eventgrid/eventgrid_topic_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+ "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/eventhub/eventhub_cluster_data_source.go b/azurerm/internal/services/eventhub/eventhub_cluster_data_source.go
new file mode 100644
index 0000000000000..14fe0de5bedbd
--- /dev/null
+++ b/azurerm/internal/services/eventhub/eventhub_cluster_data_source.go
@@ -0,0 +1,68 @@
+package eventhub
+
+import (
+ "fmt"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/eventhub/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+func dataSourceEventHubCluster() *schema.Resource {
+ return &schema.Resource{
+ Read: dataSourceEventHubClusterRead,
+
+ Timeouts: &schema.ResourceTimeout{
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ },
+
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ },
+
+ "resource_group_name": azure.SchemaResourceGroupName(),
+
+ "location": azure.SchemaLocationForDataSource(),
+
+ "sku_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ },
+ }
+}
+
+func dataSourceEventHubClusterRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).Eventhub.ClusterClient
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ name := d.Get("name").(string)
+ resourceGroup := d.Get("resource_group_name").(string)
+
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
+ id := parse.NewClusterID(subscriptionId, resourceGroup, name)
+ resp, err := client.Get(ctx, resourceGroup, name)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ return fmt.Errorf("%s was not found", id)
+ }
+ return fmt.Errorf("making Read request on Azure EventHub Cluster %q (Resource Group %q): %+v", name, resourceGroup, err)
+ }
+ d.SetId(id.ID())
+
+ d.Set("name", resp.Name)
+ d.Set("resource_group_name", resourceGroup)
+ d.Set("sku_name", flattenEventHubClusterSkuName(resp.Sku))
+ if location := resp.Location; location != nil {
+ d.Set("location", azure.NormalizeLocation(*location))
+ }
+
+ return nil
+}
diff --git a/azurerm/internal/services/eventhub/eventhub_cluster_data_source_test.go b/azurerm/internal/services/eventhub/eventhub_cluster_data_source_test.go
new file mode 100644
index 0000000000000..eacd5f84ec6d7
--- /dev/null
+++ b/azurerm/internal/services/eventhub/eventhub_cluster_data_source_test.go
@@ -0,0 +1,52 @@
+package eventhub_test
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+)
+
+type EventHubClusterDataSource struct {
+}
+
+func TestAccEventHubClusterDataSource_basic(t *testing.T) {
+ data := acceptance.BuildTestData(t, "data.azurerm_eventhub_cluster", "test")
+ r := EventHubClusterDataSource{}
+
+ data.DataSourceTest(t, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).Key("sku_name").HasValue("Dedicated_1"),
+ ),
+ },
+ })
+}
+
+func (EventHubClusterDataSource) basic(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-eventhub-%[1]d"
+ location = "%[2]s"
+}
+
+resource "azurerm_eventhub_cluster" "test" {
+ name = "acctesteventhubclusTER-%d"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ sku_name = "Dedicated_1"
+}
+
+data "azurerm_eventhub_cluster" "test" {
+ name = azurerm_eventhub_cluster.test.name
+ resource_group_name = azurerm_resource_group.test.name
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger)
+}
diff --git a/azurerm/internal/services/eventhub/registration.go b/azurerm/internal/services/eventhub/registration.go
index 2da44f69c8857..b66af015efd98 100644
--- a/azurerm/internal/services/eventhub/registration.go
+++ b/azurerm/internal/services/eventhub/registration.go
@@ -23,6 +23,7 @@ func (r Registration) WebsiteCategories() []string {
func (r Registration) SupportedDataSources() map[string]*schema.Resource {
return map[string]*schema.Resource{
"azurerm_eventhub": dataSourceEventHub(),
+ "azurerm_eventhub_cluster": dataSourceEventHubCluster(),
"azurerm_eventhub_authorization_rule": EventHubAuthorizationRuleDataSource(),
"azurerm_eventhub_consumer_group": EventHubConsumerGroupDataSource(),
"azurerm_eventhub_namespace": EventHubNamespaceDataSource(),
diff --git a/azurerm/internal/services/frontdoor/customizediff.go b/azurerm/internal/services/frontdoor/customizediff.go
index 0d7c1be0adb5f..4535ac1eff72f 100644
--- a/azurerm/internal/services/frontdoor/customizediff.go
+++ b/azurerm/internal/services/frontdoor/customizediff.go
@@ -26,53 +26,78 @@ func customizeHttpsConfigurationCustomizeDiff(ctx context.Context, d *schema.Res
}
func customHttpsSettings(d *schema.ResourceDiff) error {
- frontendId := d.Get("frontend_endpoint_id").(string)
frontendEndpointCustomHttpsConfig := d.Get("custom_https_configuration").([]interface{})
customHttpsEnabled := d.Get("custom_https_provisioning_enabled").(bool)
if len(frontendEndpointCustomHttpsConfig) > 0 {
if !customHttpsEnabled {
- return fmt.Errorf(`"frontend_endpoint":%q "custom_https_configuration" is invalid because "custom_https_provisioning_enabled" is set to "false". please remove the "custom_https_configuration" block from the configuration file`, frontendId)
+ return fmt.Errorf(`"custom_https_provisioning_enabled" is set to "false". please remove the "custom_https_configuration" block from the configuration file`)
}
// Verify frontend endpoints custom https configuration is valid if defined
- if err := verifyCustomHttpsConfiguration(frontendEndpointCustomHttpsConfig, frontendId); err != nil {
+ if err := verifyCustomHttpsConfiguration(frontendEndpointCustomHttpsConfig); err != nil {
return err
}
} else if customHttpsEnabled {
- return fmt.Errorf(`"frontend_endpoint":%q "custom_https_configuration" is invalid because "custom_https_provisioning_enabled" is set to "true". please add a "custom_https_configuration" block to the configuration file`, frontendId)
+ return fmt.Errorf(`"custom_https_provisioning_enabled" is set to "true". please add a "custom_https_configuration" block to the configuration file`)
}
return nil
}
-func verifyCustomHttpsConfiguration(frontendEndpointCustomHttpsConfig []interface{}, frontendId string) error {
+func verifyCustomHttpsConfiguration(frontendEndpointCustomHttpsConfig []interface{}) error {
if len(frontendEndpointCustomHttpsConfig) > 0 {
customHttpsConfiguration := frontendEndpointCustomHttpsConfig[0].(map[string]interface{})
- certificateSource := customHttpsConfiguration["certificate_source"]
- if certificateSource == string(frontdoor.CertificateSourceAzureKeyVault) {
- if !azureKeyVaultCertificateHasValues(customHttpsConfiguration, true) {
- return fmt.Errorf(`"frontend_endpoint":%q "custom_https_configuration" is invalid, all of the following keys must have values in the "custom_https_configuration" block: "azure_key_vault_certificate_secret_name" and "azure_key_vault_certificate_vault_id"`, frontendId)
+ certificateSource := customHttpsConfiguration["certificate_source"].(string)
+ certificateVersion := customHttpsConfiguration["azure_key_vault_certificate_secret_version"].(string)
+
+ if certificateSource == string(frontdoor.CertificateSourceFrontDoor) {
+ if azureKeyVaultCertificateHasValues(customHttpsConfiguration, true) {
+ return fmt.Errorf(`a Front Door managed "custom_https_configuration" block does not support the following keys. Please remove them from your configuration file: "azure_key_vault_certificate_secret_name", "azure_key_vault_certificate_secret_version", and "azure_key_vault_certificate_vault_id"`)
+ }
+ } else {
+ // the "latest" keyword is no longer a valid secret version for key vault certificates
+ if strings.EqualFold(certificateVersion, "latest") {
+ return fmt.Errorf(`"azure_key_vault_certificate_secret_version" cannot be set to "latest". Please remove this attribute from the configuration file. Removing the value has the same functionality as setting it to "latest"`)
+ }
+
+ if !azureKeyVaultCertificateHasValues(customHttpsConfiguration, false) {
+ if certificateVersion == "" {
+ // an empty version string is equivalent to using the "latest" keyword
+ return fmt.Errorf(`an "AzureKeyVault" managed "custom_https_configuration" block must have values in the following fields: "azure_key_vault_certificate_secret_name" and "azure_key_vault_certificate_vault_id"`)
+ } else {
+ // a specific version of the secret is in use
+ return fmt.Errorf(`an "AzureKeyVault" managed "custom_https_configuration" block must have values in the following fields: "azure_key_vault_certificate_secret_name", "azure_key_vault_certificate_secret_version", and "azure_key_vault_certificate_vault_id"`)
+ }
+ }
}
- } else if azureKeyVaultCertificateHasValues(customHttpsConfiguration, false) {
- return fmt.Errorf(`"frontend_endpoint":%q "custom_https_configuration" is invalid, all of the following keys must be removed from the "custom_https_configuration" block: "azure_key_vault_certificate_secret_name", "azure_key_vault_certificate_secret_version", and "azure_key_vault_certificate_vault_id"`, frontendId)
}
}
return nil
}
-func azureKeyVaultCertificateHasValues(customHttpsConfiguration map[string]interface{}, matchAllKeys bool) bool {
- certificateSecretName := customHttpsConfiguration["azure_key_vault_certificate_secret_name"]
- certificateSecretVersion := customHttpsConfiguration["azure_key_vault_certificate_secret_version"]
- certificateVaultId := customHttpsConfiguration["azure_key_vault_certificate_vault_id"]
+func azureKeyVaultCertificateHasValues(customHttpsConfiguration map[string]interface{}, isFrontDoorManaged bool) bool {
+ certificateSecretName := customHttpsConfiguration["azure_key_vault_certificate_secret_name"].(string)
+ certificateSecretVersion := customHttpsConfiguration["azure_key_vault_certificate_secret_version"].(string)
+ certificateVaultId := customHttpsConfiguration["azure_key_vault_certificate_vault_id"].(string)
- if matchAllKeys {
- if strings.TrimSpace(certificateSecretName.(string)) != "" && strings.TrimSpace(certificateVaultId.(string)) != "" {
+ if isFrontDoorManaged {
+ // if any of these keys have values it is invalid
+ if strings.TrimSpace(certificateSecretName) != "" || strings.TrimSpace(certificateSecretVersion) != "" || strings.TrimSpace(certificateVaultId) != "" {
return true
}
- } else if strings.TrimSpace(certificateSecretName.(string)) != "" || strings.TrimSpace(certificateSecretVersion.(string)) != "" || strings.TrimSpace(certificateVaultId.(string)) != "" {
- return true
+ } else {
+ if certificateSecretVersion == "" {
+ // using "latest": ignore the certificate secret version
+ if strings.TrimSpace(certificateSecretName) != "" && strings.TrimSpace(certificateVaultId) != "" {
+ return true
+ }
+ } else {
+ // not using "latest": make sure all keys have values
+ if strings.TrimSpace(certificateSecretName) != "" && strings.TrimSpace(certificateSecretVersion) != "" && strings.TrimSpace(certificateVaultId) != "" {
+ return true
+ }
+ }
}
return false
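The branching in `azureKeyVaultCertificateHasValues` reduces to a presence check over three optional strings: name and vault ID are always required, and the version, when supplied, must not be blank. A standalone sketch of that decision table (a hypothetical helper restating the logic, not the provider's function):

```go
package main

import (
	"fmt"
	"strings"
)

// keyVaultCertFieldsValid reports whether a Key-Vault-sourced configuration
// supplies the required fields. An empty version means "latest" and needs no
// further validation. (Illustrative restatement; not the provider's code.)
func keyVaultCertFieldsValid(secretName, secretVersion, vaultId string) bool {
	if strings.TrimSpace(secretName) == "" || strings.TrimSpace(vaultId) == "" {
		return false
	}
	// a version was supplied but is blank-only: treat it as invalid
	if secretVersion != "" && strings.TrimSpace(secretVersion) == "" {
		return false
	}
	return true
}

func main() {
	fmt.Println(keyVaultCertFieldsValid("cert", "", "/vault/id")) // true: empty version = latest
	fmt.Println(keyVaultCertFieldsValid("", "", "/vault/id"))     // false: missing secret name
}
```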
diff --git a/azurerm/internal/services/frontdoor/frontdoor_custom_https_configuration_resource.go b/azurerm/internal/services/frontdoor/frontdoor_custom_https_configuration_resource.go
index 0a51d39c588b3..3f6c8fc1b80d6 100644
--- a/azurerm/internal/services/frontdoor/frontdoor_custom_https_configuration_resource.go
+++ b/azurerm/internal/services/frontdoor/frontdoor_custom_https_configuration_resource.go
@@ -26,10 +26,33 @@ func resourceFrontDoorCustomHttpsConfiguration() *schema.Resource {
Update: resourceFrontDoorCustomHttpsConfigurationCreateUpdate,
Delete: resourceFrontDoorCustomHttpsConfigurationDelete,
- Importer: pluginsdk.ImporterValidatingResourceId(func(id string) error {
- _, err := parse.FrontendEndpointID(id)
- return err
- }),
+ Importer: &schema.ResourceImporter{
+ State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
+ client := meta.(*clients.Client).Frontdoor.FrontDoorsFrontendClient
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ // validate that the passed ID is a valid custom HTTPS configuration ID
+ custom, err := parse.CustomHttpsConfigurationID(d.Id())
+ if err != nil {
+ return []*schema.ResourceData{d}, fmt.Errorf("parsing Custom HTTPS Configuration ID %q for import: %v", d.Id(), err)
+ }
+
+ // convert the passed custom HTTPS configuration ID to a frontend endpoint ID
+ frontend := parse.NewFrontendEndpointID(custom.SubscriptionId, custom.ResourceGroup, custom.FrontDoorName, custom.CustomHttpsConfigurationName)
+
+ // validate that the frontend endpoint ID exists in the Frontdoor resource
+ if _, err = client.Get(ctx, custom.ResourceGroup, custom.FrontDoorName, custom.CustomHttpsConfigurationName); err != nil {
+ return []*schema.ResourceData{d}, fmt.Errorf("retrieving the Custom HTTPS Configuration (ID: %q) for the frontend endpoint (ID: %q): %s", custom.ID(), frontend.ID(), err)
+ }
+
+ // set the new values for the custom HTTPS configuration resource
+ d.Set("id", custom.ID())
+ d.Set("frontend_endpoint_id", frontend.ID())
+
+ return []*schema.ResourceData{d}, nil
+ },
+ },
Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(6 * time.Hour),
@@ -82,8 +105,6 @@ func resourceFrontDoorCustomHttpsConfigurationCreateUpdate(d *schema.ResourceDat
customHttpsConfigurationId := parse.NewCustomHttpsConfigurationID(frontendEndpointId.SubscriptionId, frontendEndpointId.ResourceGroup, frontendEndpointId.FrontDoorName, frontendEndpointId.Name)
- // TODO: Requires Import support
-
resp, err := client.Get(ctx, frontendEndpointId.ResourceGroup, frontendEndpointId.FrontDoorName, frontendEndpointId.Name)
if err != nil {
return fmt.Errorf("reading Endpoint %q (Front Door %q / Resource Group %q): %+v", frontendEndpointId.Name, frontendEndpointId.FrontDoorName, frontendEndpointId.ResourceGroup, err)
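The new importer above accepts a custom HTTPS configuration ID and derives the matching frontend endpoint ID from its components. The conversion amounts to rewriting one path segment; a rough sketch under an assumed path layout (the segment names and helper are illustrative, the provider actually uses its `parse` package):

```go
package main

import (
	"fmt"
	"strings"
)

// frontendEndpointIDFromCustomHttpsID rewrites a customHttpsConfiguration ID
// into the corresponding frontendEndpoints ID by swapping the final segment
// pair. The path layout here is an assumption for illustration only.
func frontendEndpointIDFromCustomHttpsID(id string) (string, error) {
	const marker = "/customHttpsConfiguration/"
	idx := strings.Index(id, marker)
	if idx < 0 {
		return "", fmt.Errorf("%q is not a custom HTTPS configuration ID", id)
	}
	name := id[idx+len(marker):]
	return id[:idx] + "/frontendEndpoints/" + name, nil
}

func main() {
	in := "/subscriptions/s/resourceGroups/rg/providers/Microsoft.Network/frontDoors/fd/customHttpsConfiguration/endpoint1"
	out, err := frontendEndpointIDFromCustomHttpsID(in)
	fmt.Println(out, err)
}
```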
diff --git a/azurerm/internal/services/frontdoor/frontdoor_custom_https_configuration_resource_test.go b/azurerm/internal/services/frontdoor/frontdoor_custom_https_configuration_resource_test.go
index 3b6eb946e2f2c..2c4b9f23d12e9 100644
--- a/azurerm/internal/services/frontdoor/frontdoor_custom_https_configuration_resource_test.go
+++ b/azurerm/internal/services/frontdoor/frontdoor_custom_https_configuration_resource_test.go
@@ -3,6 +3,7 @@ package frontdoor_test
import (
"context"
"fmt"
+ "regexp"
"testing"
"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
@@ -22,7 +23,7 @@ func TestAccFrontDoorCustomHttpsConfiguration_CustomHttps(t *testing.T) {
r := FrontDoorCustomHttpsConfigurationResource{}
data.ResourceTest(t, r, []resource.TestStep{
{
- Config: r.CustomHttpsEnabled(data),
+ Config: r.Enabled(data),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("custom_https_provisioning_enabled").HasValue("true"),
@@ -30,7 +31,7 @@ func TestAccFrontDoorCustomHttpsConfiguration_CustomHttps(t *testing.T) {
),
},
{
- Config: r.CustomHttpsDisabled(data),
+ Config: r.Disabled(data),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("custom_https_provisioning_enabled").HasValue("false"),
@@ -39,6 +40,72 @@ func TestAccFrontDoorCustomHttpsConfiguration_CustomHttps(t *testing.T) {
})
}
+func TestAccFrontDoorCustomHttpsConfiguration_DisabledWithConfigurationBlock(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_frontdoor_custom_https_configuration", "test")
+ r := FrontDoorCustomHttpsConfigurationResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.DisabledWithConfigurationBlock(data),
+ ExpectError: regexp.MustCompile(`"custom_https_provisioning_enabled" is set to "false". please remove the "custom_https_configuration" block from the configuration file`),
+ },
+ })
+}
+
+func TestAccFrontDoorCustomHttpsConfiguration_EnabledWithoutConfigurationBlock(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_frontdoor_custom_https_configuration", "test")
+ r := FrontDoorCustomHttpsConfigurationResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.EnabledWithoutConfigurationBlock(data),
+ ExpectError: regexp.MustCompile(`"custom_https_provisioning_enabled" is set to "true". please add a "custom_https_configuration" block to the configuration file`),
+ },
+ })
+}
+
+func TestAccFrontDoorCustomHttpsConfiguration_EnabledFrontdoorExtraAttributes(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_frontdoor_custom_https_configuration", "test")
+ r := FrontDoorCustomHttpsConfigurationResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.EnabledFrontdoorExtraAttributes(data),
+ ExpectError: regexp.MustCompile(`a Front Door managed "custom_https_configuration" block does not support the following keys.`),
+ },
+ })
+}
+
+func TestAccFrontDoorCustomHttpsConfiguration_EnabledKeyVaultLatest(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_frontdoor_custom_https_configuration", "test")
+ r := FrontDoorCustomHttpsConfigurationResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.EnabledKeyVaultLatest(data),
+ ExpectError: regexp.MustCompile(`"azure_key_vault_certificate_secret_version" cannot be set to "latest". Please remove this attribute from the configuration file.`),
+ },
+ })
+}
+
+func TestAccFrontDoorCustomHttpsConfiguration_EnabledKeyVaultLatestMissingAttributes(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_frontdoor_custom_https_configuration", "test")
+ r := FrontDoorCustomHttpsConfigurationResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.EnabledKeyVaultLatestMissingAttributes(data),
+ ExpectError: regexp.MustCompile(`an "AzureKeyVault" managed "custom_https_configuration" block must have values in the following fields: "azure_key_vault_certificate_secret_name" and "azure_key_vault_certificate_vault_id"`),
+ },
+ })
+}
+
+func TestAccFrontDoorCustomHttpsConfiguration_EnabledKeyVaultMissingAttributes(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_frontdoor_custom_https_configuration", "test")
+ r := FrontDoorCustomHttpsConfigurationResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.EnabledKeyVaultMissingAttributes(data),
+ ExpectError: regexp.MustCompile(`an "AzureKeyVault" managed "custom_https_configuration" block must have values in the following fields: "azure_key_vault_certificate_secret_name", "azure_key_vault_certificate_secret_version", and "azure_key_vault_certificate_vault_id"`),
+ },
+ })
+}
+
func (FrontDoorCustomHttpsConfigurationResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
id, err := parse.CustomHttpsConfigurationIDInsensitively(state.ID)
if err != nil {
@@ -53,7 +120,7 @@ func (FrontDoorCustomHttpsConfigurationResource) Exists(ctx context.Context, cli
return utils.Bool(resp.FrontendEndpointProperties != nil), nil
}
-func (r FrontDoorCustomHttpsConfigurationResource) CustomHttpsEnabled(data acceptance.TestData) string {
+func (r FrontDoorCustomHttpsConfigurationResource) Enabled(data acceptance.TestData) string {
return fmt.Sprintf(`
%s
@@ -68,7 +135,7 @@ resource "azurerm_frontdoor_custom_https_configuration" "test" {
`, r.template(data))
}
-func (r FrontDoorCustomHttpsConfigurationResource) CustomHttpsDisabled(data acceptance.TestData) string {
+func (r FrontDoorCustomHttpsConfigurationResource) Disabled(data acceptance.TestData) string {
return fmt.Sprintf(`
%s
@@ -79,6 +146,98 @@ resource "azurerm_frontdoor_custom_https_configuration" "test" {
`, r.template(data))
}
+func (r FrontDoorCustomHttpsConfigurationResource) DisabledWithConfigurationBlock(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_frontdoor_custom_https_configuration" "test" {
+ frontend_endpoint_id = azurerm_frontdoor.test.frontend_endpoints[local.endpoint_name]
+ custom_https_provisioning_enabled = false
+
+ custom_https_configuration {
+ certificate_source = "FrontDoor"
+ }
+}
+`, r.template(data))
+}
+
+func (r FrontDoorCustomHttpsConfigurationResource) EnabledWithoutConfigurationBlock(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_frontdoor_custom_https_configuration" "test" {
+ frontend_endpoint_id = azurerm_frontdoor.test.frontend_endpoints[local.endpoint_name]
+ custom_https_provisioning_enabled = true
+}
+`, r.template(data))
+}
+
+func (r FrontDoorCustomHttpsConfigurationResource) EnabledFrontdoorExtraAttributes(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_frontdoor_custom_https_configuration" "test" {
+ frontend_endpoint_id = azurerm_frontdoor.test.frontend_endpoints[local.endpoint_name]
+ custom_https_provisioning_enabled = true
+
+ custom_https_configuration {
+ certificate_source = "FrontDoor"
+ azure_key_vault_certificate_secret_name = "accTest"
+ }
+}
+`, r.template(data))
+}
+
+func (r FrontDoorCustomHttpsConfigurationResource) EnabledKeyVaultLatest(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_frontdoor_custom_https_configuration" "test" {
+ frontend_endpoint_id = azurerm_frontdoor.test.frontend_endpoints[local.endpoint_name]
+ custom_https_provisioning_enabled = true
+
+ custom_https_configuration {
+ certificate_source = "AzureKeyVault"
+ azure_key_vault_certificate_secret_name = "accTest"
+ azure_key_vault_certificate_secret_version = "latest"
+ }
+}
+`, r.template(data))
+}
+
+func (r FrontDoorCustomHttpsConfigurationResource) EnabledKeyVaultLatestMissingAttributes(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_frontdoor_custom_https_configuration" "test" {
+ frontend_endpoint_id = azurerm_frontdoor.test.frontend_endpoints[local.endpoint_name]
+ custom_https_provisioning_enabled = true
+
+ custom_https_configuration {
+ certificate_source = "AzureKeyVault"
+ azure_key_vault_certificate_secret_name = "accTest"
+ }
+}
+`, r.template(data))
+}
+
+func (r FrontDoorCustomHttpsConfigurationResource) EnabledKeyVaultMissingAttributes(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_frontdoor_custom_https_configuration" "test" {
+ frontend_endpoint_id = azurerm_frontdoor.test.frontend_endpoints[local.endpoint_name]
+ custom_https_provisioning_enabled = true
+
+ custom_https_configuration {
+ certificate_source = "AzureKeyVault"
+ azure_key_vault_certificate_secret_name = "accTest"
+ azure_key_vault_certificate_secret_version = "accTest"
+ }
+}
+`, r.template(data))
+}
+
func (FrontDoorCustomHttpsConfigurationResource) template(data acceptance.TestData) string {
return fmt.Sprintf(`
provider "azurerm" {
diff --git a/azurerm/internal/services/frontdoor/frontdoor_resource.go b/azurerm/internal/services/frontdoor/frontdoor_resource.go
index 22b40b6b83c58..de5f51339457a 100644
--- a/azurerm/internal/services/frontdoor/frontdoor_resource.go
+++ b/azurerm/internal/services/frontdoor/frontdoor_resource.go
@@ -809,14 +809,18 @@ func resourceFrontDoorDelete(d *schema.ResourceData, meta interface{}) error {
future, err := client.Delete(ctx, id.ResourceGroup, id.Name)
if err != nil {
- if response.WasNotFound(future.Response()) {
- return nil
+ if future.Response() != nil {
+ if response.WasNotFound(future.Response()) {
+ return nil
+ }
}
return fmt.Errorf("deleting Front Door %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
}
if err = future.WaitForCompletionRef(ctx, client.Client); err != nil {
- if !response.WasNotFound(future.Response()) {
- return fmt.Errorf("waiting for deleting Front Door %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
+ if future.Response() != nil {
+ if !response.WasNotFound(future.Response()) {
+ return fmt.Errorf("waiting for deleting Front Door %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
+ }
}
}
diff --git a/azurerm/internal/services/hdinsight/hdinsight_kafka_cluster_resource.go b/azurerm/internal/services/hdinsight/hdinsight_kafka_cluster_resource.go
index c978d4bb8d3f0..68e755801cbbd 100644
--- a/azurerm/internal/services/hdinsight/hdinsight_kafka_cluster_resource.go
+++ b/azurerm/internal/services/hdinsight/hdinsight_kafka_cluster_resource.go
@@ -107,6 +107,12 @@ func resourceHDInsightKafkaCluster() *schema.Resource {
"storage_account_gen2": SchemaHDInsightsGen2StorageAccounts(),
+ "encryption_in_transit_enabled": {
+ Type: schema.TypeBool,
+ ForceNew: true,
+ Optional: true,
+ },
+
"roles": {
Type: schema.TypeList,
Required: true,
@@ -251,6 +257,12 @@ func resourceHDInsightKafkaClusterCreate(d *schema.ResourceData, meta interface{
Identity: identity,
}
+ if encryptionInTransit, ok := d.GetOk("encryption_in_transit_enabled"); ok {
+ params.Properties.EncryptionInTransitProperties = &hdinsight.EncryptionInTransitProperties{
+ IsEncryptionInTransitEnabled: utils.Bool(encryptionInTransit.(bool)),
+ }
+ }
+
future, err := client.Create(ctx, resourceGroup, name, params)
if err != nil {
return fmt.Errorf("failure creating HDInsight Kafka Cluster %q (Resource Group %q): %+v", name, resourceGroup, err)
@@ -361,6 +373,10 @@ func resourceHDInsightKafkaClusterRead(d *schema.ResourceData, meta interface{})
kafkaRestProxyEndpoint := FindHDInsightConnectivityEndpoint("KafkaRestProxyPublicEndpoint", props.ConnectivityEndpoints)
d.Set("kafka_rest_proxy_endpoint", kafkaRestProxyEndpoint)
+ if props.EncryptionInTransitProperties != nil {
+ d.Set("encryption_in_transit_enabled", props.EncryptionInTransitProperties.IsEncryptionInTransitEnabled)
+ }
+
monitor, err := extensionsClient.GetMonitoringStatus(ctx, resourceGroup, name)
if err != nil {
return fmt.Errorf("failed reading monitor configuration for HDInsight Hadoop Cluster %q (Resource Group %q): %+v", name, resourceGroup, err)
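The create path above reads the new flag with `d.GetOk`, which reports `ok = false` both when the attribute is absent and when it holds its zero value, so `encryption_in_transit_enabled = false` leaves `EncryptionInTransitProperties` unset just like omitting the attribute. A toy illustration of that zero-value behavior (a map stand-in for `ResourceData`, not the SDK itself):

```go
package main

import "fmt"

// getOk mimics the SDK's semantics for booleans: ok is false both when the
// key is absent and when it holds the zero value (false).
func getOk(cfg map[string]interface{}, key string) (interface{}, bool) {
	v, exists := cfg[key]
	if !exists {
		return nil, false
	}
	if b, isBool := v.(bool); isBool && !b {
		return v, false // zero value reported as not-ok, like schema.ResourceData.GetOk
	}
	return v, true
}

func main() {
	cfg := map[string]interface{}{"encryption_in_transit_enabled": false}
	_, ok := getOk(cfg, "encryption_in_transit_enabled")
	fmt.Println(ok) // false: the zero value is indistinguishable from unset
}
```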
diff --git a/azurerm/internal/services/hdinsight/hdinsight_kafka_cluster_resource_test.go b/azurerm/internal/services/hdinsight/hdinsight_kafka_cluster_resource_test.go
index 918fe44cb7ac8..4c167a3d935a4 100644
--- a/azurerm/internal/services/hdinsight/hdinsight_kafka_cluster_resource_test.go
+++ b/azurerm/internal/services/hdinsight/hdinsight_kafka_cluster_resource_test.go
@@ -408,6 +408,28 @@ func TestAccHDInsightKafkaCluster_restProxy(t *testing.T) {
})
}
+func TestAccHDInsightKafkaCluster_encryptionInTransitEnabled(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_hdinsight_kafka_cluster", "test")
+ r := HDInsightKafkaClusterResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.encryptionInTransitEnabled(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep("roles.0.head_node.0.password",
+ "roles.0.head_node.0.vm_size",
+ "roles.0.worker_node.0.password",
+ "roles.0.worker_node.0.vm_size",
+ "roles.0.zookeeper_node.0.password",
+ "roles.0.zookeeper_node.0.vm_size",
+ "roles.0.kafka_management_node.0.password",
+ "roles.0.kafka_management_node.0.vm_size",
+ "storage_account"),
+ })
+}
+
func (t HDInsightKafkaClusterResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
id, err := parse.ClusterID(state.ID)
if err != nil {
@@ -1305,3 +1327,57 @@ resource "azurerm_hdinsight_kafka_cluster" "test" {
}
`, r.template(data), data.RandomInteger, data.RandomInteger)
}
+
+func (r HDInsightKafkaClusterResource) encryptionInTransitEnabled(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_hdinsight_kafka_cluster" "test" {
+ name = "acctesthdi-%d"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ cluster_version = "4.0"
+ tier = "Standard"
+
+ encryption_in_transit_enabled = true
+
+ component_version {
+ kafka = "2.1"
+ }
+
+ gateway {
+ enabled = true
+ username = "acctestusrgw"
+ password = "TerrAform123!"
+ }
+
+ storage_account {
+ storage_container_id = azurerm_storage_container.test.id
+ storage_account_key = azurerm_storage_account.test.primary_access_key
+ is_default = true
+ }
+
+ roles {
+ head_node {
+ vm_size = "Standard_D3_V2"
+ username = "acctestusrvm"
+ password = "AccTestvdSC4daf986!"
+ }
+
+ worker_node {
+ vm_size = "Standard_D3_V2"
+ username = "acctestusrvm"
+ password = "AccTestvdSC4daf986!"
+ target_instance_count = 3
+ number_of_disks_per_node = 2
+ }
+
+ zookeeper_node {
+ vm_size = "Standard_D3_V2"
+ username = "acctestusrvm"
+ password = "AccTestvdSC4daf986!"
+ }
+ }
+}
+`, r.template(data), data.RandomInteger)
+}
diff --git a/azurerm/internal/services/healthcare/healthcare_service_resource.go b/azurerm/internal/services/healthcare/healthcare_service_resource.go
index b1dee0ad6f75e..ede55afcd5c2c 100644
--- a/azurerm/internal/services/healthcare/healthcare_service_resource.go
+++ b/azurerm/internal/services/healthcare/healthcare_service_resource.go
@@ -188,6 +188,12 @@ func resourceHealthcareService() *schema.Resource {
},
},
+ "public_network_access_enabled": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: true,
+ },
+
"tags": tags.Schema(),
},
}
@@ -238,6 +244,13 @@ func resourceHealthcareServiceCreateUpdate(d *schema.ResourceData, meta interfac
},
}
+ publicNetworkAccess := d.Get("public_network_access_enabled").(bool)
+ if !publicNetworkAccess {
+ healthcareServiceDescription.Properties.PublicNetworkAccess = healthcareapis.Disabled
+ } else {
+ healthcareServiceDescription.Properties.PublicNetworkAccess = healthcareapis.Enabled
+ }
+
future, err := client.CreateOrUpdate(ctx, resGroup, name, healthcareServiceDescription)
if err != nil {
return fmt.Errorf("Error Creating/Updating Healthcare Service %q (Resource Group %q): %+v", name, resGroup, err)
@@ -307,6 +320,11 @@ func resourceHealthcareServiceRead(d *schema.ResourceData, meta interface{}) err
}
d.Set("cosmosdb_key_vault_key_versionless_id", cosmodDbKeyVaultKeyVersionlessId)
d.Set("cosmosdb_throughput", cosmosDbThroughput)
+ if props.PublicNetworkAccess == healthcareapis.Enabled {
+ d.Set("public_network_access_enabled", true)
+ } else {
+ d.Set("public_network_access_enabled", false)
+ }
if err := d.Set("authentication_configuration", flattenHealthcareAuthConfig(props.AuthenticationConfiguration)); err != nil {
return fmt.Errorf("Error setting `authentication_configuration`: %+v", err)
diff --git a/azurerm/internal/services/healthcare/healthcare_service_resource_test.go b/azurerm/internal/services/healthcare/healthcare_service_resource_test.go
index 8896807b3e04a..03eb42586faa0 100644
--- a/azurerm/internal/services/healthcare/healthcare_service_resource_test.go
+++ b/azurerm/internal/services/healthcare/healthcare_service_resource_test.go
@@ -62,6 +62,21 @@ func TestAccHealthCareService_complete(t *testing.T) {
})
}
+func TestAccHealthCareService_publicNetworkAccessDisabled(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_healthcare_service", "test")
+ r := HealthCareServiceResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.publicNetworkAccessDisabled(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func (HealthCareServiceResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
id, err := parse.ServiceID(state.ID)
if err != nil {
@@ -234,3 +249,119 @@ resource "azurerm_healthcare_service" "test" {
}
`, data.RandomInteger, location, data.RandomString, data.RandomIntOfLength(17)) // name can only be 24 chars long
}
+
+func (HealthCareServiceResource) publicNetworkAccessDisabled(data acceptance.TestData) string {
+ // currently only supported in "ukwest", "northcentralus", "westus2".
+ location := "westus2"
+
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {
+ key_vault {
+ purge_soft_delete_on_destroy = false
+ }
+ }
+}
+
+provider "azuread" {}
+
+data "azurerm_client_config" "current" {
+}
+
+data "azuread_service_principal" "cosmosdb" {
+ display_name = "Azure Cosmos DB"
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-health-%d"
+ location = "%s"
+}
+
+resource "azurerm_key_vault" "test" {
+ name = "acctestkv-%s"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ tenant_id = data.azurerm_client_config.current.tenant_id
+ sku_name = "standard"
+
+ purge_protection_enabled = true
+ soft_delete_enabled = true
+ soft_delete_retention_days = 7
+
+ access_policy {
+ tenant_id = data.azurerm_client_config.current.tenant_id
+ object_id = data.azurerm_client_config.current.object_id
+
+ key_permissions = [
+ "list",
+ "create",
+ "delete",
+ "get",
+ "purge",
+ "update",
+ ]
+ }
+
+ access_policy {
+ tenant_id = data.azurerm_client_config.current.tenant_id
+ object_id = data.azuread_service_principal.cosmosdb.id
+
+ key_permissions = [
+ "get",
+ "unwrapKey",
+ "wrapKey",
+ ]
+ }
+}
+
+resource "azurerm_key_vault_key" "test" {
+ name = "examplekey"
+ key_vault_id = azurerm_key_vault.test.id
+ key_type = "RSA"
+ key_size = 2048
+
+ key_opts = [
+ "decrypt",
+ "encrypt",
+ "sign",
+ "unwrapKey",
+ "verify",
+ "wrapKey",
+ ]
+}
+
+resource "azurerm_healthcare_service" "test" {
+ name = "testacc%d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+
+ tags = {
+ environment = "production"
+ purpose = "AcceptanceTests"
+ }
+
+ access_policy_object_ids = [
+ data.azurerm_client_config.current.object_id,
+ ]
+
+ authentication_configuration {
+ authority = "https://login.microsoftonline.com/${data.azurerm_client_config.current.tenant_id}"
+ audience = "https://azurehealthcareapis.com"
+ smart_proxy_enabled = true
+ }
+
+ cors_configuration {
+ allowed_origins = ["http://www.example.com", "http://www.example2.com"]
+ allowed_headers = ["*"]
+ allowed_methods = ["GET", "PUT"]
+ max_age_in_seconds = 500
+ allow_credentials = true
+ }
+
+ cosmosdb_throughput = 400
+ cosmosdb_key_vault_key_versionless_id = azurerm_key_vault_key.test.versionless_id
+
+ public_network_access_enabled = false
+}
+`, data.RandomInteger, location, data.RandomString, data.RandomIntOfLength(17)) // name can only be 24 chars long
+}
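The healthcare resource maps the boolean attribute onto the service's `PublicNetworkAccess` enum on create and collapses it back to a bool on read. That round trip can be sketched as a pair of tiny conversions (the string constants here stand in for `healthcareapis.Enabled`/`healthcareapis.Disabled` and are illustrative):

```go
package main

import "fmt"

const (
	enabled  = "Enabled"
	disabled = "Disabled"
)

// expandPublicNetworkAccess maps the schema bool to the API enum value.
func expandPublicNetworkAccess(b bool) string {
	if b {
		return enabled
	}
	return disabled
}

// flattenPublicNetworkAccess maps the API enum value back to the schema bool.
func flattenPublicNetworkAccess(v string) bool {
	return v == enabled
}

func main() {
	fmt.Println(flattenPublicNetworkAccess(expandPublicNetworkAccess(false))) // false
}
```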
diff --git a/azurerm/internal/services/machinelearning/client/client.go b/azurerm/internal/services/machinelearning/client/client.go
index 1ed909526ffe1..59aeb54ec48a8 100644
--- a/azurerm/internal/services/machinelearning/client/client.go
+++ b/azurerm/internal/services/machinelearning/client/client.go
@@ -6,14 +6,19 @@ import (
)
type Client struct {
- WorkspacesClient *machinelearningservices.WorkspacesClient
+ WorkspacesClient *machinelearningservices.WorkspacesClient
+ MachineLearningComputeClient *machinelearningservices.MachineLearningComputeClient
}
func NewClient(o *common.ClientOptions) *Client {
WorkspacesClient := machinelearningservices.NewWorkspacesClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
o.ConfigureClient(&WorkspacesClient.Client, o.ResourceManagerAuthorizer)
+ MachineLearningComputeClient := machinelearningservices.NewMachineLearningComputeClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
+ o.ConfigureClient(&MachineLearningComputeClient.Client, o.ResourceManagerAuthorizer)
+
return &Client{
- WorkspacesClient: &WorkspacesClient,
+ WorkspacesClient: &WorkspacesClient,
+ MachineLearningComputeClient: &MachineLearningComputeClient,
}
}
diff --git a/azurerm/internal/services/machinelearning/machine_learning_inference_cluster_resource.go b/azurerm/internal/services/machinelearning/machine_learning_inference_cluster_resource.go
new file mode 100644
index 0000000000000..52c0100d90f2a
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/machine_learning_inference_cluster_resource.go
@@ -0,0 +1,286 @@
+package machinelearning
+
+import (
+ "fmt"
+ "time"
+
+ "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
+ "github.com/Azure/azure-sdk-for-go/services/machinelearningservices/mgmt/2020-04-01/machinelearningservices"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/validation"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/machinelearning/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/machinelearning/validate"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tags"
+
+ azSchema "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/suppress"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+func resourceAksInferenceCluster() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceAksInferenceClusterCreate,
+ Read: resourceAksInferenceClusterRead,
+ Delete: resourceAksInferenceClusterDelete,
+
+ Importer: azSchema.ValidateResourceIDPriorToImport(func(id string) error {
+ _, err := parse.InferenceClusterID(id)
+ return err
+ }),
+
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(30 * time.Minute),
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ Update: schema.DefaultTimeout(30 * time.Minute),
+ Delete: schema.DefaultTimeout(30 * time.Minute),
+ },
+
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "kubernetes_cluster_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ ValidateFunc: validate.KubernetesClusterID,
+ // remove in 3.0 of the provider
+ DiffSuppressFunc: suppress.CaseDifference,
+ },
+
+ "location": azure.SchemaLocation(),
+
+ "machine_learning_workspace_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "cluster_purpose": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Default: string(machinelearningservices.FastProd),
+ ValidateFunc: validation.StringInSlice([]string{
+ string(machinelearningservices.DevTest),
+ string(machinelearningservices.FastProd),
+ string(machinelearningservices.DenseProd),
+ }, false),
+ },
+
+ "description": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ },
+
+ "ssl": {
+ Type: schema.TypeList,
+ Optional: true,
+ ForceNew: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "cert": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Default: "",
+ },
+ "key": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Default: "",
+ },
+ "cname": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Default: "",
+ },
+ },
+ },
+ },
+
+ "tags": tags.ForceNewSchema(),
+ },
+ }
+}
+
+func resourceAksInferenceClusterCreate(d *schema.ResourceData, meta interface{}) error {
+ mlComputeClient := meta.(*clients.Client).MachineLearning.MachineLearningComputeClient
+ aksClient := meta.(*clients.Client).Containers.KubernetesClustersClient
+ ctx, cancel := timeouts.ForCreate(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ // Define Inference Cluster Name
+ name := d.Get("name").(string)
+
+ // Get Machine Learning Workspace Name and Resource Group from ID
+ workspaceID, err := parse.WorkspaceID(d.Get("machine_learning_workspace_id").(string))
+ if err != nil {
+ return err
+ }
+
+ // Check if Inference Cluster already exists
+ existing, err := mlComputeClient.Get(ctx, workspaceID.ResourceGroup, workspaceID.Name, name)
+ if err != nil {
+ if !utils.ResponseWasNotFound(existing.Response) {
+ return fmt.Errorf("checking for existing Inference Cluster %q in Workspace %q (Resource Group %q): %+v", name, workspaceID.Name, workspaceID.ResourceGroup, err)
+ }
+ }
+ if existing.ID != nil && *existing.ID != "" {
+ return tf.ImportAsExistsError("azurerm_machine_learning_inference_cluster", *existing.ID)
+ }
+
+ // Get AKS Compute Properties
+ aksID, err := parse.KubernetesClusterID(d.Get("kubernetes_cluster_id").(string))
+ if err != nil {
+ return err
+ }
+ aks, err := aksClient.Get(ctx, aksID.ResourceGroup, aksID.ManagedClusterName)
+ if err != nil {
+ return err
+ }
+ aksComputeProperties, isAks := (machinelearningservices.BasicCompute).AsAKS(expandAksComputeProperties(&aks, d))
+ if !isAks {
+ return fmt.Errorf("the Compute Properties are not recognized as AKS Compute Properties")
+ }
+
+ inferenceClusterParameters := machinelearningservices.ComputeResource{
+ Properties: aksComputeProperties,
+ Location: utils.String(azure.NormalizeLocation(d.Get("location").(string))),
+ Tags: tags.Expand(d.Get("tags").(map[string]interface{})),
+ }
+
+ future, err := mlComputeClient.CreateOrUpdate(ctx, workspaceID.ResourceGroup, workspaceID.Name, name, inferenceClusterParameters)
+ if err != nil {
+ return fmt.Errorf("creating Inference Cluster %q in workspace %q (Resource Group %q): %+v", name, workspaceID.Name, workspaceID.ResourceGroup, err)
+ }
+ if err := future.WaitForCompletionRef(ctx, mlComputeClient.Client); err != nil {
+ return fmt.Errorf("waiting for creation of Inference Cluster %q in workspace %q (Resource Group %q): %+v", name, workspaceID.Name, workspaceID.ResourceGroup, err)
+ }
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
+ id := parse.NewInferenceClusterID(subscriptionId, workspaceID.ResourceGroup, workspaceID.Name, name)
+ d.SetId(id.ID())
+
+ return resourceAksInferenceClusterRead(d, meta)
+}
+
+func resourceAksInferenceClusterRead(d *schema.ResourceData, meta interface{}) error {
+ mlComputeClient := meta.(*clients.Client).MachineLearning.MachineLearningComputeClient
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id, err := parse.InferenceClusterID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ d.Set("name", id.ComputeName)
+
+ // Check that Inference Cluster Response can be read
+ computeResource, err := mlComputeClient.Get(ctx, id.ResourceGroup, id.WorkspaceName, id.ComputeName)
+ if err != nil {
+ if utils.ResponseWasNotFound(computeResource.Response) {
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("making Read request on Inference Cluster %q in Workspace %q (Resource Group %q): %+v",
+ id.ComputeName, id.WorkspaceName, id.ResourceGroup, err)
+ }
+
+ // Retrieve Machine Learning Workspace ID
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
+ workspaceId := parse.NewWorkspaceID(subscriptionId, id.ResourceGroup, id.WorkspaceName)
+ d.Set("machine_learning_workspace_id", workspaceId.ID())
+
+ // Use the ComputeResource response to retrieve the AKS Cluster ID and related properties
+ aksComputeProperties, isAks := (machinelearningservices.BasicCompute).AsAKS(computeResource.Properties)
+ if !isAks {
+ return fmt.Errorf("compute resource %q is not an AKS cluster", id.ComputeName)
+ }
+
+ // Retrieve AKS Cluster ID
+ aksId, err := parse.KubernetesClusterID(*aksComputeProperties.ResourceID)
+ if err != nil {
+ return err
+ }
+ d.Set("kubernetes_cluster_id", aksId.ID())
+ d.Set("cluster_purpose", string(aksComputeProperties.Properties.ClusterPurpose))
+ d.Set("description", aksComputeProperties.Description)
+
+ // Retrieve location
+ if location := computeResource.Location; location != nil {
+ d.Set("location", azure.NormalizeLocation(*location))
+ }
+
+ return tags.FlattenAndSet(d, computeResource.Tags)
+}
+
+func resourceAksInferenceClusterDelete(d *schema.ResourceData, meta interface{}) error {
+ mlComputeClient := meta.(*clients.Client).MachineLearning.MachineLearningComputeClient
+ ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+ id, err := parse.InferenceClusterID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ future, err := mlComputeClient.Delete(ctx, id.ResourceGroup, id.WorkspaceName, id.ComputeName, machinelearningservices.Detach)
+ if err != nil {
+ return fmt.Errorf("deleting Inference Cluster %q in workspace %q (Resource Group %q): %+v",
+ id.ComputeName, id.WorkspaceName, id.ResourceGroup, err)
+ }
+ if err := future.WaitForCompletionRef(ctx, mlComputeClient.Client); err != nil {
+ return fmt.Errorf("waiting for deletion of Inference Cluster %q in workspace %q (Resource Group %q): %+v",
+ id.ComputeName, id.WorkspaceName, id.ResourceGroup, err)
+ }
+ return nil
+}
+
+func expandAksComputeProperties(aks *containerservice.ManagedCluster, d *schema.ResourceData) machinelearningservices.AKS {
+ return machinelearningservices.AKS{
+ Properties: &machinelearningservices.AKSProperties{
+ ClusterFqdn: aks.Fqdn,
+ SslConfiguration: expandSSLConfig(d.Get("ssl").([]interface{})),
+ ClusterPurpose: machinelearningservices.ClusterPurpose(d.Get("cluster_purpose").(string)),
+ },
+ ComputeLocation: aks.Location,
+ Description: utils.String(d.Get("description").(string)),
+ ResourceID: aks.ID,
+ }
+}
+
+func expandSSLConfig(input []interface{}) *machinelearningservices.SslConfiguration {
+ if len(input) == 0 {
+ return nil
+ }
+
+ v := input[0].(map[string]interface{})
+
+ // SSL Certificate default values
+ sslStatus := "Disabled"
+
+ if !(v["cert"].(string) == "" && v["key"].(string) == "" && v["cname"].(string) == "") {
+ sslStatus = "Enabled"
+ }
+
+ return &machinelearningservices.SslConfiguration{
+ Status: machinelearningservices.Status1(sslStatus),
+ Cert: utils.String(v["cert"].(string)),
+ Key: utils.String(v["key"].(string)),
+ Cname: utils.String(v["cname"].(string)),
+ }
+}
diff --git a/azurerm/internal/services/machinelearning/machine_learning_inference_cluster_resource_test.go b/azurerm/internal/services/machinelearning/machine_learning_inference_cluster_resource_test.go
new file mode 100644
index 0000000000000..636d927511dd1
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/machine_learning_inference_cluster_resource_test.go
@@ -0,0 +1,277 @@
+package machinelearning_test
+
+import (
+ "context"
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/hashicorp/terraform-plugin-sdk/terraform"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/machinelearning/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+type InferenceClusterResource struct{}
+
+func TestAccInferenceCluster_basic(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_machine_learning_inference_cluster", "test")
+ r := InferenceClusterResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccInferenceCluster_requiresImport(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_machine_learning_inference_cluster", "test")
+ r := InferenceClusterResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.RequiresImportErrorStep(r.requiresImport),
+ })
+}
+
+func TestAccInferenceCluster_complete(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_machine_learning_inference_cluster", "test")
+ r := InferenceClusterResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.complete(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep("ssl"),
+ })
+}
+
+func TestAccInferenceCluster_completeProduction(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_machine_learning_inference_cluster", "test")
+ r := InferenceClusterResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.completeProduction(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep("ssl"),
+ })
+}
+
+func (r InferenceClusterResource) Exists(ctx context.Context, client *clients.Client, state *terraform.InstanceState) (*bool, error) {
+ inferenceClusterClient := client.MachineLearning.MachineLearningComputeClient
+ id, err := parse.InferenceClusterID(state.ID)
+ if err != nil {
+ return nil, err
+ }
+
+ resp, err := inferenceClusterClient.Get(ctx, id.ResourceGroup, id.WorkspaceName, id.ComputeName)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ return utils.Bool(false), nil
+ }
+ return nil, fmt.Errorf("retrieving Inference Cluster %q: %+v", state.ID, err)
+ }
+
+ return utils.Bool(resp.Properties != nil), nil
+}
+
+func (r InferenceClusterResource) basic(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_machine_learning_inference_cluster" "test" {
+ name = "AIC-%d"
+ machine_learning_workspace_id = azurerm_machine_learning_workspace.test.id
+ location = azurerm_resource_group.test.location
+ kubernetes_cluster_id = azurerm_kubernetes_cluster.test.id
+ cluster_purpose = "DevTest"
+
+
+ tags = {
+ ENV = "Test"
+ }
+}
+`, r.templateDevTest(data), data.RandomIntOfLength(8))
+}
+
+func (r InferenceClusterResource) complete(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_machine_learning_inference_cluster" "test" {
+ name = "AIC-%d"
+ machine_learning_workspace_id = azurerm_machine_learning_workspace.test.id
+ location = azurerm_resource_group.test.location
+ kubernetes_cluster_id = azurerm_kubernetes_cluster.test.id
+ cluster_purpose = "DevTest"
+ ssl {
+ cert = file("testdata/cert.pem")
+ key = file("testdata/key.pem")
+ cname = "www.contoso.com"
+ }
+
+ tags = {
+ ENV = "Test"
+ }
+
+}
+`, r.templateDevTest(data), data.RandomIntOfLength(8))
+}
+
+func (r InferenceClusterResource) completeProduction(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_machine_learning_inference_cluster" "test" {
+ name = "AIC-%d"
+ machine_learning_workspace_id = azurerm_machine_learning_workspace.test.id
+ location = azurerm_resource_group.test.location
+ kubernetes_cluster_id = azurerm_kubernetes_cluster.test.id
+ cluster_purpose = "FastProd"
+ ssl {
+ cert = file("testdata/cert.pem")
+ key = file("testdata/key.pem")
+ cname = "www.contoso.com"
+ }
+
+ tags = {
+ ENV = "Production"
+ }
+
+}
+`, r.templateFastProd(data), data.RandomIntOfLength(8))
+}
+
+func (r InferenceClusterResource) requiresImport(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_machine_learning_inference_cluster" "import" {
+ name = azurerm_machine_learning_inference_cluster.test.name
+ machine_learning_workspace_id = azurerm_machine_learning_inference_cluster.test.machine_learning_workspace_id
+ location = azurerm_machine_learning_inference_cluster.test.location
+ kubernetes_cluster_id = azurerm_machine_learning_inference_cluster.test.kubernetes_cluster_id
+ cluster_purpose = azurerm_machine_learning_inference_cluster.test.cluster_purpose
+
+ tags = azurerm_machine_learning_inference_cluster.test.tags
+}
+`, r.basic(data))
+}
+
+func (r InferenceClusterResource) templateFastProd(data acceptance.TestData) string {
+ return r.template(data, "Standard_D3_v2", 3)
+}
+func (r InferenceClusterResource) templateDevTest(data acceptance.TestData) string {
+ return r.template(data, "Standard_DS2_v2", 1)
+}
+
+func (r InferenceClusterResource) template(data acceptance.TestData, vmSize string, nodeCount int) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+data "azurerm_client_config" "current" {}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-ml-%[1]d"
+ location = "%[2]s"
+ tags = {
+ "stage" = "test"
+ }
+}
+
+resource "azurerm_application_insights" "test" {
+ name = "acctestai-%[1]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ application_type = "web"
+}
+
+resource "azurerm_key_vault" "test" {
+ name = "acctestvault%[3]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ tenant_id = data.azurerm_client_config.current.tenant_id
+
+ sku_name = "standard"
+
+ purge_protection_enabled = true
+}
+
+resource "azurerm_storage_account" "test" {
+ name = "acctestsa%[4]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ account_tier = "Standard"
+ account_replication_type = "LRS"
+}
+
+resource "azurerm_machine_learning_workspace" "test" {
+ name = "acctest-MLW%[5]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ application_insights_id = azurerm_application_insights.test.id
+ key_vault_id = azurerm_key_vault.test.id
+ storage_account_id = azurerm_storage_account.test.id
+
+ identity {
+ type = "SystemAssigned"
+ }
+}
+
+resource "azurerm_virtual_network" "test" {
+ name = "acctestvirtnet%[6]d"
+ address_space = ["10.1.0.0/16"]
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+}
+
+resource "azurerm_subnet" "test" {
+ name = "acctestsubnet%[7]d"
+ resource_group_name = azurerm_resource_group.test.name
+ virtual_network_name = azurerm_virtual_network.test.name
+ address_prefix = "10.1.0.0/24"
+}
+
+resource "azurerm_kubernetes_cluster" "test" {
+ name = "acctestaks%d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ dns_prefix = join("", ["acctestaks", azurerm_resource_group.test.location])
+ node_resource_group = "acctestRGAKS-%d"
+
+ default_node_pool {
+ name = "default"
+ node_count = %d
+ vm_size = "%s"
+ vnet_subnet_id = azurerm_subnet.test.id
+ }
+
+ identity {
+ type = "SystemAssigned"
+ }
+}
+`, data.RandomInteger, data.Locations.Primary,
+ data.RandomIntOfLength(12), data.RandomIntOfLength(15), data.RandomIntOfLength(16),
+ data.RandomInteger, data.RandomInteger, data.RandomInteger, data.RandomInteger, nodeCount, vmSize)
+}
diff --git a/azurerm/internal/services/machinelearning/machine_learning_workspace_resource.go b/azurerm/internal/services/machinelearning/machine_learning_workspace_resource.go
index ae09d3008260f..2c4390df16613 100644
--- a/azurerm/internal/services/machinelearning/machine_learning_workspace_resource.go
+++ b/azurerm/internal/services/machinelearning/machine_learning_workspace_resource.go
@@ -1,13 +1,10 @@
package machinelearning
import (
- "context"
"fmt"
"time"
"github.com/Azure/azure-sdk-for-go/services/machinelearningservices/mgmt/2020-04-01/machinelearningservices"
- "github.com/Azure/azure-sdk-for-go/services/preview/containerregistry/mgmt/2020-11-01-preview/containerregistry"
- "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2021-01-01/storage"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
@@ -169,34 +166,26 @@ func resourceMachineLearningWorkspaceCreate(d *schema.ResourceData, meta interfa
existing, err := client.Get(ctx, resGroup, name)
if err != nil {
if !utils.ResponseWasNotFound(existing.Response) {
- return fmt.Errorf("Error checking for existing AML Workspace %q (Resource Group %q): %s", name, resGroup, err)
+ return fmt.Errorf("checking for existing AML Workspace %q (Resource Group %q): %+v", name, resGroup, err)
}
}
if existing.ID != nil && *existing.ID != "" {
return tf.ImportAsExistsError("azurerm_machine_learning_workspace", *existing.ID)
}
- location := azure.NormalizeLocation(d.Get("location").(string))
- storageAccountId := d.Get("storage_account_id").(string)
- keyVaultId := d.Get("key_vault_id").(string)
- applicationInsightsId := d.Get("application_insights_id").(string)
- skuName := d.Get("sku_name").(string)
-
- t := d.Get("tags").(map[string]interface{})
-
workspace := machinelearningservices.Workspace{
- Name: &name,
- Location: &location,
- Tags: tags.Expand(t),
+ Name: utils.String(name),
+ Location: utils.String(azure.NormalizeLocation(d.Get("location").(string))),
+ Tags: tags.Expand(d.Get("tags").(map[string]interface{})),
Sku: &machinelearningservices.Sku{
- Name: utils.String(skuName),
- Tier: utils.String(skuName),
+ Name: utils.String(d.Get("sku_name").(string)),
+ Tier: utils.String(d.Get("sku_name").(string)),
},
Identity: expandMachineLearningWorkspaceIdentity(d.Get("identity").([]interface{})),
WorkspaceProperties: &machinelearningservices.WorkspaceProperties{
- StorageAccount: &storageAccountId,
- ApplicationInsights: &applicationInsightsId,
- KeyVault: &keyVaultId,
+ StorageAccount: utils.String(d.Get("storage_account_id").(string)),
+ ApplicationInsights: utils.String(d.Get("application_insights_id").(string)),
+ KeyVault: utils.String(d.Get("key_vault_id").(string)),
},
}
@@ -216,35 +205,18 @@ func resourceMachineLearningWorkspaceCreate(d *schema.ResourceData, meta interfa
workspace.HbiWorkspace = utils.Bool(v.(bool))
}
- accountsClient := meta.(*clients.Client).Storage.AccountsClient
- if err := validateStorageAccount(ctx, *accountsClient, storageAccountId); err != nil {
- return fmt.Errorf("Error creating Machine Learning Workspace %q (Resource Group %q): %+v", name, resGroup, err)
- }
-
- registriesClient := meta.(*clients.Client).Containers.RegistriesClient
- if err := validateContainerRegistry(ctx, *registriesClient, workspace.ContainerRegistry); err != nil {
- return fmt.Errorf("Error creating Machine Learning Workspace %q (Resource Group %q): %+v", name, resGroup, err)
- }
-
future, err := client.CreateOrUpdate(ctx, resGroup, name, workspace)
if err != nil {
- return fmt.Errorf("Error creating Machine Learning Workspace %q (Resource Group %q): %+v", name, resGroup, err)
+ return fmt.Errorf("creating Machine Learning Workspace %q (Resource Group %q): %+v", name, resGroup, err)
}
if err = future.WaitForCompletionRef(ctx, client.Client); err != nil {
- return fmt.Errorf("Error waiting for creation of Machine Learning Workspace %q (Resource Group %q): %+v", name, resGroup, err)
- }
-
- resp, err := client.Get(ctx, resGroup, name)
- if err != nil {
- return fmt.Errorf("Error retrieving Machine Learning Workspace %q (Resource Group %q): %+v", name, resGroup, err)
- }
-
- if resp.ID == nil {
- return fmt.Errorf("Cannot read Machine Learning Workspace %q (Resource Group %q) ID", name, resGroup)
+ return fmt.Errorf("waiting for creation of Machine Learning Workspace %q (Resource Group %q): %+v", name, resGroup, err)
}
- d.SetId(*resp.ID)
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
+ id := parse.NewWorkspaceID(subscriptionId, resGroup, name)
+ d.SetId(id.ID())
return resourceMachineLearningWorkspaceRead(d, meta)
}
@@ -256,7 +228,7 @@ func resourceMachineLearningWorkspaceRead(d *schema.ResourceData, meta interface
id, err := parse.WorkspaceID(d.Id())
if err != nil {
- return fmt.Errorf("Error parsing Machine Learning Workspace ID `%q`: %+v", d.Id(), err)
+ return fmt.Errorf("parsing Machine Learning Workspace ID `%q`: %+v", d.Id(), err)
}
resp, err := client.Get(ctx, id.ResourceGroup, id.Name)
@@ -265,7 +237,7 @@ func resourceMachineLearningWorkspaceRead(d *schema.ResourceData, meta interface
d.SetId("")
return nil
}
- return fmt.Errorf("Error making Read request on Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
+ return fmt.Errorf("making Read request on Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
}
d.Set("name", id.Name)
@@ -290,7 +262,7 @@ func resourceMachineLearningWorkspaceRead(d *schema.ResourceData, meta interface
}
if err := d.Set("identity", flattenMachineLearningWorkspaceIdentity(resp.Identity)); err != nil {
- return fmt.Errorf("Error flattening identity on Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
+ return fmt.Errorf("flattening identity on Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
}
return tags.FlattenAndSet(d, resp.Tags)
@@ -331,7 +303,7 @@ func resourceMachineLearningWorkspaceUpdate(d *schema.ResourceData, meta interfa
}
if _, err := client.Update(ctx, id.ResourceGroup, id.Name, update); err != nil {
- return fmt.Errorf("Error updating Machine Learning Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
+ return fmt.Errorf("updating Machine Learning Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
}
return resourceMachineLearningWorkspaceRead(d, meta)
@@ -344,67 +316,16 @@ func resourceMachineLearningWorkspaceDelete(d *schema.ResourceData, meta interfa
id, err := parse.WorkspaceID(d.Id())
if err != nil {
- return fmt.Errorf("Error parsing Machine Learning Workspace ID `%q`: %+v", d.Id(), err)
+ return fmt.Errorf("parsing Machine Learning Workspace ID `%q`: %+v", d.Id(), err)
}
future, err := client.Delete(ctx, id.ResourceGroup, id.Name)
if err != nil {
- return fmt.Errorf("Error deleting Machine Learning Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
+ return fmt.Errorf("deleting Machine Learning Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
}
if err := future.WaitForCompletionRef(ctx, client.Client); err != nil {
- return fmt.Errorf("Error waiting for deletion of Machine Learning Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
- }
-
- return nil
-}
-
-func validateStorageAccount(ctx context.Context, client storage.AccountsClient, accountID string) error {
- if accountID == "" {
- return fmt.Errorf("Error validating Storage Account: Empty ID")
- }
-
- // TODO -- use parse function "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/parsers".ParseAccountID
- // when issue https://github.com/Azure/azure-rest-api-specs/issues/8323 is addressed
- id, err := parse.AccountIDCaseDiffSuppress(accountID)
- if err != nil {
- return fmt.Errorf("Error validating Storage Account: %+v", err)
- }
-
- account, err := client.GetProperties(ctx, id.ResourceGroup, id.Name, "")
- if err != nil {
- return fmt.Errorf("Error validating Storage Account %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
- }
- if sku := account.Sku; sku != nil {
- if sku.Tier == storage.Premium {
- return fmt.Errorf("Error validating Storage Account %q (Resource Group %q): The associated Storage Account must not be Premium", id.Name, id.ResourceGroup)
- }
- }
-
- return nil
-}
-
-func validateContainerRegistry(ctx context.Context, client containerregistry.RegistriesClient, acrID *string) error {
- if acrID == nil {
- return nil
- }
-
- // TODO: use container registry's custom ID parse function when implemented
- id, err := azure.ParseAzureResourceID(*acrID)
- if err != nil {
- return fmt.Errorf("Error validating Container Registry: %+v", err)
- }
-
- acrName := id.Path["registries"]
- resourceGroup := id.ResourceGroup
- client.SubscriptionID = id.SubscriptionID
-
- acr, err := client.Get(ctx, resourceGroup, acrName)
- if err != nil {
- return fmt.Errorf("Error validating Container Registry %q (Resource Group %q): %+v", acrName, resourceGroup, err)
- }
- if acr.AdminUserEnabled == nil || !*acr.AdminUserEnabled {
- return fmt.Errorf("Error validating Container Registry%q (Resource Group %q): The associated Container Registry must set `admin_enabled` to true", acrName, resourceGroup)
+ return fmt.Errorf("waiting for deletion of Machine Learning Workspace %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
}
return nil
@@ -417,13 +338,9 @@ func expandMachineLearningWorkspaceIdentity(input []interface{}) *machinelearnin
v := input[0].(map[string]interface{})
- identityType := machinelearningservices.ResourceIdentityType(v["type"].(string))
-
- identity := machinelearningservices.Identity{
- Type: identityType,
+ return &machinelearningservices.Identity{
+ Type: machinelearningservices.ResourceIdentityType(v["type"].(string)),
}
-
- return &identity
}
func flattenMachineLearningWorkspaceIdentity(identity *machinelearningservices.Identity) []interface{} {
@@ -431,8 +348,6 @@ func flattenMachineLearningWorkspaceIdentity(identity *machinelearningservices.I
return []interface{}{}
}
- t := string(identity.Type)
-
principalID := ""
if identity.PrincipalID != nil {
principalID = *identity.PrincipalID
@@ -445,7 +360,7 @@ func flattenMachineLearningWorkspaceIdentity(identity *machinelearningservices.I
return []interface{}{
map[string]interface{}{
- "type": t,
+ "type": string(identity.Type),
"principal_id": principalID,
"tenant_id": tenantID,
},
diff --git a/azurerm/internal/services/machinelearning/parse/inference_cluster.go b/azurerm/internal/services/machinelearning/parse/inference_cluster.go
new file mode 100644
index 0000000000000..588b8301515a8
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/parse/inference_cluster.go
@@ -0,0 +1,75 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+)
+
+type InferenceClusterId struct {
+ SubscriptionId string
+ ResourceGroup string
+ WorkspaceName string
+ ComputeName string
+}
+
+func NewInferenceClusterID(subscriptionId, resourceGroup, workspaceName, computeName string) InferenceClusterId {
+ return InferenceClusterId{
+ SubscriptionId: subscriptionId,
+ ResourceGroup: resourceGroup,
+ WorkspaceName: workspaceName,
+ ComputeName: computeName,
+ }
+}
+
+func (id InferenceClusterId) String() string {
+ segments := []string{
+ fmt.Sprintf("Compute Name %q", id.ComputeName),
+ fmt.Sprintf("Workspace Name %q", id.WorkspaceName),
+ fmt.Sprintf("Resource Group %q", id.ResourceGroup),
+ }
+ segmentsStr := strings.Join(segments, " / ")
+ return fmt.Sprintf("%s: (%s)", "Inference Cluster", segmentsStr)
+}
+
+func (id InferenceClusterId) ID() string {
+ fmtString := "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.MachineLearningServices/workspaces/%s/computes/%s"
+ return fmt.Sprintf(fmtString, id.SubscriptionId, id.ResourceGroup, id.WorkspaceName, id.ComputeName)
+}
+
+// InferenceClusterID parses an InferenceCluster ID into an InferenceClusterId struct
+func InferenceClusterID(input string) (*InferenceClusterId, error) {
+ id, err := azure.ParseAzureResourceID(input)
+ if err != nil {
+ return nil, err
+ }
+
+ resourceId := InferenceClusterId{
+ SubscriptionId: id.SubscriptionID,
+ ResourceGroup: id.ResourceGroup,
+ }
+
+ if resourceId.SubscriptionId == "" {
+ return nil, fmt.Errorf("ID was missing the 'subscriptions' element")
+ }
+
+ if resourceId.ResourceGroup == "" {
+ return nil, fmt.Errorf("ID was missing the 'resourceGroups' element")
+ }
+
+ if resourceId.WorkspaceName, err = id.PopSegment("workspaces"); err != nil {
+ return nil, err
+ }
+ if resourceId.ComputeName, err = id.PopSegment("computes"); err != nil {
+ return nil, err
+ }
+
+ if err := id.ValidateNoEmptySegments(input); err != nil {
+ return nil, err
+ }
+
+ return &resourceId, nil
+}
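The parser above delegates segment handling to `azure.ParseAzureResourceID` and then rejects IDs with missing values via `PopSegment` and `ValidateNoEmptySegments`. A minimal, self-contained sketch of that segment-pairing idea (the `parseSegments` helper below is hypothetical, written only to show why trailing slashes and empty values fail validation — it is not the provider's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSegments splits an Azure resource ID into ordered key/value pairs,
// returning an error when any segment is empty - roughly the condition the
// generated parsers guard against with ValidateNoEmptySegments.
func parseSegments(id string) (map[string]string, error) {
	parts := strings.Split(strings.Trim(id, "/"), "/")
	if len(parts)%2 != 0 {
		return nil, fmt.Errorf("ID has an odd number of segments: %q", id)
	}
	out := make(map[string]string, len(parts)/2)
	for i := 0; i < len(parts); i += 2 {
		key, value := parts[i], parts[i+1]
		if key == "" || value == "" {
			return nil, fmt.Errorf("ID contained an empty segment: %q", id)
		}
		out[key] = value
	}
	return out, nil
}

func main() {
	id := "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/computes/cluster1"
	segments, err := parseSegments(id)
	if err != nil {
		panic(err)
	}
	fmt.Println(segments["workspaces"], segments["computes"])
}
```

An input such as `/subscriptions/` trims to a single dangling key with no value, which is why the generated test tables treat every truncated ID as an error case.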
diff --git a/azurerm/internal/services/machinelearning/parse/inference_cluster_test.go b/azurerm/internal/services/machinelearning/parse/inference_cluster_test.go
new file mode 100644
index 0000000000000..337f5645d5f81
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/parse/inference_cluster_test.go
@@ -0,0 +1,128 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/resourceid"
+)
+
+var _ resourceid.Formatter = InferenceClusterId{}
+
+func TestInferenceClusterIDFormatter(t *testing.T) {
+ actual := NewInferenceClusterID("00000000-0000-0000-0000-000000000000", "resGroup1", "workspace1", "cluster1").ID()
+ expected := "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/computes/cluster1"
+ if actual != expected {
+ t.Fatalf("Expected %q but got %q", expected, actual)
+ }
+}
+
+func TestInferenceClusterID(t *testing.T) {
+ testData := []struct {
+ Input string
+ Error bool
+ Expected *InferenceClusterId
+ }{
+
+ {
+ // empty
+ Input: "",
+ Error: true,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Error: true,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Error: true,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/",
+ Error: true,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/",
+ Error: true,
+ },
+
+ {
+ // missing WorkspaceName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/",
+ Error: true,
+ },
+
+ {
+ // missing value for WorkspaceName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/",
+ Error: true,
+ },
+
+ {
+ // missing ComputeName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/",
+ Error: true,
+ },
+
+ {
+ // missing value for ComputeName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/computes/",
+ Error: true,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/computes/cluster1",
+ Expected: &InferenceClusterId{
+ SubscriptionId: "00000000-0000-0000-0000-000000000000",
+ ResourceGroup: "resGroup1",
+ WorkspaceName: "workspace1",
+ ComputeName: "cluster1",
+ },
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.MACHINELEARNINGSERVICES/WORKSPACES/WORKSPACE1/COMPUTES/CLUSTER1",
+ Error: true,
+ },
+ }
+
+ for _, v := range testData {
+ t.Logf("[DEBUG] Testing %q", v.Input)
+
+ actual, err := InferenceClusterID(v.Input)
+ if err != nil {
+ if v.Error {
+ continue
+ }
+
+ t.Fatalf("Expected a value but got an error: %s", err)
+ }
+ if v.Error {
+ t.Fatal("Expected an error but didn't get one")
+ }
+
+ if actual.SubscriptionId != v.Expected.SubscriptionId {
+ t.Fatalf("Expected %q but got %q for SubscriptionId", v.Expected.SubscriptionId, actual.SubscriptionId)
+ }
+ if actual.ResourceGroup != v.Expected.ResourceGroup {
+ t.Fatalf("Expected %q but got %q for ResourceGroup", v.Expected.ResourceGroup, actual.ResourceGroup)
+ }
+ if actual.WorkspaceName != v.Expected.WorkspaceName {
+ t.Fatalf("Expected %q but got %q for WorkspaceName", v.Expected.WorkspaceName, actual.WorkspaceName)
+ }
+ if actual.ComputeName != v.Expected.ComputeName {
+ t.Fatalf("Expected %q but got %q for ComputeName", v.Expected.ComputeName, actual.ComputeName)
+ }
+ }
+}
diff --git a/azurerm/internal/services/machinelearning/parse/kubernetes_cluster.go b/azurerm/internal/services/machinelearning/parse/kubernetes_cluster.go
new file mode 100644
index 0000000000000..8f9d1f2b117b4
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/parse/kubernetes_cluster.go
@@ -0,0 +1,69 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+)
+
+type KubernetesClusterId struct {
+ SubscriptionId string
+ ResourceGroup string
+ ManagedClusterName string
+}
+
+func NewKubernetesClusterID(subscriptionId, resourceGroup, managedClusterName string) KubernetesClusterId {
+ return KubernetesClusterId{
+ SubscriptionId: subscriptionId,
+ ResourceGroup: resourceGroup,
+ ManagedClusterName: managedClusterName,
+ }
+}
+
+func (id KubernetesClusterId) String() string {
+ segments := []string{
+ fmt.Sprintf("Managed Cluster Name %q", id.ManagedClusterName),
+ fmt.Sprintf("Resource Group %q", id.ResourceGroup),
+ }
+ segmentsStr := strings.Join(segments, " / ")
+ return fmt.Sprintf("%s: (%s)", "Kubernetes Cluster", segmentsStr)
+}
+
+func (id KubernetesClusterId) ID() string {
+ fmtString := "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.ContainerService/managedClusters/%s"
+ return fmt.Sprintf(fmtString, id.SubscriptionId, id.ResourceGroup, id.ManagedClusterName)
+}
+
+// KubernetesClusterID parses a KubernetesCluster ID into a KubernetesClusterId struct
+func KubernetesClusterID(input string) (*KubernetesClusterId, error) {
+ id, err := azure.ParseAzureResourceID(input)
+ if err != nil {
+ return nil, err
+ }
+
+ resourceId := KubernetesClusterId{
+ SubscriptionId: id.SubscriptionID,
+ ResourceGroup: id.ResourceGroup,
+ }
+
+ if resourceId.SubscriptionId == "" {
+ return nil, fmt.Errorf("ID was missing the 'subscriptions' element")
+ }
+
+ if resourceId.ResourceGroup == "" {
+ return nil, fmt.Errorf("ID was missing the 'resourceGroups' element")
+ }
+
+ if resourceId.ManagedClusterName, err = id.PopSegment("managedClusters"); err != nil {
+ return nil, err
+ }
+
+ if err := id.ValidateNoEmptySegments(input); err != nil {
+ return nil, err
+ }
+
+ return &resourceId, nil
+}
diff --git a/azurerm/internal/services/machinelearning/parse/kubernetes_cluster_test.go b/azurerm/internal/services/machinelearning/parse/kubernetes_cluster_test.go
new file mode 100644
index 0000000000000..07f756b8bc662
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/parse/kubernetes_cluster_test.go
@@ -0,0 +1,112 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/resourceid"
+)
+
+var _ resourceid.Formatter = KubernetesClusterId{}
+
+func TestKubernetesClusterIDFormatter(t *testing.T) {
+ actual := NewKubernetesClusterID("00000000-0000-0000-0000-000000000000", "resGroup1", "cluster1").ID()
+ expected := "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/managedClusters/cluster1"
+ if actual != expected {
+ t.Fatalf("Expected %q but got %q", expected, actual)
+ }
+}
+
+func TestKubernetesClusterID(t *testing.T) {
+ testData := []struct {
+ Input string
+ Error bool
+ Expected *KubernetesClusterId
+ }{
+
+ {
+ // empty
+ Input: "",
+ Error: true,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Error: true,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Error: true,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/",
+ Error: true,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/",
+ Error: true,
+ },
+
+ {
+ // missing ManagedClusterName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/",
+ Error: true,
+ },
+
+ {
+ // missing value for ManagedClusterName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/managedClusters/",
+ Error: true,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/managedClusters/cluster1",
+ Expected: &KubernetesClusterId{
+ SubscriptionId: "00000000-0000-0000-0000-000000000000",
+ ResourceGroup: "resGroup1",
+ ManagedClusterName: "cluster1",
+ },
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.CONTAINERSERVICE/MANAGEDCLUSTERS/CLUSTER1",
+ Error: true,
+ },
+ }
+
+ for _, v := range testData {
+ t.Logf("[DEBUG] Testing %q", v.Input)
+
+ actual, err := KubernetesClusterID(v.Input)
+ if err != nil {
+ if v.Error {
+ continue
+ }
+
+ t.Fatalf("Expected a value but got an error: %s", err)
+ }
+ if v.Error {
+ t.Fatal("Expected an error but didn't get one")
+ }
+
+ if actual.SubscriptionId != v.Expected.SubscriptionId {
+ t.Fatalf("Expected %q but got %q for SubscriptionId", v.Expected.SubscriptionId, actual.SubscriptionId)
+ }
+ if actual.ResourceGroup != v.Expected.ResourceGroup {
+ t.Fatalf("Expected %q but got %q for ResourceGroup", v.Expected.ResourceGroup, actual.ResourceGroup)
+ }
+ if actual.ManagedClusterName != v.Expected.ManagedClusterName {
+ t.Fatalf("Expected %q but got %q for ManagedClusterName", v.Expected.ManagedClusterName, actual.ManagedClusterName)
+ }
+ }
+}
diff --git a/azurerm/internal/services/machinelearning/parse/workspace.go b/azurerm/internal/services/machinelearning/parse/workspace.go
index b5f72ac7a57f9..da44338629d3e 100644
--- a/azurerm/internal/services/machinelearning/parse/workspace.go
+++ b/azurerm/internal/services/machinelearning/parse/workspace.go
@@ -1,57 +1,69 @@
package parse
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
import (
+ "fmt"
+ "strings"
+
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
- accountParser "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/parse"
)
type WorkspaceId struct {
- Name string
- ResourceGroup string
+ SubscriptionId string
+ ResourceGroup string
+ Name string
}
-func WorkspaceID(input string) (*WorkspaceId, error) {
- id, err := azure.ParseAzureResourceID(input)
- if err != nil {
- return nil, err
- }
-
- workspace := WorkspaceId{
- ResourceGroup: id.ResourceGroup,
- }
-
- if workspace.Name, err = id.PopSegment("workspaces"); err != nil {
- return nil, err
+func NewWorkspaceID(subscriptionId, resourceGroup, name string) WorkspaceId {
+ return WorkspaceId{
+ SubscriptionId: subscriptionId,
+ ResourceGroup: resourceGroup,
+ Name: name,
}
+}
- if err := id.ValidateNoEmptySegments(input); err != nil {
- return nil, err
+func (id WorkspaceId) String() string {
+ segments := []string{
+ fmt.Sprintf("Name %q", id.Name),
+ fmt.Sprintf("Resource Group %q", id.ResourceGroup),
}
+ segmentsStr := strings.Join(segments, " / ")
+ return fmt.Sprintf("%s: (%s)", "Workspace", segmentsStr)
+}
- return &workspace, nil
+func (id WorkspaceId) ID() string {
+ fmtString := "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.MachineLearningServices/workspaces/%s"
+ return fmt.Sprintf(fmtString, id.SubscriptionId, id.ResourceGroup, id.Name)
}
-// TODO -- use parse function "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/parsers".ParseAccountID
-// when issue https://github.com/Azure/azure-rest-api-specs/issues/8323 is addressed
-func AccountIDCaseDiffSuppress(input string) (*accountParser.StorageAccountId, error) {
+// WorkspaceID parses a Workspace ID into a WorkspaceId struct
+func WorkspaceID(input string) (*WorkspaceId, error) {
id, err := azure.ParseAzureResourceID(input)
if err != nil {
return nil, err
}
- account := accountParser.StorageAccountId{
- ResourceGroup: id.ResourceGroup,
+ resourceId := WorkspaceId{
+ SubscriptionId: id.SubscriptionID,
+ ResourceGroup: id.ResourceGroup,
}
- if account.Name, err = id.PopSegment("storageAccounts"); err != nil {
- if account.Name, err = id.PopSegment("storageaccounts"); err != nil {
- return nil, err
- }
+ if resourceId.SubscriptionId == "" {
+ return nil, fmt.Errorf("ID was missing the 'subscriptions' element")
+ }
+
+ if resourceId.ResourceGroup == "" {
+ return nil, fmt.Errorf("ID was missing the 'resourceGroups' element")
+ }
+
+ if resourceId.Name, err = id.PopSegment("workspaces"); err != nil {
+ return nil, err
}
if err := id.ValidateNoEmptySegments(input); err != nil {
return nil, err
}
- return &account, nil
+ return &resourceId, nil
}
diff --git a/azurerm/internal/services/machinelearning/parse/workspace_test.go b/azurerm/internal/services/machinelearning/parse/workspace_test.go
index 48c6fbffafc08..00a6b100fd698 100644
--- a/azurerm/internal/services/machinelearning/parse/workspace_test.go
+++ b/azurerm/internal/services/machinelearning/parse/workspace_test.go
@@ -1,139 +1,112 @@
package parse
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
import (
"testing"
- "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/resourceid"
)
+var _ resourceid.Formatter = WorkspaceId{}
+
+func TestWorkspaceIDFormatter(t *testing.T) {
+ actual := NewWorkspaceID("00000000-0000-0000-0000-000000000000", "resGroup1", "workspace1").ID()
+ expected := "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1"
+ if actual != expected {
+ t.Fatalf("Expected %q but got %q", expected, actual)
+ }
+}
+
func TestWorkspaceID(t *testing.T) {
testData := []struct {
- Name string
- Input string
- Error bool
- Expect *WorkspaceId
+ Input string
+ Error bool
+ Expected *WorkspaceId
}{
+
{
- Name: "Empty",
+ // empty
Input: "",
Error: true,
},
+
{
- Name: "No Resource Groups Segment",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000",
- Error: true,
- },
- {
- Name: "No Resource Groups Value",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/",
- Error: true,
- },
- {
- Name: "Resource Group ID",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1",
+ // missing SubscriptionId
+ Input: "/",
Error: true,
},
+
{
- Name: "Missing Workspace Value",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/",
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
Error: true,
},
- {
- Name: "Machine Learning Workspace ID",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1",
- Error: false,
- Expect: &WorkspaceId{
- ResourceGroup: "resGroup1",
- Name: "workspace1",
- },
- },
- }
-
- for _, v := range testData {
- t.Logf("[DEBUG] Testing %q", v.Name)
-
- actual, err := WorkspaceID(v.Input)
- if err != nil {
- if v.Error {
- continue
- }
-
- t.Fatalf("Expected a value but got an error: %+v", err)
- }
-
- if actual.Name != v.Expect.Name {
- t.Fatalf("Expected %q but got %q for Name", v.Expect.Name, actual.Name)
- }
-
- if actual.ResourceGroup != v.Expect.ResourceGroup {
- t.Fatalf("Expected %q but got %q for Resource Group", v.Expect.ResourceGroup, actual.ResourceGroup)
- }
- }
-}
-func TestAccountIDCaseDiffSuppress(t *testing.T) {
- testData := []struct {
- Name string
- Input string
- Error bool
- Expect *parse.StorageAccountId
- }{
{
- Name: "Empty",
- Input: "",
+ // missing ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/",
Error: true,
},
+
{
- Name: "No Resource Group Segment",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000",
+ // missing value for ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/",
Error: true,
},
+
{
- Name: "No Resource Groups Value",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups",
+ // missing Name
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/",
Error: true,
},
+
{
- Name: "Resource Group ID",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1",
+ // missing value for Name
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/",
Error: true,
},
+
{
- Name: "Account ID with right casing",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/storageAccounts/account1",
- Expect: &parse.StorageAccountId{
- Name: "account1",
- ResourceGroup: "resGroup1",
+ // valid
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1",
+ Expected: &WorkspaceId{
+ SubscriptionId: "00000000-0000-0000-0000-000000000000",
+ ResourceGroup: "resGroup1",
+ Name: "workspace1",
},
},
+
{
- Name: "Wrong Casing",
- Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/resgroup1/storageaccounts/account1",
- Expect: &parse.StorageAccountId{
- Name: "account1",
- ResourceGroup: "resgroup1",
- },
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.MACHINELEARNINGSERVICES/WORKSPACES/WORKSPACE1",
+ Error: true,
},
}
for _, v := range testData {
- t.Logf("[DEBUG] Testing %q", v.Name)
+ t.Logf("[DEBUG] Testing %q", v.Input)
- actual, err := AccountIDCaseDiffSuppress(v.Input)
+ actual, err := WorkspaceID(v.Input)
if err != nil {
if v.Error {
continue
}
- t.Fatalf("Expected a value but got an error: %+v", err)
+ t.Fatalf("Expected a value but got an error: %s", err)
}
-
- if actual.Name != v.Expect.Name {
- t.Fatalf("Expected %q but got %q for Name", v.Expect.Name, actual.Name)
+ if v.Error {
+ t.Fatal("Expected an error but didn't get one")
}
- if actual.ResourceGroup != v.Expect.ResourceGroup {
- t.Fatalf("Expected %q but got %q for Resource Group", v.Expect.ResourceGroup, actual.ResourceGroup)
+ if actual.SubscriptionId != v.Expected.SubscriptionId {
+ t.Fatalf("Expected %q but got %q for SubscriptionId", v.Expected.SubscriptionId, actual.SubscriptionId)
+ }
+ if actual.ResourceGroup != v.Expected.ResourceGroup {
+ t.Fatalf("Expected %q but got %q for ResourceGroup", v.Expected.ResourceGroup, actual.ResourceGroup)
+ }
+ if actual.Name != v.Expected.Name {
+ t.Fatalf("Expected %q but got %q for Name", v.Expected.Name, actual.Name)
}
}
}
diff --git a/azurerm/internal/services/machinelearning/registration.go b/azurerm/internal/services/machinelearning/registration.go
index 7f91e9445cb19..f5b4cd9f8a183 100644
--- a/azurerm/internal/services/machinelearning/registration.go
+++ b/azurerm/internal/services/machinelearning/registration.go
@@ -27,6 +27,7 @@ func (r Registration) SupportedDataSources() map[string]*schema.Resource {
// SupportedResources returns the supported Resources supported by this Service
func (r Registration) SupportedResources() map[string]*schema.Resource {
return map[string]*schema.Resource{
- "azurerm_machine_learning_workspace": resourceMachineLearningWorkspace(),
+ "azurerm_machine_learning_workspace": resourceMachineLearningWorkspace(),
+ "azurerm_machine_learning_inference_cluster": resourceAksInferenceCluster(),
}
}
diff --git a/azurerm/internal/services/machinelearning/resourceids.go b/azurerm/internal/services/machinelearning/resourceids.go
new file mode 100644
index 0000000000000..18ebbab752ba2
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/resourceids.go
@@ -0,0 +1,5 @@
+package machinelearning
+
+//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=InferenceCluster -id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/computes/cluster1
+//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=KubernetesCluster -id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/managedClusters/cluster1
+//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=Workspace -id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1
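Each `go:generate` directive above hands the generator a resource name and one canonical example ID; from that single ID string the generator can derive the struct fields, the `ID()` format string, and the `PopSegment` calls for that resource. A hedged, self-contained sketch of that derivation (the `fieldsFromID` helper is illustrative only, not the real `generator-resource-id` logic):

```go
package main

import (
	"fmt"
	"strings"
)

// fieldsFromID extracts the ordered segment keys from a canonical resource ID.
// Conceptually, each key (subscriptions, resourceGroups, managedClusters, ...)
// maps to one field on the generated Id struct and one PopSegment call.
func fieldsFromID(id string) []string {
	parts := strings.Split(strings.Trim(id, "/"), "/")
	keys := make([]string, 0, len(parts)/2)
	for i := 0; i+1 < len(parts); i += 2 {
		keys = append(keys, parts[i])
	}
	return keys
}

func main() {
	id := "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/managedClusters/cluster1"
	fmt.Println(fieldsFromID(id))
}
```

This is why `KubernetesClusterId` ends up with exactly `SubscriptionId`, `ResourceGroup`, and `ManagedClusterName`: those are the non-provider segment keys present in the example ID.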
diff --git a/azurerm/internal/services/machinelearning/testdata/HOWTO.md b/azurerm/internal/services/machinelearning/testdata/HOWTO.md
new file mode 100644
index 0000000000000..5f2a6218a43d0
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/testdata/HOWTO.md
@@ -0,0 +1,4 @@
+# How the Key and Certificate were generated
+```bash
+openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
+```
\ No newline at end of file
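For anyone regenerating the test data, the resulting certificate can be sanity-checked with standard `openssl x509` flags before committing it (a suggested check, not part of the original HOWTO):

```shell
# Print the subject and validity window of the generated test certificate
openssl x509 -in cert.pem -noout -subject -dates
```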
diff --git a/azurerm/internal/services/machinelearning/testdata/cert.pem b/azurerm/internal/services/machinelearning/testdata/cert.pem
new file mode 100644
index 0000000000000..a0334a6afa4dd
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/testdata/cert.pem
@@ -0,0 +1,22 @@
+-----BEGIN CERTIFICATE-----
+MIIDkjCCAnoCCQDY1A4aUvTZ0TANBgkqhkiG9w0BAQsFADCBijELMAkGA1UEBhMC
+Q0gxCzAJBgNVBAgMAlpIMQswCQYDVQQHDAJaSDESMBAGA1UECgwJVGVycmFmb3Jt
+MQ4wDAYDVQQLDAVBenVyZTEYMBYGA1UEAwwPd3d3LmNvbnRvc28uY29tMSMwIQYJ
+KoZIhvcNAQkBFhR3aGF0ZXZlckBjb250b3NvLmNvbTAeFw0yMTA0MjIxOTU4MTBa
+Fw0zMTA0MjAxOTU4MTBaMIGKMQswCQYDVQQGEwJDSDELMAkGA1UECAwCWkgxCzAJ
+BgNVBAcMAlpIMRIwEAYDVQQKDAlUZXJyYWZvcm0xDjAMBgNVBAsMBUF6dXJlMRgw
+FgYDVQQDDA93d3cuY29udG9zby5jb20xIzAhBgkqhkiG9w0BCQEWFHdoYXRldmVy
+QGNvbnRvc28uY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5rx3
+fTN0UUV1ktetzM2AEIJ4ZKQlibrLtVORPX2LQp2Vl/n74DPD2Re/ZgO2NtjhjItY
+O65ZSqOgGz3R8ED4r12AokLCFmqhBnnr4IybeaQos7prjLKwSIyj5NbVMGuzNO6P
+55W1zTMfV+CstbCtXtRPa7zizXjYbT3dfpw8FgJLh9sVWaiCO34Nu9PWF9NRIlzI
+e/Ek3ss/JnNqskH+xnxgxq68slaZa4qojBjiLl/IdIs4A9DtyJnFd99xuh8nShMg
+4ykccPr9/+YBaz8/Ef7/zmXj3g9DLTrIa7JV6s80V5oVINaF7KXu9jmjD+a03SsR
+/8eKX6K+xDBtqxpz8wIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQAOGe2knCVxje06
+ihfhzprg7lTM7GCgiXqa4fdCVwq0hJAYpMg29F7Df3OE/zVD/mzdRWZe2yVTY47f
+YFEfDKMmkGepgqICs0wTfhBSham8vkk2yDcoT01Lar+Im3GToP3JSM5YFbqxam0R
+/AVskE5aHQ+tIGUwcuwWhjjKQuWua59tI0USjgGaK3cZ5tyFOQPcE3ZFzndWM3Rz
+ojNHH5UJOT7zt4RebBzGRpcNdrbkOtVkRVZIwH0wJfm44zR+L36UhpXUd8XGKvua
+KFlqJhw/8UtYzXXX5bwHb/JTkOLUbs8gobG23lFhxXG5QhqtwqYnHXRw9Jhclv8p
+weEgmhnj
+-----END CERTIFICATE-----
diff --git a/azurerm/internal/services/machinelearning/testdata/key.pem b/azurerm/internal/services/machinelearning/testdata/key.pem
new file mode 100644
index 0000000000000..87761f7116407
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/testdata/key.pem
@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDmvHd9M3RRRXWS
+163MzYAQgnhkpCWJusu1U5E9fYtCnZWX+fvgM8PZF79mA7Y22OGMi1g7rllKo6Ab
+PdHwQPivXYCiQsIWaqEGeevgjJt5pCizumuMsrBIjKPk1tUwa7M07o/nlbXNMx9X
+4Ky1sK1e1E9rvOLNeNhtPd1+nDwWAkuH2xVZqII7fg2709YX01EiXMh78STeyz8m
+c2qyQf7GfGDGrryyVplriqiMGOIuX8h0izgD0O3ImcV333G6HydKEyDjKRxw+v3/
+5gFrPz8R/v/OZePeD0MtOshrslXqzzRXmhUg1oXspe72OaMP5rTdKxH/x4pfor7E
+MG2rGnPzAgMBAAECggEAAXJvIWbgNN5FpX0axu0G/5OB48evwJReUK3MfGE8LVfF
+p2VW8goBEWx3s9EUJHXpvDLng8BNKQ2rpGAX3/TYWmkwtFPM2c0jY2ICW68mDnY8
+Fxx1LjW0q0/Oe1HpllsmjY9tcZtbv4SxjqCHFMCd5blZIijWF0nJua2opPGf4tdv
+yvN/D9HYPdRlynj6SjUij0rR2PFN134LLaKhRrAsaeKHqSk+Pngxt6HStRLPSnN+
+dqk+6rA0fJ97YXeiNRjYfRbJEMOedJFW5wavddIXkNzI/3iwrDc5P0A+9X+SbIGm
+6BlKNy6EbFtEsbsbdcefZaVuaRXEVNsiRXBoUHjkYQKBgQD7yZWZDPr0JWtL+Mav
+dewbQUd8xVMzAc9r2mjRCNJi5qanQRDNIDhkYkNVrpBnsKpCe8cDRb3SaawTqYEv
+ASCPGOU92ooQ1AMKMwETaCsrSDAEwpS6NdCSuYZbu6yGqWc8/v4Oi8ZAv10I1TJP
+2WaG2PkvfOpsvXs8ixQWZ4f5QwKBgQDqmLgV86pnDFUEfW8W8f0n6PG0gPSofcF7
+DKDEcRj3ZmkBWDUKiICBInrPgQaxw5rLA4lL1GwRgxMQg51fit52mQcsMK56/aQx
+3BmSIoA3Uf+mzHp+bSL+o1vYmoOtklUF09DGIf+y4XyQy9GjojSzxCVkXqBwFldj
+9+jL0NXXkQKBgECyo8YYF8P0eYWj/ynG20yFkaD181L//BRyosxTv/u52MjRZ0fO
+J69jsHmryV9bfeRnedPVb9lJXfYPcCpr17ntY7ppFWENmVpdkMEz2yPcALq4ZQ8U
+FOwez+9yYfqYPPbnbtC+CctJYNaMMcliy32K8zzIlFQsvCXqdtbq832RAoGAAtPw
+dCNJzJAzfihc7HPiT1bZgwmC6X0Klgci8PtEB8duQJvll8jpc6UMwe+WOxJWjVfv
+kcBvxQ5Fbo+HmB0+bUOO+JNlpwnjrs4uaLqNvRz57fLNDzUVlOg3NTc3myIGcFmL
+TLggMvHQ5JXwYv6TkA8vPDR/zpoWV5gncD2GNmECgYEA8e9f30xeVtce5eUebRkB
+bNCxi1sApTIPq8CXRN5JzX5plFj7K1HUlgqQsIxpdWJhi8G7DMj8C4/K7V+PjCTo
+dU1ulbuFWwrIuSS3W6S1gh+eBhODfU80iO6SvSbGLiq11iRrQL/xMsCLgOExZE5d
+BXgz1uzIrvJt5jmZh6bPSYc=
+-----END PRIVATE KEY-----
diff --git a/azurerm/internal/services/machinelearning/validate/inference_cluster_id.go b/azurerm/internal/services/machinelearning/validate/inference_cluster_id.go
new file mode 100644
index 0000000000000..6da0e63bd6ac5
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/validate/inference_cluster_id.go
@@ -0,0 +1,23 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/machinelearning/parse"
+)
+
+func InferenceClusterID(input interface{}, key string) (warnings []string, errors []error) {
+ v, ok := input.(string)
+ if !ok {
+ errors = append(errors, fmt.Errorf("expected %q to be a string", key))
+ return
+ }
+
+ if _, err := parse.InferenceClusterID(v); err != nil {
+ errors = append(errors, err)
+ }
+
+ return
+}
diff --git a/azurerm/internal/services/machinelearning/validate/inference_cluster_id_test.go b/azurerm/internal/services/machinelearning/validate/inference_cluster_id_test.go
new file mode 100644
index 0000000000000..48e24c4d7ca2e
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/validate/inference_cluster_id_test.go
@@ -0,0 +1,88 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import "testing"
+
+func TestInferenceClusterID(t *testing.T) {
+ cases := []struct {
+ Input string
+ Valid bool
+ }{
+
+ {
+ // empty
+ Input: "",
+ Valid: false,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Valid: false,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Valid: false,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/",
+ Valid: false,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/",
+ Valid: false,
+ },
+
+ {
+ // missing WorkspaceName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/",
+ Valid: false,
+ },
+
+ {
+ // missing value for WorkspaceName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/",
+ Valid: false,
+ },
+
+ {
+ // missing ComputeName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/",
+ Valid: false,
+ },
+
+ {
+ // missing value for ComputeName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/computes/",
+ Valid: false,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/computes/cluster1",
+ Valid: true,
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.MACHINELEARNINGSERVICES/WORKSPACES/WORKSPACE1/COMPUTES/CLUSTER1",
+ Valid: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Logf("[DEBUG] Testing Value %s", tc.Input)
+ _, errors := InferenceClusterID(tc.Input, "test")
+ valid := len(errors) == 0
+
+ if tc.Valid != valid {
+ t.Fatalf("Expected %t but got %t", tc.Valid, valid)
+ }
+ }
+}
diff --git a/azurerm/internal/services/machinelearning/validate/kubernetes_cluster_id.go b/azurerm/internal/services/machinelearning/validate/kubernetes_cluster_id.go
new file mode 100644
index 0000000000000..6eab63174bd35
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/validate/kubernetes_cluster_id.go
@@ -0,0 +1,23 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/machinelearning/parse"
+)
+
+func KubernetesClusterID(input interface{}, key string) (warnings []string, errors []error) {
+ v, ok := input.(string)
+ if !ok {
+ errors = append(errors, fmt.Errorf("expected %q to be a string", key))
+ return
+ }
+
+ if _, err := parse.KubernetesClusterID(v); err != nil {
+ errors = append(errors, err)
+ }
+
+ return
+}
diff --git a/azurerm/internal/services/machinelearning/validate/kubernetes_cluster_id_test.go b/azurerm/internal/services/machinelearning/validate/kubernetes_cluster_id_test.go
new file mode 100644
index 0000000000000..29c4c30f29abc
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/validate/kubernetes_cluster_id_test.go
@@ -0,0 +1,76 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import "testing"
+
+func TestKubernetesClusterID(t *testing.T) {
+ cases := []struct {
+ Input string
+ Valid bool
+ }{
+
+ {
+ // empty
+ Input: "",
+ Valid: false,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Valid: false,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Valid: false,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/",
+ Valid: false,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/",
+ Valid: false,
+ },
+
+ {
+ // missing ManagedClusterName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/",
+ Valid: false,
+ },
+
+ {
+ // missing value for ManagedClusterName
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/managedClusters/",
+ Valid: false,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/managedClusters/cluster1",
+ Valid: true,
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.CONTAINERSERVICE/MANAGEDCLUSTERS/CLUSTER1",
+ Valid: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Logf("[DEBUG] Testing Value %s", tc.Input)
+ _, errors := KubernetesClusterID(tc.Input, "test")
+ valid := len(errors) == 0
+
+ if tc.Valid != valid {
+ t.Fatalf("Expected %t but got %t", tc.Valid, valid)
+ }
+ }
+}
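The upper-cased test case above fails because the generated parsers match resource ID segment keys case-sensitively. As a rough illustration (not the provider's actual generated code — `parseManagedClusterID` and its exact error messages are hypothetical), a case-sensitive parser for this ID shape might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// parseManagedClusterID is a hypothetical, simplified sketch of the kind of
// case-sensitive parser the generated KubernetesClusterID validator wraps.
// Segment keys ("subscriptions", "resourceGroups", ...) must match exactly,
// which is why the upper-cased input in the test above is invalid.
func parseManagedClusterID(input string) (subscription, resourceGroup, name string, err error) {
	segments := strings.Split(strings.Trim(input, "/"), "/")
	// expected shape: subscriptions/{sub}/resourceGroups/{rg}/providers/
	//                 Microsoft.ContainerService/managedClusters/{name}
	if len(segments) != 8 {
		return "", "", "", fmt.Errorf("ID %q has %d segments, expected 8", input, len(segments))
	}
	// segment keys are compared case-sensitively
	if segments[0] != "subscriptions" || segments[2] != "resourceGroups" ||
		segments[4] != "providers" || segments[5] != "Microsoft.ContainerService" ||
		segments[6] != "managedClusters" {
		return "", "", "", fmt.Errorf("ID %q has mis-cased or unexpected segment keys", input)
	}
	// every segment value must be non-empty
	for _, v := range []string{segments[1], segments[3], segments[7]} {
		if v == "" {
			return "", "", "", fmt.Errorf("ID %q has an empty segment value", input)
		}
	}
	return segments[1], segments[3], segments[7], nil
}

func main() {
	_, _, name, err := parseManagedClusterID("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.ContainerService/managedClusters/cluster1")
	fmt.Println(name, err == nil)
	_, _, _, err = parseManagedClusterID("/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.CONTAINERSERVICE/MANAGEDCLUSTERS/CLUSTER1")
	fmt.Println(err != nil)
}
```

Each invalid case in the generated test table corresponds to one of the failure branches above: wrong segment count, mis-cased keys, or an empty segment value.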
diff --git a/azurerm/internal/services/machinelearning/validate/workspace_id.go b/azurerm/internal/services/machinelearning/validate/workspace_id.go
new file mode 100644
index 0000000000000..ffe3eacffd645
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/validate/workspace_id.go
@@ -0,0 +1,23 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/machinelearning/parse"
+)
+
+func WorkspaceID(input interface{}, key string) (warnings []string, errors []error) {
+ v, ok := input.(string)
+ if !ok {
+ errors = append(errors, fmt.Errorf("expected %q to be a string", key))
+ return
+ }
+
+ if _, err := parse.WorkspaceID(v); err != nil {
+ errors = append(errors, err)
+ }
+
+ return
+}
diff --git a/azurerm/internal/services/machinelearning/validate/workspace_id_test.go b/azurerm/internal/services/machinelearning/validate/workspace_id_test.go
new file mode 100644
index 0000000000000..2afab4d724518
--- /dev/null
+++ b/azurerm/internal/services/machinelearning/validate/workspace_id_test.go
@@ -0,0 +1,76 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import "testing"
+
+func TestWorkspaceID(t *testing.T) {
+ cases := []struct {
+ Input string
+ Valid bool
+ }{
+
+ {
+ // empty
+ Input: "",
+ Valid: false,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Valid: false,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Valid: false,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/",
+ Valid: false,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/",
+ Valid: false,
+ },
+
+ {
+ // missing Name
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/",
+ Valid: false,
+ },
+
+ {
+ // missing value for Name
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/",
+ Valid: false,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1",
+ Valid: true,
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.MACHINELEARNINGSERVICES/WORKSPACES/WORKSPACE1",
+ Valid: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Logf("[DEBUG] Testing Value %s", tc.Input)
+ _, errors := WorkspaceID(tc.Input, "test")
+ valid := len(errors) == 0
+
+ if tc.Valid != valid {
+ t.Fatalf("Expected %t but got %t", tc.Valid, valid)
+ }
+ }
+}
diff --git a/azurerm/internal/services/maintenance/maintenance_configuration_resource.go b/azurerm/internal/services/maintenance/maintenance_configuration_resource.go
index ad40183b7582f..8caffdc1e2771 100644
--- a/azurerm/internal/services/maintenance/maintenance_configuration_resource.go
+++ b/azurerm/internal/services/maintenance/maintenance_configuration_resource.go
@@ -12,6 +12,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/location"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/maintenance/migration"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/maintenance/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/maintenance/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tags"
@@ -27,6 +28,11 @@ func resourceArmMaintenanceConfiguration() *schema.Resource {
Update: resourceArmMaintenanceConfigurationCreateUpdate,
Delete: resourceArmMaintenanceConfigurationDelete,
+ SchemaVersion: 1,
+ StateUpgraders: pluginsdk.StateUpgrades(map[int]pluginsdk.StateUpgrade{
+ 0: migration.ConfigurationV0ToV1{},
+ }),
+
Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(30 * time.Minute),
Read: schema.DefaultTimeout(5 * time.Minute),
@@ -49,6 +55,7 @@ func resourceArmMaintenanceConfiguration() *schema.Resource {
"location": azure.SchemaLocation(),
+ // TODO use `azure.SchemaResourceGroupName()` in version 3.0
// There's a bug in the Azure API where this is returned in lower-case
// BUG: https://github.com/Azure/azure-rest-api-specs/issues/8653
"resource_group_name": azure.SchemaResourceGroupNameDiffSuppress(),
@@ -82,51 +89,38 @@ func resourceArmMaintenanceConfiguration() *schema.Resource {
func resourceArmMaintenanceConfigurationCreateUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*clients.Client).Maintenance.ConfigurationsClient
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
ctx, cancel := timeouts.ForCreateUpdate(meta.(*clients.Client).StopContext, d)
defer cancel()
- name := d.Get("name").(string)
- resGroup := d.Get("resource_group_name").(string)
-
+ id := parse.NewMaintenanceConfigurationID(subscriptionId, d.Get("resource_group_name").(string), d.Get("name").(string))
if d.IsNewResource() {
- existing, err := client.Get(ctx, resGroup, name)
+ existing, err := client.Get(ctx, id.ResourceGroup, id.Name)
if err != nil {
if !utils.ResponseWasNotFound(existing.Response) {
- return fmt.Errorf("failure checking for present of existing MaintenanceConfiguration %q (Resource Group %q): %+v", name, resGroup, err)
+ return fmt.Errorf("checking for presence of existing %s: %+v", id, err)
}
}
- if existing.ID != nil && *existing.ID != "" {
- return tf.ImportAsExistsError("azurerm_maintenance_configuration", *existing.ID)
+ if !utils.ResponseWasNotFound(existing.Response) {
+ return tf.ImportAsExistsError("azurerm_maintenance_configuration", id.ID())
}
}
- location := azure.NormalizeLocation(d.Get("location").(string))
- scope := d.Get("scope").(string)
-
configuration := maintenance.Configuration{
- Name: utils.String(name),
- Location: utils.String(location),
+ Name: utils.String(id.Name),
+ Location: utils.String(location.Normalize(d.Get("location").(string))),
ConfigurationProperties: &maintenance.ConfigurationProperties{
- MaintenanceScope: maintenance.Scope(scope),
+ MaintenanceScope: maintenance.Scope(d.Get("scope").(string)),
Namespace: utils.String("Microsoft.Maintenance"),
},
Tags: tags.Expand(d.Get("tags").(map[string]interface{})),
}
- if _, err := client.CreateOrUpdate(ctx, resGroup, name, configuration); err != nil {
- return fmt.Errorf("failure creating/updating MaintenanceConfiguration %q (Resource Group %q): %+v", name, resGroup, err)
- }
-
- resp, err := client.Get(ctx, resGroup, name)
- if err != nil {
- return fmt.Errorf("failure retrieving MaintenanceConfiguration %q (Resource Group %q): %+v", name, resGroup, err)
- }
-
- if resp.ID == nil || *resp.ID == "" {
- return fmt.Errorf("cannot read MaintenanceConfiguration %q (Resource Group %q) ID", name, resGroup)
+ if _, err := client.CreateOrUpdate(ctx, id.ResourceGroup, id.Name, configuration); err != nil {
+ return fmt.Errorf("creating/updating %s: %+v", id, err)
}
- d.SetId(*resp.ID)
+ d.SetId(id.ID())
return resourceArmMaintenanceConfigurationRead(d, meta)
}
@@ -147,10 +141,10 @@ func resourceArmMaintenanceConfigurationRead(d *schema.ResourceData, meta interf
d.SetId("")
return nil
}
- return fmt.Errorf("failure retrieving MaintenanceConfiguration %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
+ return fmt.Errorf("retrieving %s: %+v", id, err)
}
- d.Set("name", resp.Name)
+ d.Set("name", id.Name)
d.Set("resource_group_name", id.ResourceGroup)
d.Set("location", location.NormalizeNilable(resp.Location))
if props := resp.ConfigurationProperties; props != nil {
@@ -170,7 +164,7 @@ func resourceArmMaintenanceConfigurationDelete(d *schema.ResourceData, meta inte
}
if _, err := client.Delete(ctx, id.ResourceGroup, id.Name); err != nil {
- return fmt.Errorf("failure deleting MaintenanceConfiguration %q (Resource Group %q): %+v", id.Name, id.ResourceGroup, err)
+ return fmt.Errorf("deleting %s: %+v", id, err)
}
return nil
}
diff --git a/azurerm/internal/services/maintenance/migration/configuration_v0_to_v1.go b/azurerm/internal/services/maintenance/migration/configuration_v0_to_v1.go
new file mode 100644
index 0000000000000..57f3b6ceab587
--- /dev/null
+++ b/azurerm/internal/services/maintenance/migration/configuration_v0_to_v1.go
@@ -0,0 +1,68 @@
+package migration
+
+import (
+ "context"
+ "log"
+
+ "github.com/Azure/azure-sdk-for-go/services/preview/maintenance/mgmt/2018-06-01-preview/maintenance"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/maintenance/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
+)
+
+var _ pluginsdk.StateUpgrade = ConfigurationV0ToV1{}
+
+type ConfigurationV0ToV1 struct{}
+
+func (ConfigurationV0ToV1) Schema() map[string]*pluginsdk.Schema {
+ return map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "location": {
+ Type: pluginsdk.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "resource_group_name": {
+ Type: pluginsdk.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "scope": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: string(maintenance.ScopeAll),
+ },
+
+ "tags": {
+ Type: schema.TypeMap,
+ Optional: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ },
+ }
+}
+
+func (ConfigurationV0ToV1) UpgradeFunc() pluginsdk.StateUpgraderFunc {
+ return func(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
+
+ log.Printf("[DEBUG] Migrating IDs to correct casing for Maintenance Configuration")
+
+ name := rawState["name"].(string)
+ resourceGroup := rawState["resource_group_name"].(string)
+ id := parse.NewMaintenanceConfigurationID(subscriptionId, resourceGroup, name)
+
+ rawState["id"] = id.ID()
+
+ return rawState, nil
+ }
+}
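The upgrade func above discards the stored (possibly mis-cased) ID and rebuilds it from components already in state, plus the subscription ID from provider config. A minimal sketch of that pattern, outside the plugin SDK (the exact segment casing `Microsoft.Maintenance/maintenanceConfigurations` is an assumption here, and `upgradeMaintenanceConfigurationID` is a hypothetical name, not the provider's API):

```go
package main

import "fmt"

// upgradeMaintenanceConfigurationID sketches what ConfigurationV0ToV1 does:
// ignore the old "id" value entirely and recompose it with canonical casing
// from fields that are already present in the raw state.
func upgradeMaintenanceConfigurationID(rawState map[string]interface{}, subscriptionID string) map[string]interface{} {
	name := rawState["name"].(string)
	resourceGroup := rawState["resource_group_name"].(string)
	// assumed canonical segment casing for illustration
	rawState["id"] = fmt.Sprintf(
		"/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Maintenance/maintenanceConfigurations/%s",
		subscriptionID, resourceGroup, name)
	return rawState
}

func main() {
	state := map[string]interface{}{
		// old ID with incorrect casing, as might exist in v0 state
		"id":                  "/subscriptions/sub1/resourcegroups/rg1/providers/microsoft.maintenance/maintenanceconfigurations/cfg1",
		"name":                "cfg1",
		"resource_group_name": "rg1",
	}
	fmt.Println(upgradeMaintenanceConfigurationID(state, "sub1")["id"])
}
```

Because the new ID is derived rather than string-transformed, the migration is safe to run even on state whose ID was already correctly cased.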
diff --git a/azurerm/internal/services/maps/client/client.go b/azurerm/internal/services/maps/client/client.go
index a507d902de46d..62fbf40cf5273 100644
--- a/azurerm/internal/services/maps/client/client.go
+++ b/azurerm/internal/services/maps/client/client.go
@@ -1,7 +1,7 @@
package client
import (
- "github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps"
+ "github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/common"
)
diff --git a/azurerm/internal/services/maps/maps_account_data_source.go b/azurerm/internal/services/maps/maps_account_data_source.go
index 4693bf11211b4..fbc2a0db1f181 100644
--- a/azurerm/internal/services/maps/maps_account_data_source.go
+++ b/azurerm/internal/services/maps/maps_account_data_source.go
@@ -82,7 +82,7 @@ func dataSourceMapsAccountRead(d *schema.ResourceData, meta interface{}) error {
d.Set("sku_name", sku.Name)
}
if props := resp.Properties; props != nil {
- d.Set("x_ms_client_id", props.XMsClientID)
+ d.Set("x_ms_client_id", props.UniqueID)
}
keysResp, err := client.ListKeys(ctx, resourceGroup, name)
diff --git a/azurerm/internal/services/maps/maps_account_resource.go b/azurerm/internal/services/maps/maps_account_resource.go
index bc8161ff9624b..774aa9657fe88 100644
--- a/azurerm/internal/services/maps/maps_account_resource.go
+++ b/azurerm/internal/services/maps/maps_account_resource.go
@@ -5,7 +5,7 @@ import (
"log"
"time"
- "github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps"
+ "github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
@@ -53,8 +53,9 @@ func resourceMapsAccount() *schema.Resource {
Required: true,
ForceNew: true,
ValidateFunc: validation.StringInSlice([]string{
- "S0",
- "S1",
+ string(maps.NameS0),
+ string(maps.NameS1),
+ string(maps.NameG2),
}, false),
},
@@ -105,10 +106,10 @@ func resourceMapsAccountCreateUpdate(d *schema.ResourceData, meta interface{}) e
}
}
- parameters := maps.AccountCreateParameters{
+ parameters := maps.Account{
Location: utils.String("global"),
Sku: &maps.Sku{
- Name: &sku,
+ Name: maps.Name(sku),
},
Tags: tags.Expand(t),
}
@@ -157,7 +158,7 @@ func resourceMapsAccountRead(d *schema.ResourceData, meta interface{}) error {
d.Set("sku_name", sku.Name)
}
if props := resp.Properties; props != nil {
- d.Set("x_ms_client_id", props.XMsClientID)
+ d.Set("x_ms_client_id", props.UniqueID)
}
keysResp, err := client.ListKeys(ctx, id.ResourceGroup, id.Name)
diff --git a/azurerm/internal/services/maps/maps_account_resource_test.go b/azurerm/internal/services/maps/maps_account_resource_test.go
index 2d959faa437cc..3bbab138a3b4a 100644
--- a/azurerm/internal/services/maps/maps_account_resource_test.go
+++ b/azurerm/internal/services/maps/maps_account_resource_test.go
@@ -42,7 +42,7 @@ func TestAccMapsAccount_sku(t *testing.T) {
data.ResourceTest(t, r, []resource.TestStep{
{
- Config: r.sku(data),
+ Config: r.sku(data, "S1"),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).Key("name").Exists(),
check.That(data.ResourceName).Key("x_ms_client_id").Exists(),
@@ -55,6 +55,25 @@ func TestAccMapsAccount_sku(t *testing.T) {
})
}
+func TestAccMapsAccount_skuG2(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_maps_account", "test")
+ r := MapsAccountResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.sku(data, "G2"),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).Key("name").Exists(),
+ check.That(data.ResourceName).Key("x_ms_client_id").Exists(),
+ check.That(data.ResourceName).Key("primary_access_key").Exists(),
+ check.That(data.ResourceName).Key("secondary_access_key").Exists(),
+ check.That(data.ResourceName).Key("sku_name").HasValue("G2"),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func TestAccMapsAccount_tags(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_maps_account", "test")
r := MapsAccountResource{}
@@ -113,7 +132,7 @@ resource "azurerm_maps_account" "test" {
`, data.RandomInteger, data.Locations.Primary, data.RandomInteger)
}
-func (MapsAccountResource) sku(data acceptance.TestData) string {
+func (MapsAccountResource) sku(data acceptance.TestData, sku string) string {
return fmt.Sprintf(`
provider "azurerm" {
features {}
@@ -127,9 +146,9 @@ resource "azurerm_resource_group" "test" {
resource "azurerm_maps_account" "test" {
name = "accMapsAccount-%d"
resource_group_name = azurerm_resource_group.test.name
- sku_name = "S1"
+ sku_name = "%s"
}
-`, data.RandomInteger, data.Locations.Primary, data.RandomInteger)
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, sku)
}
func (MapsAccountResource) tags(data acceptance.TestData) string {
diff --git a/azurerm/internal/services/media/client/client.go b/azurerm/internal/services/media/client/client.go
index afff6a9323370..ae9445e565378 100644
--- a/azurerm/internal/services/media/client/client.go
+++ b/azurerm/internal/services/media/client/client.go
@@ -1,7 +1,7 @@
package client
import (
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/common"
)
diff --git a/azurerm/internal/services/media/media_asset_filter_resource.go b/azurerm/internal/services/media/media_asset_filter_resource.go
index 82a3c1d05b5f1..941a90d57c28b 100644
--- a/azurerm/internal/services/media/media_asset_filter_resource.go
+++ b/azurerm/internal/services/media/media_asset_filter_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
@@ -76,35 +76,47 @@ func resourceMediaAssetFilter() *schema.Resource {
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validation.IntAtLeast(0),
+ AtLeastOneOf: []string{"presentation_time_range.0.end_in_units", "presentation_time_range.0.force_end", "presentation_time_range.0.live_backoff_in_units",
+ "presentation_time_range.0.presentation_window_in_units", "presentation_time_range.0.start_in_units", "presentation_time_range.0.unit_timescale_in_miliseconds"},
},
"force_end": {
Type: schema.TypeBool,
Optional: true,
+ AtLeastOneOf: []string{"presentation_time_range.0.end_in_units", "presentation_time_range.0.force_end", "presentation_time_range.0.live_backoff_in_units",
+ "presentation_time_range.0.presentation_window_in_units", "presentation_time_range.0.start_in_units", "presentation_time_range.0.unit_timescale_in_miliseconds"},
},
"live_backoff_in_units": {
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validation.IntAtLeast(0),
+ AtLeastOneOf: []string{"presentation_time_range.0.end_in_units", "presentation_time_range.0.force_end", "presentation_time_range.0.live_backoff_in_units",
+ "presentation_time_range.0.presentation_window_in_units", "presentation_time_range.0.start_in_units", "presentation_time_range.0.unit_timescale_in_miliseconds"},
},
"presentation_window_in_units": {
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validation.IntAtLeast(0),
+ AtLeastOneOf: []string{"presentation_time_range.0.end_in_units", "presentation_time_range.0.force_end", "presentation_time_range.0.live_backoff_in_units",
+ "presentation_time_range.0.presentation_window_in_units", "presentation_time_range.0.start_in_units", "presentation_time_range.0.unit_timescale_in_miliseconds"},
},
"start_in_units": {
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validation.IntAtLeast(0),
+ AtLeastOneOf: []string{"presentation_time_range.0.end_in_units", "presentation_time_range.0.force_end", "presentation_time_range.0.live_backoff_in_units",
+ "presentation_time_range.0.presentation_window_in_units", "presentation_time_range.0.start_in_units", "presentation_time_range.0.unit_timescale_in_miliseconds"},
},
"unit_timescale_in_miliseconds": {
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validation.IntAtLeast(1),
+ AtLeastOneOf: []string{"presentation_time_range.0.end_in_units", "presentation_time_range.0.force_end", "presentation_time_range.0.live_backoff_in_units",
+ "presentation_time_range.0.presentation_window_in_units", "presentation_time_range.0.start_in_units", "presentation_time_range.0.unit_timescale_in_miliseconds"},
},
},
},
@@ -115,17 +127,18 @@ func resourceMediaAssetFilter() *schema.Resource {
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
+ //lintignore:XS003
"condition": {
Type: schema.TypeList,
- Optional: true,
+ Required: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"operation": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.Equal),
- string(media.NotEqual),
+ string(media.FilterTrackPropertyCompareOperationEqual),
+ string(media.FilterTrackPropertyCompareOperationNotEqual),
}, false),
},
@@ -365,6 +378,9 @@ func expandTracks(input []interface{}) *[]media.FilterTrackSelection {
trackSelectionList := rawSelection.([]interface{})
filterTrackSelections := make([]media.FilterTrackPropertyCondition, 0)
for _, trackSelection := range trackSelectionList {
+ if trackSelection == nil {
+ continue
+ }
filterTrackSelection := media.FilterTrackPropertyCondition{}
track := trackSelection.(map[string]interface{})
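The `AtLeastOneOf` entries added to `presentation_time_range` above all list the same sibling set, so the block is rejected when it is present but empty. Conceptually the constraint reduces to a check like the following sketch (`atLeastOneSet` is a hypothetical helper for illustration, not part of the plugin SDK):

```go
package main

import "fmt"

// atLeastOneSet mirrors the semantics of schema.Schema.AtLeastOneOf:
// validation passes only if at least one of the listed keys is set.
func atLeastOneSet(config map[string]interface{}, keys []string) error {
	for _, k := range keys {
		if v, ok := config[k]; ok && v != nil {
			return nil
		}
	}
	return fmt.Errorf("at least one of %v must be specified", keys)
}

func main() {
	keys := []string{"end_in_units", "force_end", "live_backoff_in_units",
		"presentation_window_in_units", "start_in_units", "unit_timescale_in_miliseconds"}
	fmt.Println(atLeastOneSet(map[string]interface{}{"force_end": true}, keys) == nil)
	fmt.Println(atLeastOneSet(map[string]interface{}{}, keys) != nil)
}
```

Listing every sibling in each field's `AtLeastOneOf` is the SDK's convention: the constraint is declared redundantly on each member so any one of them triggers the group check.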
diff --git a/azurerm/internal/services/media/media_asset_resource.go b/azurerm/internal/services/media/media_asset_resource.go
index ed81c6b22c93c..c205e8d5b60d7 100644
--- a/azurerm/internal/services/media/media_asset_resource.go
+++ b/azurerm/internal/services/media/media_asset_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/media/media_content_key_policy_resource.go b/azurerm/internal/services/media/media_content_key_policy_resource.go
index 4c754b0ea7a20..16426c63c2f16 100644
--- a/azurerm/internal/services/media/media_content_key_policy_resource.go
+++ b/azurerm/internal/services/media/media_content_key_policy_resource.go
@@ -12,7 +12,7 @@ import (
b64 "encoding/base64"
"encoding/hex"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/Azure/go-autorest/autorest/date"
"github.com/gofrs/uuid"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
@@ -292,10 +292,10 @@ func resourceMediaContentKeyPolicy() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.DualExpiry),
- string(media.PersistentLimited),
- string(media.PersistentUnlimited),
- string(media.Undefined),
+ string(media.ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeDualExpiry),
+ string(media.ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentLimited),
+ string(media.ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentUnlimited),
+ string(media.ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUndefined),
}, false),
},
"rental_duration_seconds": {
@@ -611,11 +611,11 @@ func expandRestriction(option map[string]interface{}) (media.BasicContentKeyPoli
restrictionType := ""
if option["open_restriction_enabled"] != nil && option["open_restriction_enabled"].(bool) {
restrictionCount++
- restrictionType = string(media.OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction)
+ restrictionType = string(media.OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction)
}
if option["token_restriction"] != nil && len(option["token_restriction"].([]interface{})) > 0 && option["token_restriction"].([]interface{})[0] != nil {
restrictionCount++
- restrictionType = string(media.OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction)
+ restrictionType = string(media.OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction)
}
if restrictionCount == 0 {
@@ -627,16 +627,16 @@ func expandRestriction(option map[string]interface{}) (media.BasicContentKeyPoli
}
switch restrictionType {
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction):
+ case string(media.OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction):
openRestriction := &media.ContentKeyPolicyOpenRestriction{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction,
+ OdataType: media.OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction,
}
return openRestriction, nil
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction):
+ case string(media.OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction):
tokenRestrictions := option["token_restriction"].([]interface{})
tokenRestriction := tokenRestrictions[0].(map[string]interface{})
contentKeyPolicyTokenRestriction := &media.ContentKeyPolicyTokenRestriction{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction,
+ OdataType: media.OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction,
}
if tokenRestriction["audience"] != nil && tokenRestriction["audience"].(string) != "" {
contentKeyPolicyTokenRestriction.Audience = utils.String(tokenRestriction["audience"].(string))
@@ -749,20 +749,20 @@ func expandConfiguration(input map[string]interface{}) (media.BasicContentKeyPol
configurationType := ""
if input["clear_key_configuration_enabled"] != nil && input["clear_key_configuration_enabled"].(bool) {
configurationCount++
- configurationType = string(media.OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration)
+ configurationType = string(media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration)
}
if input["widevine_configuration_template"] != nil && input["widevine_configuration_template"].(string) != "" {
configurationCount++
- configurationType = string(media.OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration)
+ configurationType = string(media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration)
}
if input["fairplay_configuration"] != nil && len(input["fairplay_configuration"].([]interface{})) > 0 && input["fairplay_configuration"].([]interface{})[0] != nil {
configurationCount++
- configurationType = string(media.OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration)
+ configurationType = string(media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration)
}
if input["playready_configuration_license"] != nil && len(input["playready_configuration_license"].([]interface{})) > 0 {
configurationCount++
- configurationType = string(media.OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration)
+ configurationType = string(media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration)
}
if configurationCount == 0 {
@@ -774,26 +774,26 @@ func expandConfiguration(input map[string]interface{}) (media.BasicContentKeyPol
}
switch configurationType {
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration):
+ case string(media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration):
clearKeyConfiguration := &media.ContentKeyPolicyClearKeyConfiguration{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration,
+ OdataType: media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration,
}
return clearKeyConfiguration, nil
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration):
+ case string(media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration):
wideVineConfiguration := &media.ContentKeyPolicyWidevineConfiguration{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration,
+ OdataType: media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration,
WidevineTemplate: utils.String(input["widevine_configuration_template"].(string)),
}
return wideVineConfiguration, nil
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration):
+ case string(media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration):
fairplayConfiguration, err := expandFairplayConfiguration(input["fairplay_configuration"].([]interface{}))
if err != nil {
return nil, err
}
return fairplayConfiguration, nil
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration):
+ case string(media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration):
playReadyConfiguration := &media.ContentKeyPolicyPlayReadyConfiguration{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration,
+ OdataType: media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration,
}
if input["playready_configuration_license"] != nil {
@@ -815,16 +815,16 @@ func expandVerificationKey(input map[string]interface{}) (media.BasicContentKeyP
verificationKeyType := ""
if input["primary_symmetric_token_key"] != nil && input["primary_symmetric_token_key"].(string) != "" {
verificationKeyCount++
- verificationKeyType = string(media.OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey)
+ verificationKeyType = string(media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey)
}
if (input["primary_rsa_token_key_exponent"] != nil && input["primary_rsa_token_key_exponent"].(string) != "") || (input["primary_rsa_token_key_modulus"] != nil && input["primary_rsa_token_key_modulus"].(string) != "") {
verificationKeyCount++
- verificationKeyType = string(media.OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey)
+ verificationKeyType = string(media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey)
}
if input["primary_x509_token_key_raw"] != nil && input["primary_x509_token_key_raw"].(string) != "" {
verificationKeyCount++
- verificationKeyType = string(media.OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey)
+ verificationKeyType = string(media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey)
}
if verificationKeyCount > 1 {
@@ -832,9 +832,9 @@ func expandVerificationKey(input map[string]interface{}) (media.BasicContentKeyP
}
switch verificationKeyType {
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey):
+ case string(media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey):
symmetricTokenKey := &media.ContentKeyPolicySymmetricTokenKey{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey,
+ OdataType: media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey,
}
if input["primary_symmetric_token_key"] != nil && input["primary_symmetric_token_key"].(string) != "" {
@@ -842,9 +842,9 @@ func expandVerificationKey(input map[string]interface{}) (media.BasicContentKeyP
symmetricTokenKey.KeyValue = &keyValue
}
return symmetricTokenKey, nil
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey):
+ case string(media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey):
rsaTokenKey := &media.ContentKeyPolicyRsaTokenKey{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey,
+ OdataType: media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey,
}
if input["primary_rsa_token_key_exponent"] != nil && input["primary_rsa_token_key_exponent"].(string) != "" {
exponent := []byte(input["primary_rsa_token_key_exponent"].(string))
@@ -855,9 +855,9 @@ func expandVerificationKey(input map[string]interface{}) (media.BasicContentKeyP
rsaTokenKey.Modulus = &modulus
}
return rsaTokenKey, nil
- case string(media.OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey):
+ case string(media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey):
x509CertificateTokenKey := &media.ContentKeyPolicyX509CertificateTokenKey{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey,
+ OdataType: media.OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey,
}
if input["primary_x509_token_key_raw"] != nil && input["primary_x509_token_key_raw"].(string) != "" {
@@ -963,7 +963,7 @@ func flattenRentalConfiguration(input *media.ContentKeyPolicyFairPlayOfflineRent
func expandFairplayConfiguration(input []interface{}) (*media.ContentKeyPolicyFairPlayConfiguration, error) {
fairplayConfiguration := &media.ContentKeyPolicyFairPlayConfiguration{
- OdataType: media.OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration,
+ OdataType: media.OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration,
}
fairplay := input[0].(map[string]interface{})
diff --git a/azurerm/internal/services/media/media_job_resource.go b/azurerm/internal/services/media/media_job_resource.go
index 52286e083d50b..37b8b1c95f35d 100644
--- a/azurerm/internal/services/media/media_job_resource.go
+++ b/azurerm/internal/services/media/media_job_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
diff --git a/azurerm/internal/services/media/media_live_output_resource.go b/azurerm/internal/services/media/media_live_output_resource.go
index d934cbd009ec2..c1bb8f42e89f6 100644
--- a/azurerm/internal/services/media/media_live_output_resource.go
+++ b/azurerm/internal/services/media/media_live_output_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
diff --git a/azurerm/internal/services/media/media_services_account_resource.go b/azurerm/internal/services/media/media_services_account_resource.go
index 54030b378f03b..79f35e3469b13 100644
--- a/azurerm/internal/services/media/media_services_account_resource.go
+++ b/azurerm/internal/services/media/media_services_account_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
@@ -115,6 +115,34 @@ func resourceMediaServicesAccount() *schema.Resource {
}, true),
},
+ "key_delivery_access_control": {
+ Type: schema.TypeList,
+ Optional: true,
+ Computed: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "default_action": {
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: validation.StringInSlice([]string{
+ string(media.DefaultActionDeny),
+ string(media.DefaultActionAllow),
+ }, true),
+ },
+
+ "ip_allow_list": {
+ Type: schema.TypeSet,
+ Optional: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ ValidateFunc: validation.StringIsNotEmpty,
+ },
+ },
+ },
+ },
+ },
+
"tags": tags.Schema(),
},
}
@@ -164,6 +192,10 @@ func resourceMediaServicesAccountCreateUpdate(d *schema.ResourceData, meta inter
parameters.StorageAuthentication = media.StorageAuthentication(v.(string))
}
+ if keyDelivery, ok := d.GetOk("key_delivery_access_control"); ok {
+ parameters.KeyDelivery = expandKeyDelivery(keyDelivery.([]interface{}))
+ }
+
if _, err := client.CreateOrUpdate(ctx, resourceId.ResourceGroup, resourceId.Name, parameters); err != nil {
return fmt.Errorf("creating %s: %+v", resourceId, err)
}
@@ -212,6 +244,10 @@ func resourceMediaServicesAccountRead(d *schema.ResourceData, meta interface{})
return fmt.Errorf("flattening `identity`: %s", err)
}
+ if err := d.Set("key_delivery_access_control", flattenKeyDelivery(resp.KeyDelivery)); err != nil {
+ return fmt.Errorf("flattening `key_delivery_access_control`: %s", err)
+ }
+
return tags.FlattenAndSet(d, resp.Tags)
}
@@ -245,13 +281,13 @@ func expandMediaServicesAccountStorageAccounts(input []interface{}) (*[]media.St
id := accountMap["id"].(string)
- storageType := media.Secondary
+ storageType := media.StorageAccountTypeSecondary
if accountMap["is_primary"].(bool) {
if foundPrimary {
return nil, fmt.Errorf("Only one Storage Account can be set as Primary")
}
- storageType = media.Primary
+ storageType = media.StorageAccountTypePrimary
foundPrimary = true
}
@@ -279,7 +315,7 @@ func flattenMediaServicesAccountStorageAccounts(input *[]media.StorageAccount) [
output["id"] = *storageAccount.ID
}
- output["is_primary"] = storageAccount.Type == media.Primary
+ output["is_primary"] = storageAccount.Type == media.StorageAccountTypePrimary
results = append(results, output)
}
@@ -315,3 +351,38 @@ func flattenAzureRmMediaServicedentity(identity *media.ServiceIdentity) []interf
return []interface{}{result}
}
+
+func expandKeyDelivery(input []interface{}) *media.KeyDelivery {
+ if len(input) == 0 {
+ return nil
+ }
+
+ keyDelivery := input[0].(map[string]interface{})
+ defaultAction := keyDelivery["default_action"].(string)
+
+ var ipAllowList *[]string
+ if v := keyDelivery["ip_allow_list"]; v != nil {
+ ips := v.(*schema.Set).List()
+ ipAllowList = utils.ExpandStringSlice(ips)
+ }
+
+ return &media.KeyDelivery{
+ AccessControl: &media.AccessControl{
+ DefaultAction: media.DefaultAction(defaultAction),
+ IPAllowList: ipAllowList,
+ },
+ }
+}
+
+func flattenKeyDelivery(input *media.KeyDelivery) []interface{} {
+ if input == nil || input.AccessControl == nil {
+ return make([]interface{}, 0)
+ }
+
+ return []interface{}{
+ map[string]interface{}{
+ "default_action": string(input.AccessControl.DefaultAction),
+ "ip_allow_list": utils.FlattenStringSlice(input.AccessControl.IPAllowList),
+ },
+ }
+}
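The `expandKeyDelivery`/`flattenKeyDelivery` pair above follows the usual Terraform provider pattern: expand converts the schema list into the SDK struct, and flatten must nil-guard before dereferencing nested fields (dereferencing `AccessControl` on a nil input panics, so the guard has to short-circuit with OR). A minimal standalone sketch of that round trip, using simplified stand-in types rather than the real `media` SDK structs:

```go
package main

import "fmt"

// Simplified stand-ins for the SDK types (illustrative only, not the real media package).
type AccessControl struct {
	DefaultAction string
	IPAllowList   *[]string
}

type KeyDelivery struct {
	AccessControl *AccessControl
}

// expandKeyDelivery mirrors the schema -> SDK direction.
func expandKeyDelivery(input []interface{}) *KeyDelivery {
	if len(input) == 0 {
		return nil
	}
	raw := input[0].(map[string]interface{})
	ips := []string{}
	if v, ok := raw["ip_allow_list"].([]string); ok {
		ips = v
	}
	return &KeyDelivery{
		AccessControl: &AccessControl{
			DefaultAction: raw["default_action"].(string),
			IPAllowList:   &ips,
		},
	}
}

// flattenKeyDelivery mirrors the SDK -> schema direction; note the nil guard
// short-circuits with ||, so AccessControl is never read on a nil input.
func flattenKeyDelivery(input *KeyDelivery) []interface{} {
	if input == nil || input.AccessControl == nil {
		return make([]interface{}, 0)
	}
	ips := []string{}
	if input.AccessControl.IPAllowList != nil {
		ips = *input.AccessControl.IPAllowList
	}
	return []interface{}{
		map[string]interface{}{
			"default_action": input.AccessControl.DefaultAction,
			"ip_allow_list":  ips,
		},
	}
}

func main() {
	cfg := []interface{}{map[string]interface{}{
		"default_action": "Deny",
		"ip_allow_list":  []string{"0.0.0.0/0"},
	}}
	fmt.Println(flattenKeyDelivery(expandKeyDelivery(cfg)))
}
```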
diff --git a/azurerm/internal/services/media/media_services_account_resource_test.go b/azurerm/internal/services/media/media_services_account_resource_test.go
index ebf92a4871303..023069289f1f4 100644
--- a/azurerm/internal/services/media/media_services_account_resource_test.go
+++ b/azurerm/internal/services/media/media_services_account_resource_test.go
@@ -81,13 +81,13 @@ func TestAccMediaServicesAccount_multiplePrimaries(t *testing.T) {
})
}
-func TestAccMediaServicesAccount_identitySystemAssigned(t *testing.T) {
+func TestAccMediaServicesAccount_complete(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_media_services_account", "test")
r := MediaServicesAccountResource{}
data.ResourceTest(t, r, []resource.TestStep{
{
- Config: r.identitySystemAssigned(data),
+ Config: r.complete(data),
Check: resource.ComposeAggregateTestCheckFunc(
check.That(data.ResourceName).Key("identity.0.type").HasValue("SystemAssigned"),
),
@@ -247,8 +247,8 @@ resource "azurerm_media_services_account" "test" {
`, template, data.RandomString, data.RandomString)
}
-func (MediaServicesAccountResource) identitySystemAssigned(data acceptance.TestData) string {
- template := MediaServicesAccountResource{}.template(data)
+func (r MediaServicesAccountResource) complete(data acceptance.TestData) string {
+ template := r.template(data)
return fmt.Sprintf(`
%s
@@ -262,9 +262,18 @@ resource "azurerm_media_services_account" "test" {
is_primary = true
}
+ tags = {
+ environment = "staging"
+ }
+
identity {
type = "SystemAssigned"
}
+
+ key_delivery_access_control {
+ default_action = "Deny"
+ ip_allow_list = ["0.0.0.0/0"]
+ }
}
`, template, data.RandomString)
}
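The acceptance test config above exercises the new block end to end. For reference, a minimal standalone HCL sketch of the new `key_delivery_access_control` block (resource and name values are illustrative):

```hcl
resource "azurerm_media_services_account" "example" {
  name                = "examplemediaacct"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  storage_account {
    id         = azurerm_storage_account.example.id
    is_primary = true
  }

  key_delivery_access_control {
    default_action = "Deny"
    ip_allow_list  = ["100.0.0.1/32"]
  }
}
```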
diff --git a/azurerm/internal/services/media/media_streaming_endpoint_resource.go b/azurerm/internal/services/media/media_streaming_endpoint_resource.go
index 8522ad99fde04..9dcd351a29a64 100644
--- a/azurerm/internal/services/media/media_streaming_endpoint_resource.go
+++ b/azurerm/internal/services/media/media_streaming_endpoint_resource.go
@@ -9,7 +9,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/media/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/Azure/go-autorest/autorest/date"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
@@ -486,10 +486,9 @@ func expandAccessControl(d *schema.ResourceData) (*media.StreamingEndpointAccess
accessControlResult := new(media.StreamingEndpointAccessControl)
accessControl := accessControls[0].(map[string]interface{})
// Get IP information
- if raw, ok := accessControl["ip_allow"]; ok {
- ipAllowsList := raw.([]interface{})
+ if ipAllowsList := accessControl["ip_allow"].([]interface{}); len(ipAllowsList) > 0 {
ipRanges := make([]media.IPRange, 0)
- for index, ipAllow := range ipAllowsList {
+ for _, ipAllow := range ipAllowsList {
if ipAllow == nil {
continue
}
@@ -505,15 +504,14 @@ func expandAccessControl(d *schema.ResourceData) (*media.StreamingEndpointAccess
if subnetPrefixLengthRaw != "" {
ipRange.SubnetPrefixLength = utils.Int32(int32(subnetPrefixLengthRaw.(int)))
}
- ipRanges[index] = ipRange
+ ipRanges = append(ipRanges, ipRange)
}
accessControlResult.IP = &media.IPAccessControl{
Allow: &ipRanges,
}
}
// Get Akamai information
- if raw, ok := accessControl["akamai_signature_header_authentication_key"]; ok {
- akamaiSignatureKeyList := raw.([]interface{})
+ if akamaiSignatureKeyList := accessControl["akamai_signature_header_authentication_key"].([]interface{}); len(akamaiSignatureKeyList) > 0 {
akamaiSignatureHeaderAuthenticationKeyList := make([]media.AkamaiSignatureHeaderAuthenticationKey, 0)
for _, akamaiSignatureKey := range akamaiSignatureKeyList {
if akamaiSignatureKey == nil {
@@ -538,9 +536,9 @@ func expandAccessControl(d *schema.ResourceData) (*media.StreamingEndpointAccess
}
}
akamaiSignatureHeaderAuthenticationKeyList = append(akamaiSignatureHeaderAuthenticationKeyList, akamaiSignatureHeaderAuthenticationKey)
- accessControlResult.Akamai = &media.AkamaiAccessControl{
- AkamaiSignatureHeaderAuthenticationKeyList: &akamaiSignatureHeaderAuthenticationKeyList,
- }
+ }
+ accessControlResult.Akamai = &media.AkamaiAccessControl{
+ AkamaiSignatureHeaderAuthenticationKeyList: &akamaiSignatureHeaderAuthenticationKeyList,
}
}
diff --git a/azurerm/internal/services/media/media_streaming_live_event_resource.go b/azurerm/internal/services/media/media_streaming_live_event_resource.go
index 10df5e0e48390..607538efd0c23 100644
--- a/azurerm/internal/services/media/media_streaming_live_event_resource.go
+++ b/azurerm/internal/services/media/media_streaming_live_event_resource.go
@@ -8,7 +8,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/media/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
@@ -140,8 +140,8 @@ func resourceMediaLiveEvent() *schema.Resource {
Optional: true,
ForceNew: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.RTMP),
- string(media.FragmentedMP4),
+ string(media.LiveEventInputProtocolRTMP),
+ string(media.LiveEventInputProtocolFragmentedMP4),
}, false),
AtLeastOneOf: []string{"input.0.ip_access_control_allow", "input.0.access_token",
"input.0.key_frame_interval_duration", "input.0.streaming_protocol",
@@ -513,7 +513,7 @@ func resourceMediaLiveEventDelete(d *schema.ResourceData, meta interface{}) erro
return fmt.Errorf("reading %s: %+v", id, err)
}
if props := resp.LiveEventProperties; props != nil {
- if props.ResourceState == media.Running {
+ if props.ResourceState == media.LiveEventResourceStateRunning {
stopFuture, err := client.Stop(ctx, id.ResourceGroup, id.MediaserviceName, id.Name, media.LiveEventActionInput{RemoveOutputsOnStop: utils.Bool(false)})
if err != nil {
return fmt.Errorf("stopping %s: %+v", id, err)
diff --git a/azurerm/internal/services/media/media_streaming_locator_resource.go b/azurerm/internal/services/media/media_streaming_locator_resource.go
index 7f25e2cb6d117..24d2448f8e401 100644
--- a/azurerm/internal/services/media/media_streaming_locator_resource.go
+++ b/azurerm/internal/services/media/media_streaming_locator_resource.go
@@ -9,7 +9,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/media/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/Azure/go-autorest/autorest/date"
"github.com/gofrs/uuid"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
diff --git a/azurerm/internal/services/media/media_streaming_policy_resource.go b/azurerm/internal/services/media/media_streaming_policy_resource.go
index 84400aa4d12c4..56370af17d2da 100644
--- a/azurerm/internal/services/media/media_streaming_policy_resource.go
+++ b/azurerm/internal/services/media/media_streaming_policy_resource.go
@@ -9,7 +9,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/media/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
diff --git a/azurerm/internal/services/media/media_transform_resource.go b/azurerm/internal/services/media/media_transform_resource.go
index 61d7edf7be60c..2f63fdfc7ba93 100644
--- a/azurerm/internal/services/media/media_transform_resource.go
+++ b/azurerm/internal/services/media/media_transform_resource.go
@@ -6,7 +6,7 @@ import (
"regexp"
"time"
- "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+ "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
@@ -78,8 +78,8 @@ func resourceMediaTransform() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.ContinueJob),
- string(media.StopProcessingJob),
+ string(media.OnErrorTypeContinueJob),
+ string(media.OnErrorTypeStopProcessingJob),
}, false),
},
//lintignore:XS003
@@ -93,17 +93,17 @@ func resourceMediaTransform() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.AACGoodQualityAudio),
- string(media.AdaptiveStreaming),
- string(media.ContentAwareEncoding),
- string(media.ContentAwareEncodingExperimental),
- string(media.CopyAllBitrateNonInterleaved),
- string(media.H264MultipleBitrate1080p),
- string(media.H264MultipleBitrate720p),
- string(media.H264MultipleBitrateSD),
- string(media.H264SingleBitrate1080p),
- string(media.H264SingleBitrate720p),
- string(media.H264MultipleBitrateSD),
+ string(media.EncoderNamedPresetAACGoodQualityAudio),
+ string(media.EncoderNamedPresetAdaptiveStreaming),
+ string(media.EncoderNamedPresetContentAwareEncoding),
+ string(media.EncoderNamedPresetContentAwareEncodingExperimental),
+ string(media.EncoderNamedPresetCopyAllBitrateNonInterleaved),
+ string(media.EncoderNamedPresetH264MultipleBitrate1080p),
+ string(media.EncoderNamedPresetH264MultipleBitrate720p),
+ string(media.EncoderNamedPresetH264MultipleBitrateSD),
+ string(media.EncoderNamedPresetH264SingleBitrate1080p),
+ string(media.EncoderNamedPresetH264SingleBitrate720p),
+ string(media.EncoderNamedPresetH264SingleBitrateSD),
}, false),
},
},
@@ -142,8 +142,8 @@ func resourceMediaTransform() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.Basic),
- string(media.Standard),
+ string(media.AudioAnalysisModeBasic),
+ string(media.AudioAnalysisModeStandard),
}, false),
},
},
@@ -182,17 +182,17 @@ func resourceMediaTransform() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.Basic),
- string(media.Standard),
+ string(media.AudioAnalysisModeBasic),
+ string(media.AudioAnalysisModeStandard),
}, false),
},
"insights_type": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.AllInsights),
- string(media.AudioInsightsOnly),
- string(media.VideoInsightsOnly),
+ string(media.InsightsTypeAllInsights),
+ string(media.InsightsTypeAudioInsightsOnly),
+ string(media.InsightsTypeVideoInsightsOnly),
}, false),
},
},
@@ -209,8 +209,8 @@ func resourceMediaTransform() *schema.Resource {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{
- string(media.SourceResolution),
- string(media.StandardDefinition),
+ string(media.AnalysisResolutionSourceResolution),
+ string(media.AnalysisResolutionStandardDefinition),
}, false),
},
},
@@ -391,19 +391,19 @@ func expandPreset(transform map[string]interface{}) (media.BasicPreset, error) {
presetType := ""
if transform["builtin_preset"] != nil && len(transform["builtin_preset"].([]interface{})) > 0 && transform["builtin_preset"].([]interface{})[0] != nil {
presetsCount++
- presetType = string(media.OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset)
+ presetType = string(media.OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset)
}
if transform["audio_analyzer_preset"] != nil && len(transform["audio_analyzer_preset"].([]interface{})) > 0 && transform["audio_analyzer_preset"].([]interface{})[0] != nil {
presetsCount++
- presetType = string(media.OdataTypeMicrosoftMediaAudioAnalyzerPreset)
+ presetType = string(media.OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset)
}
if transform["video_analyzer_preset"] != nil && len(transform["video_analyzer_preset"].([]interface{})) > 0 && transform["video_analyzer_preset"].([]interface{})[0] != nil {
presetsCount++
- presetType = string(media.OdataTypeMicrosoftMediaVideoAnalyzerPreset)
+ presetType = string(media.OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset)
}
if transform["face_detector_preset"] != nil && len(transform["face_detector_preset"].([]interface{})) > 0 && transform["face_detector_preset"].([]interface{})[0] != nil {
presetsCount++
- presetType = string(media.OdataTypeMicrosoftMediaFaceDetectorPreset)
+ presetType = string(media.OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset)
}
if presetsCount == 0 {
@@ -415,7 +415,7 @@ func expandPreset(transform map[string]interface{}) (media.BasicPreset, error) {
}
switch presetType {
- case string(media.OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset):
+ case string(media.OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset):
presets := transform["builtin_preset"].([]interface{})
preset := presets[0].(map[string]interface{})
if preset["preset_name"] == nil {
@@ -424,14 +424,14 @@ func expandPreset(transform map[string]interface{}) (media.BasicPreset, error) {
presetName := preset["preset_name"].(string)
builtInPreset := &media.BuiltInStandardEncoderPreset{
PresetName: media.EncoderNamedPreset(presetName),
- OdataType: media.OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset,
+ OdataType: media.OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset,
}
return builtInPreset, nil
- case string(media.OdataTypeMicrosoftMediaAudioAnalyzerPreset):
+ case string(media.OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset):
presets := transform["audio_analyzer_preset"].([]interface{})
preset := presets[0].(map[string]interface{})
audioAnalyzerPreset := &media.AudioAnalyzerPreset{
- OdataType: media.OdataTypeMicrosoftMediaAudioAnalyzerPreset,
+ OdataType: media.OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset,
}
if preset["audio_language"] != nil && preset["audio_language"].(string) != "" {
audioAnalyzerPreset.AudioLanguage = utils.String(preset["audio_language"].(string))
@@ -440,21 +440,21 @@ func expandPreset(transform map[string]interface{}) (media.BasicPreset, error) {
audioAnalyzerPreset.Mode = media.AudioAnalysisMode(preset["audio_analysis_mode"].(string))
}
return audioAnalyzerPreset, nil
- case string(media.OdataTypeMicrosoftMediaFaceDetectorPreset):
+ case string(media.OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset):
presets := transform["face_detector_preset"].([]interface{})
preset := presets[0].(map[string]interface{})
faceDetectorPreset := &media.FaceDetectorPreset{
- OdataType: media.OdataTypeMicrosoftMediaFaceDetectorPreset,
+ OdataType: media.OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset,
}
if preset["analysis_resolution"] != nil {
faceDetectorPreset.Resolution = media.AnalysisResolution(preset["analysis_resolution"].(string))
}
return faceDetectorPreset, nil
- case string(media.OdataTypeMicrosoftMediaVideoAnalyzerPreset):
+ case string(media.OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset):
presets := transform["video_analyzer_preset"].([]interface{})
preset := presets[0].(map[string]interface{})
videoAnalyzerPreset := &media.VideoAnalyzerPreset{
- OdataType: media.OdataTypeMicrosoftMediaVideoAnalyzerPreset,
+ OdataType: media.OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset,
}
if preset["audio_language"] != nil {
videoAnalyzerPreset.AudioLanguage = utils.String(preset["audio_language"].(string))
diff --git a/azurerm/internal/services/monitor/client/client.go b/azurerm/internal/services/monitor/client/client.go
index 328f28ce55c69..3a97a00b0618f 100644
--- a/azurerm/internal/services/monitor/client/client.go
+++ b/azurerm/internal/services/monitor/client/client.go
@@ -1,6 +1,7 @@
package client
import (
+ "github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad"
"github.com/Azure/azure-sdk-for-go/services/monitor/mgmt/2020-10-01/insights"
"github.com/Azure/azure-sdk-for-go/services/preview/alertsmanagement/mgmt/2019-06-01-preview/alertsmanagement"
classic "github.com/Azure/azure-sdk-for-go/services/preview/monitor/mgmt/2019-06-01/insights"
@@ -8,6 +9,9 @@ import (
)
type Client struct {
+ // AAD
+ AADDiagnosticSettingsClient *aad.DiagnosticSettingsClient
+
// Autoscale Settings
AutoscaleSettingsClient *classic.AutoscaleSettingsClient
@@ -27,6 +31,9 @@ type Client struct {
}
func NewClient(o *common.ClientOptions) *Client {
+ AADDiagnosticSettingsClient := aad.NewDiagnosticSettingsClientWithBaseURI(o.ResourceManagerEndpoint)
+ o.ConfigureClient(&AADDiagnosticSettingsClient.Client, o.ResourceManagerAuthorizer)
+
AutoscaleSettingsClient := classic.NewAutoscaleSettingsClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
o.ConfigureClient(&AutoscaleSettingsClient.Client, o.ResourceManagerAuthorizer)
@@ -61,6 +68,7 @@ func NewClient(o *common.ClientOptions) *Client {
o.ConfigureClient(&ScheduledQueryRulesClient.Client, o.ResourceManagerAuthorizer)
return &Client{
+ AADDiagnosticSettingsClient: &AADDiagnosticSettingsClient,
AutoscaleSettingsClient: &AutoscaleSettingsClient,
ActionRulesClient: &ActionRulesClient,
SmartDetectorAlertRulesClient: &SmartDetectorAlertRulesClient,
diff --git a/azurerm/internal/services/monitor/monitor_aad_diagnostic_setting_resource.go b/azurerm/internal/services/monitor/monitor_aad_diagnostic_setting_resource.go
new file mode 100644
index 0000000000000..13b36c66b4d4a
--- /dev/null
+++ b/azurerm/internal/services/monitor/monitor_aad_diagnostic_setting_resource.go
@@ -0,0 +1,391 @@
+package monitor
+
+import (
+ "context"
+ "fmt"
+ "log"
+ "time"
+
+ "github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/monitor/parse"
+
+ "github.com/hashicorp/go-azure-helpers/response"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/validation"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ eventhubParse "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/eventhub/parse"
+ eventhubValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/eventhub/validate"
+ logAnalyticsParse "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/loganalytics/parse"
+ logAnalyticsValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/loganalytics/validate"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/monitor/validate"
+ storageParse "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/parse"
+ storageValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/validate"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+func resourceMonitorAADDiagnosticSetting() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceMonitorAADDiagnosticSettingCreateUpdate,
+ Read: resourceMonitorAADDiagnosticSettingRead,
+ Update: resourceMonitorAADDiagnosticSettingCreateUpdate,
+ Delete: resourceMonitorAADDiagnosticSettingDelete,
+ Importer: pluginsdk.ImporterValidatingResourceId(func(id string) error {
+ _, err := parse.MonitorAADDiagnosticSettingID(id)
+ return err
+ }),
+
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(5 * time.Minute),
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ Update: schema.DefaultTimeout(5 * time.Minute),
+ Delete: schema.DefaultTimeout(5 * time.Minute),
+ },
+
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ ValidateFunc: validate.MonitorDiagnosticSettingName,
+ },
+
+ // When absent, the default event hub is used, while the Diagnostic Setting API returns this property as an empty string. Marking this property as Computed would therefore be pointless.
+ "eventhub_name": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ValidateFunc: eventhubValidate.ValidateEventHubName(),
+ },
+
+ "eventhub_authorization_rule_id": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ValidateFunc: eventhubValidate.NamespaceAuthorizationRuleID,
+ AtLeastOneOf: []string{"eventhub_authorization_rule_id", "log_analytics_workspace_id", "storage_account_id"},
+ },
+
+ "log_analytics_workspace_id": {
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateFunc: logAnalyticsValidate.LogAnalyticsWorkspaceID,
+ AtLeastOneOf: []string{"eventhub_authorization_rule_id", "log_analytics_workspace_id", "storage_account_id"},
+ },
+
+ "storage_account_id": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ ValidateFunc: storageValidate.StorageAccountID,
+ AtLeastOneOf: []string{"eventhub_authorization_rule_id", "log_analytics_workspace_id", "storage_account_id"},
+ },
+
+ "log": {
+ Type: schema.TypeSet,
+ Required: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "category": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validation.StringInSlice([]string{
+ string(aad.AuditLogs),
+ string(aad.SignInLogs),
+ "ADFSSignInLogs",
+ "ManagedIdentitySignInLogs",
+ "NonInteractiveUserSignInLogs",
+ "ProvisioningLogs",
+ "ServicePrincipalSignInLogs",
+ }, false),
+ },
+
+ "enabled": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: true,
+ },
+
+ "retention_policy": {
+ Type: schema.TypeList,
+ Required: true,
+ MaxItems: 1,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "enabled": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ },
+
+ "days": {
+ Type: schema.TypeInt,
+ Optional: true,
+ ValidateFunc: validation.IntAtLeast(0),
+ Default: 0,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+}
+
+func resourceMonitorAADDiagnosticSettingCreateUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).Monitor.AADDiagnosticSettingsClient
+ ctx, cancel := timeouts.ForCreateUpdate(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+ log.Printf("[INFO] preparing arguments for Azure ARM AAD Diagnostic Setting.")
+
+ name := d.Get("name").(string)
+ id := parse.NewMonitorAADDiagnosticSettingID(name)
+
+ if d.IsNewResource() {
+ existing, err := client.Get(ctx, name)
+ if err != nil {
+ if !utils.ResponseWasNotFound(existing.Response) {
+ return fmt.Errorf("checking for presence of existing %s: %s", id, err)
+ }
+ }
+
+ if !utils.ResponseWasNotFound(existing.Response) {
+ return tf.ImportAsExistsError("azurerm_monitor_aad_diagnostic_setting", id.ID())
+ }
+ }
+
+ logs := expandMonitorAADDiagnosticsSettingsLogs(d.Get("log").(*schema.Set).List())
+
+ // If there is no `enabled` log entry, the PUT will succeed while the next GET will return a 404.
+ // Therefore, ensure the user specifies at least one enabled log entry.
+ valid := false
+ for _, v := range logs {
+ if v.Enabled != nil && *v.Enabled {
+ valid = true
+ break
+ }
+ }
+ if !valid {
+ return fmt.Errorf("at least one `log` block of %s must be enabled", id)
+ }
+
+ properties := aad.DiagnosticSettingsResource{
+ DiagnosticSettings: &aad.DiagnosticSettings{
+ Logs: &logs,
+ },
+ }
+
+ eventHubAuthorizationRuleId := d.Get("eventhub_authorization_rule_id").(string)
+ eventHubName := d.Get("eventhub_name").(string)
+ if eventHubAuthorizationRuleId != "" {
+ properties.DiagnosticSettings.EventHubAuthorizationRuleID = utils.String(eventHubAuthorizationRuleId)
+ properties.DiagnosticSettings.EventHubName = utils.String(eventHubName)
+ }
+
+ workspaceId := d.Get("log_analytics_workspace_id").(string)
+ if workspaceId != "" {
+ properties.DiagnosticSettings.WorkspaceID = utils.String(workspaceId)
+ }
+
+ storageAccountId := d.Get("storage_account_id").(string)
+ if storageAccountId != "" {
+ properties.DiagnosticSettings.StorageAccountID = utils.String(storageAccountId)
+ }
+
+ if _, err := client.CreateOrUpdate(ctx, properties, name); err != nil {
+ return fmt.Errorf("creating %s: %+v", id, err)
+ }
+
+ d.SetId(id.ID())
+
+ return resourceMonitorAADDiagnosticSettingRead(d, meta)
+}
+
+func resourceMonitorAADDiagnosticSettingRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).Monitor.AADDiagnosticSettingsClient
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id, err := parse.MonitorAADDiagnosticSettingID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ resp, err := client.Get(ctx, id.Name)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ log.Printf("[WARN] %s was not found - removing from state!", id)
+ d.SetId("")
+ return nil
+ }
+
+ return fmt.Errorf("retrieving %s: %+v", id, err)
+ }
+
+ d.Set("name", id.Name)
+
+ d.Set("eventhub_name", resp.EventHubName)
+ eventhubAuthorizationRuleId := ""
+ if resp.EventHubAuthorizationRuleID != nil && *resp.EventHubAuthorizationRuleID != "" {
+ parsedId, err := eventhubParse.NamespaceAuthorizationRuleIDInsensitively(*resp.EventHubAuthorizationRuleID)
+ if err != nil {
+ return err
+ }
+
+ eventhubAuthorizationRuleId = parsedId.ID()
+ }
+ d.Set("eventhub_authorization_rule_id", eventhubAuthorizationRuleId)
+
+ workspaceId := ""
+ if resp.WorkspaceID != nil && *resp.WorkspaceID != "" {
+ parsedId, err := logAnalyticsParse.LogAnalyticsWorkspaceID(*resp.WorkspaceID)
+ if err != nil {
+ return err
+ }
+
+ workspaceId = parsedId.ID()
+ }
+ d.Set("log_analytics_workspace_id", workspaceId)
+
+ storageAccountId := ""
+ if resp.StorageAccountID != nil && *resp.StorageAccountID != "" {
+ parsedId, err := storageParse.StorageAccountID(*resp.StorageAccountID)
+ if err != nil {
+ return err
+ }
+
+ storageAccountId = parsedId.ID()
+ }
+ d.Set("storage_account_id", storageAccountId)
+
+ if err := d.Set("log", flattenMonitorAADDiagnosticLogs(resp.Logs)); err != nil {
+ return fmt.Errorf("setting `log`: %+v", err)
+ }
+
+ return nil
+}
+
+func resourceMonitorAADDiagnosticSettingDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).Monitor.AADDiagnosticSettingsClient
+ ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id, err := parse.MonitorAADDiagnosticSettingID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ resp, err := client.Delete(ctx, id.Name)
+ if err != nil {
+ if !response.WasNotFound(resp.Response) {
+ return fmt.Errorf("deleting %s: %+v", id, err)
+ }
+ }
+
+ // API appears to be eventually consistent (identified when tainting this resource)
+ log.Printf("[DEBUG] Waiting for %s to disappear", id)
+ timeout, _ := ctx.Deadline()
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{"Exists"},
+ Target: []string{"NotFound"},
+ Refresh: monitorAADDiagnosticSettingDeletedRefreshFunc(ctx, client, id.Name),
+ MinTimeout: 15 * time.Second,
+ ContinuousTargetOccurence: 5,
+ Timeout: time.Until(timeout),
+ }
+
+ if _, err = stateConf.WaitForState(); err != nil {
+ return fmt.Errorf("waiting for %s to be deleted: %+v", id, err)
+ }
+
+ return nil
+}
+
+func monitorAADDiagnosticSettingDeletedRefreshFunc(ctx context.Context, client *aad.DiagnosticSettingsClient, name string) resource.StateRefreshFunc {
+ return func() (interface{}, string, error) {
+ res, err := client.Get(ctx, name)
+ if err != nil {
+ if utils.ResponseWasNotFound(res.Response) {
+ return "NotFound", "NotFound", nil
+ }
+ return nil, "", fmt.Errorf("issuing read request in monitorAADDiagnosticSettingDeletedRefreshFunc: %s", err)
+ }
+
+ return res, "Exists", nil
+ }
+}
+
+func expandMonitorAADDiagnosticsSettingsLogs(input []interface{}) []aad.LogSettings {
+ results := make([]aad.LogSettings, 0)
+
+ for _, raw := range input {
+ v := raw.(map[string]interface{})
+
+ category := v["category"].(string)
+ enabled := v["enabled"].(bool)
+
+ // Guard against a missing or empty `retention_policy` block rather than indexing unconditionally
+ retentionDays := 0
+ retentionEnabled := false
+ if policies := v["retention_policy"].([]interface{}); len(policies) > 0 && policies[0] != nil {
+ policyRaw := policies[0].(map[string]interface{})
+ retentionDays = policyRaw["days"].(int)
+ retentionEnabled = policyRaw["enabled"].(bool)
+ }
+
+ output := aad.LogSettings{
+ Category: aad.Category(category),
+ Enabled: utils.Bool(enabled),
+ RetentionPolicy: &aad.RetentionPolicy{
+ Days: utils.Int32(int32(retentionDays)),
+ Enabled: utils.Bool(retentionEnabled),
+ },
+ }
+
+ results = append(results, output)
+ }
+
+ return results
+}
+
+func flattenMonitorAADDiagnosticLogs(input *[]aad.LogSettings) []interface{} {
+ results := make([]interface{}, 0)
+ if input == nil {
+ return results
+ }
+
+ for _, v := range *input {
+ category := string(v.Category)
+
+ enabled := false
+ if v.Enabled != nil {
+ enabled = *v.Enabled
+ }
+
+ policies := make([]interface{}, 0)
+ if inputPolicy := v.RetentionPolicy; inputPolicy != nil {
+ days := 0
+ if inputPolicy.Days != nil {
+ days = int(*inputPolicy.Days)
+ }
+
+ enabled := false
+ if inputPolicy.Enabled != nil {
+ enabled = *inputPolicy.Enabled
+ }
+
+ policies = append(policies, map[string]interface{}{
+ "days": days,
+ "enabled": enabled,
+ })
+ }
+
+ results = append(results, map[string]interface{}{
+ "category": category,
+ "enabled": enabled,
+ "retention_policy": policies,
+ })
+ }
+
+ return results
+}
diff --git a/azurerm/internal/services/monitor/monitor_aad_diagnostic_setting_resource_test.go b/azurerm/internal/services/monitor/monitor_aad_diagnostic_setting_resource_test.go
new file mode 100644
index 0000000000000..4c31cb1179e5a
--- /dev/null
+++ b/azurerm/internal/services/monitor/monitor_aad_diagnostic_setting_resource_test.go
@@ -0,0 +1,445 @@
+package monitor_test
+
+import (
+ "context"
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/hashicorp/terraform-plugin-sdk/terraform"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/monitor/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+type MonitorAADDiagnosticSettingResource struct{}
+
+func TestAccMonitorAADDiagnosticSetting_eventhubDefault(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_monitor_aad_diagnostic_setting", "test")
+ r := MonitorAADDiagnosticSettingResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.eventhubDefault(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccMonitorAADDiagnosticSetting_eventhub(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_monitor_aad_diagnostic_setting", "test")
+ r := MonitorAADDiagnosticSettingResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.eventhub(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccMonitorAADDiagnosticSetting_requiresImport(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_monitor_aad_diagnostic_setting", "test")
+ r := MonitorAADDiagnosticSettingResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.eventhub(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ {
+ Config: r.requiresImport(data),
+ ExpectError: acceptance.RequiresImportError("azurerm_monitor_aad_diagnostic_setting"),
+ },
+ })
+}
+
+func TestAccMonitorAADDiagnosticSetting_logAnalyticsWorkspace(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_monitor_aad_diagnostic_setting", "test")
+ r := MonitorAADDiagnosticSettingResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.logAnalyticsWorkspace(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccMonitorAADDiagnosticSetting_storageAccount(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_monitor_aad_diagnostic_setting", "test")
+ r := MonitorAADDiagnosticSettingResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.storageAccount(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func (t MonitorAADDiagnosticSettingResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
+ id, err := parse.MonitorAADDiagnosticSettingID(state.ID)
+ if err != nil {
+ return nil, err
+ }
+ resp, err := clients.Monitor.AADDiagnosticSettingsClient.Get(ctx, id.Name)
+ if err != nil {
+ return nil, fmt.Errorf("reading %s: %+v", id, err)
+ }
+
+ return utils.Bool(resp.ID != nil), nil
+}
+
+func (MonitorAADDiagnosticSettingResource) eventhub(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%[1]d"
+ location = "%[2]s"
+}
+
+resource "azurerm_eventhub_namespace" "test" {
+ name = "acctest-EHN-%[1]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ sku = "Basic"
+}
+
+resource "azurerm_eventhub" "test" {
+ name = "acctest-EH-%[1]d"
+ namespace_name = azurerm_eventhub_namespace.test.name
+ resource_group_name = azurerm_resource_group.test.name
+ partition_count = 2
+ message_retention = 1
+}
+
+resource "azurerm_eventhub_namespace_authorization_rule" "test" {
+ name = "example"
+ namespace_name = azurerm_eventhub_namespace.test.name
+ resource_group_name = azurerm_resource_group.test.name
+ listen = true
+ send = true
+ manage = true
+}
+
+resource "azurerm_monitor_aad_diagnostic_setting" "test" {
+ name = "acctest-DS-%[1]d"
+ eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.test.id
+ eventhub_name = azurerm_eventhub.test.name
+ log {
+ category = "SignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "AuditLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "NonInteractiveUserSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ServicePrincipalSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ManagedIdentitySignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ProvisioningLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ADFSSignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+}
+
+`, data.RandomInteger, data.Locations.Primary)
+}
+
+func (MonitorAADDiagnosticSettingResource) eventhubDefault(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%[1]d"
+ location = "%[2]s"
+}
+
+resource "azurerm_eventhub_namespace" "test" {
+ name = "acctest-EHN-%[1]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ sku = "Basic"
+}
+
+resource "azurerm_eventhub_namespace_authorization_rule" "test" {
+ name = "example"
+ namespace_name = azurerm_eventhub_namespace.test.name
+ resource_group_name = azurerm_resource_group.test.name
+ listen = true
+ send = true
+ manage = true
+}
+
+resource "azurerm_monitor_aad_diagnostic_setting" "test" {
+ name = "acctest-DS-%[1]d"
+ eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.test.id
+ log {
+ category = "SignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "AuditLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "NonInteractiveUserSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ServicePrincipalSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ManagedIdentitySignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ProvisioningLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ADFSSignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+}
+
+`, data.RandomInteger, data.Locations.Primary)
+}
+
+func (r MonitorAADDiagnosticSettingResource) requiresImport(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_monitor_aad_diagnostic_setting" "import" {
+ name = azurerm_monitor_aad_diagnostic_setting.test.name
+ eventhub_authorization_rule_id = azurerm_monitor_aad_diagnostic_setting.test.eventhub_authorization_rule_id
+ eventhub_name = azurerm_monitor_aad_diagnostic_setting.test.eventhub_name
+
+ log {
+ category = "SignInLogs"
+ enabled = true
+ retention_policy {}
+ }
+}
+`, r.eventhub(data))
+}
+
+func (MonitorAADDiagnosticSettingResource) logAnalyticsWorkspace(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%[1]d"
+ location = "%[2]s"
+}
+
+resource "azurerm_log_analytics_workspace" "test" {
+ name = "acctest-LAW-%[1]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ sku = "PerGB2018"
+ retention_in_days = 30
+}
+
+resource "azurerm_monitor_aad_diagnostic_setting" "test" {
+ name = "acctest-DS-%[1]d"
+ log_analytics_workspace_id = azurerm_log_analytics_workspace.test.id
+ log {
+ category = "SignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "AuditLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "NonInteractiveUserSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ServicePrincipalSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ManagedIdentitySignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ProvisioningLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ADFSSignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+}
+`, data.RandomInteger, data.Locations.Primary)
+}
+
+func (MonitorAADDiagnosticSettingResource) storageAccount(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%[1]d"
+ location = "%[2]s"
+}
+
+resource "azurerm_storage_account" "test" {
+ name = "acctestsa%[3]s"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ account_tier = "Standard"
+ account_kind = "StorageV2"
+ account_replication_type = "LRS"
+}
+
+resource "azurerm_monitor_aad_diagnostic_setting" "test" {
+ name = "acctest-DS-%[1]d"
+ storage_account_id = azurerm_storage_account.test.id
+ log {
+ category = "SignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "AuditLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "NonInteractiveUserSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ServicePrincipalSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ManagedIdentitySignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ProvisioningLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ADFSSignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomStringOfLength(5))
+}
diff --git a/azurerm/internal/services/monitor/monitor_activity_log_alert_resource.go b/azurerm/internal/services/monitor/monitor_activity_log_alert_resource.go
index a8dc68829518b..534f74dfa3e1d 100644
--- a/azurerm/internal/services/monitor/monitor_activity_log_alert_resource.go
+++ b/azurerm/internal/services/monitor/monitor_activity_log_alert_resource.go
@@ -168,6 +168,7 @@ func resourceMonitorActivityLogAlert() *schema.Resource {
"Maintenance",
"Informational",
"ActionRequired",
+ "Security",
},
false,
),
diff --git a/azurerm/internal/services/monitor/monitor_activity_log_alert_resource_test.go b/azurerm/internal/services/monitor/monitor_activity_log_alert_resource_test.go
index 3a71ee520f249..a47c3bcc5d19b 100644
--- a/azurerm/internal/services/monitor/monitor_activity_log_alert_resource_test.go
+++ b/azurerm/internal/services/monitor/monitor_activity_log_alert_resource_test.go
@@ -474,7 +474,7 @@ resource "azurerm_monitor_activity_log_alert" "test" {
criteria {
category = "ServiceHealth"
service_health {
- events = ["Incident", "Maintenance", "ActionRequired"]
+ events = ["Incident", "Maintenance", "ActionRequired", "Security"]
services = ["Action Groups"]
locations = ["Global", "West Europe", "East US"]
}
diff --git a/azurerm/internal/services/monitor/parse/monitor_aad_diagnositc_setting_test.go b/azurerm/internal/services/monitor/parse/monitor_aad_diagnositc_setting_test.go
new file mode 100644
index 0000000000000..b6eda8eaaec63
--- /dev/null
+++ b/azurerm/internal/services/monitor/parse/monitor_aad_diagnositc_setting_test.go
@@ -0,0 +1,78 @@
+package parse
+
+import (
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/resourceid"
+)
+
+var _ resourceid.Formatter = MonitorAADDiagnosticSettingId{}
+
+func TestMonitorAADDiagnosticSettingIDFormatter(t *testing.T) {
+ actual := NewMonitorAADDiagnosticSettingID("setting1").ID()
+ expected := "/providers/Microsoft.AADIAM/diagnosticSettings/setting1"
+ if actual != expected {
+ t.Fatalf("Expected %q but got %q", expected, actual)
+ }
+}
+
+func TestMonitorAADDiagnosticSettingID(t *testing.T) {
+ testData := []struct {
+ Input string
+ Error bool
+ Expected *MonitorAADDiagnosticSettingId
+ }{
+
+ {
+ // empty
+ Input: "",
+ Error: true,
+ },
+
+ {
+ // missing prefix
+ Input: "/",
+ Error: true,
+ },
+
+ {
+ // missing value for Name
+ Input: "/providers/Microsoft.AADIAM/diagnosticSettings/",
+ Error: true,
+ },
+
+ {
+ // valid
+ Input: "/providers/Microsoft.AADIAM/diagnosticSettings/setting1",
+ Expected: &MonitorAADDiagnosticSettingId{
+ Name: "setting1",
+ },
+ },
+
+ {
+ // upper-cased
+ Input: "/PROVIDERS/MICROSOFT.AADIAM/DIAGNOSTICSETTINGS/SETTING1",
+ Error: true,
+ },
+ }
+
+ for _, v := range testData {
+ t.Logf("[DEBUG] Testing %q", v.Input)
+
+ actual, err := MonitorAADDiagnosticSettingID(v.Input)
+ if err != nil {
+ if v.Error {
+ continue
+ }
+
+ t.Fatalf("Expected a value but got an error: %s", err)
+ }
+ if v.Error {
+ t.Fatal("Expected an error but didn't get one")
+ }
+
+ if actual.Name != v.Expected.Name {
+ t.Fatalf("Expected %q but got %q for Name", v.Expected.Name, actual.Name)
+ }
+ }
+}
diff --git a/azurerm/internal/services/monitor/parse/monitor_aad_diagnostic_setting.go b/azurerm/internal/services/monitor/parse/monitor_aad_diagnostic_setting.go
new file mode 100644
index 0000000000000..58701848f7e41
--- /dev/null
+++ b/azurerm/internal/services/monitor/parse/monitor_aad_diagnostic_setting.go
@@ -0,0 +1,41 @@
+package parse
+
+import (
+ "fmt"
+ "strings"
+)
+
+const aadDiagnosticSettingIdPrefix = "/providers/Microsoft.AADIAM/diagnosticSettings/"
+
+type MonitorAADDiagnosticSettingId struct {
+ Name string
+}
+
+func NewMonitorAADDiagnosticSettingID(name string) MonitorAADDiagnosticSettingId {
+ return MonitorAADDiagnosticSettingId{Name: name}
+}
+
+func (id MonitorAADDiagnosticSettingId) String() string {
+ segments := []string{
+ fmt.Sprintf("Name %q", id.Name),
+ }
+ segmentsStr := strings.Join(segments, " / ")
+ return fmt.Sprintf("%s: (%s)", "Monitor AAD Diagnostic Setting", segmentsStr)
+}
+
+func (id MonitorAADDiagnosticSettingId) ID() string {
+ fmtString := aadDiagnosticSettingIdPrefix + "%s"
+ return fmt.Sprintf(fmtString, id.Name)
+}
+
+// MonitorAADDiagnosticSettingID parses a MonitorAADDiagnosticSetting ID into a MonitorAADDiagnosticSettingId struct
+func MonitorAADDiagnosticSettingID(input string) (*MonitorAADDiagnosticSettingId, error) {
+ if !strings.HasPrefix(input, aadDiagnosticSettingIdPrefix) {
+ return nil, fmt.Errorf("invalid Monitor AAD Diagnostic Setting ID - ID should start with %s", aadDiagnosticSettingIdPrefix)
+ }
+ name := strings.TrimPrefix(input, aadDiagnosticSettingIdPrefix)
+ if name == "" {
+ return nil, fmt.Errorf("ID was missing the 'diagnosticSettings' element")
+ }
+ return &MonitorAADDiagnosticSettingId{Name: name}, nil
+}
diff --git a/azurerm/internal/services/monitor/registration.go b/azurerm/internal/services/monitor/registration.go
index 237542f7bdcd8..2922eceabc659 100644
--- a/azurerm/internal/services/monitor/registration.go
+++ b/azurerm/internal/services/monitor/registration.go
@@ -32,6 +32,7 @@ func (r Registration) SupportedDataSources() map[string]*schema.Resource {
// SupportedResources returns the supported Resources supported by this Service
func (r Registration) SupportedResources() map[string]*schema.Resource {
return map[string]*schema.Resource{
+ "azurerm_monitor_aad_diagnostic_setting": resourceMonitorAADDiagnosticSetting(),
"azurerm_monitor_autoscale_setting": resourceMonitorAutoScaleSetting(),
"azurerm_monitor_action_group": resourceMonitorActionGroup(),
"azurerm_monitor_action_rule_action_group": resourceMonitorActionRuleActionGroup(),
diff --git a/azurerm/internal/services/monitor/validate/monitor_aad_diagnostic_setting_id.go b/azurerm/internal/services/monitor/validate/monitor_aad_diagnostic_setting_id.go
new file mode 100644
index 0000000000000..796ce4c7d83d7
--- /dev/null
+++ b/azurerm/internal/services/monitor/validate/monitor_aad_diagnostic_setting_id.go
@@ -0,0 +1,21 @@
+package validate
+
+import (
+ "fmt"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/monitor/parse"
+)
+
+func MonitorAADDiagnosticSettingID(input interface{}, key string) (warnings []string, errors []error) {
+ v, ok := input.(string)
+ if !ok {
+ errors = append(errors, fmt.Errorf("expected %q to be a string", key))
+ return
+ }
+
+ if _, err := parse.MonitorAADDiagnosticSettingID(v); err != nil {
+ errors = append(errors, err)
+ }
+
+ return
+}
diff --git a/azurerm/internal/services/monitor/validate/monitor_aad_diagnostic_setting_id_test.go b/azurerm/internal/services/monitor/validate/monitor_aad_diagnostic_setting_id_test.go
new file mode 100644
index 0000000000000..4d4ecce100b58
--- /dev/null
+++ b/azurerm/internal/services/monitor/validate/monitor_aad_diagnostic_setting_id_test.go
@@ -0,0 +1,49 @@
+package validate
+
+import "testing"
+
+func TestMonitorAADDiagnosticSettingID(t *testing.T) {
+ cases := []struct {
+ Input string
+ Valid bool
+ }{
+ {
+ // empty
+ Input: "",
+ Valid: false,
+ },
+
+ {
+ // missing prefix
+ Input: "/",
+ Valid: false,
+ },
+
+ {
+ // missing value for Name
+ Input: "/providers/Microsoft.AADIAM/diagnosticSettings/",
+ Valid: false,
+ },
+
+ {
+ // valid
+ Input: "/providers/Microsoft.AADIAM/diagnosticSettings/setting1",
+ Valid: true,
+ },
+
+ {
+ // upper-cased
+ Input: "/PROVIDERS/MICROSOFT.AADIAM/DIAGNOSTICSETTINGS/SETTING1",
+ Valid: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Logf("[DEBUG] Testing Value %s", tc.Input)
+ _, errors := MonitorAADDiagnosticSettingID(tc.Input, "test")
+ valid := len(errors) == 0
+
+ if tc.Valid != valid {
+ t.Fatalf("Expected %t but got %t", tc.Valid, valid)
+ }
+ }
+}
diff --git a/azurerm/internal/services/mssql/migration/database_v0_to_v1.go b/azurerm/internal/services/mssql/migration/database_v0_to_v1.go
index a90b5c5216120..4911e7e10f3ed 100644
--- a/azurerm/internal/services/mssql/migration/database_v0_to_v1.go
+++ b/azurerm/internal/services/mssql/migration/database_v0_to_v1.go
@@ -104,6 +104,7 @@ func databaseV0V1Schema() *schema.Resource {
Computed: true,
},
+ //lintignore:XS003
"long_term_retention_policy": {
Type: schema.TypeList,
Optional: true,
diff --git a/azurerm/internal/services/mssql/mssql_database_resource.go b/azurerm/internal/services/mssql/mssql_database_resource.go
index 5df0d30dfb35f..26c818c346766 100644
--- a/azurerm/internal/services/mssql/mssql_database_resource.go
+++ b/azurerm/internal/services/mssql/mssql_database_resource.go
@@ -10,6 +10,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/mssql/migration"
"github.com/Azure/azure-sdk-for-go/services/preview/sql/mgmt/v3.0/sql"
+ "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2020-06-01/resources"
"github.com/Azure/go-autorest/autorest/date"
"github.com/hashicorp/go-azure-helpers/response"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
@@ -39,6 +40,7 @@ func resourceMsSqlDatabase() *schema.Resource {
return err
}, func(ctx context.Context, d *pluginsdk.ResourceData, meta interface{}) ([]*pluginsdk.ResourceData, error) {
replicationLinksClient := meta.(*clients.Client).MSSQL.ReplicationLinksClient
+ resourceClient := meta.(*clients.Client).Resource.ResourcesClient
id, err := parse.DatabaseID(d.Id())
if err != nil {
@@ -50,9 +52,50 @@ func resourceMsSqlDatabase() *schema.Resource {
}
for _, link := range *resp.Value {
- props := *link.ReplicationLinkProperties
- if props.Role == sql.ReplicationRoleSecondary || props.Role == sql.ReplicationRoleNonReadableSecondary {
+ linkProps := *link.ReplicationLinkProperties
+ if linkProps.Role == sql.ReplicationRoleSecondary || linkProps.Role == sql.ReplicationRoleNonReadableSecondary {
d.Set("create_mode", string(sql.CreateModeSecondary))
+ log.Printf("[INFO] replication link found for %s MsSql Database %s (MsSql Server Name %q / Resource Group %q) with partner Database %q on MsSql Server %q", string(sql.CreateModeSecondary), id.Name, id.ServerName, id.ResourceGroup, *linkProps.PartnerDatabase, *linkProps.PartnerServer)
+
+ // get all SQL Servers with the name of the linked Primary
+ filter := fmt.Sprintf("(resourceType eq 'Microsoft.Sql/servers') and ((name eq '%s'))", *linkProps.PartnerServer)
+ resourcesIterator, err := resourceClient.ListComplete(ctx, filter, "", nil)
+ if err != nil {
+ return nil, fmt.Errorf("reading Linked Servers for MsSql Database %s (MsSql Server Name %q / Resource Group %q): %s", id.Name, id.ServerName, id.ResourceGroup, err)
+ }
+ var resourceList []resources.GenericResourceExpanded
+ for resourcesIterator.NotDone() {
+ resourceList = append(resourceList, resourcesIterator.Value())
+ if err := resourcesIterator.NextWithContext(ctx); err != nil {
+ return nil, fmt.Errorf("loading SQL Server List: %+v", err)
+ }
+ }
+
+ for _, server := range resourceList {
+ serverID, err := parse.ServerID(*server.ID)
+ if err != nil {
+ return nil, err
+ }
+
+ // check whether this server (named like the replication link's partner server) has a database named like the partner database, with a replication link
+ linksPossiblePrimary, err := replicationLinksClient.ListByDatabase(ctx, serverID.ResourceGroup, serverID.Name, *linkProps.PartnerDatabase)
+ if err != nil && !utils.ResponseWasNotFound(linksPossiblePrimary.Response) {
+ return nil, fmt.Errorf("reading Replication Links for MsSql Database %s (MsSql Server Name %q / Resource Group %q): %s", *linkProps.PartnerDatabase, serverID.Name, serverID.ResourceGroup, err)
+ }
+ if err != nil && utils.ResponseWasNotFound(linksPossiblePrimary.Response) {
+ log.Printf("[INFO] no replication link found for Database %q (MsSql Server %q / Resource Group %q): %s", *linkProps.PartnerDatabase, serverID.Name, serverID.ResourceGroup, err)
+ continue
+ }
+
+ for _, linkPossiblePrimary := range *linksPossiblePrimary.Value {
+ linkPropsPossiblePrimary := *linkPossiblePrimary.ReplicationLinkProperties
+
+ // check if the database has a replication link for a primary role and specific partner location
+ if linkPropsPossiblePrimary.Role == sql.ReplicationRolePrimary && *linkPossiblePrimary.Location == *linkProps.PartnerLocation {
+ d.Set("creation_source_database_id", parse.NewDatabaseID(serverID.SubscriptionId, serverID.ResourceGroup, serverID.Name, *linkProps.PartnerDatabase).ID())
+ }
+ }
+ }
return []*pluginsdk.ResourceData{d}, nil
}
}
diff --git a/azurerm/internal/services/mssql/mssql_database_resource_test.go b/azurerm/internal/services/mssql/mssql_database_resource_test.go
index a11f72d2ff04d..2a357cdd3b925 100644
--- a/azurerm/internal/services/mssql/mssql_database_resource_test.go
+++ b/azurerm/internal/services/mssql/mssql_database_resource_test.go
@@ -268,7 +268,7 @@ func TestAccMsSqlDatabase_createSecondaryMode(t *testing.T) {
check.That(data.ResourceName).Key("sku_name").HasValue("GP_Gen5_2"),
),
},
- data.ImportStep("creation_source_database_id", "sample_name"),
+ data.ImportStep("sample_name"),
})
}
@@ -286,7 +286,7 @@ func TestAccMsSqlDatabase_scaleReplicaSetWithFailovergroup(t *testing.T) {
check.That(data.ResourceName).Key("sku_name").HasValue("GP_Gen5_2"),
),
},
- data.ImportStep("creation_source_database_id"),
+ data.ImportStep(),
{
Config: r.scaleReplicaSetWithFailovergroup(data, "GP_Gen5_8", 25),
Check: resource.ComposeTestCheckFunc(
@@ -296,7 +296,7 @@ func TestAccMsSqlDatabase_scaleReplicaSetWithFailovergroup(t *testing.T) {
check.That(data.ResourceName).Key("sku_name").HasValue("GP_Gen5_8"),
),
},
- data.ImportStep("creation_source_database_id"),
+ data.ImportStep(),
{
Config: r.scaleReplicaSetWithFailovergroup(data, "GP_Gen5_2", 5),
Check: resource.ComposeTestCheckFunc(
@@ -306,7 +306,7 @@ func TestAccMsSqlDatabase_scaleReplicaSetWithFailovergroup(t *testing.T) {
check.That(data.ResourceName).Key("sku_name").HasValue("GP_Gen5_2"),
),
},
- data.ImportStep("creation_source_database_id"),
+ data.ImportStep(),
})
}
diff --git a/azurerm/internal/services/netapp/netapp_volume_data_source.go b/azurerm/internal/services/netapp/netapp_volume_data_source.go
index b6a6ead702491..7523740cfd519 100644
--- a/azurerm/internal/services/netapp/netapp_volume_data_source.go
+++ b/azurerm/internal/services/netapp/netapp_volume_data_source.go
@@ -78,6 +78,11 @@ func dataSourceNetAppVolume() *schema.Resource {
Elem: &schema.Schema{Type: schema.TypeString},
},
+ "security_style": {
+ Type: schema.TypeString,
+ Optional: true,
+ },
+
"data_protection_replication": {
Type: schema.TypeList,
Computed: true,
@@ -156,6 +161,8 @@ func dataSourceNetAppVolumeRead(d *schema.ResourceData, meta interface{}) error
}
d.Set("protocols", protocolTypes)
+ d.Set("security_style", props.SecurityStyle)
+
if props.UsageThreshold != nil {
d.Set("storage_quota_in_gb", *props.UsageThreshold/1073741824)
}
diff --git a/azurerm/internal/services/netapp/netapp_volume_resource.go b/azurerm/internal/services/netapp/netapp_volume_resource.go
index dec139fc2f226..7b7dbb6dd5a96 100644
--- a/azurerm/internal/services/netapp/netapp_volume_resource.go
+++ b/azurerm/internal/services/netapp/netapp_volume_resource.go
@@ -118,6 +118,17 @@ func resourceNetAppVolume() *schema.Resource {
},
},
+ "security_style": {
+ Type: schema.TypeString,
+ Optional: true,
+ ForceNew: true,
+ Computed: true,
+ ValidateFunc: validation.StringInSlice([]string{
+ "Unix", // Using hardcoded values instead of SDK enum since no matter what case is passed,
+ "Ntfs", // ANF changes casing to Pascal case in the backend. Please refer to https://github.com/Azure/azure-sdk-for-go/issues/14684
+ }, false),
+ },
+
"storage_quota_in_gb": {
Type: schema.TypeInt,
Required: true,
@@ -281,6 +292,15 @@ func resourceNetAppVolumeCreateUpdate(d *schema.ResourceData, meta interface{})
protocols = append(protocols, "NFSv3")
}
+ // Handling security style property
+ securityStyle := d.Get("security_style").(string)
+ if strings.EqualFold(securityStyle, "unix") && len(protocols) == 1 && strings.EqualFold(protocols[0].(string), "cifs") {
+ return fmt.Errorf("Unix security style cannot be used in a CIFS-enabled volume for volume %q (Resource Group %q)", name, resourceGroup)
+ }
+ if strings.EqualFold(securityStyle, "ntfs") && len(protocols) == 1 && (strings.EqualFold(protocols[0].(string), "nfsv3") || strings.EqualFold(protocols[0].(string), "nfsv4.1")) {
+ return fmt.Errorf("Ntfs security style cannot be used in an NFSv3/NFSv4.1-enabled volume for volume %q (Resource Group %q)", name, resourceGroup)
+ }
+
storageQuotaInGB := int64(d.Get("storage_quota_in_gb").(int) * 1073741824)
exportPolicyRuleRaw := d.Get("export_policy_rule").([]interface{})
@@ -370,6 +390,7 @@ func resourceNetAppVolumeCreateUpdate(d *schema.ResourceData, meta interface{})
ServiceLevel: netapp.ServiceLevel(serviceLevel),
SubnetID: utils.String(subnetID),
ProtocolTypes: utils.ExpandStringSlice(protocols),
+ SecurityStyle: netapp.SecurityStyle(securityStyle),
UsageThreshold: utils.Int64(storageQuotaInGB),
ExportPolicy: exportPolicyRule,
VolumeType: utils.String(volumeType),
@@ -464,6 +485,7 @@ func resourceNetAppVolumeRead(d *schema.ResourceData, meta interface{}) error {
d.Set("service_level", props.ServiceLevel)
d.Set("subnet_id", props.SubnetID)
d.Set("protocols", props.ProtocolTypes)
+ d.Set("security_style", props.SecurityStyle)
if props.UsageThreshold != nil {
d.Set("storage_quota_in_gb", *props.UsageThreshold/1073741824)
}
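The create/update path above rejects incompatible combinations (Unix with CIFS, Ntfs with NFSv3/NFSv4.1) before calling the API, using case-insensitive comparisons because the backend normalizes casing. A minimal self-contained sketch of that cross-validation — the function name and shape are illustrative, not the provider's:

```go
package main

import (
	"fmt"
	"strings"
)

// validateSecurityStyle mirrors the checks above: a Unix security style is
// incompatible with a CIFS-only volume, and Ntfs is incompatible with an
// NFSv3/NFSv4.1-only volume. Only single-protocol volumes are checked,
// matching the len(protocols) == 1 guard in the resource code.
func validateSecurityStyle(style string, protocols []string) error {
	if len(protocols) != 1 {
		return nil // multi-protocol volumes are not checked here
	}
	p := protocols[0]
	if strings.EqualFold(style, "unix") && strings.EqualFold(p, "cifs") {
		return fmt.Errorf("Unix security style cannot be used in CIFS enabled volume")
	}
	if strings.EqualFold(style, "ntfs") && (strings.EqualFold(p, "nfsv3") || strings.EqualFold(p, "nfsv4.1")) {
		return fmt.Errorf("Ntfs security style cannot be used in NFSv3/NFSv4.1 enabled volume")
	}
	return nil
}

func main() {
	fmt.Println(validateSecurityStyle("Unix", []string{"CIFS"}))    // rejected
	fmt.Println(validateSecurityStyle("Ntfs", []string{"NFSv4.1"})) // rejected
	fmt.Println(validateSecurityStyle("Unix", []string{"NFSv4.1"})) // allowed
}
```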
diff --git a/azurerm/internal/services/netapp/netapp_volume_resource_test.go b/azurerm/internal/services/netapp/netapp_volume_resource_test.go
index 2e32688cf8782..03a6c87f685d1 100644
--- a/azurerm/internal/services/netapp/netapp_volume_resource_test.go
+++ b/azurerm/internal/services/netapp/netapp_volume_resource_test.go
@@ -264,6 +264,7 @@ resource "azurerm_netapp_volume" "test" {
service_level = "Standard"
subnet_id = azurerm_subnet.test.id
protocols = ["NFSv4.1"]
+ security_style = "Unix"
storage_quota_in_gb = 100
export_policy_rule {
diff --git a/azurerm/internal/services/network/application_gateway_resource_test.go b/azurerm/internal/services/network/application_gateway_resource_test.go
index 7aa29a7cef3c3..8e096b9aac60c 100644
--- a/azurerm/internal/services/network/application_gateway_resource_test.go
+++ b/azurerm/internal/services/network/application_gateway_resource_test.go
@@ -1702,7 +1702,7 @@ resource "azurerm_key_vault" "test" {
tenant_id = "${data.azurerm_client_config.test.tenant_id}"
object_id = "${data.azurerm_client_config.test.object_id}"
secret_permissions = ["delete", "get", "set"]
- certificate_permissions = ["create", "delete", "get", "import"]
+ certificate_permissions = ["create", "delete", "get", "import", "purge"]
}
access_policy {
@@ -3470,7 +3470,7 @@ resource "azurerm_key_vault" "test" {
tenant_id = data.azurerm_client_config.test.tenant_id
object_id = data.azurerm_client_config.test.object_id
secret_permissions = ["delete", "get", "set"]
- certificate_permissions = ["create", "delete", "get", "import"]
+ certificate_permissions = ["create", "delete", "get", "import", "purge"]
}
access_policy {
@@ -3620,7 +3620,7 @@ resource "azurerm_key_vault" "test" {
tenant_id = data.azurerm_client_config.test.tenant_id
object_id = data.azurerm_client_config.test.object_id
secret_permissions = ["delete", "get", "set"]
- certificate_permissions = ["create", "delete", "get", "import"]
+ certificate_permissions = ["create", "delete", "get", "import", "purge"]
}
access_policy {
diff --git a/azurerm/internal/services/network/bastion_host_resource.go b/azurerm/internal/services/network/bastion_host_resource.go
index e5575e762fed9..d40f838f53552 100644
--- a/azurerm/internal/services/network/bastion_host_resource.go
+++ b/azurerm/internal/services/network/bastion_host_resource.go
@@ -61,16 +61,19 @@ func resourceBastionHost() *schema.Resource {
"name": {
Type: schema.TypeString,
Required: true,
+ ForceNew: true,
ValidateFunc: validate.BastionIPConfigName,
},
"subnet_id": {
Type: schema.TypeString,
Required: true,
+ ForceNew: true,
ValidateFunc: azure.ValidateResourceID,
},
"public_ip_address_id": {
Type: schema.TypeString,
Required: true,
+ ForceNew: true,
ValidateFunc: azure.ValidateResourceID,
},
},
diff --git a/azurerm/internal/services/network/network_security_rule_resource_test.go b/azurerm/internal/services/network/network_security_rule_resource_test.go
index f50b9f32af357..e8134d9120a1f 100644
--- a/azurerm/internal/services/network/network_security_rule_resource_test.go
+++ b/azurerm/internal/services/network/network_security_rule_resource_test.go
@@ -63,7 +63,7 @@ func TestAccNetworkSecurityRule_disappears(t *testing.T) {
}
func TestAccNetworkSecurityRule_addingRules(t *testing.T) {
- data := acceptance.BuildTestData(t, "azurerm_network_security_rule", "test")
+ data := acceptance.BuildTestData(t, "azurerm_network_security_rule", "test1")
r := NetworkSecurityRuleResource{}
data.ResourceTest(t, r, []resource.TestStep{
diff --git a/azurerm/internal/services/network/private_link_service_endpoint_connections_data_source_test.go b/azurerm/internal/services/network/private_link_service_endpoint_connections_data_source_test.go
index 0ccd3350ceceb..20fce6969d65c 100644
--- a/azurerm/internal/services/network/private_link_service_endpoint_connections_data_source_test.go
+++ b/azurerm/internal/services/network/private_link_service_endpoint_connections_data_source_test.go
@@ -43,5 +43,5 @@ data "azurerm_private_link_service_endpoint_connections" "test" {
service_id = azurerm_private_endpoint.test.private_service_connection.0.private_connection_resource_id
resource_group_name = azurerm_resource_group.test.name
}
-`, PrivateLinkServiceResource{}.basic(data))
+`, PrivateEndpointResource{}.basic(data))
}
diff --git a/azurerm/internal/services/network/public_ip_resource_test.go b/azurerm/internal/services/network/public_ip_resource_test.go
index b856ce1346d8b..7335e58469643 100644
--- a/azurerm/internal/services/network/public_ip_resource_test.go
+++ b/azurerm/internal/services/network/public_ip_resource_test.go
@@ -319,7 +319,7 @@ func TestAccPublicIpStatic_importIdError(t *testing.T) {
ImportState: true,
ImportStateVerify: true,
ImportStateId: fmt.Sprintf("/subscriptions/%s/resourceGroups/acctestRG-%d/providers/Microsoft.Network/publicIPAdresses/acctestpublicip-%d", os.Getenv("ARM_SUBSCRIPTION_ID"), data.RandomInteger, data.RandomInteger),
- ExpectError: regexp.MustCompile("Error parsing supplied resource id."),
+ ExpectError: regexp.MustCompile("Error: parsing Resource ID"),
},
})
}
diff --git a/azurerm/internal/services/network/route_filter_resource_test.go b/azurerm/internal/services/network/route_filter_resource_test.go
index a329efa90d2d4..e165790c6f01f 100644
--- a/azurerm/internal/services/network/route_filter_resource_test.go
+++ b/azurerm/internal/services/network/route_filter_resource_test.go
@@ -71,7 +71,8 @@ func TestAccRouteFilter_disappears(t *testing.T) {
data.ResourceTest(t, r, []resource.TestStep{
data.DisappearsStep(acceptance.DisappearsStepData{
- Config: r.basic,
+ Config: r.basic,
+ TestResource: r,
}),
})
}
diff --git a/azurerm/internal/services/network/virtual_network_gateway_connection_resource.go b/azurerm/internal/services/network/virtual_network_gateway_connection_resource.go
index 90ff2db87cb99..8f43e494e1ad8 100644
--- a/azurerm/internal/services/network/virtual_network_gateway_connection_resource.go
+++ b/azurerm/internal/services/network/virtual_network_gateway_connection_resource.go
@@ -341,6 +341,18 @@ func resourceVirtualNetworkGatewayConnectionCreateUpdate(d *schema.ResourceData,
return fmt.Errorf("Error waiting for completion of Virtual Network Gateway Connection %q (Resource Group %q): %+v", name, resGroup, err)
}
+ if properties.SharedKey != nil && !d.IsNewResource() {
+ future, err := client.SetSharedKey(ctx, resGroup, name, network.ConnectionSharedKey{
+ Value: properties.SharedKey,
+ })
+ if err != nil {
+ return fmt.Errorf("Updating Shared Key for Virtual Network Gateway Connection %q (Resource Group %q): %+v", name, resGroup, err)
+ }
+ if err = future.WaitForCompletionRef(ctx, client.Client); err != nil {
+ return fmt.Errorf("Waiting for updating Shared Key for Virtual Network Gateway Connection %q (Resource Group %q): %+v", name, resGroup, err)
+ }
+ }
+
read, err := client.Get(ctx, resGroup, name)
if err != nil {
return err
diff --git a/azurerm/internal/services/network/virtual_network_resource.go b/azurerm/internal/services/network/virtual_network_resource.go
index 8e4bf47ae9b17..b2f3eba89b991 100644
--- a/azurerm/internal/services/network/virtual_network_resource.go
+++ b/azurerm/internal/services/network/virtual_network_resource.go
@@ -100,10 +100,12 @@ func resourceVirtualNetwork() *schema.Resource {
},
},
+ // TODO 3.0: Remove this property
"vm_protection_enabled": {
- Type: schema.TypeBool,
- Optional: true,
- Default: false,
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ Deprecated: "This is deprecated in favor of `ddos_protection_plan`",
},
"guid": {
diff --git a/azurerm/internal/services/network/virtual_network_resource_test.go b/azurerm/internal/services/network/virtual_network_resource_test.go
index a70f011d17ba1..78128b03030c8 100644
--- a/azurerm/internal/services/network/virtual_network_resource_test.go
+++ b/azurerm/internal/services/network/virtual_network_resource_test.go
@@ -27,7 +27,7 @@ func TestAccVirtualNetwork_basic(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("subnet.#").HasValue("1"),
- check.That(data.ResourceName).Key("subnet.1472110187.id").Exists(),
+ check.That(data.ResourceName).Key("subnet.0.id").Exists(),
),
},
data.ImportStep(),
@@ -59,15 +59,16 @@ func TestAccVirtualNetwork_basicUpdated(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("subnet.#").HasValue("1"),
- check.That(data.ResourceName).Key("subnet.1472110187.id").Exists(),
+ check.That(data.ResourceName).Key("subnet.0.id").Exists(),
),
},
+ data.ImportStep(),
{
Config: r.complete(data),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("subnet.#").HasValue("2"),
- check.That(data.ResourceName).Key("subnet.1472110187.id").Exists(),
+ check.That(data.ResourceName).Key("subnet.0.id").Exists(),
),
},
data.ImportStep(),
@@ -131,7 +132,7 @@ func TestAccVirtualNetwork_withTags(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("subnet.#").HasValue("1"),
- check.That(data.ResourceName).Key("subnet.1472110187.id").Exists(),
+ check.That(data.ResourceName).Key("subnet.0.id").Exists(),
check.That(data.ResourceName).Key("tags.%").HasValue("2"),
check.That(data.ResourceName).Key("tags.environment").HasValue("Production"),
check.That(data.ResourceName).Key("tags.cost_center").HasValue("MSFT"),
@@ -142,7 +143,7 @@ func TestAccVirtualNetwork_withTags(t *testing.T) {
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
check.That(data.ResourceName).Key("subnet.#").HasValue("1"),
- check.That(data.ResourceName).Key("subnet.1472110187.id").Exists(),
+ check.That(data.ResourceName).Key("subnet.0.id").Exists(),
check.That(data.ResourceName).Key("tags.%").HasValue("1"),
check.That(data.ResourceName).Key("tags.environment").HasValue("staging"),
),
@@ -202,42 +203,6 @@ func TestAccVirtualNetwork_bgpCommunity(t *testing.T) {
})
}
-func TestAccVirtualNetwork_vmProtection(t *testing.T) {
- data := acceptance.BuildTestData(t, "azurerm_virtual_network", "test")
- r := VirtualNetworkResource{}
-
- data.ResourceTest(t, r, []resource.TestStep{
- {
- Config: r.basic(data),
- Check: resource.ComposeTestCheckFunc(
- check.That(data.ResourceName).ExistsInAzure(r),
- ),
- },
- data.ImportStep(),
- {
- Config: r.vmProtection(data, true),
- Check: resource.ComposeTestCheckFunc(
- check.That(data.ResourceName).ExistsInAzure(r),
- ),
- },
- data.ImportStep(),
- {
- Config: r.vmProtection(data, false),
- Check: resource.ComposeTestCheckFunc(
- check.That(data.ResourceName).ExistsInAzure(r),
- ),
- },
- data.ImportStep(),
- {
- Config: r.basic(data),
- Check: resource.ComposeTestCheckFunc(
- check.That(data.ResourceName).ExistsInAzure(r),
- ),
- },
- data.ImportStep(),
- })
-}
-
func (t VirtualNetworkResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
id, err := parse.VirtualNetworkID(state.ID)
if err != nil {
@@ -486,30 +451,3 @@ resource "azurerm_virtual_network" "test" {
}
`, data.RandomInteger, data.Locations.Primary, data.RandomInteger)
}
-
-func (VirtualNetworkResource) vmProtection(data acceptance.TestData, enabled bool) string {
- return fmt.Sprintf(`
-provider "azurerm" {
- features {}
-}
-
-resource "azurerm_resource_group" "test" {
- name = "acctestRG-%d"
- location = "%s"
-}
-
-resource "azurerm_virtual_network" "test" {
- name = "acctestvirtnet%d"
- address_space = ["10.0.0.0/16"]
- location = azurerm_resource_group.test.location
- resource_group_name = azurerm_resource_group.test.name
-
- subnet {
- name = "subnet1"
- address_prefix = "10.0.1.0/24"
- }
-
- vm_protection_enabled = %t
-}
-`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, enabled)
-}
diff --git a/azurerm/internal/services/postgres/postgresql_server_resource_test.go b/azurerm/internal/services/postgres/postgresql_server_resource_test.go
index 85b0f9b2297d8..85e481fd5a5a6 100644
--- a/azurerm/internal/services/postgres/postgresql_server_resource_test.go
+++ b/azurerm/internal/services/postgres/postgresql_server_resource_test.go
@@ -388,7 +388,7 @@ func TestAccPostgreSQLServer_scaleReplicas(t *testing.T) {
func TestAccPostgreSQLServer_createPointInTimeRestore(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_postgresql_server", "test")
r := PostgreSQLServerResource{}
- restoreTime := time.Now().Add(11 * time.Minute)
+ restoreTime := time.Now().Add(30 * time.Minute)
data.ResourceTest(t, r, []resource.TestStep{
{
@@ -399,7 +399,7 @@ func TestAccPostgreSQLServer_createPointInTimeRestore(t *testing.T) {
},
data.ImportStep("administrator_login_password"),
{
- PreConfig: func() { time.Sleep(restoreTime.Sub(time.Now().Add(-7 * time.Minute))) },
+ PreConfig: func() { time.Sleep(30 * time.Minute) },
Config: r.createPointInTimeRestore(data, "11", restoreTime.Format(time.RFC3339)),
Check: resource.ComposeTestCheckFunc(
check.That(data.ResourceName).ExistsInAzure(r),
diff --git a/azurerm/internal/services/redis/redis_cache_resource.go b/azurerm/internal/services/redis/redis_cache_resource.go
index 263b0d866ad18..72b41b803cf6d 100644
--- a/azurerm/internal/services/redis/redis_cache_resource.go
+++ b/azurerm/internal/services/redis/redis_cache_resource.go
@@ -279,6 +279,13 @@ func resourceRedisCache() *schema.Resource {
Default: true,
},
+ "replicas_per_master": {
+ Type: schema.TypeInt,
+ Optional: true,
+ // Can't make more than 3 replicas in portal, assuming it's a limitation
+ ValidateFunc: validation.IntBetween(1, 3),
+ },
+
"tags": tags.Schema(),
},
}
@@ -345,6 +352,10 @@ func resourceRedisCacheCreate(d *schema.ResourceData, meta interface{}) error {
parameters.ShardCount = &shardCount
}
+ if v, ok := d.GetOk("replicas_per_master"); ok {
+ parameters.ReplicasPerMaster = utils.Int32(int32(v.(int)))
+ }
+
if v, ok := d.GetOk("private_static_ip_address"); ok {
parameters.StaticIP = utils.String(v.(string))
}
@@ -442,6 +453,12 @@ func resourceRedisCacheUpdate(d *schema.ResourceData, meta interface{}) error {
}
}
+ if v, ok := d.GetOk("replicas_per_master"); ok {
+ if d.HasChange("replicas_per_master") {
+ parameters.ReplicasPerMaster = utils.Int32(int32(v.(int)))
+ }
+ }
+
if d.HasChange("public_network_access_enabled") {
publicNetworkAccess := redis.Enabled
if !d.Get("public_network_access_enabled").(bool) {
@@ -562,6 +579,7 @@ func resourceRedisCacheRead(d *schema.ResourceData, meta interface{}) error {
d.Set("subnet_id", subnetId)
d.Set("public_network_access_enabled", props.PublicNetworkAccess == redis.Enabled)
+ d.Set("replicas_per_master", props.ReplicasPerMaster)
}
redisConfiguration, err := flattenRedisConfiguration(resp.RedisConfiguration)
diff --git a/azurerm/internal/services/redis/redis_cache_resource_test.go b/azurerm/internal/services/redis/redis_cache_resource_test.go
index 04dfbd8c44023..bf2606024e81f 100644
--- a/azurerm/internal/services/redis/redis_cache_resource_test.go
+++ b/azurerm/internal/services/redis/redis_cache_resource_test.go
@@ -397,6 +397,20 @@ func TestAccRedisCache_WithoutAuth(t *testing.T) {
})
}
+func TestAccRedisCache_ReplicasPerMaster(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_redis_cache", "test")
+ r := RedisCacheResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.replicasPerMaster(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ })
+}
+
func (t RedisCacheResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
id, err := parse.CacheID(state.ID)
if err != nil {
@@ -1002,6 +1016,30 @@ resource "azurerm_redis_cache" "test" {
`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger)
}
+func (RedisCacheResource) replicasPerMaster(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-redis-%d"
+ location = "%s"
+}
+
+resource "azurerm_redis_cache" "test" {
+ name = "acctestRedis-%d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ capacity = 3
+ family = "P"
+ sku_name = "Premium"
+ enable_non_ssl_port = false
+ replicas_per_master = 3
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger)
+}
+
func testCheckSSLInConnectionString(resourceName string, propertyName string, requireSSL bool) resource.TestCheckFunc {
return func(s *terraform.State) error {
// Ensure we have enough information in state to look up in API
diff --git a/azurerm/internal/services/redisenterprise/redis_enterprise_database_data_source.go b/azurerm/internal/services/redisenterprise/redis_enterprise_database_data_source.go
new file mode 100644
index 0000000000000..9480d318be841
--- /dev/null
+++ b/azurerm/internal/services/redisenterprise/redis_enterprise_database_data_source.go
@@ -0,0 +1,77 @@
+package redisenterprise
+
+import (
+ "fmt"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/redisenterprise/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/redisenterprise/validate"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
+)
+
+func dataSourceRedisEnterpriseDatabase() *schema.Resource {
+ return &schema.Resource{
+ Read: dataSourceRedisEnterpriseDatabaseRead,
+
+ Timeouts: &schema.ResourceTimeout{
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ },
+
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ },
+
+ "resource_group_name": azure.SchemaResourceGroupNameForDataSource(),
+
+ "cluster_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: validate.RedisEnterpriseClusterID,
+ },
+
+ "primary_access_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+
+ "secondary_access_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+ },
+ }
+}
+func dataSourceRedisEnterpriseDatabaseRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).RedisEnterprise.DatabaseClient
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ clusterId, err := parse.RedisEnterpriseClusterID(d.Get("cluster_id").(string))
+ if err != nil {
+ return err
+ }
+
+ id := parse.NewRedisEnterpriseDatabaseID(subscriptionId, d.Get("resource_group_name").(string), clusterId.RedisEnterpriseName, d.Get("name").(string))
+
+ keysResp, err := client.ListKeys(ctx, id.ResourceGroup, id.RedisEnterpriseName, id.DatabaseName)
+ if err != nil {
+ return fmt.Errorf("listing keys for Redis Enterprise Database %q (Resource Group %q / Cluster Name %q): %+v", id.DatabaseName, id.ResourceGroup, id.RedisEnterpriseName, err)
+ }
+
+ d.SetId(id.ID())
+ d.Set("name", id.DatabaseName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("cluster_id", clusterId.ID())
+ d.Set("primary_access_key", keysResp.PrimaryKey)
+ d.Set("secondary_access_key", keysResp.SecondaryKey)
+
+ return nil
+}
diff --git a/azurerm/internal/services/redisenterprise/redis_enterprise_database_data_source_test.go b/azurerm/internal/services/redisenterprise/redis_enterprise_database_data_source_test.go
new file mode 100644
index 0000000000000..ddb0b2ee9c413
--- /dev/null
+++ b/azurerm/internal/services/redisenterprise/redis_enterprise_database_data_source_test.go
@@ -0,0 +1,47 @@
+package redisenterprise_test
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+)
+
+type RedisEnterpriseDatabaseDataSource struct {
+}
+
+func TestAccRedisEnterpriseDatabaseDataSource_standard(t *testing.T) {
+ data := acceptance.BuildTestData(t, "data.azurerm_redis_enterprise_database", "test")
+ r := RedisEnterpriseDatabaseDataSource{}
+
+ resourceGroupName := fmt.Sprintf("acctestRG-redisEnterprise-%d", data.RandomInteger)
+
+ data.DataSourceTest(t, []resource.TestStep{
+ {
+ Config: r.dataSource(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).Key("name").HasValue("default"),
+ check.That(data.ResourceName).Key("resource_group_name").HasValue(resourceGroupName),
+ check.That(data.ResourceName).Key("cluster_id").Exists(),
+ check.That(data.ResourceName).Key("primary_access_key").Exists(),
+ check.That(data.ResourceName).Key("secondary_access_key").Exists(),
+ ),
+ },
+ })
+}
+
+func (r RedisEnterpriseDatabaseDataSource) dataSource(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+data "azurerm_redis_enterprise_database" "test" {
+ depends_on = [azurerm_redis_enterprise_database.test]
+
+ name = "default"
+ resource_group_name = azurerm_resource_group.test.name
+ cluster_id = azurerm_redis_enterprise_cluster.test.id
+}
+`, RedisenterpriseDatabaseResource{}.basic(data))
+}
diff --git a/azurerm/internal/services/redisenterprise/redis_enterprise_database_resource.go b/azurerm/internal/services/redisenterprise/redis_enterprise_database_resource.go
index 2a475a423ed47..5f0e208a41e59 100644
--- a/azurerm/internal/services/redisenterprise/redis_enterprise_database_resource.go
+++ b/azurerm/internal/services/redisenterprise/redis_enterprise_database_resource.go
@@ -175,6 +175,18 @@ func resourceRedisEnterpriseDatabase() *schema.Resource {
Default: 10000,
ValidateFunc: validation.IntBetween(0, 65353),
},
+
+ "primary_access_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+
+ "secondary_access_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
},
}
}
@@ -259,6 +271,11 @@ func resourceRedisEnterpriseDatabaseRead(d *schema.ResourceData, meta interface{
return fmt.Errorf("retrieving Redis Enterprise Database %q (Resource Group %q / Cluster Name %q): %+v", id.DatabaseName, id.ResourceGroup, id.RedisEnterpriseName, err)
}
+ keysResp, err := client.ListKeys(ctx, id.ResourceGroup, id.RedisEnterpriseName, id.DatabaseName)
+ if err != nil {
+ return fmt.Errorf("listing keys for Redis Enterprise Database %q (Resource Group %q / Cluster Name %q): %+v", id.DatabaseName, id.ResourceGroup, id.RedisEnterpriseName, err)
+ }
+
d.Set("name", id.DatabaseName)
d.Set("resource_group_name", id.ResourceGroup)
d.Set("cluster_id", parse.NewRedisEnterpriseClusterID(id.SubscriptionId, id.ResourceGroup, id.RedisEnterpriseName).ID())
@@ -276,6 +293,9 @@ func resourceRedisEnterpriseDatabaseRead(d *schema.ResourceData, meta interface{
d.Set("port", props.Port)
}
+ d.Set("primary_access_key", keysResp.PrimaryKey)
+ d.Set("secondary_access_key", keysResp.SecondaryKey)
+
return nil
}
@@ -342,6 +362,14 @@ func flattenArmDatabaseModuleArray(input *[]redisenterprise.Module) []interface{
args := ""
if item.Args != nil {
args = *item.Args
+ // New behavior: if you do not pass args, the RP sets them to "PARTITIONS AUTO" by default
+ // (for RediSearch), which causes the database to be ForceNew on every plan after creation.
+ // This feels like an RP bug, but this workaround avoids the perpetual diff.
+ // NOTE: You also cannot explicitly set the args to "PARTITIONS AUTO" yourself, else you will get an error on create:
+ // Code="InvalidRequestBody" Message="The value of the parameter 'properties.modules' is invalid."
+ if strings.EqualFold(args, "PARTITIONS AUTO") {
+ args = ""
+ }
}
var version string
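The workaround above suppresses a perpetual diff by treating the server-injected default (`PARTITIONS AUTO`) as equivalent to the empty value the user actually configured. The general pattern — normalize server-populated defaults before writing them to state — can be sketched standalone (helper name is illustrative, not the provider's):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeModuleArgs drops args the RP injects by default ("PARTITIONS AUTO"
// for RediSearch) so that a user who set no args sees no diff on later plans.
// The comparison is case-insensitive, matching strings.EqualFold above.
func normalizeModuleArgs(args string) string {
	if strings.EqualFold(args, "PARTITIONS AUTO") {
		return ""
	}
	return args
}

func main() {
	fmt.Printf("%q\n", normalizeModuleArgs("PARTITIONS AUTO")) // server default -> ""
	fmt.Printf("%q\n", normalizeModuleArgs("PARTITIONS 3"))    // user value kept as-is
}
```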
diff --git a/azurerm/internal/services/redisenterprise/registration.go b/azurerm/internal/services/redisenterprise/registration.go
index 02b9036c44b1e..f9d098b56145e 100644
--- a/azurerm/internal/services/redisenterprise/registration.go
+++ b/azurerm/internal/services/redisenterprise/registration.go
@@ -20,7 +20,9 @@ func (r Registration) WebsiteCategories() []string {
// SupportedDataSources returns the supported Data Sources supported by this Service
func (r Registration) SupportedDataSources() map[string]*schema.Resource {
- return nil
+ return map[string]*schema.Resource{
+ "azurerm_redis_enterprise_database": dataSourceRedisEnterpriseDatabase(),
+ }
}
// SupportedResources returns the supported Resources supported by this Service
diff --git a/azurerm/internal/services/sentinel/sentinel_alert_rule_scheduled_resource.go b/azurerm/internal/services/sentinel/sentinel_alert_rule_scheduled_resource.go
index 88522f84a69d5..28aab68f7a88a 100644
--- a/azurerm/internal/services/sentinel/sentinel_alert_rule_scheduled_resource.go
+++ b/azurerm/internal/services/sentinel/sentinel_alert_rule_scheduled_resource.go
@@ -109,6 +109,7 @@ func resourceSentinelAlertRuleScheduled() *schema.Resource {
string(securityinsight.AttackTacticLateralMovement),
string(securityinsight.AttackTacticPersistence),
string(securityinsight.AttackTacticPrivilegeEscalation),
+ string(securityinsight.AttackTacticPreAttack),
}, false),
},
},
diff --git a/azurerm/internal/services/servicebus/client/client.go b/azurerm/internal/services/servicebus/client/client.go
index a3e85213e7c14..6ed9e714cddc5 100644
--- a/azurerm/internal/services/servicebus/client/client.go
+++ b/azurerm/internal/services/servicebus/client/client.go
@@ -7,18 +7,22 @@ import (
)
type Client struct {
- QueuesClient *servicebus.QueuesClient
- NamespacesClient *servicebus.NamespacesClient
- NamespacesClientPreview *servicebusPreview.NamespacesClient
- TopicsClient *servicebus.TopicsClient
- SubscriptionsClient *servicebus.SubscriptionsClient
- SubscriptionRulesClient *servicebus.RulesClient
+ QueuesClient *servicebus.QueuesClient
+ DisasterRecoveryConfigsClient *servicebus.DisasterRecoveryConfigsClient
+ NamespacesClient *servicebus.NamespacesClient
+ NamespacesClientPreview *servicebusPreview.NamespacesClient
+ TopicsClient *servicebus.TopicsClient
+ SubscriptionsClient *servicebus.SubscriptionsClient
+ SubscriptionRulesClient *servicebus.RulesClient
}
func NewClient(o *common.ClientOptions) *Client {
QueuesClient := servicebus.NewQueuesClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
o.ConfigureClient(&QueuesClient.Client, o.ResourceManagerAuthorizer)
+ DisasterRecoveryConfigsClient := servicebus.NewDisasterRecoveryConfigsClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
+ o.ConfigureClient(&DisasterRecoveryConfigsClient.Client, o.ResourceManagerAuthorizer)
+
NamespacesClient := servicebus.NewNamespacesClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
o.ConfigureClient(&NamespacesClient.Client, o.ResourceManagerAuthorizer)
@@ -35,11 +39,12 @@ func NewClient(o *common.ClientOptions) *Client {
o.ConfigureClient(&SubscriptionRulesClient.Client, o.ResourceManagerAuthorizer)
return &Client{
- QueuesClient: &QueuesClient,
- NamespacesClient: &NamespacesClient,
- NamespacesClientPreview: &NamespacesClientPreview,
- TopicsClient: &TopicsClient,
- SubscriptionsClient: &SubscriptionsClient,
- SubscriptionRulesClient: &SubscriptionRulesClient,
+ QueuesClient: &QueuesClient,
+ DisasterRecoveryConfigsClient: &DisasterRecoveryConfigsClient,
+ NamespacesClient: &NamespacesClient,
+ NamespacesClientPreview: &NamespacesClientPreview,
+ TopicsClient: &TopicsClient,
+ SubscriptionsClient: &SubscriptionsClient,
+ SubscriptionRulesClient: &SubscriptionRulesClient,
}
}
diff --git a/azurerm/internal/services/servicebus/parse/namespace_disaster_recovery_config.go b/azurerm/internal/services/servicebus/parse/namespace_disaster_recovery_config.go
new file mode 100644
index 0000000000000..e174ca05a25c9
--- /dev/null
+++ b/azurerm/internal/services/servicebus/parse/namespace_disaster_recovery_config.go
@@ -0,0 +1,75 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+)
+
+type NamespaceDisasterRecoveryConfigId struct {
+ SubscriptionId string
+ ResourceGroup string
+ NamespaceName string
+ DisasterRecoveryConfigName string
+}
+
+func NewNamespaceDisasterRecoveryConfigID(subscriptionId, resourceGroup, namespaceName, disasterRecoveryConfigName string) NamespaceDisasterRecoveryConfigId {
+ return NamespaceDisasterRecoveryConfigId{
+ SubscriptionId: subscriptionId,
+ ResourceGroup: resourceGroup,
+ NamespaceName: namespaceName,
+ DisasterRecoveryConfigName: disasterRecoveryConfigName,
+ }
+}
+
+func (id NamespaceDisasterRecoveryConfigId) String() string {
+ segments := []string{
+ fmt.Sprintf("Disaster Recovery Config Name %q", id.DisasterRecoveryConfigName),
+ fmt.Sprintf("Namespace Name %q", id.NamespaceName),
+ fmt.Sprintf("Resource Group %q", id.ResourceGroup),
+ }
+ segmentsStr := strings.Join(segments, " / ")
+ return fmt.Sprintf("%s: (%s)", "Namespace Disaster Recovery Config", segmentsStr)
+}
+
+func (id NamespaceDisasterRecoveryConfigId) ID() string {
+ fmtString := "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.ServiceBus/namespaces/%s/disasterRecoveryConfigs/%s"
+ return fmt.Sprintf(fmtString, id.SubscriptionId, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName)
+}
+
+// NamespaceDisasterRecoveryConfigID parses a NamespaceDisasterRecoveryConfig ID into an NamespaceDisasterRecoveryConfigId struct
+func NamespaceDisasterRecoveryConfigID(input string) (*NamespaceDisasterRecoveryConfigId, error) {
+ id, err := azure.ParseAzureResourceID(input)
+ if err != nil {
+ return nil, err
+ }
+
+ resourceId := NamespaceDisasterRecoveryConfigId{
+ SubscriptionId: id.SubscriptionID,
+ ResourceGroup: id.ResourceGroup,
+ }
+
+ if resourceId.SubscriptionId == "" {
+ return nil, fmt.Errorf("ID was missing the 'subscriptions' element")
+ }
+
+ if resourceId.ResourceGroup == "" {
+ return nil, fmt.Errorf("ID was missing the 'resourceGroups' element")
+ }
+
+ if resourceId.NamespaceName, err = id.PopSegment("namespaces"); err != nil {
+ return nil, err
+ }
+ if resourceId.DisasterRecoveryConfigName, err = id.PopSegment("disasterRecoveryConfigs"); err != nil {
+ return nil, err
+ }
+
+ if err := id.ValidateNoEmptySegments(input); err != nil {
+ return nil, err
+ }
+
+ return &resourceId, nil
+}
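The generated parser above delegates to `azure.ParseAzureResourceID` and pops the `namespaces` and `disasterRecoveryConfigs` segments. The underlying technique — treating the resource ID path as alternating key/value segments — can be sketched standalone (simplified; the real helper also validates segment casing and rejects leftover segments):

```go
package main

import (
	"fmt"
	"strings"
)

// parseIDSegments splits an Azure resource ID into key/value pairs, e.g.
// "/subscriptions/abc/resourceGroups/rg1/..." -> {"subscriptions": "abc", ...}.
// An odd segment count or an empty segment is treated as a malformed ID.
func parseIDSegments(id string) (map[string]string, error) {
	parts := strings.Split(strings.Trim(id, "/"), "/")
	if len(parts)%2 != 0 {
		return nil, fmt.Errorf("ID has an odd number of segments: %q", id)
	}
	out := make(map[string]string, len(parts)/2)
	for i := 0; i < len(parts); i += 2 {
		if parts[i] == "" || parts[i+1] == "" {
			return nil, fmt.Errorf("ID contains an empty segment: %q", id)
		}
		out[parts[i]] = parts[i+1]
	}
	return out, nil
}

func main() {
	segs, err := parseIDSegments("/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/disasterRecoveryConfigs/aliasName1")
	fmt.Println(err, segs["namespaces"], segs["disasterRecoveryConfigs"])
}
```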
diff --git a/azurerm/internal/services/servicebus/parse/namespace_disaster_recovery_config_test.go b/azurerm/internal/services/servicebus/parse/namespace_disaster_recovery_config_test.go
new file mode 100644
index 0000000000000..d21592a78430c
--- /dev/null
+++ b/azurerm/internal/services/servicebus/parse/namespace_disaster_recovery_config_test.go
@@ -0,0 +1,128 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/resourceid"
+)
+
+var _ resourceid.Formatter = NamespaceDisasterRecoveryConfigId{}
+
+func TestNamespaceDisasterRecoveryConfigIDFormatter(t *testing.T) {
+ actual := NewNamespaceDisasterRecoveryConfigID("12345678-1234-9876-4563-123456789012", "resGroup1", "namespace1", "aliasName1").ID()
+ expected := "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/disasterRecoveryConfigs/aliasName1"
+ if actual != expected {
+ t.Fatalf("Expected %q but got %q", expected, actual)
+ }
+}
+
+func TestNamespaceDisasterRecoveryConfigID(t *testing.T) {
+ testData := []struct {
+ Input string
+ Error bool
+ Expected *NamespaceDisasterRecoveryConfigId
+ }{
+
+ {
+ // empty
+ Input: "",
+ Error: true,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Error: true,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Error: true,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/",
+ Error: true,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/",
+ Error: true,
+ },
+
+ {
+ // missing NamespaceName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/",
+ Error: true,
+ },
+
+ {
+ // missing value for NamespaceName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/",
+ Error: true,
+ },
+
+ {
+ // missing DisasterRecoveryConfigName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/",
+ Error: true,
+ },
+
+ {
+ // missing value for DisasterRecoveryConfigName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/disasterRecoveryConfigs/",
+ Error: true,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/disasterRecoveryConfigs/aliasName1",
+ Expected: &NamespaceDisasterRecoveryConfigId{
+ SubscriptionId: "12345678-1234-9876-4563-123456789012",
+ ResourceGroup: "resGroup1",
+ NamespaceName: "namespace1",
+ DisasterRecoveryConfigName: "aliasName1",
+ },
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/12345678-1234-9876-4563-123456789012/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/NAMESPACE1/DISASTERRECOVERYCONFIGS/ALIASNAME1",
+ Error: true,
+ },
+ }
+
+ for _, v := range testData {
+ t.Logf("[DEBUG] Testing %q", v.Input)
+
+ actual, err := NamespaceDisasterRecoveryConfigID(v.Input)
+ if err != nil {
+ if v.Error {
+ continue
+ }
+
+ t.Fatalf("Expected a value but got an error: %s", err)
+ }
+ if v.Error {
+ t.Fatal("Expected an error but didn't get one")
+ }
+
+ if actual.SubscriptionId != v.Expected.SubscriptionId {
+ t.Fatalf("Expected %q but got %q for SubscriptionId", v.Expected.SubscriptionId, actual.SubscriptionId)
+ }
+ if actual.ResourceGroup != v.Expected.ResourceGroup {
+ t.Fatalf("Expected %q but got %q for ResourceGroup", v.Expected.ResourceGroup, actual.ResourceGroup)
+ }
+ if actual.NamespaceName != v.Expected.NamespaceName {
+ t.Fatalf("Expected %q but got %q for NamespaceName", v.Expected.NamespaceName, actual.NamespaceName)
+ }
+ if actual.DisasterRecoveryConfigName != v.Expected.DisasterRecoveryConfigName {
+ t.Fatalf("Expected %q but got %q for DisasterRecoveryConfigName", v.Expected.DisasterRecoveryConfigName, actual.DisasterRecoveryConfigName)
+ }
+ }
+}
diff --git a/azurerm/internal/services/servicebus/registration.go b/azurerm/internal/services/servicebus/registration.go
index 23505d9620635..78064ad0179d7 100644
--- a/azurerm/internal/services/servicebus/registration.go
+++ b/azurerm/internal/services/servicebus/registration.go
@@ -22,27 +22,29 @@ func (r Registration) WebsiteCategories() []string {
// SupportedDataSources returns the supported Data Sources supported by this Service
func (r Registration) SupportedDataSources() map[string]*schema.Resource {
return map[string]*schema.Resource{
- "azurerm_servicebus_namespace": dataSourceServiceBusNamespace(),
- "azurerm_servicebus_namespace_authorization_rule": dataSourceServiceBusNamespaceAuthorizationRule(),
- "azurerm_servicebus_topic_authorization_rule": dataSourceServiceBusTopicAuthorizationRule(),
- "azurerm_servicebus_queue_authorization_rule": dataSourceServiceBusQueueAuthorizationRule(),
- "azurerm_servicebus_subscription": dataSourceServiceBusSubscription(),
- "azurerm_servicebus_topic": dataSourceServiceBusTopic(),
- "azurerm_servicebus_queue": dataSourceServiceBusQueue(),
+ "azurerm_servicebus_namespace": dataSourceServiceBusNamespace(),
+ "azurerm_servicebus_namespace_disaster_recovery_config": dataSourceServiceBusNamespaceDisasterRecoveryConfig(),
+ "azurerm_servicebus_namespace_authorization_rule": dataSourceServiceBusNamespaceAuthorizationRule(),
+ "azurerm_servicebus_topic_authorization_rule": dataSourceServiceBusTopicAuthorizationRule(),
+ "azurerm_servicebus_queue_authorization_rule": dataSourceServiceBusQueueAuthorizationRule(),
+ "azurerm_servicebus_subscription": dataSourceServiceBusSubscription(),
+ "azurerm_servicebus_topic": dataSourceServiceBusTopic(),
+ "azurerm_servicebus_queue": dataSourceServiceBusQueue(),
}
}
// SupportedResources returns the supported Resources supported by this Service
func (r Registration) SupportedResources() map[string]*schema.Resource {
return map[string]*schema.Resource{
- "azurerm_servicebus_namespace": resourceServiceBusNamespace(),
- "azurerm_servicebus_namespace_authorization_rule": resourceServiceBusNamespaceAuthorizationRule(),
- "azurerm_servicebus_namespace_network_rule_set": resourceServiceBusNamespaceNetworkRuleSet(),
- "azurerm_servicebus_queue": resourceServiceBusQueue(),
- "azurerm_servicebus_queue_authorization_rule": resourceServiceBusQueueAuthorizationRule(),
- "azurerm_servicebus_subscription": resourceServiceBusSubscription(),
- "azurerm_servicebus_subscription_rule": resourceServiceBusSubscriptionRule(),
- "azurerm_servicebus_topic_authorization_rule": resourceServiceBusTopicAuthorizationRule(),
- "azurerm_servicebus_topic": resourceServiceBusTopic(),
+ "azurerm_servicebus_namespace": resourceServiceBusNamespace(),
+ "azurerm_servicebus_namespace_disaster_recovery_config": resourceServiceBusNamespaceDisasterRecoveryConfig(),
+ "azurerm_servicebus_namespace_authorization_rule": resourceServiceBusNamespaceAuthorizationRule(),
+ "azurerm_servicebus_namespace_network_rule_set": resourceServiceBusNamespaceNetworkRuleSet(),
+ "azurerm_servicebus_queue": resourceServiceBusQueue(),
+ "azurerm_servicebus_queue_authorization_rule": resourceServiceBusQueueAuthorizationRule(),
+ "azurerm_servicebus_subscription": resourceServiceBusSubscription(),
+ "azurerm_servicebus_subscription_rule": resourceServiceBusSubscriptionRule(),
+ "azurerm_servicebus_topic_authorization_rule": resourceServiceBusTopicAuthorizationRule(),
+ "azurerm_servicebus_topic": resourceServiceBusTopic(),
}
}
diff --git a/azurerm/internal/services/servicebus/resourceids.go b/azurerm/internal/services/servicebus/resourceids.go
index 7a3dd7dddabcc..5a13eba35779f 100644
--- a/azurerm/internal/services/servicebus/resourceids.go
+++ b/azurerm/internal/services/servicebus/resourceids.go
@@ -2,6 +2,7 @@ package servicebus
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=Queue -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/queues/queue1
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=QueueAuthorizationRule -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/queues/queue1/authorizationRules/authorizationRule1
+//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=NamespaceDisasterRecoveryConfig -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/disasterRecoveryConfigs/aliasName1
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=Namespace -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=NamespaceAuthorizationRule -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/AuthorizationRules/authorizationRule1
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=NamespaceNetworkRuleSet -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/networkrulesets/networkRuleSet1
diff --git a/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_data_source.go b/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_data_source.go
new file mode 100644
index 0000000000000..82df3a49db31a
--- /dev/null
+++ b/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_data_source.go
@@ -0,0 +1,103 @@
+package servicebus
+
+import (
+ "fmt"
+ "log"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/servicebus/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+func dataSourceServiceBusNamespaceDisasterRecoveryConfig() *schema.Resource {
+ return &schema.Resource{
+ Read: dataSourceServiceBusNamespaceDisasterRecoveryConfigRead,
+
+ Timeouts: &schema.ResourceTimeout{
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ },
+
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ },
+
+ "namespace_name": {
+ Type: schema.TypeString,
+ Required: true,
+ },
+
+ "resource_group_name": azure.SchemaResourceGroupNameForDataSource(),
+
+ "partner_namespace_id": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+
+ "alias_primary_connection_string": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+
+ "alias_secondary_connection_string": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+
+ "default_primary_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+
+ "default_secondary_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+ },
+ }
+}
+
+func dataSourceServiceBusNamespaceDisasterRecoveryConfigRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).ServiceBus.DisasterRecoveryConfigsClient
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id := parse.NewNamespaceDisasterRecoveryConfigID(subscriptionId, d.Get("resource_group_name").(string), d.Get("namespace_name").(string), d.Get("name").(string))
+
+ resp, err := client.Get(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("retrieving %s: %+v", id, err)
+ }
+
+ d.Set("name", id.DisasterRecoveryConfigName)
+ d.Set("resource_group_name", id.ResourceGroup)
+ d.Set("namespace_name", id.NamespaceName)
+ d.Set("partner_namespace_id", resp.ArmDisasterRecoveryProperties.PartnerNamespace)
+ d.SetId(*resp.ID)
+
+ keys, err := client.ListKeys(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName, serviceBusNamespaceDefaultAuthorizationRule)
+ if err != nil {
+ log.Printf("[WARN] listing default keys for %s: %+v", id, err)
+ } else {
+ d.Set("alias_primary_connection_string", keys.AliasPrimaryConnectionString)
+ d.Set("alias_secondary_connection_string", keys.AliasSecondaryConnectionString)
+ d.Set("default_primary_key", keys.PrimaryKey)
+ d.Set("default_secondary_key", keys.SecondaryKey)
+ }
+ return nil
+}
diff --git a/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_data_source_test.go b/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_data_source_test.go
new file mode 100644
index 0000000000000..334f3f611bbfb
--- /dev/null
+++ b/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_data_source_test.go
@@ -0,0 +1,45 @@
+package servicebus_test
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+)
+
+type ServiceBusNamespaceDisasterRecoveryDataSource struct{}
+
+func TestAccDataSourceServiceBusNamespaceDisasterRecoveryConfig_basic(t *testing.T) {
+ data := acceptance.BuildTestData(t, "data.azurerm_servicebus_namespace_disaster_recovery_config", "test")
+ r := ServiceBusNamespaceDisasterRecoveryDataSource{}
+
+ data.DataSourceTest(t, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).Key("name").Exists(),
+ check.That(data.ResourceName).Key("resource_group_name").Exists(),
+ check.That(data.ResourceName).Key("partner_namespace_id").Exists(),
+ check.That(data.ResourceName).Key("alias_primary_connection_string").Exists(),
+ check.That(data.ResourceName).Key("alias_secondary_connection_string").Exists(),
+ check.That(data.ResourceName).Key("default_primary_key").Exists(),
+ check.That(data.ResourceName).Key("default_secondary_key").Exists(),
+ ),
+ },
+ })
+}
+
+func (ServiceBusNamespaceDisasterRecoveryDataSource) basic(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+data "azurerm_servicebus_namespace_disaster_recovery_config" "test" {
+ name = azurerm_servicebus_namespace_disaster_recovery_config.pairing_test.name
+ resource_group_name = azurerm_resource_group.primary.name
+ namespace_name = azurerm_servicebus_namespace.primary_namespace_test.name
+}
+`, ServiceBusNamespaceDisasterRecoveryConfigResource{}.basic(data))
+}
diff --git a/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_resource.go b/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_resource.go
new file mode 100644
index 0000000000000..c58397ee81176
--- /dev/null
+++ b/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_resource.go
@@ -0,0 +1,329 @@
+package servicebus
+
+import (
+ "context"
+ "fmt"
+ "log"
+ "net/http"
+ "strconv"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/locks"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
+
+ "github.com/Azure/azure-sdk-for-go/services/servicebus/mgmt/2017-04-01/servicebus"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/servicebus/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+type ServiceBusNamespaceDisasterRecoveryConfigResource struct{}
+
+func resourceServiceBusNamespaceDisasterRecoveryConfig() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceServiceBusNamespaceDisasterRecoveryConfigCreate,
+ Read: resourceServiceBusNamespaceDisasterRecoveryConfigRead,
+ Update: resourceServiceBusNamespaceDisasterRecoveryConfigUpdate,
+ Delete: resourceServiceBusNamespaceDisasterRecoveryConfigDelete,
+
+ Importer: pluginsdk.ImporterValidatingResourceId(func(id string) error {
+ _, err := parse.NamespaceDisasterRecoveryConfigID(id)
+ return err
+ }),
+
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(30 * time.Minute),
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ Update: schema.DefaultTimeout(30 * time.Minute),
+ Delete: schema.DefaultTimeout(30 * time.Minute),
+ },
+
+ Schema: map[string]*schema.Schema{
+
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "primary_namespace_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+
+ "partner_namespace_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: azure.ValidateResourceIDOrEmpty,
+ },
+
+ "alias_primary_connection_string": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+
+ "alias_secondary_connection_string": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+
+ "default_primary_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+
+ "default_secondary_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ Sensitive: true,
+ },
+ },
+ }
+}
+
+func resourceServiceBusNamespaceDisasterRecoveryConfigCreate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).ServiceBus.DisasterRecoveryConfigsClient
+ ctx, cancel := timeouts.ForCreateUpdate(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ log.Printf("[INFO] preparing arguments for Service Bus Namespace Disaster Recovery Config creation.")
+
+ id, err := parse.NamespaceID(d.Get("primary_namespace_id").(string))
+ if err != nil {
+ return err
+ }
+
+ aliasName := d.Get("name").(string)
+ partnerNamespaceId := d.Get("partner_namespace_id").(string)
+
+ if d.IsNewResource() {
+ existing, err := client.Get(ctx, id.ResourceGroup, id.Name, aliasName)
+ if err != nil {
+ if !utils.ResponseWasNotFound(existing.Response) {
+ return fmt.Errorf("error checking for presence of existing Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", aliasName, id.Name, id.ResourceGroup, err)
+ }
+ }
+
+ if existing.ID != nil && *existing.ID != "" {
+ return tf.ImportAsExistsError("azurerm_servicebus_namespace_disaster_recovery_config", *existing.ID)
+ }
+ }
+
+ parameters := servicebus.ArmDisasterRecovery{
+ ArmDisasterRecoveryProperties: &servicebus.ArmDisasterRecoveryProperties{
+ PartnerNamespace: utils.String(partnerNamespaceId),
+ },
+ }
+
+ if _, err := client.CreateOrUpdate(ctx, id.ResourceGroup, id.Name, aliasName, parameters); err != nil {
+ return fmt.Errorf("error creating/updating Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", aliasName, id.Name, id.ResourceGroup, err)
+ }
+
+ if err := resourceServiceBusNamespaceDisasterRecoveryConfigWaitForState(ctx, client, id.ResourceGroup, id.Name, aliasName, d.Timeout(schema.TimeoutCreate)); err != nil {
+ return fmt.Errorf("error waiting for replication to complete for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", aliasName, id.Name, id.ResourceGroup, err)
+ }
+
+ read, err := client.Get(ctx, id.ResourceGroup, id.Name, aliasName)
+ if err != nil {
+ return fmt.Errorf("error reading Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %v", aliasName, id.Name, id.ResourceGroup, err)
+ }
+
+ if read.ID == nil {
+ return fmt.Errorf("got nil ID for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q)", aliasName, id.Name, id.ResourceGroup)
+ }
+
+ d.SetId(*read.ID)
+
+ return resourceServiceBusNamespaceDisasterRecoveryConfigRead(d, meta)
+}
+
+func resourceServiceBusNamespaceDisasterRecoveryConfigUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).ServiceBus.DisasterRecoveryConfigsClient
+ ctx, cancel := timeouts.ForUpdate(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id, err := parse.NamespaceDisasterRecoveryConfigID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ locks.ByName(id.NamespaceName, serviceBusNamespaceResourceName)
+ defer locks.UnlockByName(id.NamespaceName, serviceBusNamespaceResourceName)
+
+ if d.HasChange("partner_namespace_id") {
+ breakPair, err := client.BreakPairing(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName)
+ if err != nil {
+ return fmt.Errorf("error issuing break pairing request for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+ if breakPair.StatusCode != http.StatusOK {
+ return fmt.Errorf("unexpected status %d issuing break pairing request for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q)", breakPair.StatusCode, id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup)
+ }
+
+ if err := resourceServiceBusNamespaceDisasterRecoveryConfigWaitForState(ctx, client, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName, d.Timeout(schema.TimeoutUpdate)); err != nil {
+ return fmt.Errorf("error waiting for break pairing request to complete for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+ }
+
+ parameters := servicebus.ArmDisasterRecovery{
+ ArmDisasterRecoveryProperties: &servicebus.ArmDisasterRecoveryProperties{
+ PartnerNamespace: utils.String(d.Get("partner_namespace_id").(string)),
+ },
+ }
+
+ if _, err := client.CreateOrUpdate(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName, parameters); err != nil {
+ return fmt.Errorf("error creating/updating Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+
+ if err := resourceServiceBusNamespaceDisasterRecoveryConfigWaitForState(ctx, client, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName, d.Timeout(schema.TimeoutUpdate)); err != nil {
+ return fmt.Errorf("error waiting for replication to complete for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+
+ return resourceServiceBusNamespaceDisasterRecoveryConfigRead(d, meta)
+}
+
+func resourceServiceBusNamespaceDisasterRecoveryConfigRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).ServiceBus.DisasterRecoveryConfigsClient
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id, err := parse.NamespaceDisasterRecoveryConfigID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ resp, err := client.Get(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("retrieving %s: %+v", id, err)
+ }
+
+ primaryId := parse.NewNamespaceID(id.SubscriptionId, id.ResourceGroup, id.NamespaceName)
+
+ d.Set("name", id.DisasterRecoveryConfigName)
+ d.Set("primary_namespace_id", primaryId.ID())
+ d.Set("partner_namespace_id", resp.ArmDisasterRecoveryProperties.PartnerNamespace)
+
+ keys, err := client.ListKeys(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName, serviceBusNamespaceDefaultAuthorizationRule)
+ if err != nil {
+ log.Printf("[WARN] listing default keys for %s: %+v", id, err)
+ } else {
+ d.Set("alias_primary_connection_string", keys.AliasPrimaryConnectionString)
+ d.Set("alias_secondary_connection_string", keys.AliasSecondaryConnectionString)
+ d.Set("default_primary_key", keys.PrimaryKey)
+ d.Set("default_secondary_key", keys.SecondaryKey)
+ }
+
+ return nil
+}
+
+func resourceServiceBusNamespaceDisasterRecoveryConfigDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).ServiceBus.DisasterRecoveryConfigsClient
+ ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id, err := parse.NamespaceDisasterRecoveryConfigID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ breakPair, err := client.BreakPairing(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName)
+ if err != nil {
+ return fmt.Errorf("breaking pairing %s: %+v", id, err)
+ }
+
+ if breakPair.StatusCode != http.StatusOK {
+ return fmt.Errorf("unexpected status %d breaking pairing for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q)", breakPair.StatusCode, id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup)
+ }
+
+ if err := resourceServiceBusNamespaceDisasterRecoveryConfigWaitForState(ctx, client, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName, d.Timeout(schema.TimeoutDelete)); err != nil {
+ return fmt.Errorf("error waiting for break pairing request to complete for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+
+ if _, err := client.Delete(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName); err != nil {
+ return fmt.Errorf("error issuing delete request for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %s", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+
+ // the delete call doesn't return a future, so poll until the resource has actually gone
+ deleteWait := &resource.StateChangeConf{
+ Pending: []string{"200"},
+ Target: []string{"404"},
+ MinTimeout: 30 * time.Second,
+ Timeout: d.Timeout(schema.TimeoutDelete),
+ Refresh: func() (interface{}, string, error) {
+ resp, err := client.Get(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ return resp, strconv.Itoa(resp.StatusCode), nil
+ }
+ return nil, "nil", fmt.Errorf("error polling for the status of the Service Bus Namespace Disaster Recovery Configs %q deletion (Namespace %q / Resource Group %q): %v", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+
+ return resp, strconv.Itoa(resp.StatusCode), nil
+ },
+ }
+
+ if _, err := deleteWait.WaitForState(); err != nil {
+ return fmt.Errorf("error waiting for the deletion of Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): %v", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+
+ // it can take some time for the name to become available again
+ // this is mainly here to enable updating the resource in place
+ nameFreeWait := &resource.StateChangeConf{
+ Pending: []string{"NameInUse"},
+ Target: []string{"None"},
+ MinTimeout: 30 * time.Second,
+ Timeout: d.Timeout(schema.TimeoutDelete),
+ Refresh: func() (interface{}, string, error) {
+ resp, err := client.CheckNameAvailabilityMethod(ctx, id.ResourceGroup, id.NamespaceName, servicebus.CheckNameAvailability{Name: utils.String(id.DisasterRecoveryConfigName)})
+ if err != nil {
+ return resp, "Error", fmt.Errorf("error checking if the Service Bus Namespace Disaster Recovery Configs %q name has been freed (Namespace %q / Resource Group %q): %v", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+
+ return resp, string(resp.Reason), nil
+ },
+ }
+
+ if _, err := nameFreeWait.WaitForState(); err != nil {
+ return fmt.Errorf("error waiting for the Service Bus Namespace Disaster Recovery Configs %q name to become available (Namespace %q / Resource Group %q): %v", id.DisasterRecoveryConfigName, id.NamespaceName, id.ResourceGroup, err)
+ }
+
+ return nil
+}
+
+func resourceServiceBusNamespaceDisasterRecoveryConfigWaitForState(ctx context.Context, client *servicebus.DisasterRecoveryConfigsClient, resourceGroup, namespaceName, name string, timeout time.Duration) error {
+ stateConf := &resource.StateChangeConf{
+ Pending: []string{string(servicebus.Accepted)},
+ Target: []string{string(servicebus.Succeeded)},
+ MinTimeout: 30 * time.Second,
+ Timeout: timeout,
+ Refresh: func() (interface{}, string, error) {
+ read, err := client.Get(ctx, resourceGroup, namespaceName, name)
+ if err != nil {
+ return nil, "error", fmt.Errorf("error reading Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q) while waiting for replication: %v", name, namespaceName, resourceGroup, err)
+ }
+
+ if props := read.ArmDisasterRecoveryProperties; props != nil {
+ if props.ProvisioningState == servicebus.Failed {
+ return read, "failed", fmt.Errorf("replication for Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q) failed", name, namespaceName, resourceGroup)
+ }
+ return read, string(props.ProvisioningState), nil
+ }
+
+ return read, "nil", fmt.Errorf("error waiting for replication of Service Bus Namespace Disaster Recovery Configs %q (Namespace %q / Resource Group %q): provisioning state is nil", name, namespaceName, resourceGroup)
+ },
+ }
+
+ _, err := stateConf.WaitForState()
+ return err
+}
diff --git a/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_test.go b/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_test.go
new file mode 100644
index 0000000000000..7fa53d99c0297
--- /dev/null
+++ b/azurerm/internal/services/servicebus/servicebus_namespace_disaster_recovery_config_test.go
@@ -0,0 +1,87 @@
+package servicebus_test
+
+import (
+ "context"
+ "fmt"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/hashicorp/terraform-plugin-sdk/terraform"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/servicebus/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+type ServiceBusNamespaceDisasterRecoveryConfigResource struct{}
+
+func TestAccAzureRMServiceBusNamespacePairing_basic(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_servicebus_namespace_disaster_recovery_config", "pairing_test")
+ r := ServiceBusNamespaceDisasterRecoveryConfigResource{}
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func (t ServiceBusNamespaceDisasterRecoveryConfigResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
+ id, err := parse.NamespaceDisasterRecoveryConfigID(state.ID)
+ if err != nil {
+ return nil, err
+ }
+
+ resp, err := clients.ServiceBus.DisasterRecoveryConfigsClient.Get(ctx, id.ResourceGroup, id.NamespaceName, id.DisasterRecoveryConfigName)
+ if err != nil {
+ return nil, fmt.Errorf("reading Service Bus Namespace Disaster Recovery Config (%s): %+v", id.String(), err)
+ }
+
+ return utils.Bool(resp.ID != nil), nil
+}
+
+func (ServiceBusNamespaceDisasterRecoveryConfigResource) basic(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "primary" {
+ name = "acctest1RG-%[1]d"
+ location = "%[2]s"
+}
+
+resource "azurerm_resource_group" "secondary" {
+ name = "acctest2RG-%[1]d"
+ location = "%[3]s"
+}
+
+resource "azurerm_servicebus_namespace" "primary_namespace_test" {
+ name = "acctest1-%[1]d"
+ location = azurerm_resource_group.primary.location
+ resource_group_name = azurerm_resource_group.primary.name
+ sku = "Premium"
+ capacity = "1"
+}
+
+resource "azurerm_servicebus_namespace" "secondary_namespace_test" {
+ name = "acctest2-%[1]d"
+ location = azurerm_resource_group.secondary.location
+ resource_group_name = azurerm_resource_group.secondary.name
+ sku = "Premium"
+ capacity = "1"
+}
+
+resource "azurerm_servicebus_namespace_disaster_recovery_config" "pairing_test" {
+ name = "acctest-alias-%[1]d"
+ primary_namespace_id = azurerm_servicebus_namespace.primary_namespace_test.id
+ partner_namespace_id = azurerm_servicebus_namespace.secondary_namespace_test.id
+}
+
+`, data.RandomInteger, data.Locations.Primary, data.Locations.Secondary)
+}
diff --git a/azurerm/internal/services/servicebus/servicebus_namespace_resource.go b/azurerm/internal/services/servicebus/servicebus_namespace_resource.go
index a6b62b8dddeb0..e8462f4412f6d 100644
--- a/azurerm/internal/services/servicebus/servicebus_namespace_resource.go
+++ b/azurerm/internal/services/servicebus/servicebus_namespace_resource.go
@@ -27,7 +27,10 @@ import (
// Default Authorization Rule/Policy created by Azure, used to populate the
// default connection strings and keys
-var serviceBusNamespaceDefaultAuthorizationRule = "RootManageSharedAccessKey"
+var (
+	serviceBusNamespaceDefaultAuthorizationRule = "RootManageSharedAccessKey"
+	serviceBusNamespaceResourceName             = "azurerm_servicebus_namespace"
+)
func resourceServiceBusNamespace() *schema.Resource {
return &schema.Resource{
diff --git a/azurerm/internal/services/servicebus/validate/namespace_disaster_recovery_config_id.go b/azurerm/internal/services/servicebus/validate/namespace_disaster_recovery_config_id.go
new file mode 100644
index 0000000000000..154d671f3be7f
--- /dev/null
+++ b/azurerm/internal/services/servicebus/validate/namespace_disaster_recovery_config_id.go
@@ -0,0 +1,23 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/servicebus/parse"
+)
+
+func NamespaceDisasterRecoveryConfigID(input interface{}, key string) (warnings []string, errors []error) {
+ v, ok := input.(string)
+ if !ok {
+ errors = append(errors, fmt.Errorf("expected %q to be a string", key))
+ return
+ }
+
+ if _, err := parse.NamespaceDisasterRecoveryConfigID(v); err != nil {
+ errors = append(errors, err)
+ }
+
+ return
+}
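The generated validator above follows the Terraform Plugin SDK's `SchemaValidateFunc` contract: take `(interface{}, string)`, return `([]string, []error)`. A minimal standalone sketch of that contract, using a hypothetical `parseID` helper rather than the provider's generated parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseID is a hypothetical stand-in for a generated parse function: it
// accepts IDs of the form "/subscriptions/<id>/resourceGroups/<name>".
func parseID(input string) error {
	parts := strings.Split(strings.Trim(input, "/"), "/")
	if len(parts) != 4 || parts[0] != "subscriptions" || parts[2] != "resourceGroups" {
		return fmt.Errorf("expected a resource ID, got %q", input)
	}
	return nil
}

// validateID has the SchemaValidateFunc shape used by the Plugin SDK:
// reject non-string values first, then delegate to the parser.
func validateID(input interface{}, key string) (warnings []string, errors []error) {
	v, ok := input.(string)
	if !ok {
		errors = append(errors, fmt.Errorf("expected %q to be a string", key))
		return
	}
	if err := parseID(v); err != nil {
		errors = append(errors, err)
	}
	return
}

func main() {
	_, errs := validateID("/subscriptions/0000/resourceGroups/rg1", "id")
	fmt.Println(len(errs)) // 0
	_, errs = validateID(123, "id")
	fmt.Println(len(errs)) // 1
}
```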
diff --git a/azurerm/internal/services/servicebus/validate/namespace_disaster_recovery_config_id_test.go b/azurerm/internal/services/servicebus/validate/namespace_disaster_recovery_config_id_test.go
new file mode 100644
index 0000000000000..3cfe79782170e
--- /dev/null
+++ b/azurerm/internal/services/servicebus/validate/namespace_disaster_recovery_config_id_test.go
@@ -0,0 +1,88 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import "testing"
+
+func TestNamespaceDisasterRecoveryConfigID(t *testing.T) {
+ cases := []struct {
+ Input string
+ Valid bool
+ }{
+
+ {
+ // empty
+ Input: "",
+ Valid: false,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Valid: false,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Valid: false,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/",
+ Valid: false,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/",
+ Valid: false,
+ },
+
+ {
+ // missing NamespaceName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/",
+ Valid: false,
+ },
+
+ {
+ // missing value for NamespaceName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/",
+ Valid: false,
+ },
+
+ {
+ // missing DisasterRecoveryConfigName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/",
+ Valid: false,
+ },
+
+ {
+ // missing value for DisasterRecoveryConfigName
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/disasterRecoveryConfigs/",
+ Valid: false,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.ServiceBus/namespaces/namespace1/disasterRecoveryConfigs/aliasName1",
+ Valid: true,
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/12345678-1234-9876-4563-123456789012/RESOURCEGROUPS/RESGROUP1/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/NAMESPACE1/DISASTERRECOVERYCONFIGS/ALIASNAME1",
+ Valid: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Logf("[DEBUG] Testing Value %s", tc.Input)
+ _, errors := NamespaceDisasterRecoveryConfigID(tc.Input, "test")
+ valid := len(errors) == 0
+
+ if tc.Valid != valid {
+ t.Fatalf("Expected %t but got %t", tc.Valid, valid)
+ }
+ }
+}
diff --git a/azurerm/internal/services/springcloud/spring_cloud_service_data_source.go b/azurerm/internal/services/springcloud/spring_cloud_service_data_source.go
index edfb003a8b438..0e89cd23370cb 100644
--- a/azurerm/internal/services/springcloud/spring_cloud_service_data_source.go
+++ b/azurerm/internal/services/springcloud/spring_cloud_service_data_source.go
@@ -111,6 +111,45 @@ func dataSourceSpringCloudService() *schema.Resource {
},
},
+ "required_network_traffic_rules": {
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "protocol": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+
+ "port": {
+ Type: schema.TypeInt,
+ Computed: true,
+ },
+
+ "ip_addresses": {
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ },
+
+ "fqdns": {
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ },
+
+ "direction": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ },
+ },
+ },
+
"tags": tags.SchemaDataSource(),
},
}
@@ -153,6 +192,10 @@ func dataSourceSpringCloudServiceRead(d *schema.ResourceData, meta interface{})
if err := d.Set("outbound_public_ip_addresses", outboundPublicIPAddresses); err != nil {
return fmt.Errorf("setting `outbound_public_ip_addresses`: %+v", err)
}
+
+ if err := d.Set("required_network_traffic_rules", flattenRequiredTraffic(props.NetworkProfile)); err != nil {
+ return fmt.Errorf("setting `required_network_traffic_rules`: %+v", err)
+ }
}
return tags.FlattenAndSet(d, resp.Tags)
diff --git a/azurerm/internal/services/springcloud/spring_cloud_service_resource.go b/azurerm/internal/services/springcloud/spring_cloud_service_resource.go
index c53d36c449750..87e67908f4ad0 100644
--- a/azurerm/internal/services/springcloud/spring_cloud_service_resource.go
+++ b/azurerm/internal/services/springcloud/spring_cloud_service_resource.go
@@ -225,6 +225,45 @@ func resourceSpringCloudService() *schema.Resource {
},
},
+ "required_network_traffic_rules": {
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "protocol": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+
+ "port": {
+ Type: schema.TypeInt,
+ Computed: true,
+ },
+
+ "ip_addresses": {
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ },
+
+ "fqdns": {
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ },
+
+ "direction": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ },
+ },
+ },
+
"tags": tags.Schema(),
},
}
@@ -419,6 +458,10 @@ func resourceSpringCloudServiceRead(d *schema.ResourceData, meta interface{}) er
if err := d.Set("outbound_public_ip_addresses", outboundPublicIPAddresses); err != nil {
return fmt.Errorf("setting `outbound_public_ip_addresses`: %+v", err)
}
+
+ if err := d.Set("required_network_traffic_rules", flattenRequiredTraffic(props.NetworkProfile)); err != nil {
+ return fmt.Errorf("setting `required_network_traffic_rules`: %+v", err)
+ }
}
return tags.FlattenAndSet(d, resp.Tags)
@@ -891,3 +934,31 @@ func flattenOutboundPublicIPAddresses(input *appplatform.NetworkProfile) []inter
return utils.FlattenStringSlice(input.OutboundIPs.PublicIPs)
}
+
+func flattenRequiredTraffic(input *appplatform.NetworkProfile) []interface{} {
+ if input == nil || input.RequiredTraffics == nil {
+ return []interface{}{}
+ }
+
+ result := make([]interface{}, 0)
+ for _, v := range *input.RequiredTraffics {
+ protocol := ""
+ if v.Protocol != nil {
+ protocol = *v.Protocol
+ }
+
+ port := 0
+ if v.Port != nil {
+ port = int(*v.Port)
+ }
+
+ result = append(result, map[string]interface{}{
+ "protocol": protocol,
+ "port": port,
+ "ip_addresses": utils.FlattenStringSlice(v.Ips),
+ "fqdns": utils.FlattenStringSlice(v.Fqdns),
+ "direction": string(v.Direction),
+ })
+ }
+ return result
+}
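`flattenRequiredTraffic` follows the provider's usual flatten pattern: nil-guard the SDK's pointer fields and substitute zero values before writing to state. A minimal standalone sketch of that pointer-flattening pattern, with a simplified struct in place of the real `appplatform` SDK type:

```go
package main

import "fmt"

// requiredTraffic mimics an SDK model where every field may be nil.
type requiredTraffic struct {
	Protocol *string
	Port     *int32
}

// flattenTraffic nil-guards each pointer and defaults to zero values,
// mirroring how flatten helpers shape API responses for Terraform state.
func flattenTraffic(input *[]requiredTraffic) []map[string]interface{} {
	if input == nil {
		return []map[string]interface{}{}
	}

	result := make([]map[string]interface{}, 0)
	for _, v := range *input {
		protocol := ""
		if v.Protocol != nil {
			protocol = *v.Protocol
		}

		port := 0
		if v.Port != nil {
			port = int(*v.Port)
		}

		result = append(result, map[string]interface{}{
			"protocol": protocol,
			"port":     port,
		})
	}
	return result
}

func main() {
	tcp := "TCP"
	fmt.Println(flattenTraffic(&[]requiredTraffic{{Protocol: &tcp}})) // [map[port:0 protocol:TCP]]
}
```

Returning an empty (rather than nil) slice on nil input keeps `d.Set` behaviour predictable for list attributes.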
diff --git a/azurerm/internal/services/springcloud/spring_cloud_service_resource_test.go b/azurerm/internal/services/springcloud/spring_cloud_service_resource_test.go
index 8f5f2f95629bb..78abdfdcd2391 100644
--- a/azurerm/internal/services/springcloud/spring_cloud_service_resource_test.go
+++ b/azurerm/internal/services/springcloud/spring_cloud_service_resource_test.go
@@ -5,13 +5,12 @@ import (
"fmt"
"testing"
- "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/springcloud/parse"
-
"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
"github.com/hashicorp/terraform-plugin-sdk/terraform"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/springcloud/parse"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
)
@@ -105,6 +104,11 @@ func TestAccSpringCloudService_virtualNetwork(t *testing.T) {
check.That(data.ResourceName).Key("network.0.service_runtime_network_resource_group").Exists(),
check.That(data.ResourceName).Key("network.0.app_network_resource_group").Exists(),
check.That(data.ResourceName).Key("outbound_public_ip_addresses.0").Exists(),
+ check.That(data.ResourceName).Key("required_network_traffic_rules.0.protocol").Exists(),
+ check.That(data.ResourceName).Key("required_network_traffic_rules.0.port").Exists(),
+ check.That(data.ResourceName).Key("required_network_traffic_rules.0.ip_addresses.#").Exists(),
+ check.That(data.ResourceName).Key("required_network_traffic_rules.0.fqdns.#").Exists(),
+ check.That(data.ResourceName).Key("required_network_traffic_rules.0.direction").Exists(),
),
},
data.ImportStep(
diff --git a/azurerm/internal/services/storage/helpers.go b/azurerm/internal/services/storage/helpers.go
index b5dcb6babdc7f..165828ed1856a 100644
--- a/azurerm/internal/services/storage/helpers.go
+++ b/azurerm/internal/services/storage/helpers.go
@@ -40,18 +40,18 @@ func schemaStorageAccountCorsRule(patchEnabled bool) *schema.Schema {
Type: schema.TypeList,
Required: true,
MaxItems: 64,
+ MinItems: 1,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateFunc: validation.StringIsNotEmpty,
+ Type: schema.TypeString,
},
},
"allowed_headers": {
Type: schema.TypeList,
Required: true,
MaxItems: 64,
+ MinItems: 1,
Elem: &schema.Schema{
- Type: schema.TypeString,
- ValidateFunc: validation.StringIsNotEmpty,
+ Type: schema.TypeString,
},
},
"allowed_methods": {
diff --git a/azurerm/internal/services/storage/storage_account_network_rules_resource.go b/azurerm/internal/services/storage/storage_account_network_rules_resource.go
index 189b7764a49cf..509c8df6ebb4c 100644
--- a/azurerm/internal/services/storage/storage_account_network_rules_resource.go
+++ b/azurerm/internal/services/storage/storage_account_network_rules_resource.go
@@ -5,9 +5,6 @@ import (
"strings"
"time"
- "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/validate"
- "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
-
"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2021-01-01/storage"
"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/helper/validation"
@@ -15,6 +12,9 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/locks"
+ networkValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/network/validate"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/validate"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/pluginsdk"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
)
@@ -93,11 +93,33 @@ func resourceStorageAccountNetworkRules() *schema.Resource {
string(storage.DefaultActionDeny),
}, false),
},
+
+ "private_link_access": {
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "endpoint_resource_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: networkValidate.PrivateEndpointID,
+ },
+
+ "endpoint_tenant_id": {
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ValidateFunc: validation.IsUUID,
+ },
+ },
+ },
+ },
},
}
}
func resourceStorageAccountNetworkRulesCreateUpdate(d *schema.ResourceData, meta interface{}) error {
+ tenantId := meta.(*clients.Client).Account.TenantId
client := meta.(*clients.Client).Storage.AccountsClient
ctx, cancel := timeouts.ForCreateUpdate(meta.(*clients.Client).StopContext, d)
defer cancel()
@@ -136,6 +158,7 @@ func resourceStorageAccountNetworkRulesCreateUpdate(d *schema.ResourceData, meta
rules.Bypass = expandStorageAccountNetworkRuleBypass(d.Get("bypass").(*schema.Set).List())
rules.IPRules = expandStorageAccountNetworkRuleIpRules(d.Get("ip_rules").(*schema.Set).List())
rules.VirtualNetworkRules = expandStorageAccountNetworkRuleVirtualRules(d.Get("virtual_network_subnet_ids").(*schema.Set).List())
+ rules.ResourceAccessRules = expandStorageAccountPrivateLinkAccess(d.Get("private_link_access").([]interface{}), tenantId)
opts := storage.AccountUpdateParameters{
AccountPropertiesUpdateParameters: &storage.AccountPropertiesUpdateParameters{
@@ -184,6 +207,9 @@ func resourceStorageAccountNetworkRulesRead(d *schema.ResourceData, meta interfa
return fmt.Errorf("Error setting `bypass`: %+v", err)
}
d.Set("default_action", string(rules.DefaultAction))
+ if err := d.Set("private_link_access", flattenStorageAccountPrivateLinkAccess(rules.ResourceAccessRules)); err != nil {
+ return fmt.Errorf("setting `private_link_access`: %+v", err)
+ }
}
return nil
diff --git a/azurerm/internal/services/storage/storage_account_network_rules_resource_test.go b/azurerm/internal/services/storage/storage_account_network_rules_resource_test.go
index c2666a37e55d0..0803a819a2f54 100644
--- a/azurerm/internal/services/storage/storage_account_network_rules_resource_test.go
+++ b/azurerm/internal/services/storage/storage_account_network_rules_resource_test.go
@@ -66,6 +66,35 @@ func TestAccStorageAccountNetworkRules_update(t *testing.T) {
})
}
+func TestAccStorageAccountNetworkRules_privateLinkAccess(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_storage_account_network_rules", "test")
+ r := StorageAccountNetworkRulesResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.disablePrivateLinkAccess(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That("azurerm_storage_account.test").ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.privateLinkAccess(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That("azurerm_storage_account.test").ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.disablePrivateLinkAccess(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That("azurerm_storage_account.test").ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func TestAccStorageAccountNetworkRules_empty(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_storage_account_network_rules", "test")
r := StorageAccountNetworkRulesResource{}
@@ -236,3 +265,64 @@ resource "azurerm_storage_account_network_rules" "test" {
}
`, data.RandomInteger, data.Locations.Primary, data.RandomString)
}
+
+func (r StorageAccountNetworkRulesResource) disablePrivateLinkAccess(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_storage_account" "test" {
+ name = "unlikely23exst2acct%s"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ account_tier = "Standard"
+ account_replication_type = "LRS"
+
+ tags = {
+ environment = "production"
+ }
+}
+
+resource "azurerm_storage_account_network_rules" "test" {
+ resource_group_name = azurerm_resource_group.test.name
+ storage_account_name = azurerm_storage_account.test.name
+
+ default_action = "Deny"
+ bypass = ["None"]
+ ip_rules = []
+ virtual_network_subnet_ids = []
+}
+`, StorageAccountResource{}.networkRulesPrivateEndpointTemplate(data), data.RandomString)
+}
+
+func (r StorageAccountNetworkRulesResource) privateLinkAccess(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_storage_account" "test" {
+ name = "unlikely23exst2acct%s"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ account_tier = "Standard"
+ account_replication_type = "LRS"
+
+ tags = {
+ environment = "production"
+ }
+}
+
+resource "azurerm_storage_account_network_rules" "test" {
+ resource_group_name = azurerm_resource_group.test.name
+ storage_account_name = azurerm_storage_account.test.name
+
+ default_action = "Deny"
+ ip_rules = ["127.0.0.1"]
+ virtual_network_subnet_ids = [azurerm_subnet.test.id]
+ private_link_access {
+ endpoint_resource_id = azurerm_private_endpoint.blob.id
+ }
+ private_link_access {
+ endpoint_resource_id = azurerm_private_endpoint.table.id
+ }
+}
+`, StorageAccountResource{}.networkRulesPrivateEndpointTemplate(data), data.RandomString)
+}
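In the configs above, `endpoint_tenant_id` is omitted, so `expandStorageAccountPrivateLinkAccess` falls back to the provider's tenant ID for each rule. A minimal sketch of that per-item fallback, using simplified types rather than the provider's real ones:

```go
package main

import "fmt"

// expandRules resolves each rule's tenant: an explicit per-rule value wins,
// otherwise the provider-level tenant is used. A per-rule variable keeps one
// rule's override from leaking into later rules.
func expandRules(rules []map[string]string, defaultTenant string) []string {
	tenants := make([]string, 0, len(rules))
	for _, r := range rules {
		tenant := defaultTenant
		if v := r["endpoint_tenant_id"]; v != "" {
			tenant = v
		}
		tenants = append(tenants, tenant)
	}
	return tenants
}

func main() {
	rules := []map[string]string{
		{"endpoint_tenant_id": "tenant-b"},
		{}, // no override: falls back to the provider tenant
	}
	fmt.Println(expandRules(rules, "tenant-a")) // [tenant-b tenant-a]
}
```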
diff --git a/azurerm/internal/services/storage/storage_account_resource.go b/azurerm/internal/services/storage/storage_account_resource.go
index eae0c05363815..2b7260286296d 100644
--- a/azurerm/internal/services/storage/storage_account_resource.go
+++ b/azurerm/internal/services/storage/storage_account_resource.go
@@ -20,6 +20,7 @@ import (
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/locks"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/network"
+ networkValidate "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/network/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/migration"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/storage/validate"
"github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tags"
@@ -287,6 +288,27 @@ func resourceStorageAccount() *schema.Resource {
string(storage.DefaultActionDeny),
}, false),
},
+
+ "private_link_access": {
+ Type: schema.TypeList,
+ Optional: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "endpoint_resource_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ValidateFunc: networkValidate.PrivateEndpointID,
+ },
+
+ "endpoint_tenant_id": {
+ Type: schema.TypeString,
+ Optional: true,
+ Computed: true,
+ ValidateFunc: validation.IsUUID,
+ },
+ },
+ },
+ },
},
},
},
@@ -348,6 +370,12 @@ func resourceStorageAccount() *schema.Resource {
Default: false,
},
+ "change_feed_enabled": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ },
+
"default_service_version": {
Type: schema.TypeString,
Optional: true,
@@ -737,6 +765,7 @@ func resourceStorageAccount() *schema.Resource {
func resourceStorageAccountCreate(d *schema.ResourceData, meta interface{}) error {
envName := meta.(*clients.Client).Account.Environment.Name
+ tenantId := meta.(*clients.Client).Account.TenantId
client := meta.(*clients.Client).Storage.AccountsClient
ctx, cancel := timeouts.ForCreate(meta.(*clients.Client).StopContext, d)
defer cancel()
@@ -770,6 +799,11 @@ func resourceStorageAccountCreate(d *schema.ResourceData, meta interface{}) erro
accountTier := d.Get("account_tier").(string)
replicationType := d.Get("account_replication_type").(string)
storageType := fmt.Sprintf("%s_%s", accountTier, replicationType)
+	// This is the default behavior for the resource when the attribute is nil.
+	// We are making this change in Terraform (https://github.com/terraform-providers/terraform-provider-azurerm/issues/11689)
+	// because the Portal UI has a bug that ignores the ARM API documentation, which states that nil means true.
+	// TODO: remove this code once the Portal UI team fixes their code
+ allowSharedKeyAccess := true
parameters := storage.AccountCreateParameters{
Location: &location,
@@ -780,9 +814,11 @@ func resourceStorageAccountCreate(d *schema.ResourceData, meta interface{}) erro
Kind: storage.Kind(accountKind),
AccountPropertiesCreateParameters: &storage.AccountPropertiesCreateParameters{
EnableHTTPSTrafficOnly: &enableHTTPSTrafficOnly,
- NetworkRuleSet: expandStorageAccountNetworkRules(d),
+ NetworkRuleSet: expandStorageAccountNetworkRules(d, tenantId),
IsHnsEnabled: &isHnsEnabled,
EnableNfsV3: &nfsV3Enabled,
+			// TODO: remove the AllowSharedKeyAccess assignment once the Portal UI team fixes their code (nil means true)
+ AllowSharedKeyAccess: &allowSharedKeyAccess,
},
}
@@ -964,6 +1000,7 @@ func resourceStorageAccountCreate(d *schema.ResourceData, meta interface{}) erro
func resourceStorageAccountUpdate(d *schema.ResourceData, meta interface{}) error {
envName := meta.(*clients.Client).Account.Environment.Name
+ tenantId := meta.(*clients.Client).Account.TenantId
client := meta.(*clients.Client).Storage.AccountsClient
ctx, cancel := timeouts.ForUpdate(meta.(*clients.Client).StopContext, d)
defer cancel()
@@ -989,6 +1026,36 @@ func resourceStorageAccountUpdate(d *schema.ResourceData, meta interface{}) erro
}
}
+	// AllowSharedKeyAccess can currently only be true due to https://github.com/terraform-providers/terraform-provider-azurerm/issues/11460.
+	// A nil value breaks the Portal UI, as reported in https://github.com/terraform-providers/terraform-provider-azurerm/issues/11689:
+	// the Portal UI treats nil as false, while per the ARM API documentation nil means true. This manifests as the Portal showing
+	// AllowSharedKeyAccess as Disabled for storage accounts created by Terraform when it is actually Enabled, confusing our customers.
+	// To work around the Portal UI bug, this code explicitly sets the value to true when it is nil.
+	// This is a passive change: for existing storage accounts it only takes effect when the account is next modified. Since net-new
+	// storage accounts now always set this value to true as well, the issue should correct itself over time.
+	// TODO: remove this code once the Portal UI team fixes their code
+	existing, err := client.GetProperties(ctx, resourceGroupName, storageAccountName, "")
+	if err != nil {
+		// this should never be hit, but is checked out of an abundance of caution
+		return fmt.Errorf("retrieving Azure Storage Account %q to check AllowSharedKeyAccess: %+v", storageAccountName, err)
+	}
+
+	if existing.AccountProperties.AllowSharedKeyAccess == nil {
+		allowSharedKeyAccess := true
+
+		opts := storage.AccountUpdateParameters{
+			AccountPropertiesUpdateParameters: &storage.AccountPropertiesUpdateParameters{
+				AllowSharedKeyAccess: &allowSharedKeyAccess,
+			},
+		}
+
+		if _, err := client.Update(ctx, resourceGroupName, storageAccountName, opts); err != nil {
+			return fmt.Errorf("updating AllowSharedKeyAccess for Azure Storage Account %q: %+v", storageAccountName, err)
+		}
+	}
+	// TODO: end of the code to remove once the Portal UI team fixes their code
+
if d.HasChange("account_replication_type") {
sku := storage.Sku{
Name: storage.SkuName(storageType),
@@ -1126,12 +1193,12 @@ func resourceStorageAccountUpdate(d *schema.ResourceData, meta interface{}) erro
if d.HasChange("network_rules") {
opts := storage.AccountUpdateParameters{
AccountPropertiesUpdateParameters: &storage.AccountPropertiesUpdateParameters{
- NetworkRuleSet: expandStorageAccountNetworkRules(d),
+ NetworkRuleSet: expandStorageAccountNetworkRules(d, tenantId),
},
}
if _, err := client.Update(ctx, resourceGroupName, storageAccountName, opts); err != nil {
- return fmt.Errorf("Error updating Azure Storage Account network_rules %q: %+v", storageAccountName, err)
+ return fmt.Errorf("updating Azure Storage Account network_rules %q: %+v", storageAccountName, err)
}
}
@@ -1646,7 +1713,7 @@ func expandArmStorageAccountRouting(input []interface{}) *storage.RoutingPrefere
}
}
-func expandStorageAccountNetworkRules(d *schema.ResourceData) *storage.NetworkRuleSet {
+func expandStorageAccountNetworkRules(d *schema.ResourceData, tenantId string) *storage.NetworkRuleSet {
networkRules := d.Get("network_rules").([]interface{})
if len(networkRules) == 0 {
// Default access is enabled when no network rules are set.
@@ -1658,6 +1725,7 @@ func expandStorageAccountNetworkRules(d *schema.ResourceData) *storage.NetworkRu
IPRules: expandStorageAccountIPRules(networkRule),
VirtualNetworkRules: expandStorageAccountVirtualNetworks(networkRule),
Bypass: expandStorageAccountBypass(networkRule),
+ ResourceAccessRules: expandStorageAccountPrivateLinkAccess(networkRule["private_link_access"].([]interface{}), tenantId),
}
if v := networkRule["default_action"]; v != nil {
@@ -1710,6 +1778,25 @@ func expandStorageAccountBypass(networkRule map[string]interface{}) storage.Bypa
return storage.Bypass(strings.Join(bypassValues, ", "))
}
+func expandStorageAccountPrivateLinkAccess(inputs []interface{}, tenantId string) *[]storage.ResourceAccessRule {
+ privateLinkAccess := make([]storage.ResourceAccessRule, 0)
+ if len(inputs) == 0 {
+ return &privateLinkAccess
+ }
+	for _, input := range inputs {
+		accessRule := input.(map[string]interface{})
+		// default to the provider's tenant ID, allowing a per-rule override; a per-rule
+		// variable ensures one rule's override doesn't leak into subsequent rules
+		ruleTenantId := tenantId
+		if v := accessRule["endpoint_tenant_id"].(string); v != "" {
+			ruleTenantId = v
+		}
+		privateLinkAccess = append(privateLinkAccess, storage.ResourceAccessRule{
+			TenantID:   utils.String(ruleTenantId),
+			ResourceID: utils.String(accessRule["endpoint_resource_id"].(string)),
+		})
+	}
+
+ return &privateLinkAccess
+}
+
func expandBlobProperties(input []interface{}) *storage.BlobServiceProperties {
props := storage.BlobServiceProperties{
BlobServicePropertiesProperties: &storage.BlobServicePropertiesProperties{
@@ -1717,6 +1804,9 @@ func expandBlobProperties(input []interface{}) *storage.BlobServiceProperties {
CorsRules: &[]storage.CorsRule{},
},
IsVersioningEnabled: utils.Bool(false),
+ ChangeFeed: &storage.ChangeFeed{
+ Enabled: utils.Bool(false),
+ },
LastAccessTimeTrackingPolicy: &storage.LastAccessTimeTrackingPolicy{
Enable: utils.Bool(false),
},
@@ -1742,6 +1832,10 @@ func expandBlobProperties(input []interface{}) *storage.BlobServiceProperties {
props.IsVersioningEnabled = utils.Bool(v["versioning_enabled"].(bool))
+ props.ChangeFeed = &storage.ChangeFeed{
+ Enabled: utils.Bool(v["change_feed_enabled"].(bool)),
+ }
+
if version, ok := v["default_service_version"].(string); ok && version != "" {
props.DefaultServiceVersion = utils.String(version)
}
@@ -2036,6 +2130,7 @@ func flattenStorageAccountNetworkRules(input *storage.NetworkRuleSet) []interfac
networkRules["virtual_network_subnet_ids"] = schema.NewSet(schema.HashString, flattenStorageAccountVirtualNetworks(input.VirtualNetworkRules))
networkRules["bypass"] = schema.NewSet(schema.HashString, flattenStorageAccountBypass(input.Bypass))
networkRules["default_action"] = string(input.DefaultAction)
+ networkRules["private_link_access"] = flattenStorageAccountPrivateLinkAccess(input.ResourceAccessRules)
return []interface{}{networkRules}
}
@@ -2074,6 +2169,31 @@ func flattenStorageAccountVirtualNetworks(input *[]storage.VirtualNetworkRule) [
return virtualNetworks
}
+func flattenStorageAccountPrivateLinkAccess(inputs *[]storage.ResourceAccessRule) []interface{} {
+ if inputs == nil || len(*inputs) == 0 {
+ return []interface{}{}
+ }
+
+ accessRules := make([]interface{}, 0)
+ for _, input := range *inputs {
+ var resourceId, tenantId string
+ if input.ResourceID != nil {
+ resourceId = *input.ResourceID
+ }
+
+ if input.TenantID != nil {
+ tenantId = *input.TenantID
+ }
+
+ accessRules = append(accessRules, map[string]interface{}{
+ "endpoint_resource_id": resourceId,
+ "endpoint_tenant_id": tenantId,
+ })
+ }
+
+ return accessRules
+}
+
func flattenBlobProperties(input storage.BlobServiceProperties) []interface{} {
if input.BlobServicePropertiesProperties == nil {
return []interface{}{}
@@ -2094,11 +2214,15 @@ func flattenBlobProperties(input storage.BlobServiceProperties) []interface{} {
flattenedContainerDeletePolicy = flattenBlobPropertiesDeleteRetentionPolicy(containerDeletePolicy)
}
- versioning := false
+ versioning, changeFeed := false, false
if input.BlobServicePropertiesProperties.IsVersioningEnabled != nil {
versioning = *input.BlobServicePropertiesProperties.IsVersioningEnabled
}
+ if v := input.BlobServicePropertiesProperties.ChangeFeed; v != nil && v.Enabled != nil {
+ changeFeed = *v.Enabled
+ }
+
var defaultServiceVersion string
if input.BlobServicePropertiesProperties.DefaultServiceVersion != nil {
defaultServiceVersion = *input.BlobServicePropertiesProperties.DefaultServiceVersion
@@ -2114,6 +2238,7 @@ func flattenBlobProperties(input storage.BlobServiceProperties) []interface{} {
"cors_rule": flattenedCorsRules,
"delete_retention_policy": flattenedDeletePolicy,
"versioning_enabled": versioning,
+ "change_feed_enabled": changeFeed,
"default_service_version": defaultServiceVersion,
"last_access_time_enabled": LastAccessTimeTrackingPolicy,
"container_delete_retention_policy": flattenedContainerDeletePolicy,
diff --git a/azurerm/internal/services/storage/storage_account_resource_test.go b/azurerm/internal/services/storage/storage_account_resource_test.go
index 38a316cff2e78..1b78d157b5c5b 100644
--- a/azurerm/internal/services/storage/storage_account_resource_test.go
+++ b/azurerm/internal/services/storage/storage_account_resource_test.go
@@ -514,6 +514,35 @@ func TestAccStorageAccount_networkRulesDeleted(t *testing.T) {
})
}
+func TestAccStorageAccount_privateLinkAccess(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_storage_account", "test")
+ r := StorageAccountResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.networkRules(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.networkRulesPrivateLinkAccess(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ {
+ Config: r.networkRulesReverted(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
func TestAccStorageAccount_blobProperties(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_storage_account", "test")
r := StorageAccountResource{}
@@ -535,6 +564,7 @@ func TestAccStorageAccount_blobProperties(t *testing.T) {
check.That(data.ResourceName).Key("blob_properties.0.cors_rule.#").HasValue("2"),
check.That(data.ResourceName).Key("blob_properties.0.delete_retention_policy.0.days").HasValue("7"),
check.That(data.ResourceName).Key("blob_properties.0.versioning_enabled").HasValue("false"),
+ check.That(data.ResourceName).Key("blob_properties.0.change_feed_enabled").HasValue("false"),
),
},
data.ImportStep(),
@@ -548,6 +578,23 @@ func TestAccStorageAccount_blobProperties(t *testing.T) {
})
}
+func TestAccStorageAccount_blobPropertiesEmptyAllowedExposedHeaders(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_storage_account", "test")
+ r := StorageAccountResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.blobPropertiesUpdatedEmptyAllowedExposedHeaders(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("blob_properties.0.cors_rule.#").HasValue("1"),
+ check.That(data.ResourceName).Key("blob_properties.0.cors_rule.0.allowed_headers.#").HasValue("1"),
+ check.That(data.ResourceName).Key("blob_properties.0.cors_rule.0.exposed_headers.#").HasValue("1"),
+ ),
+ },
+ })
+}
+
func TestAccStorageAccount_queueProperties(t *testing.T) {
data := acceptance.BuildTestData(t, "azurerm_storage_account", "test")
r := StorageAccountResource{}
@@ -864,7 +911,7 @@ resource "azurerm_storage_account" "test" {
account_replication_type = "LRS"
tags = {
- %s
+ %s
}
}
`, data.RandomInteger, data.Locations.Primary, data.RandomString, tags)
@@ -1485,31 +1532,115 @@ resource "azurerm_storage_account" "test" {
`, data.RandomInteger, data.Locations.Primary, data.RandomString)
}
-func (r StorageAccountResource) networkRules(data acceptance.TestData) string {
+func (r StorageAccountResource) networkRulesTemplate(data acceptance.TestData) string {
return fmt.Sprintf(`
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "test" {
- name = "acctestRG-storage-%d"
- location = "%s"
+ name = "acctestRG-storage-%[1]d"
+ location = "%[2]s"
}
resource "azurerm_virtual_network" "test" {
- name = "acctestvirtnet%d"
+ name = "acctestvirtnet%[1]d"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.test.location
resource_group_name = azurerm_resource_group.test.name
}
resource "azurerm_subnet" "test" {
- name = "acctestsubnet%d"
+ name = "acctestsubnet%[1]d"
resource_group_name = azurerm_resource_group.test.name
virtual_network_name = azurerm_virtual_network.test.name
address_prefix = "10.0.2.0/24"
service_endpoints = ["Microsoft.Storage"]
}
+`, data.RandomInteger, data.Locations.Primary)
+}
+
+func (r StorageAccountResource) networkRulesPrivateEndpointTemplate(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%[1]s
+
+resource "azurerm_subnet" "blob_endpoint" {
+ name = "acctestsnetblobendpoint-%[2]d"
+ resource_group_name = azurerm_resource_group.test.name
+ virtual_network_name = azurerm_virtual_network.test.name
+ address_prefixes = ["10.0.5.0/24"]
+
+ enforce_private_link_endpoint_network_policies = true
+}
+
+resource "azurerm_subnet" "table_endpoint" {
+ name = "acctestsnettableendpoint-%[2]d"
+ resource_group_name = azurerm_resource_group.test.name
+ virtual_network_name = azurerm_virtual_network.test.name
+ address_prefixes = ["10.0.6.0/24"]
+
+ enforce_private_link_endpoint_network_policies = true
+}
+
+resource "azurerm_storage_account" "blob_connection" {
+ name = "accblobconnacct%[3]s"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ account_tier = "Standard"
+ account_replication_type = "LRS"
+}
+
+resource "azurerm_storage_account" "table_connection" {
+ name = "acctableconnacct%[3]s"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ account_tier = "Standard"
+ account_replication_type = "LRS"
+}
+
+resource "azurerm_private_dns_zone" "blob" {
+ name = "privatelink.blob.core.windows.net"
+ resource_group_name = azurerm_resource_group.test.name
+}
+
+resource "azurerm_private_dns_zone" "table" {
+ name = "privatelink.table.core.windows.net"
+ resource_group_name = azurerm_resource_group.test.name
+}
+
+resource "azurerm_private_endpoint" "blob" {
+ name = "acctest-privatelink-blob-%[2]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ subnet_id = azurerm_subnet.blob_endpoint.id
+
+ private_service_connection {
+ name = "acctest-privatelink-mssc-%[2]d"
+ private_connection_resource_id = azurerm_storage_account.blob_connection.id
+ subresource_names = ["blob"]
+ is_manual_connection = false
+ }
+}
+
+resource "azurerm_private_endpoint" "table" {
+ name = "acctest-privatelink-table-%[2]d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+ subnet_id = azurerm_subnet.table_endpoint.id
+
+ private_service_connection {
+ name = "acctest-privatelink-mssc-%[2]d"
+ private_connection_resource_id = azurerm_storage_account.table_connection.id
+ subresource_names = ["table"]
+ is_manual_connection = false
+ }
+}
+`, r.networkRulesTemplate(data), data.RandomInteger, data.RandomString)
+}
+
+func (r StorageAccountResource) networkRules(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
resource "azurerm_storage_account" "test" {
name = "unlikely23exst2acct%s"
@@ -1528,34 +1659,12 @@ resource "azurerm_storage_account" "test" {
environment = "production"
}
}
-`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger, data.RandomString)
+`, r.networkRulesTemplate(data), data.RandomString)
}
func (r StorageAccountResource) networkRulesUpdate(data acceptance.TestData) string {
return fmt.Sprintf(`
-provider "azurerm" {
- features {}
-}
-
-resource "azurerm_resource_group" "test" {
- name = "acctestRG-storage-%d"
- location = "%s"
-}
-
-resource "azurerm_virtual_network" "test" {
- name = "acctestvirtnet%d"
- address_space = ["10.0.0.0/16"]
- location = azurerm_resource_group.test.location
- resource_group_name = azurerm_resource_group.test.name
-}
-
-resource "azurerm_subnet" "test" {
- name = "acctestsubnet%d"
- resource_group_name = azurerm_resource_group.test.name
- virtual_network_name = azurerm_virtual_network.test.name
- address_prefix = "10.0.2.0/24"
- service_endpoints = ["Microsoft.Storage"]
-}
+%s
resource "azurerm_storage_account" "test" {
name = "unlikely23exst2acct%s"
@@ -1574,35 +1683,37 @@ resource "azurerm_storage_account" "test" {
environment = "production"
}
}
-`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger, data.RandomString)
+`, r.networkRulesTemplate(data), data.RandomString)
}
func (r StorageAccountResource) networkRulesReverted(data acceptance.TestData) string {
return fmt.Sprintf(`
-provider "azurerm" {
- features {}
-}
+%s
-resource "azurerm_resource_group" "test" {
- name = "acctestRG-storage-%d"
- location = "%s"
-}
+resource "azurerm_storage_account" "test" {
+ name = "unlikely23exst2acct%s"
+ resource_group_name = azurerm_resource_group.test.name
+ location = azurerm_resource_group.test.location
+ account_tier = "Standard"
+ account_replication_type = "LRS"
-resource "azurerm_virtual_network" "test" {
- name = "acctestvirtnet%d"
- address_space = ["10.0.0.0/16"]
- location = azurerm_resource_group.test.location
- resource_group_name = azurerm_resource_group.test.name
-}
+ network_rules {
+ default_action = "Allow"
+ ip_rules = ["127.0.0.1"]
+ virtual_network_subnet_ids = [azurerm_subnet.test.id]
+ }
-resource "azurerm_subnet" "test" {
- name = "acctestsubnet%d"
- resource_group_name = azurerm_resource_group.test.name
- virtual_network_name = azurerm_virtual_network.test.name
- address_prefix = "10.0.2.0/24"
- service_endpoints = ["Microsoft.Storage"]
+ tags = {
+ environment = "production"
+ }
+}
+`, r.networkRulesTemplate(data), data.RandomString)
}
+func (r StorageAccountResource) networkRulesPrivateLinkAccess(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+%s
+
resource "azurerm_storage_account" "test" {
name = "unlikely23exst2acct%s"
resource_group_name = azurerm_resource_group.test.name
@@ -1611,16 +1722,22 @@ resource "azurerm_storage_account" "test" {
account_replication_type = "LRS"
network_rules {
- default_action = "Allow"
+ default_action = "Deny"
ip_rules = ["127.0.0.1"]
virtual_network_subnet_ids = [azurerm_subnet.test.id]
+ private_link_access {
+ endpoint_resource_id = azurerm_private_endpoint.blob.id
+ }
+ private_link_access {
+ endpoint_resource_id = azurerm_private_endpoint.table.id
+ }
}
tags = {
environment = "production"
}
}
-`, data.RandomInteger, data.Locations.Primary, data.RandomInteger, data.RandomInteger, data.RandomString)
+`, r.networkRulesPrivateEndpointTemplate(data), data.RandomString)
}
func (r StorageAccountResource) blobProperties(data acceptance.TestData) string {
@@ -1657,6 +1774,7 @@ resource "azurerm_storage_account" "test" {
default_service_version = "2019-07-07"
versioning_enabled = true
+ change_feed_enabled = true
last_access_time_enabled = true
container_delete_retention_policy {
days = 7
@@ -1733,7 +1851,42 @@ resource "azurerm_storage_account" "test" {
account_replication_type = "LRS"
blob_properties {
- versioning_enabled = true
+ versioning_enabled = true
+ change_feed_enabled = true
+ }
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomString)
+}
+
+func (r StorageAccountResource) blobPropertiesUpdatedEmptyAllowedExposedHeaders(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestAzureRMSA-%d"
+ location = "%s"
+}
+
+resource "azurerm_storage_account" "test" {
+ name = "unlikely23exst2acct%s"
+ resource_group_name = azurerm_resource_group.test.name
+
+ location = azurerm_resource_group.test.location
+ account_tier = "Standard"
+ account_replication_type = "LRS"
+ enable_https_traffic_only = true
+ allow_blob_public_access = true
+
+ blob_properties {
+ cors_rule {
+ allowed_headers = [""]
+ exposed_headers = [""]
+ allowed_origins = ["*"]
+ allowed_methods = ["GET"]
+ max_age_in_seconds = 3600
+ }
}
}
`, data.RandomInteger, data.Locations.Primary, data.RandomString)
diff --git a/azurerm/internal/services/storage/storage_management_policy_resource.go b/azurerm/internal/services/storage/storage_management_policy_resource.go
index da3957ecc570c..f44a53c1ab6fe 100644
--- a/azurerm/internal/services/storage/storage_management_policy_resource.go
+++ b/azurerm/internal/services/storage/storage_management_policy_resource.go
@@ -49,7 +49,7 @@ func resourceStorageManagementPolicy() *schema.Resource {
Type: schema.TypeString,
Required: true,
ValidateFunc: validation.StringMatch(
- regexp.MustCompile(`^[a-zA-Z0-9]*$`),
+ regexp.MustCompile(`^[a-zA-Z0-9-]*$`),
-				"A rule name can contain any combination of alpha numeric characters.",
+				"A rule name can contain any combination of alphanumeric characters and hyphens.",
),
},
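The validation change above widens the allowed character class from alphanumerics to alphanumerics plus hyphens. A standalone sketch of the new pattern's behavior (plain `regexp`, not the plugin SDK's `validation.StringMatch` wrapper):

```go
package main

import (
	"fmt"
	"regexp"
)

// ruleNamePattern mirrors the updated validation regex: any run of
// alphanumerics and hyphens. Note the `*` quantifier, which also
// accepts the empty string; length is not enforced by this pattern.
var ruleNamePattern = regexp.MustCompile(`^[a-zA-Z0-9-]*$`)

func validRuleName(name string) bool {
	return ruleNamePattern.MatchString(name)
}

func main() {
	fmt.Println(validRuleName("rule-1")) // accepted by the new pattern
	fmt.Println(validRuleName("rule_1")) // underscores are still rejected
}
```

The companion test change (`rule1` → `rule-1`) exercises exactly the character the old pattern rejected.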
diff --git a/azurerm/internal/services/storage/storage_management_policy_resource_test.go b/azurerm/internal/services/storage/storage_management_policy_resource_test.go
index 9216751878b3e..cb979938163f8 100644
--- a/azurerm/internal/services/storage/storage_management_policy_resource_test.go
+++ b/azurerm/internal/services/storage/storage_management_policy_resource_test.go
@@ -400,7 +400,7 @@ resource "azurerm_storage_management_policy" "test" {
storage_account_id = azurerm_storage_account.test.id
rule {
- name = "rule1"
+ name = "rule-1"
enabled = true
filters {
prefix_match = ["container1/prefix1"]
diff --git a/azurerm/internal/services/web/app_service.go b/azurerm/internal/services/web/app_service.go
index d5705a90103ce..9ed79fe9b1822 100644
--- a/azurerm/internal/services/web/app_service.go
+++ b/azurerm/internal/services/web/app_service.go
@@ -874,6 +874,7 @@ func schemaAppServiceIpRestriction() *schema.Schema {
}, false),
},
+ //lintignore:XS003
"headers": {
Type: schema.TypeList,
Optional: true,
diff --git a/azurerm/internal/services/web/client/client.go b/azurerm/internal/services/web/client/client.go
index 7bef029e92fa6..56c14da1e2893 100644
--- a/azurerm/internal/services/web/client/client.go
+++ b/azurerm/internal/services/web/client/client.go
@@ -12,6 +12,7 @@ type Client struct {
BaseClient *web.BaseClient
CertificatesClient *web.CertificatesClient
CertificatesOrderClient *web.AppServiceCertificateOrdersClient
+ StaticSitesClient *web.StaticSitesClient
}
func NewClient(o *common.ClientOptions) *Client {
@@ -33,6 +34,9 @@ func NewClient(o *common.ClientOptions) *Client {
certificatesOrderClient := web.NewAppServiceCertificateOrdersClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
o.ConfigureClient(&certificatesOrderClient.Client, o.ResourceManagerAuthorizer)
+ staticSitesClient := web.NewStaticSitesClientWithBaseURI(o.ResourceManagerEndpoint, o.SubscriptionId)
+ o.ConfigureClient(&staticSitesClient.Client, o.ResourceManagerAuthorizer)
+
return &Client{
AppServiceEnvironmentsClient: &appServiceEnvironmentsClient,
AppServicePlansClient: &appServicePlansClient,
@@ -40,5 +44,6 @@ func NewClient(o *common.ClientOptions) *Client {
BaseClient: &baseClient,
CertificatesClient: &certificatesClient,
CertificatesOrderClient: &certificatesOrderClient,
+ StaticSitesClient: &staticSitesClient,
}
}
diff --git a/azurerm/internal/services/web/function_app.go b/azurerm/internal/services/web/function_app.go
index 7f95fd4b3cd40..16bfc3905301b 100644
--- a/azurerm/internal/services/web/function_app.go
+++ b/azurerm/internal/services/web/function_app.go
@@ -70,7 +70,7 @@ func schemaAppServiceFunctionAppSiteConfig() *schema.Schema {
Type: schema.TypeInt,
Optional: true,
Computed: true,
- ValidateFunc: validation.IntBetween(0, 10),
+ ValidateFunc: validation.IntBetween(0, 20),
},
"scm_ip_restriction": schemaAppServiceIpRestriction(),
diff --git a/azurerm/internal/services/web/parse/static_site.go b/azurerm/internal/services/web/parse/static_site.go
new file mode 100644
index 0000000000000..be6e64a6f1204
--- /dev/null
+++ b/azurerm/internal/services/web/parse/static_site.go
@@ -0,0 +1,69 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+)
+
+type StaticSiteId struct {
+ SubscriptionId string
+ ResourceGroup string
+ Name string
+}
+
+func NewStaticSiteID(subscriptionId, resourceGroup, name string) StaticSiteId {
+ return StaticSiteId{
+ SubscriptionId: subscriptionId,
+ ResourceGroup: resourceGroup,
+ Name: name,
+ }
+}
+
+func (id StaticSiteId) String() string {
+ segments := []string{
+ fmt.Sprintf("Name %q", id.Name),
+ fmt.Sprintf("Resource Group %q", id.ResourceGroup),
+ }
+ segmentsStr := strings.Join(segments, " / ")
+ return fmt.Sprintf("%s: (%s)", "Static Site", segmentsStr)
+}
+
+func (id StaticSiteId) ID() string {
+ fmtString := "/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Web/staticSites/%s"
+ return fmt.Sprintf(fmtString, id.SubscriptionId, id.ResourceGroup, id.Name)
+}
+
+// StaticSiteID parses a StaticSite ID into an StaticSiteId struct
+func StaticSiteID(input string) (*StaticSiteId, error) {
+ id, err := azure.ParseAzureResourceID(input)
+ if err != nil {
+ return nil, err
+ }
+
+ resourceId := StaticSiteId{
+ SubscriptionId: id.SubscriptionID,
+ ResourceGroup: id.ResourceGroup,
+ }
+
+ if resourceId.SubscriptionId == "" {
+ return nil, fmt.Errorf("ID was missing the 'subscriptions' element")
+ }
+
+ if resourceId.ResourceGroup == "" {
+ return nil, fmt.Errorf("ID was missing the 'resourceGroups' element")
+ }
+
+ if resourceId.Name, err = id.PopSegment("staticSites"); err != nil {
+ return nil, err
+ }
+
+ if err := id.ValidateNoEmptySegments(input); err != nil {
+ return nil, err
+ }
+
+ return &resourceId, nil
+}
diff --git a/azurerm/internal/services/web/parse/static_site_test.go b/azurerm/internal/services/web/parse/static_site_test.go
new file mode 100644
index 0000000000000..9d4f7fb75ba90
--- /dev/null
+++ b/azurerm/internal/services/web/parse/static_site_test.go
@@ -0,0 +1,112 @@
+package parse
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/resourceid"
+)
+
+var _ resourceid.Formatter = StaticSiteId{}
+
+func TestStaticSiteIDFormatter(t *testing.T) {
+ actual := NewStaticSiteID("12345678-1234-9876-4563-123456789012", "group1", "my-static-site1").ID()
+ expected := "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.Web/staticSites/my-static-site1"
+ if actual != expected {
+ t.Fatalf("Expected %q but got %q", expected, actual)
+ }
+}
+
+func TestStaticSiteID(t *testing.T) {
+ testData := []struct {
+ Input string
+ Error bool
+ Expected *StaticSiteId
+ }{
+
+ {
+ // empty
+ Input: "",
+ Error: true,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Error: true,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Error: true,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/",
+ Error: true,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/",
+ Error: true,
+ },
+
+ {
+ // missing Name
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.Web/",
+ Error: true,
+ },
+
+ {
+ // missing value for Name
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.Web/staticSites/",
+ Error: true,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.Web/staticSites/my-static-site1",
+ Expected: &StaticSiteId{
+ SubscriptionId: "12345678-1234-9876-4563-123456789012",
+ ResourceGroup: "group1",
+ Name: "my-static-site1",
+ },
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/12345678-1234-9876-4563-123456789012/RESOURCEGROUPS/GROUP1/PROVIDERS/MICROSOFT.WEB/STATICSITES/MY-STATIC-SITE1",
+ Error: true,
+ },
+ }
+
+ for _, v := range testData {
+ t.Logf("[DEBUG] Testing %q", v.Input)
+
+ actual, err := StaticSiteID(v.Input)
+ if err != nil {
+ if v.Error {
+ continue
+ }
+
+ t.Fatalf("Expect a value but got an error: %s", err)
+ }
+ if v.Error {
+ t.Fatal("Expect an error but didn't get one")
+ }
+
+ if actual.SubscriptionId != v.Expected.SubscriptionId {
+ t.Fatalf("Expected %q but got %q for SubscriptionId", v.Expected.SubscriptionId, actual.SubscriptionId)
+ }
+ if actual.ResourceGroup != v.Expected.ResourceGroup {
+ t.Fatalf("Expected %q but got %q for ResourceGroup", v.Expected.ResourceGroup, actual.ResourceGroup)
+ }
+ if actual.Name != v.Expected.Name {
+ t.Fatalf("Expected %q but got %q for Name", v.Expected.Name, actual.Name)
+ }
+ }
+}
diff --git a/azurerm/internal/services/web/registration.go b/azurerm/internal/services/web/registration.go
index e3ae37c7e4957..0d18dcc70ebf7 100644
--- a/azurerm/internal/services/web/registration.go
+++ b/azurerm/internal/services/web/registration.go
@@ -51,6 +51,7 @@ func (r Registration) SupportedResources() map[string]*schema.Resource {
"azurerm_app_service": resourceAppService(),
"azurerm_function_app": resourceFunctionApp(),
"azurerm_function_app_slot": resourceFunctionAppSlot(),
+ "azurerm_static_site": resourceStaticSite(),
}
}
diff --git a/azurerm/internal/services/web/resourceids.go b/azurerm/internal/services/web/resourceids.go
index 734e806aaa7b0..aa1e9219d44db 100644
--- a/azurerm/internal/services/web/resourceids.go
+++ b/azurerm/internal/services/web/resourceids.go
@@ -12,4 +12,5 @@ package web
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=HybridConnection -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Web/sites/site1/hybridConnectionNamespaces/hybridConnectionNamespace1/relays/relay1
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=ManagedCertificate -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Web/certificates/customhost.contoso.com
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=SlotVirtualNetworkSwiftConnection -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Web/sites/site1/slots/slot1/config/virtualNetwork
+//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=StaticSite -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.Web/staticSites/my-static-site1
//go:generate go run ../../tools/generator-resource-id/main.go -path=./ -name=VirtualNetworkSwiftConnection -id=/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/resGroup1/providers/Microsoft.Web/sites/site1/config/virtualNetwork
diff --git a/azurerm/internal/services/web/static_site_resource.go b/azurerm/internal/services/web/static_site_resource.go
new file mode 100644
index 0000000000000..18a0a0233f7a3
--- /dev/null
+++ b/azurerm/internal/services/web/static_site_resource.go
@@ -0,0 +1,204 @@
+package web
+
+import (
+ "fmt"
+ "log"
+ "time"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/location"
+ azSchema "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/tf/schema"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/validation"
+
+ "github.com/Azure/azure-sdk-for-go/services/web/mgmt/2020-06-01/web"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/azure"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/helpers/tf"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/web/parse"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/web/validate"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/timeouts"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+func resourceStaticSite() *schema.Resource {
+ return &schema.Resource{
+ Create: resourceStaticSiteCreateOrUpdate,
+ Read: resourceStaticSiteRead,
+ Update: resourceStaticSiteCreateOrUpdate,
+ Delete: resourceStaticSiteDelete,
+ Importer: azSchema.ValidateResourceIDPriorToImport(func(id string) error {
+ _, err := parse.StaticSiteID(id)
+ return err
+ }),
+
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(30 * time.Minute),
+ Read: schema.DefaultTimeout(5 * time.Minute),
+ Update: schema.DefaultTimeout(30 * time.Minute),
+ Delete: schema.DefaultTimeout(30 * time.Minute),
+ },
+
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ ValidateFunc: validate.StaticSiteName,
+ },
+
+ "resource_group_name": azure.SchemaResourceGroupName(),
+
+ "location": azure.SchemaLocation(),
+
+ "sku_tier": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "Free",
+ ValidateFunc: validation.StringInSlice([]string{"Free"}, false),
+ },
+
+ "sku_size": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "Free",
+ ValidateFunc: validation.StringInSlice([]string{"Free"}, false),
+ },
+
+ "default_host_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+
+ "api_key": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ },
+ }
+}
+
+func resourceStaticSiteCreateOrUpdate(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).Web.StaticSitesClient
+ subscriptionId := meta.(*clients.Client).Account.SubscriptionId
+ ctx, cancel := timeouts.ForCreate(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ log.Printf("[INFO] preparing arguments for AzureRM Static Site creation.")
+
+ id := parse.NewStaticSiteID(subscriptionId, d.Get("resource_group_name").(string), d.Get("name").(string))
+
+ if d.IsNewResource() {
+ existing, err := client.GetStaticSite(ctx, id.ResourceGroup, id.Name)
+ if err != nil {
+ if !utils.ResponseWasNotFound(existing.Response) {
+ return fmt.Errorf("failed checking for presence of existing %s: %+v", id, err)
+ }
+ }
+
+ if existing.ID != nil && *existing.ID != "" {
+ return tf.ImportAsExistsError("azurerm_static_site", id.ID())
+ }
+ }
+
+ loc := location.Normalize(d.Get("location").(string))
+
+ siteEnvelope := web.StaticSiteARMResource{
+ Sku: &web.SkuDescription{
+ Name: utils.String(d.Get("sku_size").(string)),
+ Tier: utils.String(d.Get("sku_tier").(string)),
+ },
+ StaticSite: &web.StaticSite{},
+ Location: &loc,
+ }
+
+ if _, err := client.CreateOrUpdateStaticSite(ctx, id.ResourceGroup, id.Name, siteEnvelope); err != nil {
+ return fmt.Errorf("failed creating %s: %+v", id, err)
+ }
+
+ d.SetId(id.ID())
+
+ return resourceStaticSiteRead(d, meta)
+}
+
+func resourceStaticSiteRead(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).Web.StaticSitesClient
+ ctx, cancel := timeouts.ForRead(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id, err := parse.StaticSiteID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ resp, err := client.GetStaticSite(ctx, id.ResourceGroup, id.Name)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ log.Printf("[DEBUG] Static Site %q (resource group %q) was not found - removing from state", id.Name, id.ResourceGroup)
+ d.SetId("")
+ return nil
+ }
+ return fmt.Errorf("failed making Read request on %s: %+v", id, err)
+ }
+ d.Set("name", id.Name)
+ d.Set("resource_group_name", id.ResourceGroup)
+
+ d.Set("location", location.NormalizeNilable(resp.Location))
+
+ if prop := resp.StaticSite; prop != nil {
+ defaultHostname := ""
+ if prop.DefaultHostname != nil {
+ defaultHostname = *prop.DefaultHostname
+ }
+ d.Set("default_host_name", defaultHostname)
+ }
+
+ skuName := ""
+ skuTier := ""
+ if sku := resp.Sku; sku != nil {
+ if v := sku.Name; v != nil {
+ skuName = *v
+ }
+
+ if v := sku.Tier; v != nil {
+ skuTier = *v
+ }
+ }
+ d.Set("sku_size", skuName)
+ d.Set("sku_tier", skuTier)
+
+ secretResp, err := client.ListStaticSiteSecrets(ctx, id.ResourceGroup, id.Name)
+ if err != nil {
+ return fmt.Errorf("listing secrets for %s: %v", id, err)
+ }
+
+ apiKey := ""
+ if pkey := secretResp.Properties["apiKey"]; pkey != nil {
+ apiKey = *pkey
+ }
+ d.Set("api_key", apiKey)
+
+ return nil
+}
+
+func resourceStaticSiteDelete(d *schema.ResourceData, meta interface{}) error {
+ client := meta.(*clients.Client).Web.StaticSitesClient
+ ctx, cancel := timeouts.ForDelete(meta.(*clients.Client).StopContext, d)
+ defer cancel()
+
+ id, err := parse.StaticSiteID(d.Id())
+ if err != nil {
+ return err
+ }
+
+ log.Printf("[DEBUG] Deleting Static Site %q (resource group %q)", id.Name, id.ResourceGroup)
+
+ resp, err := client.DeleteStaticSite(ctx, id.ResourceGroup, id.Name)
+ if err != nil {
+ if !utils.ResponseWasNotFound(resp) {
+ return err
+ }
+ }
+
+ return nil
+}
diff --git a/azurerm/internal/services/web/static_site_resource_test.go b/azurerm/internal/services/web/static_site_resource_test.go
new file mode 100644
index 0000000000000..a6839a4722b9f
--- /dev/null
+++ b/azurerm/internal/services/web/static_site_resource_test.go
@@ -0,0 +1,100 @@
+package web_test
+
+import (
+ "context"
+ "fmt"
+ "testing"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/web/parse"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance/check"
+
+ "github.com/hashicorp/terraform-plugin-sdk/helper/resource"
+ "github.com/hashicorp/terraform-plugin-sdk/terraform"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/acceptance"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/clients"
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/utils"
+)
+
+type StaticSiteResource struct{}
+
+func TestAccAzureStaticSite_basic(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_static_site", "test")
+ r := StaticSiteResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ check.That(data.ResourceName).Key("default_host_name").Exists(),
+ check.That(data.ResourceName).Key("api_key").Exists(),
+ ),
+ },
+ data.ImportStep(),
+ })
+}
+
+func TestAccAzureStaticSite_requiresImport(t *testing.T) {
+ data := acceptance.BuildTestData(t, "azurerm_static_site", "test")
+ r := StaticSiteResource{}
+
+ data.ResourceTest(t, r, []resource.TestStep{
+ {
+ Config: r.basic(data),
+ Check: resource.ComposeTestCheckFunc(
+ check.That(data.ResourceName).ExistsInAzure(r),
+ ),
+ },
+ data.RequiresImportErrorStep(r.requiresImport),
+ })
+}
+
+func (r StaticSiteResource) Exists(ctx context.Context, clients *clients.Client, state *terraform.InstanceState) (*bool, error) {
+ id, err := parse.StaticSiteID(state.ID)
+ if err != nil {
+ return nil, err
+ }
+
+ resp, err := clients.Web.StaticSitesClient.GetStaticSite(ctx, id.ResourceGroup, id.Name)
+ if err != nil {
+ if utils.ResponseWasNotFound(resp.Response) {
+ return utils.Bool(false), nil
+ }
+ return nil, fmt.Errorf("retrieving Static Site %q: %+v", id, err)
+ }
+
+ return utils.Bool(true), nil
+}
+
+func (r StaticSiteResource) basic(data acceptance.TestData) string {
+ return fmt.Sprintf(`
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "acctestRG-%d"
+ location = "%s"
+}
+
+resource "azurerm_static_site" "test" {
+ name = "acctestSS-%d"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+}
+`, data.RandomInteger, data.Locations.Primary, data.RandomInteger)
+}
+
+func (r StaticSiteResource) requiresImport(data acceptance.TestData) string {
+ template := r.basic(data)
+ return fmt.Sprintf(`
+%s
+
+resource "azurerm_static_site" "import" {
+ name = azurerm_static_site.test.name
+ location = azurerm_static_site.test.location
+ resource_group_name = azurerm_static_site.test.resource_group_name
+}
+`, template)
+}
diff --git a/azurerm/internal/services/web/validate/static_site.go b/azurerm/internal/services/web/validate/static_site.go
new file mode 100644
index 0000000000000..19bbe2b9a4e24
--- /dev/null
+++ b/azurerm/internal/services/web/validate/static_site.go
@@ -0,0 +1,16 @@
+package validate
+
+import (
+ "fmt"
+ "regexp"
+)
+
+func StaticSiteName(v interface{}, k string) (warnings []string, errors []error) {
+ value := v.(string)
+
+ if matched := regexp.MustCompile(`^[0-9a-zA-Z-]{1,60}$`).Match([]byte(value)); !matched {
+ errors = append(errors, fmt.Errorf("%q may only contain alphanumeric characters and dashes, and must be between 1 and 60 characters in length", k))
+ }
+
+ return warnings, errors
+}
diff --git a/azurerm/internal/services/web/validate/static_site_id.go b/azurerm/internal/services/web/validate/static_site_id.go
new file mode 100644
index 0000000000000..cfcfcfcad9e21
--- /dev/null
+++ b/azurerm/internal/services/web/validate/static_site_id.go
@@ -0,0 +1,23 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import (
+ "fmt"
+
+ "github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/web/parse"
+)
+
+func StaticSiteID(input interface{}, key string) (warnings []string, errors []error) {
+ v, ok := input.(string)
+ if !ok {
+ errors = append(errors, fmt.Errorf("expected %q to be a string", key))
+ return
+ }
+
+ if _, err := parse.StaticSiteID(v); err != nil {
+ errors = append(errors, err)
+ }
+
+ return
+}
diff --git a/azurerm/internal/services/web/validate/static_site_id_test.go b/azurerm/internal/services/web/validate/static_site_id_test.go
new file mode 100644
index 0000000000000..73dce4065710f
--- /dev/null
+++ b/azurerm/internal/services/web/validate/static_site_id_test.go
@@ -0,0 +1,76 @@
+package validate
+
+// NOTE: this file is generated via 'go:generate' - manual changes will be overwritten
+
+import "testing"
+
+func TestStaticSiteID(t *testing.T) {
+ cases := []struct {
+ Input string
+ Valid bool
+ }{
+
+ {
+ // empty
+ Input: "",
+ Valid: false,
+ },
+
+ {
+ // missing SubscriptionId
+ Input: "/",
+ Valid: false,
+ },
+
+ {
+ // missing value for SubscriptionId
+ Input: "/subscriptions/",
+ Valid: false,
+ },
+
+ {
+ // missing ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/",
+ Valid: false,
+ },
+
+ {
+ // missing value for ResourceGroup
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/",
+ Valid: false,
+ },
+
+ {
+ // missing Name
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.Web/",
+ Valid: false,
+ },
+
+ {
+ // missing value for Name
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.Web/staticSites/",
+ Valid: false,
+ },
+
+ {
+ // valid
+ Input: "/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/group1/providers/Microsoft.Web/staticSites/my-static-site1",
+ Valid: true,
+ },
+
+ {
+ // upper-cased
+ Input: "/SUBSCRIPTIONS/12345678-1234-9876-4563-123456789012/RESOURCEGROUPS/GROUP1/PROVIDERS/MICROSOFT.WEB/STATICSITES/MY-STATIC-SITE1",
+ Valid: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Logf("[DEBUG] Testing Value %s", tc.Input)
+ _, errors := StaticSiteID(tc.Input, "test")
+ valid := len(errors) == 0
+
+ if tc.Valid != valid {
+ t.Fatalf("Expected %t but got %t", tc.Valid, valid)
+ }
+ }
+}
diff --git a/azurerm/internal/tf/pluginsdk/state_upgrades.go b/azurerm/internal/tf/pluginsdk/state_upgrades.go
index 2fffe76392179..fb943971cbed1 100644
--- a/azurerm/internal/tf/pluginsdk/state_upgrades.go
+++ b/azurerm/internal/tf/pluginsdk/state_upgrades.go
@@ -28,7 +28,7 @@ type StateUpgrade interface {
// PR's and attempts to make this interface a little less verbose.
func StateUpgrades(upgrades map[int]StateUpgrade) []StateUpgrader {
versions := make([]int, 0)
- for version := range versions {
+ for version := range upgrades {
versions = append(versions, version)
}
sort.Ints(versions)
diff --git a/examples/web/static-site/azure-static-web-app.tpl b/examples/web/static-site/azure-static-web-app.tpl
new file mode 100644
index 0000000000000..1783e85fd3dfd
--- /dev/null
+++ b/examples/web/static-site/azure-static-web-app.tpl
@@ -0,0 +1,48 @@
+name: Azure Static Web Apps CI/CD
+
+on:
+ push:
+ branches:
+ - main
+ pull_request:
+ types: [opened, synchronize, reopened, closed]
+ branches:
+ - main
+
+jobs:
+ build_and_deploy_job:
+ if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
+ runs-on: ubuntu-latest
+ name: Build and Deploy Job
+ steps:
+ - uses: actions/checkout@v2
+ with:
+ submodules: true
+ - name: setup vue environment file
+ run: |
+ echo "VUE_APP_NOT_SECRET_CODE=some_value" > $GITHUB_WORKSPACE/.env
+ - name: Build And Deploy
+ id: builddeploy
+ uses: Azure/static-web-apps-deploy@v0.0.1-preview
+ with:
+ azure_static_web_apps_api_token: $${{ secrets.${ api_token_var } }}
+ repo_token: $${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (e.g. PR comments)
+ action: "upload"
+ ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
+ # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
+ app_location: "${ app_location }" # App source code path
+ api_location: "${ api_location }" # Api source code path - optional
+ output_location: "${ output_location }" # Built app content directory - optional
+ ###### End of Repository/Build Configurations ######
+
+ close_pull_request_job:
+ if: github.event_name == 'pull_request' && github.event.action == 'closed'
+ runs-on: ubuntu-latest
+ name: Close Pull Request Job
+ steps:
+ - name: Close Pull Request
+ id: closepullrequest
+ uses: Azure/static-web-apps-deploy@v0.0.1-preview
+ with:
+ azure_static_web_apps_api_token: $${{ secrets.${ api_token_var } }}
+ action: "close"
diff --git a/examples/web/static-site/main.tf b/examples/web/static-site/main.tf
new file mode 100644
index 0000000000000..16181f82b05ee
--- /dev/null
+++ b/examples/web/static-site/main.tf
@@ -0,0 +1,51 @@
+locals {
+ api_token_var = "AZURE_STATIC_WEB_APPS_API_TOKEN"
+}
+
+variable "github_token" {}
+variable "github_owner" {}
+
+provider "azurerm" {
+ features {}
+}
+
+output "hostname" {
+ value = azurerm_static_site.test.default_host_name
+}
+
+provider "github" {
+ token = var.github_token
+ owner = var.github_owner
+}
+
+resource "azurerm_resource_group" "test" {
+ name = "example"
+ location = "west europe"
+}
+
+resource "azurerm_static_site" "test" {
+ name = "example"
+ location = azurerm_resource_group.test.location
+ resource_group_name = azurerm_resource_group.test.name
+}
+
+resource "github_actions_secret" "test" {
+ repository = "my-first-static-web-app"
+ secret_name = local.api_token_var
+ plaintext_value = azurerm_static_site.test.api_key
+}
+
+# This will cause the github provider to crash until https://github.com/integrations/terraform-provider-github/pull/732 is merged.
+resource "github_repository_file" "foo" {
+ repository = "my-first-static-web-app"
+ branch = "main"
+ file = ".github/workflows/azure-static-web-app.yml"
+ content = templatefile("./azure-static-web-app.tpl",
+ {
+ app_location = "/"
+ api_location = "api"
+ output_location = ""
+ api_token_var = local.api_token_var
+ }
+ )
+}
diff --git a/go.mod b/go.mod
index a574344c7aef5..d47d65a9b7b4c 100644
--- a/go.mod
+++ b/go.mod
@@ -1,7 +1,7 @@
module github.com/terraform-providers/terraform-provider-azurerm
require (
- github.com/Azure/azure-sdk-for-go v54.0.0+incompatible
+ github.com/Azure/azure-sdk-for-go v54.2.0+incompatible
github.com/Azure/go-autorest/autorest v0.11.18
github.com/Azure/go-autorest/autorest/date v0.3.0
github.com/Azure/go-autorest/autorest/validation v0.3.1
@@ -13,12 +13,14 @@ require (
github.com/hashicorp/go-azure-helpers v0.15.0
github.com/hashicorp/go-getter v1.5.3
github.com/hashicorp/go-multierror v1.0.0
+ github.com/hashicorp/go-plugin v1.4.0 // indirect
github.com/hashicorp/go-uuid v1.0.1
github.com/hashicorp/go-version v1.3.0
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/hashicorp/terraform-plugin-sdk v1.17.2
github.com/rickb777/date v1.12.5-0.20200422084442-6300e543c4d9
github.com/sergi/go-diff v1.2.0
+ github.com/shopspring/decimal v1.2.0
github.com/terraform-providers/terraform-provider-azuread v0.9.0
github.com/tombuildsstuff/giovanni v0.15.1
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2
diff --git a/go.sum b/go.sum
index 49d24e3adf4e8..b2a0f36b0cbe7 100644
--- a/go.sum
+++ b/go.sum
@@ -38,8 +38,8 @@ github.com/Azure/azure-sdk-for-go v42.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9mo
github.com/Azure/azure-sdk-for-go v45.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v47.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v51.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
-github.com/Azure/azure-sdk-for-go v54.0.0+incompatible h1:Bq3L9LF0DHCexlT0fccwxgrOMfjHx8LGz+d+L7gGQv4=
-github.com/Azure/azure-sdk-for-go v54.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go v54.2.0+incompatible h1:LYKBbC9PubUJnrkLZttkPmtOPNEQDhtzTjw114FJKBQ=
+github.com/Azure/azure-sdk-for-go v54.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
@@ -245,13 +245,15 @@ github.com/hashicorp/go-getter v1.4.0/go.mod h1:7qxyCd8rBfcShwsvxgIguu4KbS3l8bUC
github.com/hashicorp/go-getter v1.5.3 h1:NF5+zOlQegim+w/EUhSLh6QhXHmZMEeHLQzllkQ3ROU=
github.com/hashicorp/go-getter v1.5.3/go.mod h1:BrrV/1clo8cCYu6mxvboYg+KutTiFnXjMEgDD8+i7ZI=
github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI=
-github.com/hashicorp/go-hclog v0.9.2 h1:CG6TE5H9/JXsFWJCfoIVpKFIkFe6ysEuHirp4DxCsHI=
github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
+github.com/hashicorp/go-hclog v0.14.1 h1:nQcJDQwIAGnmoUWp8ubocEX40cCml/17YkF6csQLReU=
+github.com/hashicorp/go-hclog v0.14.1/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-plugin v1.0.1/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY=
-github.com/hashicorp/go-plugin v1.3.0 h1:4d/wJojzvHV1I4i/rrjVaeuyxWrLzDE1mDCyDy8fXS8=
github.com/hashicorp/go-plugin v1.3.0/go.mod h1:F9eH4LrE/ZsRdbwhfjs9k9HoDUwAHnYtXdgmf1AVNs0=
+github.com/hashicorp/go-plugin v1.4.0 h1:b0O7rs5uiJ99Iu9HugEzsM67afboErkHUWddUSpUO3A=
+github.com/hashicorp/go-plugin v1.4.0/go.mod h1:5fGEH17QVwTTcR0zV7yhDPLLmFX9YSZ38b18Udy6vYQ=
github.com/hashicorp/go-safetemp v1.0.0 h1:2HR189eFNrjHQyENnQMMpCiBAsRxzbTMIgBhEyExpmo=
github.com/hashicorp/go-safetemp v1.0.0/go.mod h1:oaerMy3BhqiTbVye6QuFhFtIceqFoDHxNAB65b+Rj1I=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
@@ -328,12 +330,15 @@ github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LE
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
-github.com/mattn/go-colorable v0.1.1 h1:G1f5SKeVxmagw/IyvzvtZE4Gybcc4Tr1tf7I8z0XgOg=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
+github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA=
+github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
-github.com/mattn/go-isatty v0.0.5 h1:tHXDdz1cpzGaovsTB+TVB8q90WEokoVmfMqoVcrLUgw=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
+github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
+github.com/mattn/go-isatty v0.0.10 h1:qxFzApOv4WsAL965uUPIsXzAKCZxN2p9UqdhFS4ZW10=
+github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/goveralls v0.0.2/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
@@ -379,6 +384,8 @@ github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAm
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ=
github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
+github.com/shopspring/decimal v1.2.0 h1:abSATXmQEYyShuxI4/vyW3tV1MrKAJzCZ/0zLUXYbsQ=
+github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
@@ -542,6 +549,7 @@ golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190804053845-51ab0e2deafa/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/CHANGELOG.md b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/CHANGELOG.md
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/CHANGELOG.md
rename to vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/CHANGELOG.md
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/_meta.json b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/_meta.json
new file mode 100644
index 0000000000000..980b2079d528b
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/_meta.json
@@ -0,0 +1,11 @@
+{
+ "commit": "3c764635e7d442b3e74caf593029fcd440b3ef82",
+ "readme": "/_/azure-rest-api-specs/specification/azureactivedirectory/resource-manager/readme.md",
+ "tag": "package-2017-04-01",
+ "use": "@microsoft.azure/autorest.go@2.1.180",
+ "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
+ "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2017-04-01 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/azureactivedirectory/resource-manager/readme.md",
+ "additional_properties": {
+ "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+ }
+}
\ No newline at end of file
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/client.go
new file mode 100644
index 0000000000000..2b6102b8e6f94
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/client.go
@@ -0,0 +1,39 @@
+// Package aad implements the Azure ARM Aad service API version 2017-04-01.
+//
+// Azure Active Directory Client.
+package aad
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "github.com/Azure/go-autorest/autorest"
+)
+
+const (
+ // DefaultBaseURI is the default URI used for the service Aad
+ DefaultBaseURI = "https://management.azure.com"
+)
+
+// BaseClient is the base client for Aad.
+type BaseClient struct {
+ autorest.Client
+ BaseURI string
+}
+
+// New creates an instance of the BaseClient client.
+func New() BaseClient {
+ return NewWithBaseURI(DefaultBaseURI)
+}
+
+// NewWithBaseURI creates an instance of the BaseClient client using a custom endpoint. Use this when interacting with
+// an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewWithBaseURI(baseURI string) BaseClient {
+ return BaseClient{
+ Client: autorest.NewClientWithUserAgent(UserAgent()),
+ BaseURI: baseURI,
+ }
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/diagnosticsettings.go b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/diagnosticsettings.go
new file mode 100644
index 0000000000000..f9eaeec61a174
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/diagnosticsettings.go
@@ -0,0 +1,320 @@
+package aad
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// DiagnosticSettingsClient is the azure Active Directory Client.
+type DiagnosticSettingsClient struct {
+ BaseClient
+}
+
+// NewDiagnosticSettingsClient creates an instance of the DiagnosticSettingsClient client.
+func NewDiagnosticSettingsClient() DiagnosticSettingsClient {
+ return NewDiagnosticSettingsClientWithBaseURI(DefaultBaseURI)
+}
+
+// NewDiagnosticSettingsClientWithBaseURI creates an instance of the DiagnosticSettingsClient client using a custom
+// endpoint. Use this when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure
+// stack).
+func NewDiagnosticSettingsClientWithBaseURI(baseURI string) DiagnosticSettingsClient {
+ return DiagnosticSettingsClient{NewWithBaseURI(baseURI)}
+}
+
+// CreateOrUpdate creates or updates diagnostic settings for AadIam.
+// Parameters:
+// parameters - parameters supplied to the operation.
+// name - the name of the diagnostic setting.
+func (client DiagnosticSettingsClient) CreateOrUpdate(ctx context.Context, parameters DiagnosticSettingsResource, name string) (result DiagnosticSettingsResource, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/DiagnosticSettingsClient.CreateOrUpdate")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.CreateOrUpdatePreparer(ctx, parameters, name)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "CreateOrUpdate", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.CreateOrUpdateSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "CreateOrUpdate", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.CreateOrUpdateResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "CreateOrUpdate", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// CreateOrUpdatePreparer prepares the CreateOrUpdate request.
+func (client DiagnosticSettingsClient) CreateOrUpdatePreparer(ctx context.Context, parameters DiagnosticSettingsResource, name string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "name": autorest.Encode("path", name),
+ }
+
+ const APIVersion = "2017-04-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsContentType("application/json; charset=utf-8"),
+ autorest.AsPut(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/microsoft.aadiam/diagnosticSettings/{name}", pathParameters),
+ autorest.WithJSON(parameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the
+// http.Response Body if it receives an error.
+func (client DiagnosticSettingsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always
+// closes the http.Response Body.
+func (client DiagnosticSettingsClient) CreateOrUpdateResponder(resp *http.Response) (result DiagnosticSettingsResource, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// Delete deletes existing diagnostic setting for AadIam.
+// Parameters:
+// name - the name of the diagnostic setting.
+func (client DiagnosticSettingsClient) Delete(ctx context.Context, name string) (result autorest.Response, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/DiagnosticSettingsClient.Delete")
+ defer func() {
+ sc := -1
+ if result.Response != nil {
+ sc = result.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.DeletePreparer(ctx, name)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "Delete", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.DeleteSender(req)
+ if err != nil {
+ result.Response = resp
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "Delete", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.DeleteResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "Delete", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// DeletePreparer prepares the Delete request.
+func (client DiagnosticSettingsClient) DeletePreparer(ctx context.Context, name string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "name": autorest.Encode("path", name),
+ }
+
+ const APIVersion = "2017-04-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsDelete(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/microsoft.aadiam/diagnosticSettings/{name}", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// DeleteSender sends the Delete request. The method will close the
+// http.Response Body if it receives an error.
+func (client DiagnosticSettingsClient) DeleteSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// DeleteResponder handles the response to the Delete request. The method always
+// closes the http.Response Body.
+func (client DiagnosticSettingsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent),
+ autorest.ByClosing())
+ result.Response = resp
+ return
+}
+
+// Get gets the active diagnostic setting for AadIam.
+// Parameters:
+// name - the name of the diagnostic setting.
+func (client DiagnosticSettingsClient) Get(ctx context.Context, name string) (result DiagnosticSettingsResource, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/DiagnosticSettingsClient.Get")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetPreparer(ctx, name)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "Get", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "Get", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "Get", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetPreparer prepares the Get request.
+func (client DiagnosticSettingsClient) GetPreparer(ctx context.Context, name string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "name": autorest.Encode("path", name),
+ }
+
+ const APIVersion = "2017-04-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/microsoft.aadiam/diagnosticSettings/{name}", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client DiagnosticSettingsClient) GetSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetResponder handles the response to the Get request. The method always
+// closes the http.Response Body.
+func (client DiagnosticSettingsClient) GetResponder(resp *http.Response) (result DiagnosticSettingsResource, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// List gets the active diagnostic settings list for AadIam.
+func (client DiagnosticSettingsClient) List(ctx context.Context) (result DiagnosticSettingsResourceCollection, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/DiagnosticSettingsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.ListPreparer(ctx)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsClient", "List", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client DiagnosticSettingsClient) ListPreparer(ctx context.Context) (*http.Request, error) {
+ const APIVersion = "2017-04-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPath("/providers/microsoft.aadiam/diagnosticSettings"),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client DiagnosticSettingsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client DiagnosticSettingsClient) ListResponder(resp *http.Response) (result DiagnosticSettingsResourceCollection, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/diagnosticsettingscategory.go b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/diagnosticsettingscategory.go
new file mode 100644
index 0000000000000..d26bd7b59074e
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/diagnosticsettingscategory.go
@@ -0,0 +1,99 @@
+package aad
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// DiagnosticSettingsCategoryClient is the azure Active Directory Client.
+type DiagnosticSettingsCategoryClient struct {
+ BaseClient
+}
+
+// NewDiagnosticSettingsCategoryClient creates an instance of the DiagnosticSettingsCategoryClient client.
+func NewDiagnosticSettingsCategoryClient() DiagnosticSettingsCategoryClient {
+ return NewDiagnosticSettingsCategoryClientWithBaseURI(DefaultBaseURI)
+}
+
+// NewDiagnosticSettingsCategoryClientWithBaseURI creates an instance of the DiagnosticSettingsCategoryClient client
+// using a custom endpoint. Use this when interacting with an Azure cloud that uses a non-standard base URI (sovereign
+// clouds, Azure stack).
+func NewDiagnosticSettingsCategoryClientWithBaseURI(baseURI string) DiagnosticSettingsCategoryClient {
+ return DiagnosticSettingsCategoryClient{NewWithBaseURI(baseURI)}
+}
+
+// List lists the diagnostic settings categories for AadIam.
+func (client DiagnosticSettingsCategoryClient) List(ctx context.Context) (result DiagnosticSettingsCategoryResourceCollection, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/DiagnosticSettingsCategoryClient.List")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.ListPreparer(ctx)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsCategoryClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsCategoryClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.DiagnosticSettingsCategoryClient", "List", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client DiagnosticSettingsCategoryClient) ListPreparer(ctx context.Context) (*http.Request, error) {
+ const APIVersion = "2017-04-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPath("/providers/microsoft.aadiam/diagnosticSettingsCategories"),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client DiagnosticSettingsCategoryClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client DiagnosticSettingsCategoryClient) ListResponder(resp *http.Response) (result DiagnosticSettingsCategoryResourceCollection, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/enums.go
new file mode 100644
index 0000000000000..45f1e678df713
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/enums.go
@@ -0,0 +1,35 @@
+package aad
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+// Category enumerates the values for category.
+type Category string
+
+const (
+ // AuditLogs ...
+ AuditLogs Category = "AuditLogs"
+ // SignInLogs ...
+ SignInLogs Category = "SignInLogs"
+)
+
+// PossibleCategoryValues returns an array of possible values for the Category const type.
+func PossibleCategoryValues() []Category {
+ return []Category{AuditLogs, SignInLogs}
+}
+
+// CategoryType enumerates the values for category type.
+type CategoryType string
+
+const (
+ // Logs ...
+ Logs CategoryType = "Logs"
+)
+
+// PossibleCategoryTypeValues returns an array of possible values for the CategoryType const type.
+func PossibleCategoryTypeValues() []CategoryType {
+ return []CategoryType{Logs}
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/models.go
new file mode 100644
index 0000000000000..84c4441e84d5e
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/models.go
@@ -0,0 +1,276 @@
+package aad
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "encoding/json"
+ "github.com/Azure/go-autorest/autorest"
+)
+
+// The package's fully qualified name.
+const fqdn = "github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad"
+
+// DiagnosticSettings the diagnostic settings.
+type DiagnosticSettings struct {
+ // StorageAccountID - The resource ID of the storage account to which you would like to send Diagnostic Logs.
+ StorageAccountID *string `json:"storageAccountId,omitempty"`
+ // ServiceBusRuleID - The service bus rule Id of the diagnostic setting. This is here to maintain backwards compatibility.
+ ServiceBusRuleID *string `json:"serviceBusRuleId,omitempty"`
+ // WorkspaceID - The workspace ID (resource ID of a Log Analytics workspace) for a Log Analytics workspace to which you would like to send Diagnostic Logs. Example: /subscriptions/4b9e8510-67ab-4e9a-95a9-e2f1e570ea9c/resourceGroups/insights-integration/providers/Microsoft.OperationalInsights/workspaces/viruela2
+ WorkspaceID *string `json:"workspaceId,omitempty"`
+ // EventHubAuthorizationRuleID - The resource Id for the event hub authorization rule.
+ EventHubAuthorizationRuleID *string `json:"eventHubAuthorizationRuleId,omitempty"`
+ // EventHubName - The name of the event hub. If none is specified, the default event hub will be selected.
+ EventHubName *string `json:"eventHubName,omitempty"`
+ // Logs - The list of logs settings.
+ Logs *[]LogSettings `json:"logs,omitempty"`
+}
+
+// DiagnosticSettingsCategory the diagnostic settings Category.
+type DiagnosticSettingsCategory struct {
+ // CategoryType - The type of the diagnostic settings category. Possible values include: 'Logs'
+ CategoryType CategoryType `json:"categoryType,omitempty"`
+}
+
+// DiagnosticSettingsCategoryResource the diagnostic settings category resource.
+type DiagnosticSettingsCategoryResource struct {
+ // DiagnosticSettingsCategory - The properties of a Diagnostic Settings Category.
+ *DiagnosticSettingsCategory `json:"properties,omitempty"`
+ // ID - READ-ONLY; Azure resource Id
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Azure resource name
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Azure resource type
+ Type *string `json:"type,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for DiagnosticSettingsCategoryResource.
+func (dscr DiagnosticSettingsCategoryResource) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if dscr.DiagnosticSettingsCategory != nil {
+ objectMap["properties"] = dscr.DiagnosticSettingsCategory
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for DiagnosticSettingsCategoryResource struct.
+func (dscr *DiagnosticSettingsCategoryResource) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var diagnosticSettingsCategory DiagnosticSettingsCategory
+ err = json.Unmarshal(*v, &diagnosticSettingsCategory)
+ if err != nil {
+ return err
+ }
+ dscr.DiagnosticSettingsCategory = &diagnosticSettingsCategory
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ dscr.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ dscr.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ dscr.Type = &typeVar
+ }
+ }
+ }
+
+ return nil
+}
+
+// DiagnosticSettingsCategoryResourceCollection represents a collection of diagnostic setting category
+// resources.
+type DiagnosticSettingsCategoryResourceCollection struct {
+ autorest.Response `json:"-"`
+ // Value - The collection of diagnostic settings category resources.
+ Value *[]DiagnosticSettingsCategoryResource `json:"value,omitempty"`
+}
+
+// DiagnosticSettingsResource the diagnostic setting resource.
+type DiagnosticSettingsResource struct {
+ autorest.Response `json:"-"`
+ // DiagnosticSettings - Properties of a Diagnostic Settings Resource.
+ *DiagnosticSettings `json:"properties,omitempty"`
+ // ID - READ-ONLY; Azure resource Id
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Azure resource name
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Azure resource type
+ Type *string `json:"type,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for DiagnosticSettingsResource.
+func (dsr DiagnosticSettingsResource) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if dsr.DiagnosticSettings != nil {
+ objectMap["properties"] = dsr.DiagnosticSettings
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for DiagnosticSettingsResource struct.
+func (dsr *DiagnosticSettingsResource) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var diagnosticSettings DiagnosticSettings
+ err = json.Unmarshal(*v, &diagnosticSettings)
+ if err != nil {
+ return err
+ }
+ dsr.DiagnosticSettings = &diagnosticSettings
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ dsr.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ dsr.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ dsr.Type = &typeVar
+ }
+ }
+ }
+
+ return nil
+}
+
+// DiagnosticSettingsResourceCollection represents a collection of diagnostic settings resources.
+type DiagnosticSettingsResourceCollection struct {
+ autorest.Response `json:"-"`
+ // Value - The collection of diagnostic settings resources.
+ Value *[]DiagnosticSettingsResource `json:"value,omitempty"`
+}
+
+// Display contains the localized display information for this particular operation / action. These values
+// will be used by several clients for (1) custom role definitions for RBAC; (2) complex query filters for
+// the event service; and (3) audit history / records for management operations.
+type Display struct {
+ // Publisher - The publisher. The localized friendly form of the resource publisher name.
+ Publisher *string `json:"publisher,omitempty"`
+ // Provider - The provider. The localized friendly form of the resource provider name – it is expected to also include the publisher/company responsible. It should use Title Casing and begin with "Microsoft" for 1st party services. e.g. "Microsoft Monitoring Insights" or "Microsoft Compute."
+ Provider *string `json:"provider,omitempty"`
+ // Resource - The resource. The localized friendly form of the resource related to this action/operation – it should match the public documentation for the resource provider. It should use Title Casing. This value should be unique for a particular URL type (e.g. nested types should *not* reuse their parent’s display.resource field). e.g. "Virtual Machines" or "Scheduler Job Collections", or "Virtual Machine VM Sizes" or "Scheduler Jobs"
+ Resource *string `json:"resource,omitempty"`
+ // Operation - The operation. The localized friendly name for the operation, as it should be shown to the user. It should be concise (to fit in drop downs) but clear (i.e. self-documenting). It should use Title Casing. Prescriptive guidance: Read Create or Update Delete 'ActionName'
+ Operation *string `json:"operation,omitempty"`
+ // Description - The description. The localized friendly description for the operation, as it should be shown to the user. It should be thorough, yet concise – it will be used in tool tips and detailed views. Prescriptive guidance for namespaces: Read any 'display.provider' resource Create or Update any 'display.provider' resource Delete any 'display.provider' resource Perform any other action on any 'display.provider' resource Prescriptive guidance for namespaces: Read any 'display.resource' Create or Update any 'display.resource' Delete any 'display.resource' 'ActionName' any 'display.resources'
+ Description *string `json:"description,omitempty"`
+}
+
+// ErrorDefinition error definition.
+type ErrorDefinition struct {
+ // Code - READ-ONLY; Service specific error code which serves as the substatus for the HTTP error code.
+ Code *string `json:"code,omitempty"`
+ // Message - READ-ONLY; Description of the error.
+ Message *string `json:"message,omitempty"`
+ // Details - READ-ONLY; Internal error details.
+ Details *[]ErrorDefinition `json:"details,omitempty"`
+}
+
+// ErrorResponse error response.
+type ErrorResponse struct {
+ // Error - The error details.
+ Error *ErrorDefinition `json:"error,omitempty"`
+}
+
+// LogSettings part of MultiTenantDiagnosticSettings. Specifies the settings for a particular log.
+type LogSettings struct {
+ // Category - Name of a Diagnostic Log category for a resource type this setting is applied to. To obtain the list of Diagnostic Log categories for a resource, first perform a GET diagnostic settings operation. Possible values include: 'AuditLogs', 'SignInLogs'
+ Category Category `json:"category,omitempty"`
+ // Enabled - A value indicating whether this log is enabled.
+ Enabled *bool `json:"enabled,omitempty"`
+ // RetentionPolicy - The retention policy for this log.
+ RetentionPolicy *RetentionPolicy `json:"retentionPolicy,omitempty"`
+}
+
+// OperationsDiscovery operations discovery class.
+type OperationsDiscovery struct {
+ // Name - Name of the API. The name of the operation being performed on this particular object. It should match the action name that appears in RBAC / the event service. Examples of operations include: * Microsoft.Compute/virtualMachine/capture/action * Microsoft.Compute/virtualMachine/restart/action * Microsoft.Compute/virtualMachine/write * Microsoft.Compute/virtualMachine/read * Microsoft.Compute/virtualMachine/delete Each action should include, in order: (1) Resource Provider Namespace (2) Type hierarchy for which the action applies (e.g. server/databases for a SQL Azure database) (3) Read, Write, Action or Delete indicating which type applies. If it is a PUT/PATCH on a collection or named value, Write should be used. If it is a GET, Read should be used. If it is a DELETE, Delete should be used. If it is a POST, Action should be used.
+ Name *string `json:"name,omitempty"`
+ // Display - Object type
+ Display *Display `json:"display,omitempty"`
+ // Origin - Origin. The intended executor of the operation; governs the display of the operation in the RBAC UX and the audit logs UX. Default value is "user,system"
+ Origin *string `json:"origin,omitempty"`
+ // Properties - Properties. Reserved for future use.
+ Properties interface{} `json:"properties,omitempty"`
+}
+
+// OperationsDiscoveryCollection collection of ClientDiscovery details.
+type OperationsDiscoveryCollection struct {
+ autorest.Response `json:"-"`
+ // Value - The ClientDiscovery details.
+ Value *[]OperationsDiscovery `json:"value,omitempty"`
+}
+
+// ProxyOnlyResource a proxy only azure resource object.
+type ProxyOnlyResource struct {
+ // ID - READ-ONLY; Azure resource Id
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Azure resource name
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Azure resource type
+ Type *string `json:"type,omitempty"`
+}
+
+// RetentionPolicy specifies the retention policy for the log.
+type RetentionPolicy struct {
+ // Enabled - A value indicating whether the retention policy is enabled.
+ Enabled *bool `json:"enabled,omitempty"`
+	// Days - The number of days to retain the data. A value of 0 will retain the events indefinitely.
+ Days *int32 `json:"days,omitempty"`
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/operations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/operations.go
new file mode 100644
index 0000000000000..397b8b757bd6b
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/operations.go
@@ -0,0 +1,98 @@
+package aad
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// OperationsClient is the azure Active Directory Client.
+type OperationsClient struct {
+ BaseClient
+}
+
+// NewOperationsClient creates an instance of the OperationsClient client.
+func NewOperationsClient() OperationsClient {
+ return NewOperationsClientWithBaseURI(DefaultBaseURI)
+}
+
+// NewOperationsClientWithBaseURI creates an instance of the OperationsClient client using a custom endpoint. Use this
+// when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewOperationsClientWithBaseURI(baseURI string) OperationsClient {
+ return OperationsClient{NewWithBaseURI(baseURI)}
+}
+
+// List operation to return the list of available operations.
+func (client OperationsClient) List(ctx context.Context) (result OperationsDiscoveryCollection, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/OperationsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.ListPreparer(ctx)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.OperationsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "aad.OperationsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "aad.OperationsClient", "List", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client OperationsClient) ListPreparer(ctx context.Context) (*http.Request, error) {
+ const APIVersion = "2017-04-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPath("/providers/microsoft.aadiam/operations"),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client OperationsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client OperationsClient) ListResponder(resp *http.Response) (result OperationsDiscoveryCollection, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/version.go
new file mode 100644
index 0000000000000..a93e25f82dd1c
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/aad/mgmt/2017-04-01/aad/version.go
@@ -0,0 +1,19 @@
+package aad
+
+import "github.com/Azure/azure-sdk-for-go/version"
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+// UserAgent returns the UserAgent string to use when sending http.Requests.
+func UserAgent() string {
+ return "Azure-SDK-For-Go/" + Version() + " aad/2017-04-01"
+}
+
+// Version returns the semantic version (see http://semver.org) of the client.
+func Version() string {
+ return version.Number
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/CHANGELOG.md b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/CHANGELOG.md
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/CHANGELOG.md
rename to vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/CHANGELOG.md
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/_meta.json b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/_meta.json
similarity index 69%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/_meta.json
rename to vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/_meta.json
index 38b1b276ff9a2..0b7ebaddfa62d 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/_meta.json
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/_meta.json
@@ -1,10 +1,10 @@
{
"commit": "3c764635e7d442b3e74caf593029fcd440b3ef82",
- "readme": "/_/azure-rest-api-specs/specification/maps/resource-manager/readme.md",
- "tag": "package-2018-05",
+ "readme": "/_/azure-rest-api-specs/specification/consumption/resource-manager/readme.md",
+ "tag": "package-2019-10",
"use": "@microsoft.azure/autorest.go@2.1.180",
"repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
- "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2018-05 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/maps/resource-manager/readme.md",
+ "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2019-10 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/consumption/resource-manager/readme.md",
"additional_properties": {
"additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/aggregatedcost.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/aggregatedcost.go
new file mode 100644
index 0000000000000..b5f7ec411ff38
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/aggregatedcost.go
@@ -0,0 +1,188 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// AggregatedCostClient is the consumption management client provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type AggregatedCostClient struct {
+ BaseClient
+}
+
+// NewAggregatedCostClient creates an instance of the AggregatedCostClient client.
+func NewAggregatedCostClient(subscriptionID string) AggregatedCostClient {
+ return NewAggregatedCostClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewAggregatedCostClientWithBaseURI creates an instance of the AggregatedCostClient client using a custom endpoint.
+// Use this when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewAggregatedCostClientWithBaseURI(baseURI string, subscriptionID string) AggregatedCostClient {
+ return AggregatedCostClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// GetByManagementGroup provides the aggregate cost of a management group and all child management groups by current
+// billing period.
+// Parameters:
+// managementGroupID - azure Management Group ID.
+// filter - may be used to filter aggregated cost by properties/usageStart (Utc time), properties/usageEnd (Utc
+// time). The filter supports 'eq', 'lt', 'gt', 'le', 'ge', and 'and'. It does not currently support 'ne',
+// 'or', or 'not'. Tag filter is a key value pair string where key and value is separated by a colon (:).
+func (client AggregatedCostClient) GetByManagementGroup(ctx context.Context, managementGroupID string, filter string) (result ManagementGroupAggregatedCostResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/AggregatedCostClient.GetByManagementGroup")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetByManagementGroupPreparer(ctx, managementGroupID, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.AggregatedCostClient", "GetByManagementGroup", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetByManagementGroupSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.AggregatedCostClient", "GetByManagementGroup", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetByManagementGroupResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.AggregatedCostClient", "GetByManagementGroup", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetByManagementGroupPreparer prepares the GetByManagementGroup request.
+func (client AggregatedCostClient) GetByManagementGroupPreparer(ctx context.Context, managementGroupID string, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "managementGroupId": autorest.Encode("path", managementGroupID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Management/managementGroups/{managementGroupId}/providers/Microsoft.Consumption/aggregatedcost", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetByManagementGroupSender sends the GetByManagementGroup request. The method will close the
+// http.Response Body if it receives an error.
+func (client AggregatedCostClient) GetByManagementGroupSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetByManagementGroupResponder handles the response to the GetByManagementGroup request. The method always
+// closes the http.Response Body.
+func (client AggregatedCostClient) GetByManagementGroupResponder(resp *http.Response) (result ManagementGroupAggregatedCostResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// GetForBillingPeriodByManagementGroup provides the aggregate cost of a management group and all child management
+// groups by specified billing period
+// Parameters:
+// managementGroupID - azure Management Group ID.
+// billingPeriodName - billing Period Name.
+func (client AggregatedCostClient) GetForBillingPeriodByManagementGroup(ctx context.Context, managementGroupID string, billingPeriodName string) (result ManagementGroupAggregatedCostResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/AggregatedCostClient.GetForBillingPeriodByManagementGroup")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetForBillingPeriodByManagementGroupPreparer(ctx, managementGroupID, billingPeriodName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.AggregatedCostClient", "GetForBillingPeriodByManagementGroup", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetForBillingPeriodByManagementGroupSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.AggregatedCostClient", "GetForBillingPeriodByManagementGroup", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetForBillingPeriodByManagementGroupResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.AggregatedCostClient", "GetForBillingPeriodByManagementGroup", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetForBillingPeriodByManagementGroupPreparer prepares the GetForBillingPeriodByManagementGroup request.
+func (client AggregatedCostClient) GetForBillingPeriodByManagementGroupPreparer(ctx context.Context, managementGroupID string, billingPeriodName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingPeriodName": autorest.Encode("path", billingPeriodName),
+ "managementGroupId": autorest.Encode("path", managementGroupID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Management/managementGroups/{managementGroupId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}/Microsoft.Consumption/aggregatedcost", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetForBillingPeriodByManagementGroupSender sends the GetForBillingPeriodByManagementGroup request. The method will close the
+// http.Response Body if it receives an error.
+func (client AggregatedCostClient) GetForBillingPeriodByManagementGroupSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetForBillingPeriodByManagementGroupResponder handles the response to the GetForBillingPeriodByManagementGroup request. The method always
+// closes the http.Response Body.
+func (client AggregatedCostClient) GetForBillingPeriodByManagementGroupResponder(resp *http.Response) (result ManagementGroupAggregatedCostResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/balances.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/balances.go
new file mode 100644
index 0000000000000..efe5d5bfedb8e
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/balances.go
@@ -0,0 +1,182 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// BalancesClient is the consumption management client, which provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type BalancesClient struct {
+ BaseClient
+}
+
+// NewBalancesClient creates an instance of the BalancesClient client.
+func NewBalancesClient(subscriptionID string) BalancesClient {
+ return NewBalancesClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewBalancesClientWithBaseURI creates an instance of the BalancesClient client using a custom endpoint. Use this
+// when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewBalancesClientWithBaseURI(baseURI string, subscriptionID string) BalancesClient {
+ return BalancesClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// GetByBillingAccount gets the balances for a scope by billingAccountId. Balances are available via this API only for
+// May 1, 2014 or later.
+// Parameters:
+// billingAccountID - billingAccount ID
+func (client BalancesClient) GetByBillingAccount(ctx context.Context, billingAccountID string) (result Balance, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BalancesClient.GetByBillingAccount")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetByBillingAccountPreparer(ctx, billingAccountID)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BalancesClient", "GetByBillingAccount", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetByBillingAccountSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.BalancesClient", "GetByBillingAccount", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetByBillingAccountResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BalancesClient", "GetByBillingAccount", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetByBillingAccountPreparer prepares the GetByBillingAccount request.
+func (client BalancesClient) GetByBillingAccountPreparer(ctx context.Context, billingAccountID string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingAccountId": autorest.Encode("path", billingAccountID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.Consumption/balances", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetByBillingAccountSender sends the GetByBillingAccount request. The method will close the
+// http.Response Body if it receives an error.
+func (client BalancesClient) GetByBillingAccountSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetByBillingAccountResponder handles the response to the GetByBillingAccount request. The method always
+// closes the http.Response Body.
+func (client BalancesClient) GetByBillingAccountResponder(resp *http.Response) (result Balance, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// GetForBillingPeriodByBillingAccount gets the balances for a scope by billing period and billingAccountId. Balances
+// are available via this API only for May 1, 2014 or later.
+// Parameters:
+// billingAccountID - billingAccount ID
+// billingPeriodName - billing Period Name.
+func (client BalancesClient) GetForBillingPeriodByBillingAccount(ctx context.Context, billingAccountID string, billingPeriodName string) (result Balance, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BalancesClient.GetForBillingPeriodByBillingAccount")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetForBillingPeriodByBillingAccountPreparer(ctx, billingAccountID, billingPeriodName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BalancesClient", "GetForBillingPeriodByBillingAccount", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetForBillingPeriodByBillingAccountSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.BalancesClient", "GetForBillingPeriodByBillingAccount", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetForBillingPeriodByBillingAccountResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BalancesClient", "GetForBillingPeriodByBillingAccount", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetForBillingPeriodByBillingAccountPreparer prepares the GetForBillingPeriodByBillingAccount request.
+func (client BalancesClient) GetForBillingPeriodByBillingAccountPreparer(ctx context.Context, billingAccountID string, billingPeriodName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingAccountId": autorest.Encode("path", billingAccountID),
+ "billingPeriodName": autorest.Encode("path", billingPeriodName),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}/providers/Microsoft.Consumption/balances", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetForBillingPeriodByBillingAccountSender sends the GetForBillingPeriodByBillingAccount request. The method will close the
+// http.Response Body if it receives an error.
+func (client BalancesClient) GetForBillingPeriodByBillingAccountSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetForBillingPeriodByBillingAccountResponder handles the response to the GetForBillingPeriodByBillingAccount request. The method always
+// closes the http.Response Body.
+func (client BalancesClient) GetForBillingPeriodByBillingAccountResponder(resp *http.Response) (result Balance, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/budgets.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/budgets.go
new file mode 100644
index 0000000000000..43ee3836ee10a
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/budgets.go
@@ -0,0 +1,462 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/autorest/validation"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// BudgetsClient is the consumption management client, which provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type BudgetsClient struct {
+ BaseClient
+}
+
+// NewBudgetsClient creates an instance of the BudgetsClient client.
+func NewBudgetsClient(subscriptionID string) BudgetsClient {
+ return NewBudgetsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewBudgetsClientWithBaseURI creates an instance of the BudgetsClient client using a custom endpoint. Use this when
+// interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewBudgetsClientWithBaseURI(baseURI string, subscriptionID string) BudgetsClient {
+ return BudgetsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// CreateOrUpdate the operation to create or update a budget. The update operation requires the latest eTag to be
+// set in the request; you may obtain it by performing a get operation. The create operation does not require an
+// eTag.
+// Parameters:
+// scope - the scope associated with budget operations. This includes '/subscriptions/{subscriptionId}/' for
+// subscription scope, '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for resourceGroup
+// scope, '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for Billing Account scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/departments/{departmentId}' for Department
+// scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/enrollmentAccounts/{enrollmentAccountId}'
+// for EnrollmentAccount scope, '/providers/Microsoft.Management/managementGroups/{managementGroupId}' for
+// Management Group scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// billingProfile scope,
+// 'providers/Microsoft.Billing/billingAccounts/{billingAccountId}/invoiceSections/{invoiceSectionId}' for
+// invoiceSection scope.
+// budgetName - budget Name.
+// parameters - parameters supplied to the Create Budget operation.
+func (client BudgetsClient) CreateOrUpdate(ctx context.Context, scope string, budgetName string, parameters Budget) (result Budget, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BudgetsClient.CreateOrUpdate")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: parameters,
+ Constraints: []validation.Constraint{{Target: "parameters.BudgetProperties", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Category", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Amount", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.TimePeriod", Name: validation.Null, Rule: true,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.TimePeriod.StartDate", Name: validation.Null, Rule: true, Chain: nil}}},
+ {Target: "parameters.BudgetProperties.Filter", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.And", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.And", Name: validation.MinItems, Rule: 2, Chain: nil}}},
+ {Target: "parameters.BudgetProperties.Filter.Not", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Not.Dimensions", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Not.Dimensions.Name", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Filter.Not.Dimensions.Operator", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Filter.Not.Dimensions.Values", Name: validation.Null, Rule: true,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Not.Dimensions.Values", Name: validation.MinItems, Rule: 1, Chain: nil}}},
+ }},
+ {Target: "parameters.BudgetProperties.Filter.Not.Tags", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Not.Tags.Name", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Filter.Not.Tags.Operator", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Filter.Not.Tags.Values", Name: validation.Null, Rule: true,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Not.Tags.Values", Name: validation.MinItems, Rule: 1, Chain: nil}}},
+ }},
+ }},
+ {Target: "parameters.BudgetProperties.Filter.Dimensions", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Dimensions.Name", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Filter.Dimensions.Operator", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Filter.Dimensions.Values", Name: validation.Null, Rule: true,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Dimensions.Values", Name: validation.MinItems, Rule: 1, Chain: nil}}},
+ }},
+ {Target: "parameters.BudgetProperties.Filter.Tags", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Tags.Name", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Filter.Tags.Operator", Name: validation.Null, Rule: true, Chain: nil},
+ {Target: "parameters.BudgetProperties.Filter.Tags.Values", Name: validation.Null, Rule: true,
+ Chain: []validation.Constraint{{Target: "parameters.BudgetProperties.Filter.Tags.Values", Name: validation.MinItems, Rule: 1, Chain: nil}}},
+ }},
+ }},
+ }}}}}); err != nil {
+ return result, validation.NewError("consumption.BudgetsClient", "CreateOrUpdate", err.Error())
+ }
+
+ req, err := client.CreateOrUpdatePreparer(ctx, scope, budgetName, parameters)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "CreateOrUpdate", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.CreateOrUpdateSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "CreateOrUpdate", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.CreateOrUpdateResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "CreateOrUpdate", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// CreateOrUpdatePreparer prepares the CreateOrUpdate request.
+func (client BudgetsClient) CreateOrUpdatePreparer(ctx context.Context, scope string, budgetName string, parameters Budget) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "budgetName": autorest.Encode("path", budgetName),
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsContentType("application/json; charset=utf-8"),
+ autorest.AsPut(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/budgets/{budgetName}", pathParameters),
+ autorest.WithJSON(parameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the
+// http.Response Body if it receives an error.
+func (client BudgetsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always
+// closes the http.Response Body.
+func (client BudgetsClient) CreateOrUpdateResponder(resp *http.Response) (result Budget, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// Delete the operation to delete a budget.
+// Parameters:
+// scope - the scope associated with budget operations. This includes '/subscriptions/{subscriptionId}/' for
+// subscription scope, '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for resourceGroup
+// scope, '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for Billing Account scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/departments/{departmentId}' for Department
+// scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/enrollmentAccounts/{enrollmentAccountId}'
+// for EnrollmentAccount scope, '/providers/Microsoft.Management/managementGroups/{managementGroupId}' for
+// Management Group scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// billingProfile scope,
+// 'providers/Microsoft.Billing/billingAccounts/{billingAccountId}/invoiceSections/{invoiceSectionId}' for
+// invoiceSection scope.
+// budgetName - budget Name.
+func (client BudgetsClient) Delete(ctx context.Context, scope string, budgetName string) (result autorest.Response, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BudgetsClient.Delete")
+ defer func() {
+ sc := -1
+ if result.Response != nil {
+ sc = result.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.DeletePreparer(ctx, scope, budgetName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "Delete", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.DeleteSender(req)
+ if err != nil {
+ result.Response = resp
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "Delete", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.DeleteResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "Delete", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// DeletePreparer prepares the Delete request.
+func (client BudgetsClient) DeletePreparer(ctx context.Context, scope string, budgetName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "budgetName": autorest.Encode("path", budgetName),
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsDelete(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/budgets/{budgetName}", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// DeleteSender sends the Delete request. The method will close the
+// http.Response Body if it receives an error.
+func (client BudgetsClient) DeleteSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// DeleteResponder handles the response to the Delete request. The method always
+// closes the http.Response Body.
+func (client BudgetsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByClosing())
+ result.Response = resp
+ return
+}
+
+// Get gets the budget for the scope by budget name.
+// Parameters:
+// scope - the scope associated with budget operations. This includes '/subscriptions/{subscriptionId}/' for
+// subscription scope, '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for resourceGroup
+// scope, '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for Billing Account scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/departments/{departmentId}' for Department
+// scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/enrollmentAccounts/{enrollmentAccountId}'
+// for EnrollmentAccount scope, '/providers/Microsoft.Management/managementGroups/{managementGroupId}' for
+// Management Group scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// billingProfile scope,
+// 'providers/Microsoft.Billing/billingAccounts/{billingAccountId}/invoiceSections/{invoiceSectionId}' for
+// invoiceSection scope.
+// budgetName - budget Name.
+func (client BudgetsClient) Get(ctx context.Context, scope string, budgetName string) (result Budget, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BudgetsClient.Get")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetPreparer(ctx, scope, budgetName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "Get", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "Get", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "Get", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetPreparer prepares the Get request.
+func (client BudgetsClient) GetPreparer(ctx context.Context, scope string, budgetName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "budgetName": autorest.Encode("path", budgetName),
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/budgets/{budgetName}", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client BudgetsClient) GetSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetResponder handles the response to the Get request. The method always
+// closes the http.Response Body.
+func (client BudgetsClient) GetResponder(resp *http.Response) (result Budget, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// List lists all budgets for the defined scope.
+// Parameters:
+// scope - the scope associated with budget operations. This includes '/subscriptions/{subscriptionId}/' for
+// subscription scope, '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for resourceGroup
+// scope, '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for Billing Account scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/departments/{departmentId}' for Department
+// scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/enrollmentAccounts/{enrollmentAccountId}'
+// for EnrollmentAccount scope, '/providers/Microsoft.Management/managementGroups/{managementGroupId}' for
+// Management Group scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// billingProfile scope,
+// 'providers/Microsoft.Billing/billingAccounts/{billingAccountId}/invoiceSections/{invoiceSectionId}' for
+// invoiceSection scope.
+func (client BudgetsClient) List(ctx context.Context, scope string) (result BudgetsListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BudgetsClient.List")
+ defer func() {
+ sc := -1
+ if result.blr.Response.Response != nil {
+ sc = result.blr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, scope)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.blr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.blr, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.blr.hasNextLink() && result.blr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client BudgetsClient) ListPreparer(ctx context.Context, scope string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/budgets", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client BudgetsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client BudgetsClient) ListResponder(resp *http.Response) (result BudgetsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client BudgetsClient) listNextResults(ctx context.Context, lastResults BudgetsListResult) (result BudgetsListResult, err error) {
+ req, err := lastResults.budgetsListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.BudgetsClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.BudgetsClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.BudgetsClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client BudgetsClient) ListComplete(ctx context.Context, scope string) (result BudgetsListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BudgetsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, scope)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/charges.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/charges.go
new file mode 100644
index 0000000000000..74140c882445a
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/charges.go
@@ -0,0 +1,140 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// ChargesClient is the consumption management client, which provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type ChargesClient struct {
+ BaseClient
+}
+
+// NewChargesClient creates an instance of the ChargesClient client.
+func NewChargesClient(subscriptionID string) ChargesClient {
+ return NewChargesClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewChargesClientWithBaseURI creates an instance of the ChargesClient client using a custom endpoint. Use this when
+// interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewChargesClientWithBaseURI(baseURI string, subscriptionID string) ChargesClient {
+ return ChargesClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists the charges for the defined scope.
+// Parameters:
+// scope - the scope associated with charges operations. This includes
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/departments/{departmentId}' for Department
+// scope, and
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/enrollmentAccounts/{enrollmentAccountId}'
+// for EnrollmentAccount scope. For department and enrollment accounts, you can also add billing period to the
+// scope using '/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}'. For example, to specify the
+// billing period at department scope, use
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/departments/{departmentId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}'.
+// Also, Modern Commerce Account scopes are '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}'
+// for billingAccount scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// billingProfile scope,
+// 'providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/invoiceSections/{invoiceSectionId}'
+// for invoiceSection scope, and
+// 'providers/Microsoft.Billing/billingAccounts/{billingAccountId}/customers/{customerId}' specific for
+// partners.
+// startDate - start date
+// endDate - end date
+// filter - may be used to filter charges by properties/usageEnd (Utc time), properties/usageStart (Utc time).
+// The filter supports 'eq', 'lt', 'gt', 'le', 'ge', and 'and'. It does not currently support 'ne', 'or', or
+// 'not'. Tag filter is a key value pair string where key and value are separated by a colon (:).
+// apply - may be used to group charges for billingAccount scope by properties/billingProfileId,
+// properties/invoiceSectionId, properties/customerId (specific for Partner Led), or for billingProfile scope
+// by properties/invoiceSectionId.
+func (client ChargesClient) List(ctx context.Context, scope string, startDate string, endDate string, filter string, apply string) (result ChargesListResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ChargesClient.List")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.ListPreparer(ctx, scope, startDate, endDate, filter, apply)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ChargesClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ChargesClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ChargesClient", "List", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client ChargesClient) ListPreparer(ctx context.Context, scope string, startDate string, endDate string, filter string, apply string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(startDate) > 0 {
+ queryParameters["startDate"] = autorest.Encode("query", startDate)
+ }
+ if len(endDate) > 0 {
+ queryParameters["endDate"] = autorest.Encode("query", endDate)
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+ if len(apply) > 0 {
+ queryParameters["$apply"] = autorest.Encode("query", apply)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/charges", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client ChargesClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client ChargesClient) ListResponder(resp *http.Response) (result ChargesListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
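The `ListPreparer` above adds `startDate`, `endDate`, `$filter`, and `$apply` to the query string only when they are non-empty, and splices the caller-supplied scope into the path. A minimal stdlib-only sketch of that URL assembly (a hypothetical helper, not the SDK's autorest preparer) looks like this:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildChargesURL mirrors what ChargesClient.ListPreparer assembles: the scope
// is spliced into the path, and startDate, endDate, $filter, and $apply are
// only added to the query string when non-empty. Hypothetical helper; the real
// SDK builds the request via autorest.CreatePreparer.
func buildChargesURL(baseURI, scope, startDate, endDate, filter, apply string) string {
	q := url.Values{}
	q.Set("api-version", "2019-10-01")
	if startDate != "" {
		q.Set("startDate", startDate)
	}
	if endDate != "" {
		q.Set("endDate", endDate)
	}
	if filter != "" {
		q.Set("$filter", filter)
	}
	if apply != "" {
		q.Set("$apply", apply)
	}
	path := "/" + strings.Trim(scope, "/") + "/providers/Microsoft.Consumption/charges"
	return baseURI + path + "?" + q.Encode()
}

func main() {
	// Example department scope from the doc comment above.
	scope := "providers/Microsoft.Billing/billingAccounts/1234567/departments/42"
	fmt.Println(buildChargesURL("https://management.azure.com", scope, "", "", "", ""))
}
```

Note that `url.Values.Encode` percent-escapes the `$` in `$filter`/`$apply` and sorts keys, which is harmless to the service but differs cosmetically from autorest's output.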
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/client.go
new file mode 100644
index 0000000000000..c6f7fefebbbe8
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/client.go
@@ -0,0 +1,41 @@
+// Package consumption implements the Azure ARM Consumption service API version 2019-10-01.
+//
+// Consumption management client provides access to consumption resources for Azure Enterprise Subscriptions.
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "github.com/Azure/go-autorest/autorest"
+)
+
+const (
+ // DefaultBaseURI is the default URI used for the service Consumption
+ DefaultBaseURI = "https://management.azure.com"
+)
+
+// BaseClient is the base client for Consumption.
+type BaseClient struct {
+ autorest.Client
+ BaseURI string
+ SubscriptionID string
+}
+
+// New creates an instance of the BaseClient client.
+func New(subscriptionID string) BaseClient {
+ return NewWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewWithBaseURI creates an instance of the BaseClient client using a custom endpoint. Use this when interacting with
+// an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewWithBaseURI(baseURI string, subscriptionID string) BaseClient {
+ return BaseClient{
+ Client: autorest.NewClientWithUserAgent(UserAgent()),
+ BaseURI: baseURI,
+ SubscriptionID: subscriptionID,
+ }
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/credits.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/credits.go
new file mode 100644
index 0000000000000..a0ac2787ed618
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/credits.go
@@ -0,0 +1,107 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// CreditsClient is the consumption management client that provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type CreditsClient struct {
+ BaseClient
+}
+
+// NewCreditsClient creates an instance of the CreditsClient client.
+func NewCreditsClient(subscriptionID string) CreditsClient {
+ return NewCreditsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewCreditsClientWithBaseURI creates an instance of the CreditsClient client using a custom endpoint. Use this when
+// interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewCreditsClientWithBaseURI(baseURI string, subscriptionID string) CreditsClient {
+ return CreditsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// Get the credit summary by billingAccountId and billingProfileId.
+// Parameters:
+// billingAccountID - billingAccount ID
+// billingProfileID - azure Billing Profile ID.
+func (client CreditsClient) Get(ctx context.Context, billingAccountID string, billingProfileID string) (result CreditSummary, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreditsClient.Get")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetPreparer(ctx, billingAccountID, billingProfileID)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.CreditsClient", "Get", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.CreditsClient", "Get", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.CreditsClient", "Get", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetPreparer prepares the Get request.
+func (client CreditsClient) GetPreparer(ctx context.Context, billingAccountID string, billingProfileID string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingAccountId": autorest.Encode("path", billingAccountID),
+ "billingProfileId": autorest.Encode("path", billingProfileID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/providers/Microsoft.Consumption/credits/balanceSummary", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client CreditsClient) GetSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetResponder handles the response to the Get request. The method always
+// closes the http.Response Body.
+func (client CreditsClient) GetResponder(resp *http.Response) (result CreditSummary, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/enums.go
new file mode 100644
index 0000000000000..05b889fc3ef9c
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/enums.go
@@ -0,0 +1,309 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+// BillingFrequency enumerates the values for billing frequency.
+type BillingFrequency string
+
+const (
+ // Month ...
+ Month BillingFrequency = "Month"
+ // Quarter ...
+ Quarter BillingFrequency = "Quarter"
+ // Year ...
+ Year BillingFrequency = "Year"
+)
+
+// PossibleBillingFrequencyValues returns an array of possible values for the BillingFrequency const type.
+func PossibleBillingFrequencyValues() []BillingFrequency {
+ return []BillingFrequency{Month, Quarter, Year}
+}
+
+// Bound enumerates the values for bound.
+type Bound string
+
+const (
+ // Lower ...
+ Lower Bound = "Lower"
+ // Upper ...
+ Upper Bound = "Upper"
+)
+
+// PossibleBoundValues returns an array of possible values for the Bound const type.
+func PossibleBoundValues() []Bound {
+ return []Bound{Lower, Upper}
+}
+
+// ChargeType enumerates the values for charge type.
+type ChargeType string
+
+const (
+ // ChargeTypeActual ...
+ ChargeTypeActual ChargeType = "Actual"
+ // ChargeTypeForecast ...
+ ChargeTypeForecast ChargeType = "Forecast"
+)
+
+// PossibleChargeTypeValues returns an array of possible values for the ChargeType const type.
+func PossibleChargeTypeValues() []ChargeType {
+ return []ChargeType{ChargeTypeActual, ChargeTypeForecast}
+}
+
+// Datagrain enumerates the values for datagrain.
+type Datagrain string
+
+const (
+ // DailyGrain Daily grain of data
+ DailyGrain Datagrain = "daily"
+ // MonthlyGrain Monthly grain of data
+ MonthlyGrain Datagrain = "monthly"
+)
+
+// PossibleDatagrainValues returns an array of possible values for the Datagrain const type.
+func PossibleDatagrainValues() []Datagrain {
+ return []Datagrain{DailyGrain, MonthlyGrain}
+}
+
+// EventType enumerates the values for event type.
+type EventType string
+
+const (
+ // NewCredit ...
+ NewCredit EventType = "NewCredit"
+ // PendingAdjustments ...
+ PendingAdjustments EventType = "PendingAdjustments"
+ // PendingCharges ...
+ PendingCharges EventType = "PendingCharges"
+ // PendingExpiredCredit ...
+ PendingExpiredCredit EventType = "PendingExpiredCredit"
+ // PendingNewCredit ...
+ PendingNewCredit EventType = "PendingNewCredit"
+ // SettledCharges ...
+ SettledCharges EventType = "SettledCharges"
+ // UnKnown ...
+ UnKnown EventType = "UnKnown"
+)
+
+// PossibleEventTypeValues returns an array of possible values for the EventType const type.
+func PossibleEventTypeValues() []EventType {
+ return []EventType{NewCredit, PendingAdjustments, PendingCharges, PendingExpiredCredit, PendingNewCredit, SettledCharges, UnKnown}
+}
+
+// Grain enumerates the values for grain.
+type Grain string
+
+const (
+ // Daily ...
+ Daily Grain = "Daily"
+ // Monthly ...
+ Monthly Grain = "Monthly"
+ // Yearly ...
+ Yearly Grain = "Yearly"
+)
+
+// PossibleGrainValues returns an array of possible values for the Grain const type.
+func PossibleGrainValues() []Grain {
+ return []Grain{Daily, Monthly, Yearly}
+}
+
+// Kind enumerates the values for kind.
+type Kind string
+
+const (
+ // KindLegacy ...
+ KindLegacy Kind = "legacy"
+ // KindModern ...
+ KindModern Kind = "modern"
+ // KindUsageDetail ...
+ KindUsageDetail Kind = "UsageDetail"
+)
+
+// PossibleKindValues returns an array of possible values for the Kind const type.
+func PossibleKindValues() []Kind {
+ return []Kind{KindLegacy, KindModern, KindUsageDetail}
+}
+
+// KindBasicChargeSummary enumerates the values for kind basic charge summary.
+type KindBasicChargeSummary string
+
+const (
+ // KindBasicChargeSummaryKindChargeSummary ...
+ KindBasicChargeSummaryKindChargeSummary KindBasicChargeSummary = "ChargeSummary"
+ // KindBasicChargeSummaryKindLegacy ...
+ KindBasicChargeSummaryKindLegacy KindBasicChargeSummary = "legacy"
+ // KindBasicChargeSummaryKindModern ...
+ KindBasicChargeSummaryKindModern KindBasicChargeSummary = "modern"
+)
+
+// PossibleKindBasicChargeSummaryValues returns an array of possible values for the KindBasicChargeSummary const type.
+func PossibleKindBasicChargeSummaryValues() []KindBasicChargeSummary {
+ return []KindBasicChargeSummary{KindBasicChargeSummaryKindChargeSummary, KindBasicChargeSummaryKindLegacy, KindBasicChargeSummaryKindModern}
+}
+
+// KindBasicReservationRecommendation enumerates the values for kind basic reservation recommendation.
+type KindBasicReservationRecommendation string
+
+const (
+ // KindBasicReservationRecommendationKindLegacy ...
+ KindBasicReservationRecommendationKindLegacy KindBasicReservationRecommendation = "legacy"
+ // KindBasicReservationRecommendationKindModern ...
+ KindBasicReservationRecommendationKindModern KindBasicReservationRecommendation = "modern"
+ // KindBasicReservationRecommendationKindReservationRecommendation ...
+ KindBasicReservationRecommendationKindReservationRecommendation KindBasicReservationRecommendation = "ReservationRecommendation"
+)
+
+// PossibleKindBasicReservationRecommendationValues returns an array of possible values for the KindBasicReservationRecommendation const type.
+func PossibleKindBasicReservationRecommendationValues() []KindBasicReservationRecommendation {
+ return []KindBasicReservationRecommendation{KindBasicReservationRecommendationKindLegacy, KindBasicReservationRecommendationKindModern, KindBasicReservationRecommendationKindReservationRecommendation}
+}
+
+// LookBackPeriod enumerates the values for look back period.
+type LookBackPeriod string
+
+const (
+ // Last07Days Use 7 days of data for recommendations
+ Last07Days LookBackPeriod = "Last7Days"
+ // Last30Days Use 30 days of data for recommendations
+ Last30Days LookBackPeriod = "Last30Days"
+ // Last60Days Use 60 days of data for recommendations
+ Last60Days LookBackPeriod = "Last60Days"
+)
+
+// PossibleLookBackPeriodValues returns an array of possible values for the LookBackPeriod const type.
+func PossibleLookBackPeriodValues() []LookBackPeriod {
+ return []LookBackPeriod{Last07Days, Last30Days, Last60Days}
+}
+
+// LotSource enumerates the values for lot source.
+type LotSource string
+
+const (
+ // PromotionalCredit ...
+ PromotionalCredit LotSource = "PromotionalCredit"
+ // PurchasedCredit ...
+ PurchasedCredit LotSource = "PurchasedCredit"
+)
+
+// PossibleLotSourceValues returns an array of possible values for the LotSource const type.
+func PossibleLotSourceValues() []LotSource {
+ return []LotSource{PromotionalCredit, PurchasedCredit}
+}
+
+// Metrictype enumerates the values for metrictype.
+type Metrictype string
+
+const (
+ // ActualCostMetricType Actual cost data.
+ ActualCostMetricType Metrictype = "actualcost"
+ // AmortizedCostMetricType Amortized cost data.
+ AmortizedCostMetricType Metrictype = "amortizedcost"
+ // UsageMetricType Usage data.
+ UsageMetricType Metrictype = "usage"
+)
+
+// PossibleMetrictypeValues returns an array of possible values for the Metrictype const type.
+func PossibleMetrictypeValues() []Metrictype {
+ return []Metrictype{ActualCostMetricType, AmortizedCostMetricType, UsageMetricType}
+}
+
+// OperatorType enumerates the values for operator type.
+type OperatorType string
+
+const (
+ // EqualTo ...
+ EqualTo OperatorType = "EqualTo"
+ // GreaterThan ...
+ GreaterThan OperatorType = "GreaterThan"
+ // GreaterThanOrEqualTo ...
+ GreaterThanOrEqualTo OperatorType = "GreaterThanOrEqualTo"
+)
+
+// PossibleOperatorTypeValues returns an array of possible values for the OperatorType const type.
+func PossibleOperatorTypeValues() []OperatorType {
+ return []OperatorType{EqualTo, GreaterThan, GreaterThanOrEqualTo}
+}
+
+// Scope11 enumerates the values for scope 11.
+type Scope11 string
+
+const (
+ // Shared ...
+ Shared Scope11 = "Shared"
+ // Single ...
+ Single Scope11 = "Single"
+)
+
+// PossibleScope11Values returns an array of possible values for the Scope11 const type.
+func PossibleScope11Values() []Scope11 {
+ return []Scope11{Shared, Single}
+}
+
+// Scope9 enumerates the values for scope 9.
+type Scope9 string
+
+const (
+ // Scope9Shared ...
+ Scope9Shared Scope9 = "Shared"
+ // Scope9Single ...
+ Scope9Single Scope9 = "Single"
+)
+
+// PossibleScope9Values returns an array of possible values for the Scope9 const type.
+func PossibleScope9Values() []Scope9 {
+ return []Scope9{Scope9Shared, Scope9Single}
+}
+
+// Term enumerates the values for term.
+type Term string
+
+const (
+ // P1Y 1 year reservation term
+ P1Y Term = "P1Y"
+ // P3Y 3 year reservation term
+ P3Y Term = "P3Y"
+)
+
+// PossibleTermValues returns an array of possible values for the Term const type.
+func PossibleTermValues() []Term {
+ return []Term{P1Y, P3Y}
+}
+
+// ThresholdType enumerates the values for threshold type.
+type ThresholdType string
+
+const (
+ // Actual ...
+ Actual ThresholdType = "Actual"
+)
+
+// PossibleThresholdTypeValues returns an array of possible values for the ThresholdType const type.
+func PossibleThresholdTypeValues() []ThresholdType {
+ return []ThresholdType{Actual}
+}
+
+// TimeGrainType enumerates the values for time grain type.
+type TimeGrainType string
+
+const (
+ // TimeGrainTypeAnnually ...
+ TimeGrainTypeAnnually TimeGrainType = "Annually"
+ // TimeGrainTypeBillingAnnual ...
+ TimeGrainTypeBillingAnnual TimeGrainType = "BillingAnnual"
+ // TimeGrainTypeBillingMonth ...
+ TimeGrainTypeBillingMonth TimeGrainType = "BillingMonth"
+ // TimeGrainTypeBillingQuarter ...
+ TimeGrainTypeBillingQuarter TimeGrainType = "BillingQuarter"
+ // TimeGrainTypeMonthly ...
+ TimeGrainTypeMonthly TimeGrainType = "Monthly"
+ // TimeGrainTypeQuarterly ...
+ TimeGrainTypeQuarterly TimeGrainType = "Quarterly"
+)
+
+// PossibleTimeGrainTypeValues returns an array of possible values for the TimeGrainType const type.
+func PossibleTimeGrainTypeValues() []TimeGrainType {
+ return []TimeGrainType{TimeGrainTypeAnnually, TimeGrainTypeBillingAnnual, TimeGrainTypeBillingMonth, TimeGrainTypeBillingQuarter, TimeGrainTypeMonthly, TimeGrainTypeQuarterly}
+}
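Every enum in the file above follows the same shape: a named string type, typed constants, and a `PossibleXValues` helper. A self-contained sketch of how a caller might use that helper to validate input before building a request (standalone mirror of the `Grain` type, not the `consumption` package itself):

```go
package main

import "fmt"

// Grain mirrors the string-enum pattern used throughout the generated package:
// a named string type plus typed constants.
type Grain string

const (
	Daily   Grain = "Daily"
	Monthly Grain = "Monthly"
	Yearly  Grain = "Yearly"
)

// PossibleGrainValues returns the allowed values, as the generated helpers do.
func PossibleGrainValues() []Grain {
	return []Grain{Daily, Monthly, Yearly}
}

// IsValidGrain shows a typical use of the helper: checking user-supplied input
// against the allowed values before sending it to the service.
func IsValidGrain(v string) bool {
	for _, g := range PossibleGrainValues() {
		if string(g) == v {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(IsValidGrain("Monthly"), IsValidGrain("Weekly"))
}
```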
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/events.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/events.go
new file mode 100644
index 0000000000000..699f93c41f07e
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/events.go
@@ -0,0 +1,153 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// EventsClient is the consumption management client that provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type EventsClient struct {
+ BaseClient
+}
+
+// NewEventsClient creates an instance of the EventsClient client.
+func NewEventsClient(subscriptionID string) EventsClient {
+ return NewEventsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewEventsClientWithBaseURI creates an instance of the EventsClient client using a custom endpoint. Use this when
+// interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewEventsClientWithBaseURI(baseURI string, subscriptionID string) EventsClient {
+ return EventsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists the events by billingAccountId and billingProfileId for the given start and end dates.
+// Parameters:
+// billingAccountID - billingAccount ID
+// billingProfileID - azure Billing Profile ID.
+// startDate - start date
+// endDate - end date
+func (client EventsClient) List(ctx context.Context, billingAccountID string, billingProfileID string, startDate string, endDate string) (result EventsPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/EventsClient.List")
+ defer func() {
+ sc := -1
+ if result.e.Response.Response != nil {
+ sc = result.e.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, billingAccountID, billingProfileID, startDate, endDate)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.EventsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.e.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.EventsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.e, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.EventsClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.e.hasNextLink() && result.e.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client EventsClient) ListPreparer(ctx context.Context, billingAccountID string, billingProfileID string, startDate string, endDate string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingAccountId": autorest.Encode("path", billingAccountID),
+ "billingProfileId": autorest.Encode("path", billingProfileID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ "endDate": autorest.Encode("query", endDate),
+ "startDate": autorest.Encode("query", startDate),
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/providers/Microsoft.Consumption/events", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client EventsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client EventsClient) ListResponder(resp *http.Response) (result Events, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client EventsClient) listNextResults(ctx context.Context, lastResults Events) (result Events, err error) {
+ req, err := lastResults.eventsPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.EventsClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.EventsClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.EventsClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client EventsClient) ListComplete(ctx context.Context, billingAccountID string, billingProfileID string, startDate string, endDate string) (result EventsIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/EventsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, billingAccountID, billingProfileID, startDate, endDate)
+ return
+}
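The `List`/`listNextResults`/`ListComplete` trio above implements nextLink pagination: each response carries a page of items plus a link to the next page, and the iterator follows the link until it is empty. A minimal in-memory sketch of that control flow (fake pages, no HTTP; the SDK does the same over the wire via `EventsPage` and `EventsIterator`):

```go
package main

import "fmt"

// eventsPage mimics the shape the pager walks: a slice of items plus a
// nextLink that is empty on the final page.
type eventsPage struct {
	Items    []string
	NextLink string
}

// listAll follows nextLink until it is empty, mirroring what
// EventsClient.ListComplete does across page boundaries. fetch("") returns
// the first page, as the initial List call does.
func listAll(fetch func(link string) eventsPage) []string {
	var all []string
	page := fetch("")
	for {
		all = append(all, page.Items...)
		if page.NextLink == "" {
			return all
		}
		page = fetch(page.NextLink)
	}
}

func main() {
	pages := map[string]eventsPage{
		"":   {Items: []string{"PendingCharges"}, NextLink: "p2"},
		"p2": {Items: []string{"SettledCharges"}},
	}
	fmt.Println(listAll(func(link string) eventsPage { return pages[link] }))
}
```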
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/forecasts.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/forecasts.go
new file mode 100644
index 0000000000000..ca46f018b28d3
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/forecasts.go
@@ -0,0 +1,110 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// ForecastsClient is the consumption management client that provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type ForecastsClient struct {
+ BaseClient
+}
+
+// NewForecastsClient creates an instance of the ForecastsClient client.
+func NewForecastsClient(subscriptionID string) ForecastsClient {
+ return NewForecastsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewForecastsClientWithBaseURI creates an instance of the ForecastsClient client using a custom endpoint. Use this
+// when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewForecastsClientWithBaseURI(baseURI string, subscriptionID string) ForecastsClient {
+ return ForecastsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists the forecast charges by subscriptionId.
+// Parameters:
+// filter - may be used to filter forecasts by properties/usageDate (Utc time), properties/chargeType or
+// properties/grain. The filter supports 'eq', 'lt', 'gt', 'le', 'ge', and 'and'. It does not currently support
+// 'ne', 'or', or 'not'.
+func (client ForecastsClient) List(ctx context.Context, filter string) (result ForecastsListResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ForecastsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.ListPreparer(ctx, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ForecastsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ForecastsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ForecastsClient", "List", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client ForecastsClient) ListPreparer(ctx context.Context, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Consumption/forecasts", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client ForecastsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client ForecastsClient) ListResponder(resp *http.Response) (result ForecastsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/lots.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/lots.go
new file mode 100644
index 0000000000000..1c6584b210e6a
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/lots.go
@@ -0,0 +1,149 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// LotsClient is the consumption management client that provides access to consumption resources for Azure Enterprise
+// Subscriptions.
+type LotsClient struct {
+ BaseClient
+}
+
+// NewLotsClient creates an instance of the LotsClient client.
+func NewLotsClient(subscriptionID string) LotsClient {
+ return NewLotsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewLotsClientWithBaseURI creates an instance of the LotsClient client using a custom endpoint. Use this when
+// interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewLotsClientWithBaseURI(baseURI string, subscriptionID string) LotsClient {
+ return LotsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists the lots by billingAccountId and billingProfileId.
+// Parameters:
+// billingAccountID - billing account ID.
+// billingProfileID - Azure billing profile ID.
+func (client LotsClient) List(ctx context.Context, billingAccountID string, billingProfileID string) (result LotsPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/LotsClient.List")
+ defer func() {
+ sc := -1
+ if result.l.Response.Response != nil {
+ sc = result.l.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, billingAccountID, billingProfileID)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.LotsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.l.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.LotsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.l, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.LotsClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.l.hasNextLink() && result.l.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client LotsClient) ListPreparer(ctx context.Context, billingAccountID string, billingProfileID string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingAccountId": autorest.Encode("path", billingAccountID),
+ "billingProfileId": autorest.Encode("path", billingProfileID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/providers/Microsoft.Consumption/lots", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client LotsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client LotsClient) ListResponder(resp *http.Response) (result Lots, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client LotsClient) listNextResults(ctx context.Context, lastResults Lots) (result Lots, err error) {
+ req, err := lastResults.lotsPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.LotsClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.LotsClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.LotsClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client LotsClient) ListComplete(ctx context.Context, billingAccountID string, billingProfileID string) (result LotsIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/LotsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, billingAccountID, billingProfileID)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/marketplaces.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/marketplaces.go
new file mode 100644
index 0000000000000..e24afb851e59b
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/marketplaces.go
@@ -0,0 +1,182 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/autorest/validation"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// MarketplacesClient is the consumption management client that provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type MarketplacesClient struct {
+ BaseClient
+}
+
+// NewMarketplacesClient creates an instance of the MarketplacesClient client.
+func NewMarketplacesClient(subscriptionID string) MarketplacesClient {
+ return NewMarketplacesClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewMarketplacesClientWithBaseURI creates an instance of the MarketplacesClient client using a custom endpoint. Use
+// this when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewMarketplacesClientWithBaseURI(baseURI string, subscriptionID string) MarketplacesClient {
+ return MarketplacesClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists the marketplaces at the defined scope. Marketplaces are available via this API only for May
+// 1, 2014 or later.
+// Parameters:
+// scope - the scope associated with marketplace operations. This includes '/subscriptions/{subscriptionId}/'
+// for subscription scope, '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for Billing
+// Account scope, '/providers/Microsoft.Billing/departments/{departmentId}' for Department scope,
+// '/providers/Microsoft.Billing/enrollmentAccounts/{enrollmentAccountId}' for EnrollmentAccount scope and
+// '/providers/Microsoft.Management/managementGroups/{managementGroupId}' for Management Group scope. For
+// subscription, billing account, department, enrollment account and ManagementGroup, you can also add billing
+// period to the scope using '/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}'. For example, to
+// specify billing period at department scope use
+// '/providers/Microsoft.Billing/departments/{departmentId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}'
+// filter - may be used to filter marketplaces by properties/usageEnd (Utc time), properties/usageStart (Utc
+// time), properties/resourceGroup, properties/instanceName or properties/instanceId. The filter supports 'eq',
+// 'lt', 'gt', 'le', 'ge', and 'and'. It does not currently support 'ne', 'or', or 'not'.
+// top - may be used to limit the number of results to the most recent N marketplaces.
+// skiptoken - skiptoken is only used if a previous operation returned a partial result. If a previous response
+// contains a nextLink element, the value of the nextLink element will include a skiptoken parameter that
+// specifies a starting point to use for subsequent calls.
+func (client MarketplacesClient) List(ctx context.Context, scope string, filter string, top *int32, skiptoken string) (result MarketplacesListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/MarketplacesClient.List")
+ defer func() {
+ sc := -1
+ if result.mlr.Response.Response != nil {
+ sc = result.mlr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: top,
+ Constraints: []validation.Constraint{{Target: "top", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "top", Name: validation.InclusiveMaximum, Rule: int64(1000), Chain: nil},
+ {Target: "top", Name: validation.InclusiveMinimum, Rule: int64(1), Chain: nil},
+ }}}}}); err != nil {
+ return result, validation.NewError("consumption.MarketplacesClient", "List", err.Error())
+ }
+
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, scope, filter, top, skiptoken)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.MarketplacesClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.mlr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.MarketplacesClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.mlr, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.MarketplacesClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.mlr.hasNextLink() && result.mlr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client MarketplacesClient) ListPreparer(ctx context.Context, scope string, filter string, top *int32, skiptoken string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+ if top != nil {
+ queryParameters["$top"] = autorest.Encode("query", *top)
+ }
+ if len(skiptoken) > 0 {
+ queryParameters["$skiptoken"] = autorest.Encode("query", skiptoken)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/marketplaces", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client MarketplacesClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client MarketplacesClient) ListResponder(resp *http.Response) (result MarketplacesListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client MarketplacesClient) listNextResults(ctx context.Context, lastResults MarketplacesListResult) (result MarketplacesListResult, err error) {
+ req, err := lastResults.marketplacesListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.MarketplacesClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.MarketplacesClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.MarketplacesClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client MarketplacesClient) ListComplete(ctx context.Context, scope string, filter string, top *int32, skiptoken string) (result MarketplacesListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/MarketplacesClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, scope, filter, top, skiptoken)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/models.go
new file mode 100644
index 0000000000000..7131f3c1b875f
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/models.go
@@ -0,0 +1,5417 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "encoding/json"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/date"
+ "github.com/Azure/go-autorest/autorest/to"
+ "github.com/Azure/go-autorest/tracing"
+ "github.com/gofrs/uuid"
+ "github.com/shopspring/decimal"
+ "net/http"
+)
+
+// The package's fully qualified name.
+const fqdn = "github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption"
+
+// Amount the amount plus currency.
+type Amount struct {
+ // Currency - READ-ONLY; Amount currency.
+ Currency *string `json:"currency,omitempty"`
+ // Value - READ-ONLY; Amount.
+ Value *decimal.Decimal `json:"value,omitempty"`
+}
+
+// Balance a balance resource.
+type Balance struct {
+ autorest.Response `json:"-"`
+ *BalanceProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for Balance.
+func (b Balance) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if b.BalanceProperties != nil {
+ objectMap["properties"] = b.BalanceProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for Balance struct.
+func (b *Balance) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var balanceProperties BalanceProperties
+ err = json.Unmarshal(*v, &balanceProperties)
+ if err != nil {
+ return err
+ }
+ b.BalanceProperties = &balanceProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ b.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ b.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ b.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ b.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// BalanceProperties the properties of the balance.
+type BalanceProperties struct {
+ // Currency - READ-ONLY; The ISO currency in which the meter is charged, for example, USD.
+ Currency *string `json:"currency,omitempty"`
+ // BeginningBalance - READ-ONLY; The beginning balance for the billing period.
+ BeginningBalance *decimal.Decimal `json:"beginningBalance,omitempty"`
+ // EndingBalance - READ-ONLY; The ending balance for the billing period (for open periods this will be updated daily).
+ EndingBalance *decimal.Decimal `json:"endingBalance,omitempty"`
+ // NewPurchases - READ-ONLY; Total new purchase amount.
+ NewPurchases *decimal.Decimal `json:"newPurchases,omitempty"`
+ // Adjustments - READ-ONLY; Total adjustment amount.
+ Adjustments *decimal.Decimal `json:"adjustments,omitempty"`
+ // Utilized - READ-ONLY; Total Commitment usage.
+ Utilized *decimal.Decimal `json:"utilized,omitempty"`
+ // ServiceOverage - READ-ONLY; Overage for Azure services.
+ ServiceOverage *decimal.Decimal `json:"serviceOverage,omitempty"`
+ // ChargesBilledSeparately - READ-ONLY; Charges Billed separately.
+ ChargesBilledSeparately *decimal.Decimal `json:"chargesBilledSeparately,omitempty"`
+ // TotalOverage - READ-ONLY; serviceOverage + chargesBilledSeparately.
+ TotalOverage *decimal.Decimal `json:"totalOverage,omitempty"`
+ // TotalUsage - READ-ONLY; Azure service commitment + total Overage.
+ TotalUsage *decimal.Decimal `json:"totalUsage,omitempty"`
+ // AzureMarketplaceServiceCharges - READ-ONLY; Total charges for Azure Marketplace.
+ AzureMarketplaceServiceCharges *decimal.Decimal `json:"azureMarketplaceServiceCharges,omitempty"`
+ // BillingFrequency - The billing frequency. Possible values include: 'Month', 'Quarter', 'Year'
+ BillingFrequency BillingFrequency `json:"billingFrequency,omitempty"`
+ // PriceHidden - READ-ONLY; Price is hidden or not.
+ PriceHidden *bool `json:"priceHidden,omitempty"`
+ // NewPurchasesDetails - READ-ONLY; List of new purchases.
+ NewPurchasesDetails *[]BalancePropertiesNewPurchasesDetailsItem `json:"newPurchasesDetails,omitempty"`
+ // AdjustmentDetails - READ-ONLY; List of Adjustments (Promo credit, SIE credit etc.).
+ AdjustmentDetails *[]BalancePropertiesAdjustmentDetailsItem `json:"adjustmentDetails,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for BalanceProperties.
+func (bp BalanceProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if bp.BillingFrequency != "" {
+ objectMap["billingFrequency"] = bp.BillingFrequency
+ }
+ return json.Marshal(objectMap)
+}
+
+// BalancePropertiesAdjustmentDetailsItem ...
+type BalancePropertiesAdjustmentDetailsItem struct {
+ // Name - READ-ONLY; the name of new adjustment.
+ Name *string `json:"name,omitempty"`
+ // Value - READ-ONLY; the value of new adjustment.
+ Value *decimal.Decimal `json:"value,omitempty"`
+}
+
+// BalancePropertiesNewPurchasesDetailsItem ...
+type BalancePropertiesNewPurchasesDetailsItem struct {
+ // Name - READ-ONLY; the name of new purchase.
+ Name *string `json:"name,omitempty"`
+ // Value - READ-ONLY; the value of new purchase.
+ Value *decimal.Decimal `json:"value,omitempty"`
+}
+
+// Budget a budget resource.
+type Budget struct {
+ autorest.Response `json:"-"`
+ *BudgetProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // ETag - eTag of the resource. To handle concurrent update scenario, this field will be used to determine whether the user is updating the latest version or not.
+ ETag *string `json:"eTag,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for Budget.
+func (b Budget) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if b.BudgetProperties != nil {
+ objectMap["properties"] = b.BudgetProperties
+ }
+ if b.ETag != nil {
+ objectMap["eTag"] = b.ETag
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for Budget struct.
+func (b *Budget) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var budgetProperties BudgetProperties
+ err = json.Unmarshal(*v, &budgetProperties)
+ if err != nil {
+ return err
+ }
+ b.BudgetProperties = &budgetProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ b.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ b.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ b.Type = &typeVar
+ }
+ case "eTag":
+ if v != nil {
+ var eTag string
+ err = json.Unmarshal(*v, &eTag)
+ if err != nil {
+ return err
+ }
+ b.ETag = &eTag
+ }
+ }
+ }
+
+ return nil
+}
+
+// BudgetComparisonExpression the comparison expression to be used in the budgets.
+type BudgetComparisonExpression struct {
+ // Name - The name of the column to use in comparison.
+ Name *string `json:"name,omitempty"`
+ // Operator - The operator to use for comparison.
+ Operator *string `json:"operator,omitempty"`
+ // Values - Array of values to use for comparison
+ Values *[]string `json:"values,omitempty"`
+}
+
+// BudgetFilter may be used to filter budgets by resource group, resource, or meter.
+type BudgetFilter struct {
+ // And - The logical "AND" expression. Must have at least 2 items.
+ And *[]BudgetFilterProperties `json:"and,omitempty"`
+ // Not - The logical "NOT" expression.
+ Not *BudgetFilterProperties `json:"not,omitempty"`
+ // Dimensions - Has comparison expression for a dimension
+ Dimensions *BudgetComparisonExpression `json:"dimensions,omitempty"`
+ // Tags - Has comparison expression for a tag
+ Tags *BudgetComparisonExpression `json:"tags,omitempty"`
+}
+
+// BudgetFilterProperties the Dimensions or Tags to filter a budget by.
+type BudgetFilterProperties struct {
+ // Dimensions - Has comparison expression for a dimension
+ Dimensions *BudgetComparisonExpression `json:"dimensions,omitempty"`
+ // Tags - Has comparison expression for a tag
+ Tags *BudgetComparisonExpression `json:"tags,omitempty"`
+}
+
+// BudgetProperties the properties of the budget.
+type BudgetProperties struct {
+ // Category - The category of the budget, whether the budget tracks cost or usage.
+ Category *string `json:"category,omitempty"`
+ // Amount - The total amount of cost to track with the budget
+ Amount *decimal.Decimal `json:"amount,omitempty"`
+ // TimeGrain - The time covered by a budget. Tracking of the amount will be reset based on the time grain. BillingMonth, BillingQuarter, and BillingAnnual are only supported by WD customers. Possible values include: 'TimeGrainTypeMonthly', 'TimeGrainTypeQuarterly', 'TimeGrainTypeAnnually', 'TimeGrainTypeBillingMonth', 'TimeGrainTypeBillingQuarter', 'TimeGrainTypeBillingAnnual'
+ TimeGrain TimeGrainType `json:"timeGrain,omitempty"`
+ // TimePeriod - Has start and end date of the budget. The start date must be first of the month and should be less than the end date. Budget start date must be on or after June 1, 2017. Future start date should not be more than twelve months. Past start date should be selected within the timegrain period. There are no restrictions on the end date.
+ TimePeriod *BudgetTimePeriod `json:"timePeriod,omitempty"`
+ // Filter - May be used to filter budgets by resource group, resource, or meter.
+ Filter *BudgetFilter `json:"filter,omitempty"`
+ // CurrentSpend - READ-ONLY; The current amount of cost which is being tracked for a budget.
+ CurrentSpend *CurrentSpend `json:"currentSpend,omitempty"`
+ // Notifications - Dictionary of notifications associated with the budget. Budget can have up to five notifications.
+ Notifications map[string]*Notification `json:"notifications"`
+}
+
+// MarshalJSON is the custom marshaler for BudgetProperties.
+func (bp BudgetProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if bp.Category != nil {
+ objectMap["category"] = bp.Category
+ }
+ if bp.Amount != nil {
+ objectMap["amount"] = bp.Amount
+ }
+ if bp.TimeGrain != "" {
+ objectMap["timeGrain"] = bp.TimeGrain
+ }
+ if bp.TimePeriod != nil {
+ objectMap["timePeriod"] = bp.TimePeriod
+ }
+ if bp.Filter != nil {
+ objectMap["filter"] = bp.Filter
+ }
+ if bp.Notifications != nil {
+ objectMap["notifications"] = bp.Notifications
+ }
+ return json.Marshal(objectMap)
+}
+
+// BudgetsListResult result of listing budgets. It contains a list of available budgets in the scope
+// provided.
+type BudgetsListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of budgets.
+ Value *[]Budget `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// BudgetsListResultIterator provides access to a complete listing of Budget values.
+type BudgetsListResultIterator struct {
+ i int
+ page BudgetsListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *BudgetsListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BudgetsListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *BudgetsListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter BudgetsListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter BudgetsListResultIterator) Response() BudgetsListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter BudgetsListResultIterator) Value() Budget {
+ if !iter.page.NotDone() {
+ return Budget{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// NewBudgetsListResultIterator creates a new instance of the BudgetsListResultIterator type.
+func NewBudgetsListResultIterator(page BudgetsListResultPage) BudgetsListResultIterator {
+ return BudgetsListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (blr BudgetsListResult) IsEmpty() bool {
+ return blr.Value == nil || len(*blr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (blr BudgetsListResult) hasNextLink() bool {
+ return blr.NextLink != nil && len(*blr.NextLink) != 0
+}
+
+// budgetsListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (blr BudgetsListResult) budgetsListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !blr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(blr.NextLink)))
+}
+
+// BudgetsListResultPage contains a page of Budget values.
+type BudgetsListResultPage struct {
+ fn func(context.Context, BudgetsListResult) (BudgetsListResult, error)
+ blr BudgetsListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *BudgetsListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/BudgetsListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.blr)
+ if err != nil {
+ return err
+ }
+ page.blr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *BudgetsListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page BudgetsListResultPage) NotDone() bool {
+ return !page.blr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page BudgetsListResultPage) Response() BudgetsListResult {
+ return page.blr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page BudgetsListResultPage) Values() []Budget {
+ if page.blr.IsEmpty() {
+ return nil
+ }
+ return *page.blr.Value
+}
+
+// NewBudgetsListResultPage creates a new instance of the BudgetsListResultPage type.
+func NewBudgetsListResultPage(cur BudgetsListResult, getNextPage func(context.Context, BudgetsListResult) (BudgetsListResult, error)) BudgetsListResultPage {
+ return BudgetsListResultPage{
+ fn: getNextPage,
+ blr: cur,
+ }
+}
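The page/iterator machinery above pairs a current result (`blr`) with a fetch function (`fn`); `NextWithContext` loops past pages that have a next link but no values, and the iterator flattens pages into single values. A stdlib-only sketch of that traversal (all names here are hypothetical stand-ins, not SDK types):

```go
package main

import "fmt"

// result mimics BudgetsListResult: a slice of values plus a next link.
type result struct {
	values   []string
	nextLink string
}

// page mimics BudgetsListResultPage: current result plus a fetch function.
type page struct {
	fn  func(result) (result, error)
	cur result
}

// next advances, skipping empty intermediate pages like NextWithContext does.
func (p *page) next() error {
	for {
		n, err := p.fn(p.cur)
		if err != nil {
			return err
		}
		p.cur = n
		if n.nextLink == "" || len(n.values) > 0 {
			return nil
		}
	}
}

func (p *page) notDone() bool { return len(p.cur.values) > 0 }

// collect walks every page, the same traversal ListComplete performs.
func collect(p page) []string {
	var all []string
	for p.notDone() {
		all = append(all, p.cur.values...)
		if p.cur.nextLink == "" {
			break
		}
		if err := p.next(); err != nil {
			break
		}
	}
	return all
}

func demo() string {
	pages := map[string]result{"p2": {values: []string{"c"}}}
	first := result{values: []string{"a", "b"}, nextLink: "p2"}
	p := page{cur: first, fn: func(r result) (result, error) { return pages[r.nextLink], nil }}
	return fmt.Sprint(collect(p))
}

func main() { fmt.Println(demo()) }
```

This is why callers normally use `ListComplete` (iterator) rather than `List` (first page): the iterator crosses page boundaries transparently.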
+
+// BudgetTimePeriod the start and end date for a budget.
+type BudgetTimePeriod struct {
+ // StartDate - The start date for the budget.
+ StartDate *date.Time `json:"startDate,omitempty"`
+ // EndDate - The end date for the budget. If not provided, we default this to 10 years from the start date.
+ EndDate *date.Time `json:"endDate,omitempty"`
+}
+
+// ChargesListResult result of listing charge summary.
+type ChargesListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of charge summaries.
+ Value *[]BasicChargeSummary `json:"value,omitempty"`
+}
+
+// UnmarshalJSON is the custom unmarshaler for ChargesListResult struct.
+func (clr *ChargesListResult) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "value":
+ if v != nil {
+ value, err := unmarshalBasicChargeSummaryArray(*v)
+ if err != nil {
+ return err
+ }
+ clr.Value = &value
+ }
+ }
+ }
+
+ return nil
+}
+
+// BasicChargeSummary a charge summary resource.
+type BasicChargeSummary interface {
+ AsLegacyChargeSummary() (*LegacyChargeSummary, bool)
+ AsModernChargeSummary() (*ModernChargeSummary, bool)
+ AsChargeSummary() (*ChargeSummary, bool)
+}
+
+// ChargeSummary a charge summary resource.
+type ChargeSummary struct {
+ // Kind - Possible values include: 'KindBasicChargeSummaryKindChargeSummary', 'KindBasicChargeSummaryKindLegacy', 'KindBasicChargeSummaryKindModern'
+ Kind KindBasicChargeSummary `json:"kind,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+func unmarshalBasicChargeSummary(body []byte) (BasicChargeSummary, error) {
+ var m map[string]interface{}
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return nil, err
+ }
+
+ switch m["kind"] {
+ case string(KindBasicChargeSummaryKindLegacy):
+ var lcs LegacyChargeSummary
+ err := json.Unmarshal(body, &lcs)
+ return lcs, err
+ case string(KindBasicChargeSummaryKindModern):
+ var mcs ModernChargeSummary
+ err := json.Unmarshal(body, &mcs)
+ return mcs, err
+ default:
+ var cs ChargeSummary
+ err := json.Unmarshal(body, &cs)
+ return cs, err
+ }
+}
+func unmarshalBasicChargeSummaryArray(body []byte) ([]BasicChargeSummary, error) {
+ var rawMessages []*json.RawMessage
+ err := json.Unmarshal(body, &rawMessages)
+ if err != nil {
+ return nil, err
+ }
+
+ csArray := make([]BasicChargeSummary, len(rawMessages))
+
+ for index, rawMessage := range rawMessages {
+ cs, err := unmarshalBasicChargeSummary(*rawMessage)
+ if err != nil {
+ return nil, err
+ }
+ csArray[index] = cs
+ }
+ return csArray, nil
+}
+
+// MarshalJSON is the custom marshaler for ChargeSummary.
+func (cs ChargeSummary) MarshalJSON() ([]byte, error) {
+ cs.Kind = KindBasicChargeSummaryKindChargeSummary
+ objectMap := make(map[string]interface{})
+ if cs.Kind != "" {
+ objectMap["kind"] = cs.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyChargeSummary is the BasicChargeSummary implementation for ChargeSummary.
+func (cs ChargeSummary) AsLegacyChargeSummary() (*LegacyChargeSummary, bool) {
+ return nil, false
+}
+
+// AsModernChargeSummary is the BasicChargeSummary implementation for ChargeSummary.
+func (cs ChargeSummary) AsModernChargeSummary() (*ModernChargeSummary, bool) {
+ return nil, false
+}
+
+// AsChargeSummary is the BasicChargeSummary implementation for ChargeSummary.
+func (cs ChargeSummary) AsChargeSummary() (*ChargeSummary, bool) {
+ return &cs, true
+}
+
+// AsBasicChargeSummary is the BasicChargeSummary implementation for ChargeSummary.
+func (cs ChargeSummary) AsBasicChargeSummary() (BasicChargeSummary, bool) {
+ return &cs, true
+}
+
+// CreditBalanceSummary summary of credit balances.
+type CreditBalanceSummary struct {
+ // EstimatedBalance - READ-ONLY; Estimated balance.
+ EstimatedBalance *Amount `json:"estimatedBalance,omitempty"`
+ // CurrentBalance - READ-ONLY; Current balance.
+ CurrentBalance *Amount `json:"currentBalance,omitempty"`
+}
+
+// CreditSummary a credit summary resource.
+type CreditSummary struct {
+ autorest.Response `json:"-"`
+ *CreditSummaryProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for CreditSummary.
+func (cs CreditSummary) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if cs.CreditSummaryProperties != nil {
+ objectMap["properties"] = cs.CreditSummaryProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for CreditSummary struct.
+func (cs *CreditSummary) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var creditSummaryProperties CreditSummaryProperties
+ err = json.Unmarshal(*v, &creditSummaryProperties)
+ if err != nil {
+ return err
+ }
+ cs.CreditSummaryProperties = &creditSummaryProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ cs.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ cs.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ cs.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ cs.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// CreditSummaryProperties the properties of the credit summary.
+type CreditSummaryProperties struct {
+ // BalanceSummary - READ-ONLY; Summary of balances associated with this credit summary.
+ BalanceSummary *CreditBalanceSummary `json:"balanceSummary,omitempty"`
+ // PendingCreditAdjustments - READ-ONLY; Pending credit adjustments.
+ PendingCreditAdjustments *Amount `json:"pendingCreditAdjustments,omitempty"`
+ // ExpiredCredit - READ-ONLY; Expired credit.
+ ExpiredCredit *Amount `json:"expiredCredit,omitempty"`
+ // PendingEligibleCharges - READ-ONLY; Pending eligible charges.
+ PendingEligibleCharges *Amount `json:"pendingEligibleCharges,omitempty"`
+}
+
+// CurrentSpend the current amount of cost which is being tracked for a budget.
+type CurrentSpend struct {
+ // Amount - READ-ONLY; The total amount of cost which is being tracked by the budget.
+ Amount *decimal.Decimal `json:"amount,omitempty"`
+ // Unit - READ-ONLY; The unit of measure for the budget amount.
+ Unit *string `json:"unit,omitempty"`
+}
+
+// ErrorDetails the details of the error.
+type ErrorDetails struct {
+ // Code - READ-ONLY; Error code.
+ Code *string `json:"code,omitempty"`
+ // Message - READ-ONLY; Error message indicating why the operation failed.
+ Message *string `json:"message,omitempty"`
+}
+
+// ErrorResponse error response indicates that the service is not able to process the incoming request. The
+// reason is provided in the error message.
+//
+// Some Error responses:
+//
+// * 429 TooManyRequests - Request is throttled. Retry after waiting for the time specified in the
+// "x-ms-ratelimit-microsoft.consumption-retry-after" header.
+//
+// * 503 ServiceUnavailable - Service is temporarily unavailable. Retry after waiting for the time
+// specified in the "Retry-After" header.
+type ErrorResponse struct {
+ // Error - The details of the error.
+ Error *ErrorDetails `json:"error,omitempty"`
+}
+
+// EventProperties the event properties.
+type EventProperties struct {
+ // TransactionDate - READ-ONLY; Transaction date.
+ TransactionDate *date.Time `json:"transactionDate,omitempty"`
+ // Description - READ-ONLY; Transaction description.
+ Description *string `json:"description,omitempty"`
+ // NewCredit - READ-ONLY; New Credit.
+ NewCredit *Amount `json:"newCredit,omitempty"`
+ // Adjustments - READ-ONLY; Adjustments amount.
+ Adjustments *Amount `json:"adjustments,omitempty"`
+ // CreditExpired - READ-ONLY; Credit expired.
+ CreditExpired *Amount `json:"creditExpired,omitempty"`
+ // Charges - READ-ONLY; Charges amount.
+ Charges *Amount `json:"charges,omitempty"`
+ // ClosedBalance - READ-ONLY; Closed balance.
+ ClosedBalance *Amount `json:"closedBalance,omitempty"`
+ // EventType - The type of event. Possible values include: 'SettledCharges', 'PendingCharges', 'PendingAdjustments', 'PendingNewCredit', 'PendingExpiredCredit', 'UnKnown', 'NewCredit'
+ EventType EventType `json:"eventType,omitempty"`
+ // InvoiceNumber - READ-ONLY; Invoice number.
+ InvoiceNumber *string `json:"invoiceNumber,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for EventProperties.
+func (ep EventProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if ep.EventType != "" {
+ objectMap["eventType"] = ep.EventType
+ }
+ return json.Marshal(objectMap)
+}
+
+// Events result of listing event summary.
+type Events struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of event summaries.
+ Value *[]EventSummary `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// EventsIterator provides access to a complete listing of EventSummary values.
+type EventsIterator struct {
+ i int
+ page EventsPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *EventsIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/EventsIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *EventsIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter EventsIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter EventsIterator) Response() Events {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter EventsIterator) Value() EventSummary {
+ if !iter.page.NotDone() {
+ return EventSummary{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// NewEventsIterator creates a new instance of the EventsIterator type.
+func NewEventsIterator(page EventsPage) EventsIterator {
+ return EventsIterator{page: page}
+}
+
+// IsEmpty returns true if the Events contains no values.
+func (e Events) IsEmpty() bool {
+ return e.Value == nil || len(*e.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (e Events) hasNextLink() bool {
+ return e.NextLink != nil && len(*e.NextLink) != 0
+}
+
+// eventsPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (e Events) eventsPreparer(ctx context.Context) (*http.Request, error) {
+ if !e.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(e.NextLink)))
+}
+
+// EventsPage contains a page of EventSummary values.
+type EventsPage struct {
+ fn func(context.Context, Events) (Events, error)
+ e Events
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *EventsPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/EventsPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.e)
+ if err != nil {
+ return err
+ }
+ page.e = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *EventsPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page EventsPage) NotDone() bool {
+ return !page.e.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page EventsPage) Response() Events {
+ return page.e
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page EventsPage) Values() []EventSummary {
+ if page.e.IsEmpty() {
+ return nil
+ }
+ return *page.e.Value
+}
+
+// NewEventsPage creates a new instance of the EventsPage type.
+func NewEventsPage(cur Events, getNextPage func(context.Context, Events) (Events, error)) EventsPage {
+ return EventsPage{
+ fn: getNextPage,
+ e: cur,
+ }
+}
+
+// EventSummary an event summary resource.
+type EventSummary struct {
+ *EventProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for EventSummary.
+func (es EventSummary) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if es.EventProperties != nil {
+ objectMap["properties"] = es.EventProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for EventSummary struct.
+func (es *EventSummary) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var eventProperties EventProperties
+ err = json.Unmarshal(*v, &eventProperties)
+ if err != nil {
+ return err
+ }
+ es.EventProperties = &eventProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ es.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ es.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ es.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ es.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// Forecast a forecast resource.
+type Forecast struct {
+ *ForecastProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for Forecast.
+func (f Forecast) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if f.ForecastProperties != nil {
+ objectMap["properties"] = f.ForecastProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for Forecast struct.
+func (f *Forecast) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var forecastProperties ForecastProperties
+ err = json.Unmarshal(*v, &forecastProperties)
+ if err != nil {
+ return err
+ }
+ f.ForecastProperties = &forecastProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ f.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ f.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ f.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ f.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ForecastProperties the properties of the forecast charge.
+type ForecastProperties struct {
+ // UsageDate - READ-ONLY; The usage date of the forecast.
+ UsageDate *string `json:"usageDate,omitempty"`
+ // Grain - The granularity of forecast. Possible values include: 'Daily', 'Monthly', 'Yearly'
+ Grain Grain `json:"grain,omitempty"`
+ // Charge - READ-ONLY; The amount of charge.
+ Charge *decimal.Decimal `json:"charge,omitempty"`
+ // Currency - READ-ONLY; The ISO currency in which the meter is charged, for example, USD.
+ Currency *string `json:"currency,omitempty"`
+ // ChargeType - The type of the charge. Could be actual or forecast. Possible values include: 'ChargeTypeActual', 'ChargeTypeForecast'
+ ChargeType ChargeType `json:"chargeType,omitempty"`
+ // ConfidenceLevels - READ-ONLY; The details about the forecast confidence levels. This is populated only when chargeType is Forecast.
+ ConfidenceLevels *[]ForecastPropertiesConfidenceLevelsItem `json:"confidenceLevels,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ForecastProperties.
+func (fp ForecastProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if fp.Grain != "" {
+ objectMap["grain"] = fp.Grain
+ }
+ if fp.ChargeType != "" {
+ objectMap["chargeType"] = fp.ChargeType
+ }
+ return json.Marshal(objectMap)
+}
+
+// ForecastPropertiesConfidenceLevelsItem a confidence level item for a forecast.
+type ForecastPropertiesConfidenceLevelsItem struct {
+ // Percentage - READ-ONLY; The percentage level of the confidence.
+ Percentage *decimal.Decimal `json:"percentage,omitempty"`
+ // Bound - The boundary of the percentage, values could be 'Upper' or 'Lower'. Possible values include: 'Upper', 'Lower'
+ Bound Bound `json:"bound,omitempty"`
+ // Value - READ-ONLY; The amount of forecast within the percentage level.
+ Value *decimal.Decimal `json:"value,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ForecastPropertiesConfidenceLevelsItem.
+func (fpLi ForecastPropertiesConfidenceLevelsItem) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if fpLi.Bound != "" {
+ objectMap["bound"] = fpLi.Bound
+ }
+ return json.Marshal(objectMap)
+}
+
+// ForecastsListResult result of listing forecasts. It contains a list of available forecasts.
+type ForecastsListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of forecasts.
+ Value *[]Forecast `json:"value,omitempty"`
+}
+
+// LegacyChargeSummary legacy charge summary.
+type LegacyChargeSummary struct {
+ // LegacyChargeSummaryProperties - Properties for legacy charge summary
+ *LegacyChargeSummaryProperties `json:"properties,omitempty"`
+ // Kind - Possible values include: 'KindBasicChargeSummaryKindChargeSummary', 'KindBasicChargeSummaryKindLegacy', 'KindBasicChargeSummaryKindModern'
+ Kind KindBasicChargeSummary `json:"kind,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for LegacyChargeSummary.
+func (lcs LegacyChargeSummary) MarshalJSON() ([]byte, error) {
+ lcs.Kind = KindBasicChargeSummaryKindLegacy
+ objectMap := make(map[string]interface{})
+ if lcs.LegacyChargeSummaryProperties != nil {
+ objectMap["properties"] = lcs.LegacyChargeSummaryProperties
+ }
+ if lcs.Kind != "" {
+ objectMap["kind"] = lcs.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyChargeSummary is the BasicChargeSummary implementation for LegacyChargeSummary.
+func (lcs LegacyChargeSummary) AsLegacyChargeSummary() (*LegacyChargeSummary, bool) {
+ return &lcs, true
+}
+
+// AsModernChargeSummary is the BasicChargeSummary implementation for LegacyChargeSummary.
+func (lcs LegacyChargeSummary) AsModernChargeSummary() (*ModernChargeSummary, bool) {
+ return nil, false
+}
+
+// AsChargeSummary is the BasicChargeSummary implementation for LegacyChargeSummary.
+func (lcs LegacyChargeSummary) AsChargeSummary() (*ChargeSummary, bool) {
+ return nil, false
+}
+
+// AsBasicChargeSummary is the BasicChargeSummary implementation for LegacyChargeSummary.
+func (lcs LegacyChargeSummary) AsBasicChargeSummary() (BasicChargeSummary, bool) {
+ return &lcs, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for LegacyChargeSummary struct.
+func (lcs *LegacyChargeSummary) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var legacyChargeSummaryProperties LegacyChargeSummaryProperties
+ err = json.Unmarshal(*v, &legacyChargeSummaryProperties)
+ if err != nil {
+ return err
+ }
+ lcs.LegacyChargeSummaryProperties = &legacyChargeSummaryProperties
+ }
+ case "kind":
+ if v != nil {
+ var kind KindBasicChargeSummary
+ err = json.Unmarshal(*v, &kind)
+ if err != nil {
+ return err
+ }
+ lcs.Kind = kind
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ lcs.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ lcs.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ lcs.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ lcs.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// LegacyChargeSummaryProperties the properties of legacy charge summary.
+type LegacyChargeSummaryProperties struct {
+ // BillingPeriodID - READ-ONLY; The id of the billing period resource that the charge belongs to.
+ BillingPeriodID *string `json:"billingPeriodId,omitempty"`
+ // UsageStart - READ-ONLY; Usage start date.
+ UsageStart *string `json:"usageStart,omitempty"`
+ // UsageEnd - READ-ONLY; Usage end date.
+ UsageEnd *string `json:"usageEnd,omitempty"`
+ // AzureCharges - READ-ONLY; Azure Charges.
+ AzureCharges *decimal.Decimal `json:"azureCharges,omitempty"`
+ // ChargesBilledSeparately - READ-ONLY; Charges Billed separately.
+ ChargesBilledSeparately *decimal.Decimal `json:"chargesBilledSeparately,omitempty"`
+ // MarketplaceCharges - READ-ONLY; Marketplace Charges.
+ MarketplaceCharges *decimal.Decimal `json:"marketplaceCharges,omitempty"`
+ // Currency - READ-ONLY; Currency Code
+ Currency *string `json:"currency,omitempty"`
+}
+
+// LegacyReservationRecommendation legacy reservation recommendation.
+type LegacyReservationRecommendation struct {
+ // LegacyReservationRecommendationProperties - Properties for legacy reservation recommendation
+ *LegacyReservationRecommendationProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+ // Location - READ-ONLY; Resource location
+ Location *string `json:"location,omitempty"`
+ // Sku - READ-ONLY; Resource sku
+ Sku *string `json:"sku,omitempty"`
+ // Kind - Possible values include: 'KindBasicReservationRecommendationKindReservationRecommendation', 'KindBasicReservationRecommendationKindLegacy', 'KindBasicReservationRecommendationKindModern'
+ Kind KindBasicReservationRecommendation `json:"kind,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for LegacyReservationRecommendation.
+func (lrr LegacyReservationRecommendation) MarshalJSON() ([]byte, error) {
+ lrr.Kind = KindBasicReservationRecommendationKindLegacy
+ objectMap := make(map[string]interface{})
+ if lrr.LegacyReservationRecommendationProperties != nil {
+ objectMap["properties"] = lrr.LegacyReservationRecommendationProperties
+ }
+ if lrr.Kind != "" {
+ objectMap["kind"] = lrr.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyReservationRecommendation is the BasicReservationRecommendation implementation for LegacyReservationRecommendation.
+func (lrr LegacyReservationRecommendation) AsLegacyReservationRecommendation() (*LegacyReservationRecommendation, bool) {
+ return &lrr, true
+}
+
+// AsModernReservationRecommendation is the BasicReservationRecommendation implementation for LegacyReservationRecommendation.
+func (lrr LegacyReservationRecommendation) AsModernReservationRecommendation() (*ModernReservationRecommendation, bool) {
+ return nil, false
+}
+
+// AsReservationRecommendation is the BasicReservationRecommendation implementation for LegacyReservationRecommendation.
+func (lrr LegacyReservationRecommendation) AsReservationRecommendation() (*ReservationRecommendation, bool) {
+ return nil, false
+}
+
+// AsBasicReservationRecommendation is the BasicReservationRecommendation implementation for LegacyReservationRecommendation.
+func (lrr LegacyReservationRecommendation) AsBasicReservationRecommendation() (BasicReservationRecommendation, bool) {
+ return &lrr, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for LegacyReservationRecommendation struct.
+func (lrr *LegacyReservationRecommendation) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var legacyReservationRecommendationProperties LegacyReservationRecommendationProperties
+ err = json.Unmarshal(*v, &legacyReservationRecommendationProperties)
+ if err != nil {
+ return err
+ }
+ lrr.LegacyReservationRecommendationProperties = &legacyReservationRecommendationProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ lrr.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ lrr.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ lrr.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ lrr.Tags = tags
+ }
+ case "location":
+ if v != nil {
+ var location string
+ err = json.Unmarshal(*v, &location)
+ if err != nil {
+ return err
+ }
+ lrr.Location = &location
+ }
+ case "sku":
+ if v != nil {
+ var sku string
+ err = json.Unmarshal(*v, &sku)
+ if err != nil {
+ return err
+ }
+ lrr.Sku = &sku
+ }
+ case "kind":
+ if v != nil {
+ var kind KindBasicReservationRecommendation
+ err = json.Unmarshal(*v, &kind)
+ if err != nil {
+ return err
+ }
+ lrr.Kind = kind
+ }
+ }
+ }
+
+ return nil
+}
+
+// LegacyReservationRecommendationProperties the properties of the reservation recommendation.
+type LegacyReservationRecommendationProperties struct {
+ // LookBackPeriod - READ-ONLY; The number of days of usage to look back for recommendation.
+ LookBackPeriod *string `json:"lookBackPeriod,omitempty"`
+ // InstanceFlexibilityRatio - READ-ONLY; The instance Flexibility Ratio.
+ InstanceFlexibilityRatio *int32 `json:"instanceFlexibilityRatio,omitempty"`
+ // InstanceFlexibilityGroup - READ-ONLY; The instance Flexibility Group.
+ InstanceFlexibilityGroup *string `json:"instanceFlexibilityGroup,omitempty"`
+ // NormalizedSize - READ-ONLY; The normalized Size.
+ NormalizedSize *string `json:"normalizedSize,omitempty"`
+ // RecommendedQuantityNormalized - READ-ONLY; The recommended Quantity Normalized.
+ RecommendedQuantityNormalized *float64 `json:"recommendedQuantityNormalized,omitempty"`
+ // MeterID - READ-ONLY; The meter id (GUID)
+ MeterID *uuid.UUID `json:"meterId,omitempty"`
+ // Term - READ-ONLY; RI recommendations in one or three year terms.
+ Term *string `json:"term,omitempty"`
+ // CostWithNoReservedInstances - READ-ONLY; The total amount of cost without reserved instances.
+ CostWithNoReservedInstances *decimal.Decimal `json:"costWithNoReservedInstances,omitempty"`
+ // RecommendedQuantity - READ-ONLY; Recommended quantity for reserved instances.
+ RecommendedQuantity *decimal.Decimal `json:"recommendedQuantity,omitempty"`
+ // TotalCostWithReservedInstances - READ-ONLY; The total amount of cost with reserved instances.
+ TotalCostWithReservedInstances *decimal.Decimal `json:"totalCostWithReservedInstances,omitempty"`
+ // NetSavings - READ-ONLY; Total estimated savings with reserved instances.
+ NetSavings *decimal.Decimal `json:"netSavings,omitempty"`
+ // FirstUsageDate - READ-ONLY; The usage date for looking back.
+ FirstUsageDate *date.Time `json:"firstUsageDate,omitempty"`
+ // Scope - READ-ONLY; Shared or single recommendation.
+ Scope *string `json:"scope,omitempty"`
+ // SkuProperties - READ-ONLY; List of sku properties
+ SkuProperties *[]SkuProperty `json:"skuProperties,omitempty"`
+}
+
+// LegacyReservationTransaction a legacy reservation transaction resource.
+type LegacyReservationTransaction struct {
+ *LegacyReservationTransactionProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags *[]string `json:"tags,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for LegacyReservationTransaction.
+func (lrt LegacyReservationTransaction) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if lrt.LegacyReservationTransactionProperties != nil {
+ objectMap["properties"] = lrt.LegacyReservationTransactionProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for LegacyReservationTransaction struct.
+func (lrt *LegacyReservationTransaction) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var legacyReservationTransactionProperties LegacyReservationTransactionProperties
+ err = json.Unmarshal(*v, &legacyReservationTransactionProperties)
+ if err != nil {
+ return err
+ }
+ lrt.LegacyReservationTransactionProperties = &legacyReservationTransactionProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ lrt.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ lrt.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ lrt.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags []string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ lrt.Tags = &tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// LegacyReservationTransactionProperties the properties of a legacy reservation transaction.
+type LegacyReservationTransactionProperties struct {
+ // EventDate - READ-ONLY; The date of the transaction
+ EventDate *date.Time `json:"eventDate,omitempty"`
+ // ReservationOrderID - READ-ONLY; The reservation order ID is the identifier for a reservation purchase. Each reservation order ID represents a single purchase transaction. A reservation order contains reservations. The reservation order specifies the VM size and region for the reservations.
+ ReservationOrderID *string `json:"reservationOrderId,omitempty"`
+ // Description - READ-ONLY; The description of the transaction.
+ Description *string `json:"description,omitempty"`
+ // EventType - READ-ONLY; The type of the transaction (Purchase, Cancel, etc.)
+ EventType *string `json:"eventType,omitempty"`
+ // Quantity - READ-ONLY; The quantity of the transaction.
+ Quantity *decimal.Decimal `json:"quantity,omitempty"`
+ // Amount - READ-ONLY; The charge of the transaction.
+ Amount *decimal.Decimal `json:"amount,omitempty"`
+ // Currency - READ-ONLY; The ISO currency in which the transaction is charged, for example, USD.
+ Currency *string `json:"currency,omitempty"`
+ // ReservationOrderName - READ-ONLY; The name of the reservation order.
+ ReservationOrderName *string `json:"reservationOrderName,omitempty"`
+ // PurchasingEnrollment - READ-ONLY; The purchasing enrollment.
+ PurchasingEnrollment *string `json:"purchasingEnrollment,omitempty"`
+ // PurchasingSubscriptionGUID - READ-ONLY; The subscription guid that makes the transaction.
+ PurchasingSubscriptionGUID *uuid.UUID `json:"purchasingSubscriptionGuid,omitempty"`
+ // PurchasingSubscriptionName - READ-ONLY; The subscription name that makes the transaction.
+ PurchasingSubscriptionName *string `json:"purchasingSubscriptionName,omitempty"`
+ // ArmSkuName - READ-ONLY; This is the ARM Sku name. It can be used to join with the serviceType field in additional info in usage records.
+ ArmSkuName *string `json:"armSkuName,omitempty"`
+ // Term - READ-ONLY; This is the term of the transaction.
+ Term *string `json:"term,omitempty"`
+ // Region - READ-ONLY; The region of the transaction.
+ Region *string `json:"region,omitempty"`
+ // AccountName - READ-ONLY; The name of the account that makes the transaction.
+ AccountName *string `json:"accountName,omitempty"`
+ // AccountOwnerEmail - READ-ONLY; The email of the account owner that makes the transaction.
+ AccountOwnerEmail *string `json:"accountOwnerEmail,omitempty"`
+ // DepartmentName - READ-ONLY; The department name.
+ DepartmentName *string `json:"departmentName,omitempty"`
+ // CostCenter - READ-ONLY; The cost center of this department if it is a department and a cost center is provided.
+ CostCenter *string `json:"costCenter,omitempty"`
+ // CurrentEnrollment - READ-ONLY; The current enrollment.
+ CurrentEnrollment *string `json:"currentEnrollment,omitempty"`
+ // BillingFrequency - READ-ONLY; The billing frequency, which can be either one-time or recurring.
+ BillingFrequency *string `json:"billingFrequency,omitempty"`
+}
+
+// LegacyUsageDetail legacy usage detail.
+type LegacyUsageDetail struct {
+ // LegacyUsageDetailProperties - Properties for legacy usage details
+ *LegacyUsageDetailProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+ // Kind - Possible values include: 'KindUsageDetail', 'KindLegacy', 'KindModern'
+ Kind Kind `json:"kind,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for LegacyUsageDetail.
+func (lud LegacyUsageDetail) MarshalJSON() ([]byte, error) {
+ lud.Kind = KindLegacy
+ objectMap := make(map[string]interface{})
+ if lud.LegacyUsageDetailProperties != nil {
+ objectMap["properties"] = lud.LegacyUsageDetailProperties
+ }
+ if lud.Kind != "" {
+ objectMap["kind"] = lud.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyUsageDetail is the BasicUsageDetail implementation for LegacyUsageDetail.
+func (lud LegacyUsageDetail) AsLegacyUsageDetail() (*LegacyUsageDetail, bool) {
+ return &lud, true
+}
+
+// AsModernUsageDetail is the BasicUsageDetail implementation for LegacyUsageDetail.
+func (lud LegacyUsageDetail) AsModernUsageDetail() (*ModernUsageDetail, bool) {
+ return nil, false
+}
+
+// AsUsageDetail is the BasicUsageDetail implementation for LegacyUsageDetail.
+func (lud LegacyUsageDetail) AsUsageDetail() (*UsageDetail, bool) {
+ return nil, false
+}
+
+// AsBasicUsageDetail is the BasicUsageDetail implementation for LegacyUsageDetail.
+func (lud LegacyUsageDetail) AsBasicUsageDetail() (BasicUsageDetail, bool) {
+ return &lud, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for LegacyUsageDetail struct.
+func (lud *LegacyUsageDetail) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var legacyUsageDetailProperties LegacyUsageDetailProperties
+ err = json.Unmarshal(*v, &legacyUsageDetailProperties)
+ if err != nil {
+ return err
+ }
+ lud.LegacyUsageDetailProperties = &legacyUsageDetailProperties
+ }
+ case "kind":
+ if v != nil {
+ var kind Kind
+ err = json.Unmarshal(*v, &kind)
+ if err != nil {
+ return err
+ }
+ lud.Kind = kind
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ lud.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ lud.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ lud.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ lud.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// LegacyUsageDetailProperties the properties of the legacy usage detail.
+type LegacyUsageDetailProperties struct {
+ // BillingAccountID - READ-ONLY; Billing Account identifier.
+ BillingAccountID *string `json:"billingAccountId,omitempty"`
+ // BillingAccountName - READ-ONLY; Billing Account Name.
+ BillingAccountName *string `json:"billingAccountName,omitempty"`
+ // BillingPeriodStartDate - READ-ONLY; The billing period start date.
+ BillingPeriodStartDate *date.Time `json:"billingPeriodStartDate,omitempty"`
+ // BillingPeriodEndDate - READ-ONLY; The billing period end date.
+ BillingPeriodEndDate *date.Time `json:"billingPeriodEndDate,omitempty"`
+ // BillingProfileID - READ-ONLY; Billing Profile identifier.
+ BillingProfileID *string `json:"billingProfileId,omitempty"`
+ // BillingProfileName - READ-ONLY; Billing Profile Name.
+ BillingProfileName *string `json:"billingProfileName,omitempty"`
+ // AccountOwnerID - READ-ONLY; Account Owner Id.
+ AccountOwnerID *string `json:"accountOwnerId,omitempty"`
+ // AccountName - READ-ONLY; Account Name.
+ AccountName *string `json:"accountName,omitempty"`
+ // SubscriptionID - READ-ONLY; Subscription guid.
+ SubscriptionID *string `json:"subscriptionId,omitempty"`
+ // SubscriptionName - READ-ONLY; Subscription name.
+ SubscriptionName *string `json:"subscriptionName,omitempty"`
+ // Date - READ-ONLY; Date for the usage record.
+ Date *date.Time `json:"date,omitempty"`
+ // Product - READ-ONLY; Product name for the consumed service or purchase. Not available for Marketplace.
+ Product *string `json:"product,omitempty"`
+ // PartNumber - READ-ONLY; Part Number of the service used. Can be used to join with the price sheet. Not available for marketplace.
+ PartNumber *string `json:"partNumber,omitempty"`
+ // MeterID - READ-ONLY; The meter id (GUID). Not available for marketplace. For reserved instance this represents the primary meter for which the reservation was purchased. For the actual VM Size for which the reservation is purchased see productOrderName.
+ MeterID *uuid.UUID `json:"meterId,omitempty"`
+ // MeterDetails - READ-ONLY; The details about the meter. By default this is not populated, unless it's specified in $expand.
+ MeterDetails *MeterDetailsResponse `json:"meterDetails,omitempty"`
+ // Quantity - READ-ONLY; The usage quantity.
+ Quantity *decimal.Decimal `json:"quantity,omitempty"`
+ // EffectivePrice - READ-ONLY; Effective Price that's charged for the usage.
+ EffectivePrice *decimal.Decimal `json:"effectivePrice,omitempty"`
+ // Cost - READ-ONLY; The amount of cost before tax.
+ Cost *decimal.Decimal `json:"cost,omitempty"`
+ // UnitPrice - READ-ONLY; Unit Price is the price applicable to you. (your EA or other contract price).
+ UnitPrice *decimal.Decimal `json:"unitPrice,omitempty"`
+ // BillingCurrency - READ-ONLY; Billing Currency.
+ BillingCurrency *string `json:"billingCurrency,omitempty"`
+ // ResourceLocation - READ-ONLY; Resource Location.
+ ResourceLocation *string `json:"resourceLocation,omitempty"`
+ // ConsumedService - READ-ONLY; Consumed service name. Name of the azure resource provider that emits the usage or was purchased. This value is not provided for marketplace usage.
+ ConsumedService *string `json:"consumedService,omitempty"`
+ // ResourceID - READ-ONLY; Azure resource manager resource identifier.
+ ResourceID *string `json:"resourceId,omitempty"`
+ // ResourceName - READ-ONLY; Resource Name.
+ ResourceName *string `json:"resourceName,omitempty"`
+ // ServiceInfo1 - READ-ONLY; Service Info 1.
+ ServiceInfo1 *string `json:"serviceInfo1,omitempty"`
+ // ServiceInfo2 - READ-ONLY; Service Info 2.
+ ServiceInfo2 *string `json:"serviceInfo2,omitempty"`
+ // AdditionalInfo - READ-ONLY; Additional details of this usage item. By default this is not populated, unless it's specified in $expand. Use this field to get usage line item specific details such as the actual VM Size (ServiceType) or the ratio in which the reservation discount is applied.
+ AdditionalInfo *string `json:"additionalInfo,omitempty"`
+ // InvoiceSection - READ-ONLY; Invoice Section Name.
+ InvoiceSection *string `json:"invoiceSection,omitempty"`
+ // CostCenter - READ-ONLY; The cost center of this department if it is a department and a cost center is provided.
+ CostCenter *string `json:"costCenter,omitempty"`
+ // ResourceGroup - READ-ONLY; Resource Group Name.
+ ResourceGroup *string `json:"resourceGroup,omitempty"`
+ // ReservationID - READ-ONLY; ARM resource id of the reservation. Only applies to records relevant to reservations.
+ ReservationID *string `json:"reservationId,omitempty"`
+ // ReservationName - READ-ONLY; User provided display name of the reservation. Last known name for a particular day is populated in the daily data. Only applies to records relevant to reservations.
+ ReservationName *string `json:"reservationName,omitempty"`
+ // ProductOrderID - READ-ONLY; Product Order Id. For reservations this is the Reservation Order ID.
+ ProductOrderID *string `json:"productOrderId,omitempty"`
+ // ProductOrderName - READ-ONLY; Product Order Name. For reservations this is the SKU that was purchased.
+ ProductOrderName *string `json:"productOrderName,omitempty"`
+ // OfferID - READ-ONLY; Offer Id. Ex: MS-AZR-0017P, MS-AZR-0148P.
+ OfferID *string `json:"offerId,omitempty"`
+ // IsAzureCreditEligible - READ-ONLY; Is Azure Credit Eligible.
+ IsAzureCreditEligible *bool `json:"isAzureCreditEligible,omitempty"`
+ // Term - READ-ONLY; Term (in months). 1 month for monthly recurring purchase. 12 months for a 1 year reservation. 36 months for a 3 year reservation.
+ Term *string `json:"term,omitempty"`
+ // PublisherName - READ-ONLY; Publisher Name.
+ PublisherName *string `json:"publisherName,omitempty"`
+ // PublisherType - READ-ONLY; Publisher Type.
+ PublisherType *string `json:"publisherType,omitempty"`
+ // PlanName - READ-ONLY; Plan Name.
+ PlanName *string `json:"planName,omitempty"`
+ // ChargeType - READ-ONLY; Indicates a charge represents credits, usage, a Marketplace purchase, a reservation fee, or a refund.
+ ChargeType *string `json:"chargeType,omitempty"`
+ // Frequency - READ-ONLY; Indicates how frequently this charge will occur. OneTime for purchases which only happen once, Monthly for fees which recur every month, and UsageBased for charges based on how much a service is used.
+ Frequency *string `json:"frequency,omitempty"`
+}
+
+// LotProperties the lot properties.
+type LotProperties struct {
+ // OriginalAmount - READ-ONLY; Original amount.
+ OriginalAmount *Amount `json:"originalAmount,omitempty"`
+ // ClosedBalance - READ-ONLY; Closed balance.
+ ClosedBalance *Amount `json:"closedBalance,omitempty"`
+ // Source - READ-ONLY; Lot source. Possible values include: 'PurchasedCredit', 'PromotionalCredit'
+ Source LotSource `json:"source,omitempty"`
+ // StartDate - READ-ONLY; Start date.
+ StartDate *date.Time `json:"startDate,omitempty"`
+ // ExpirationDate - READ-ONLY; Expiration date.
+ ExpirationDate *date.Time `json:"expirationDate,omitempty"`
+ // PoNumber - READ-ONLY; PO number.
+ PoNumber *string `json:"poNumber,omitempty"`
+}
+
+// Lots result of listing lot summary.
+type Lots struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of lot summary.
+ Value *[]LotSummary `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// LotsIterator provides access to a complete listing of LotSummary values.
+type LotsIterator struct {
+ i int
+ page LotsPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *LotsIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/LotsIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *LotsIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter LotsIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter LotsIterator) Response() Lots {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter LotsIterator) Value() LotSummary {
+ if !iter.page.NotDone() {
+ return LotSummary{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// NewLotsIterator creates a new instance of the LotsIterator type.
+func NewLotsIterator(page LotsPage) LotsIterator {
+ return LotsIterator{page: page}
+}
+
+// IsEmpty returns true if the Lots contains no values.
+func (l Lots) IsEmpty() bool {
+ return l.Value == nil || len(*l.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (l Lots) hasNextLink() bool {
+ return l.NextLink != nil && len(*l.NextLink) != 0
+}
+
+// lotsPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (l Lots) lotsPreparer(ctx context.Context) (*http.Request, error) {
+ if !l.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(l.NextLink)))
+}
+
+// LotsPage contains a page of LotSummary values.
+type LotsPage struct {
+ fn func(context.Context, Lots) (Lots, error)
+ l Lots
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *LotsPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/LotsPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.l)
+ if err != nil {
+ return err
+ }
+ page.l = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *LotsPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page LotsPage) NotDone() bool {
+ return !page.l.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page LotsPage) Response() Lots {
+ return page.l
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page LotsPage) Values() []LotSummary {
+ if page.l.IsEmpty() {
+ return nil
+ }
+ return *page.l.Value
+}
+
+// NewLotsPage creates a new instance of the LotsPage type.
+func NewLotsPage(cur Lots, getNextPage func(context.Context, Lots) (Lots, error)) LotsPage {
+ return LotsPage{
+ fn: getNextPage,
+ l: cur,
+ }
+}
+
+// LotSummary a lot summary resource.
+type LotSummary struct {
+ *LotProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for LotSummary.
+func (ls LotSummary) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if ls.LotProperties != nil {
+ objectMap["properties"] = ls.LotProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for LotSummary struct.
+func (ls *LotSummary) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var lotProperties LotProperties
+ err = json.Unmarshal(*v, &lotProperties)
+ if err != nil {
+ return err
+ }
+ ls.LotProperties = &lotProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ ls.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ ls.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ ls.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ ls.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ManagementGroupAggregatedCostProperties the properties of the Management Group Aggregated Cost.
+type ManagementGroupAggregatedCostProperties struct {
+ // BillingPeriodID - READ-ONLY; The id of the billing period resource that the aggregated cost belongs to.
+ BillingPeriodID *string `json:"billingPeriodId,omitempty"`
+ // UsageStart - READ-ONLY; The start of the date time range covered by aggregated cost.
+ UsageStart *date.Time `json:"usageStart,omitempty"`
+ // UsageEnd - READ-ONLY; The end of the date time range covered by the aggregated cost.
+ UsageEnd *date.Time `json:"usageEnd,omitempty"`
+ // AzureCharges - READ-ONLY; Azure Charges.
+ AzureCharges *decimal.Decimal `json:"azureCharges,omitempty"`
+ // MarketplaceCharges - READ-ONLY; Marketplace Charges.
+ MarketplaceCharges *decimal.Decimal `json:"marketplaceCharges,omitempty"`
+ // ChargesBilledSeparately - READ-ONLY; Charges Billed Separately.
+ ChargesBilledSeparately *decimal.Decimal `json:"chargesBilledSeparately,omitempty"`
+ // Currency - READ-ONLY; The ISO currency in which the meter is charged, for example, USD.
+ Currency *string `json:"currency,omitempty"`
+ // Children - Children of a management group
+ Children *[]ManagementGroupAggregatedCostResult `json:"children,omitempty"`
+ // IncludedSubscriptions - List of subscription Guids included in the calculation of aggregated cost
+ IncludedSubscriptions *[]string `json:"includedSubscriptions,omitempty"`
+ // ExcludedSubscriptions - List of subscription Guids excluded from the calculation of aggregated cost
+ ExcludedSubscriptions *[]string `json:"excludedSubscriptions,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ManagementGroupAggregatedCostProperties.
+func (mgacp ManagementGroupAggregatedCostProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if mgacp.Children != nil {
+ objectMap["children"] = mgacp.Children
+ }
+ if mgacp.IncludedSubscriptions != nil {
+ objectMap["includedSubscriptions"] = mgacp.IncludedSubscriptions
+ }
+ if mgacp.ExcludedSubscriptions != nil {
+ objectMap["excludedSubscriptions"] = mgacp.ExcludedSubscriptions
+ }
+ return json.Marshal(objectMap)
+}
+
+// ManagementGroupAggregatedCostResult a management group aggregated cost resource.
+type ManagementGroupAggregatedCostResult struct {
+ autorest.Response `json:"-"`
+ *ManagementGroupAggregatedCostProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for ManagementGroupAggregatedCostResult.
+func (mgacr ManagementGroupAggregatedCostResult) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if mgacr.ManagementGroupAggregatedCostProperties != nil {
+ objectMap["properties"] = mgacr.ManagementGroupAggregatedCostProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ManagementGroupAggregatedCostResult struct.
+func (mgacr *ManagementGroupAggregatedCostResult) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var managementGroupAggregatedCostProperties ManagementGroupAggregatedCostProperties
+ err = json.Unmarshal(*v, &managementGroupAggregatedCostProperties)
+ if err != nil {
+ return err
+ }
+ mgacr.ManagementGroupAggregatedCostProperties = &managementGroupAggregatedCostProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ mgacr.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ mgacr.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ mgacr.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ mgacr.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// Marketplace a marketplace resource.
+type Marketplace struct {
+ *MarketplaceProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for Marketplace.
+func (mVar Marketplace) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if mVar.MarketplaceProperties != nil {
+ objectMap["properties"] = mVar.MarketplaceProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for Marketplace struct.
+func (mVar *Marketplace) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var marketplaceProperties MarketplaceProperties
+ err = json.Unmarshal(*v, &marketplaceProperties)
+ if err != nil {
+ return err
+ }
+ mVar.MarketplaceProperties = &marketplaceProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ mVar.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ mVar.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ mVar.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ mVar.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// MarketplaceProperties the properties of the marketplace usage detail.
+type MarketplaceProperties struct {
+ // BillingPeriodID - READ-ONLY; The id of the billing period resource that the usage belongs to.
+ BillingPeriodID *string `json:"billingPeriodId,omitempty"`
+ // UsageStart - READ-ONLY; The start of the date time range covered by the usage detail.
+ UsageStart *date.Time `json:"usageStart,omitempty"`
+ // UsageEnd - READ-ONLY; The end of the date time range covered by the usage detail.
+ UsageEnd *date.Time `json:"usageEnd,omitempty"`
+ // ResourceRate - READ-ONLY; The marketplace resource rate.
+ ResourceRate *decimal.Decimal `json:"resourceRate,omitempty"`
+ // OfferName - READ-ONLY; The type of offer.
+ OfferName *string `json:"offerName,omitempty"`
+ // ResourceGroup - READ-ONLY; The name of resource group.
+ ResourceGroup *string `json:"resourceGroup,omitempty"`
+ // OrderNumber - READ-ONLY; The order number.
+ OrderNumber *string `json:"orderNumber,omitempty"`
+ // InstanceName - READ-ONLY; The name of the resource instance that the usage is about.
+ InstanceName *string `json:"instanceName,omitempty"`
+ // InstanceID - READ-ONLY; The uri of the resource instance that the usage is about.
+ InstanceID *string `json:"instanceId,omitempty"`
+ // Currency - READ-ONLY; The ISO currency in which the meter is charged, for example, USD.
+ Currency *string `json:"currency,omitempty"`
+ // ConsumedQuantity - READ-ONLY; The quantity of usage.
+ ConsumedQuantity *decimal.Decimal `json:"consumedQuantity,omitempty"`
+ // UnitOfMeasure - READ-ONLY; The unit of measure.
+ UnitOfMeasure *string `json:"unitOfMeasure,omitempty"`
+ // PretaxCost - READ-ONLY; The amount of cost before tax.
+ PretaxCost *decimal.Decimal `json:"pretaxCost,omitempty"`
+ // IsEstimated - READ-ONLY; The estimated usage is subject to change.
+ IsEstimated *bool `json:"isEstimated,omitempty"`
+ // MeterID - READ-ONLY; The meter id (GUID).
+ MeterID *uuid.UUID `json:"meterId,omitempty"`
+ // SubscriptionGUID - READ-ONLY; Subscription guid.
+ SubscriptionGUID *uuid.UUID `json:"subscriptionGuid,omitempty"`
+ // SubscriptionName - READ-ONLY; Subscription name.
+ SubscriptionName *string `json:"subscriptionName,omitempty"`
+ // AccountName - READ-ONLY; Account name.
+ AccountName *string `json:"accountName,omitempty"`
+ // DepartmentName - READ-ONLY; Department name.
+ DepartmentName *string `json:"departmentName,omitempty"`
+ // ConsumedService - READ-ONLY; Consumed service name.
+ ConsumedService *string `json:"consumedService,omitempty"`
+ // CostCenter - READ-ONLY; The cost center of this department if it is a department and a cost center exists.
+ CostCenter *string `json:"costCenter,omitempty"`
+ // AdditionalProperties - READ-ONLY; Additional details of this usage item. By default this is not populated, unless it's specified in $expand.
+ AdditionalProperties *string `json:"additionalProperties,omitempty"`
+ // PublisherName - READ-ONLY; The name of publisher.
+ PublisherName *string `json:"publisherName,omitempty"`
+ // PlanName - READ-ONLY; The name of plan.
+ PlanName *string `json:"planName,omitempty"`
+ // IsRecurringCharge - READ-ONLY; Flag indicating whether this is a recurring charge or not.
+ IsRecurringCharge *bool `json:"isRecurringCharge,omitempty"`
+}
+
+// MarketplacesListResult result of listing marketplaces. It contains a list of available marketplaces in
+// reverse chronological order by billing period.
+type MarketplacesListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of marketplaces.
+ Value *[]Marketplace `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// MarketplacesListResultIterator provides access to a complete listing of Marketplace values.
+type MarketplacesListResultIterator struct {
+ i int
+ page MarketplacesListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *MarketplacesListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/MarketplacesListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *MarketplacesListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter MarketplacesListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter MarketplacesListResultIterator) Response() MarketplacesListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter MarketplacesListResultIterator) Value() Marketplace {
+ if !iter.page.NotDone() {
+ return Marketplace{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// Creates a new instance of the MarketplacesListResultIterator type.
+func NewMarketplacesListResultIterator(page MarketplacesListResultPage) MarketplacesListResultIterator {
+ return MarketplacesListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (mlr MarketplacesListResult) IsEmpty() bool {
+ return mlr.Value == nil || len(*mlr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (mlr MarketplacesListResult) hasNextLink() bool {
+ return mlr.NextLink != nil && len(*mlr.NextLink) != 0
+}
+
+// marketplacesListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (mlr MarketplacesListResult) marketplacesListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !mlr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(mlr.NextLink)))
+}
+
+// MarketplacesListResultPage contains a page of Marketplace values.
+type MarketplacesListResultPage struct {
+ fn func(context.Context, MarketplacesListResult) (MarketplacesListResult, error)
+ mlr MarketplacesListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *MarketplacesListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/MarketplacesListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.mlr)
+ if err != nil {
+ return err
+ }
+ page.mlr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *MarketplacesListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page MarketplacesListResultPage) NotDone() bool {
+ return !page.mlr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page MarketplacesListResultPage) Response() MarketplacesListResult {
+ return page.mlr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page MarketplacesListResultPage) Values() []Marketplace {
+ if page.mlr.IsEmpty() {
+ return nil
+ }
+ return *page.mlr.Value
+}
+
+// Creates a new instance of the MarketplacesListResultPage type.
+func NewMarketplacesListResultPage(cur MarketplacesListResult, getNextPage func(context.Context, MarketplacesListResult) (MarketplacesListResult, error)) MarketplacesListResultPage {
+ return MarketplacesListResultPage{
+ fn: getNextPage,
+ mlr: cur,
+ }
+}
+
+// MeterDetails the properties of the meter detail.
+type MeterDetails struct {
+ // MeterName - READ-ONLY; The name of the meter, within the given meter category
+ MeterName *string `json:"meterName,omitempty"`
+ // MeterCategory - READ-ONLY; The category of the meter, for example, 'Cloud services', 'Networking', etc.
+ MeterCategory *string `json:"meterCategory,omitempty"`
+ // MeterSubCategory - READ-ONLY; The subcategory of the meter, for example, 'A6 Cloud services', 'ExpressRoute (IXP)', etc.
+ MeterSubCategory *string `json:"meterSubCategory,omitempty"`
+ // Unit - READ-ONLY; The unit in which the meter consumption is charged, for example, 'Hours', 'GB', etc.
+ Unit *string `json:"unit,omitempty"`
+ // MeterLocation - READ-ONLY; The location in which the Azure service is available.
+ MeterLocation *string `json:"meterLocation,omitempty"`
+ // TotalIncludedQuantity - READ-ONLY; The total included quantity associated with the offer.
+ TotalIncludedQuantity *decimal.Decimal `json:"totalIncludedQuantity,omitempty"`
+ // PretaxStandardRate - READ-ONLY; The pretax listing price.
+ PretaxStandardRate *decimal.Decimal `json:"pretaxStandardRate,omitempty"`
+ // ServiceName - READ-ONLY; The name of the service.
+ ServiceName *string `json:"serviceName,omitempty"`
+ // ServiceTier - READ-ONLY; The service tier.
+ ServiceTier *string `json:"serviceTier,omitempty"`
+}
+
+// MeterDetailsResponse the properties of the meter detail.
+type MeterDetailsResponse struct {
+ // MeterName - READ-ONLY; The name of the meter, within the given meter category
+ MeterName *string `json:"meterName,omitempty"`
+ // MeterCategory - READ-ONLY; The category of the meter, for example, 'Cloud services', 'Networking', etc.
+ MeterCategory *string `json:"meterCategory,omitempty"`
+ // MeterSubCategory - READ-ONLY; The subcategory of the meter, for example, 'A6 Cloud services', 'ExpressRoute (IXP)', etc.
+ MeterSubCategory *string `json:"meterSubCategory,omitempty"`
+ // UnitOfMeasure - READ-ONLY; The unit in which the meter consumption is charged, for example, 'Hours', 'GB', etc.
+ UnitOfMeasure *string `json:"unitOfMeasure,omitempty"`
+ // ServiceFamily - READ-ONLY; The service family.
+ ServiceFamily *string `json:"serviceFamily,omitempty"`
+}
+
+// ModernChargeSummary modern charge summary.
+type ModernChargeSummary struct {
+ // ModernChargeSummaryProperties - Properties for modern charge summary
+ *ModernChargeSummaryProperties `json:"properties,omitempty"`
+ // Kind - Possible values include: 'KindBasicChargeSummaryKindChargeSummary', 'KindBasicChargeSummaryKindLegacy', 'KindBasicChargeSummaryKindModern'
+ Kind KindBasicChargeSummary `json:"kind,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for ModernChargeSummary.
+func (mcs ModernChargeSummary) MarshalJSON() ([]byte, error) {
+ mcs.Kind = KindBasicChargeSummaryKindModern
+ objectMap := make(map[string]interface{})
+ if mcs.ModernChargeSummaryProperties != nil {
+ objectMap["properties"] = mcs.ModernChargeSummaryProperties
+ }
+ if mcs.Kind != "" {
+ objectMap["kind"] = mcs.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyChargeSummary is the BasicChargeSummary implementation for ModernChargeSummary.
+func (mcs ModernChargeSummary) AsLegacyChargeSummary() (*LegacyChargeSummary, bool) {
+ return nil, false
+}
+
+// AsModernChargeSummary is the BasicChargeSummary implementation for ModernChargeSummary.
+func (mcs ModernChargeSummary) AsModernChargeSummary() (*ModernChargeSummary, bool) {
+ return &mcs, true
+}
+
+// AsChargeSummary is the BasicChargeSummary implementation for ModernChargeSummary.
+func (mcs ModernChargeSummary) AsChargeSummary() (*ChargeSummary, bool) {
+ return nil, false
+}
+
+// AsBasicChargeSummary is the BasicChargeSummary implementation for ModernChargeSummary.
+func (mcs ModernChargeSummary) AsBasicChargeSummary() (BasicChargeSummary, bool) {
+ return &mcs, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for ModernChargeSummary struct.
+func (mcs *ModernChargeSummary) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var modernChargeSummaryProperties ModernChargeSummaryProperties
+ err = json.Unmarshal(*v, &modernChargeSummaryProperties)
+ if err != nil {
+ return err
+ }
+ mcs.ModernChargeSummaryProperties = &modernChargeSummaryProperties
+ }
+ case "kind":
+ if v != nil {
+ var kind KindBasicChargeSummary
+ err = json.Unmarshal(*v, &kind)
+ if err != nil {
+ return err
+ }
+ mcs.Kind = kind
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ mcs.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ mcs.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ mcs.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ mcs.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ModernChargeSummaryProperties the properties of modern charge summary.
+type ModernChargeSummaryProperties struct {
+ // BillingPeriodID - READ-ONLY; The id of the billing period resource that the charge belongs to.
+ BillingPeriodID *string `json:"billingPeriodId,omitempty"`
+ // UsageStart - READ-ONLY; Usage start date.
+ UsageStart *string `json:"usageStart,omitempty"`
+ // UsageEnd - READ-ONLY; Usage end date.
+ UsageEnd *string `json:"usageEnd,omitempty"`
+ // AzureCharges - READ-ONLY; Azure Charges.
+ AzureCharges *Amount `json:"azureCharges,omitempty"`
+ // ChargesBilledSeparately - READ-ONLY; Charges Billed separately.
+ ChargesBilledSeparately *Amount `json:"chargesBilledSeparately,omitempty"`
+ // MarketplaceCharges - READ-ONLY; Marketplace Charges.
+ MarketplaceCharges *Amount `json:"marketplaceCharges,omitempty"`
+ // BillingAccountID - READ-ONLY; Billing Account Id
+ BillingAccountID *string `json:"billingAccountId,omitempty"`
+ // BillingProfileID - READ-ONLY; Billing Profile Id
+ BillingProfileID *string `json:"billingProfileId,omitempty"`
+ // InvoiceSectionID - READ-ONLY; Invoice Section Id
+ InvoiceSectionID *string `json:"invoiceSectionId,omitempty"`
+ // CustomerID - READ-ONLY; Customer Id
+ CustomerID *string `json:"customerId,omitempty"`
+ // IsInvoiced - READ-ONLY; Is charge Invoiced
+ IsInvoiced *bool `json:"isInvoiced,omitempty"`
+}
+
+// ModernReservationRecommendation modern reservation recommendation.
+type ModernReservationRecommendation struct {
+ // ModernReservationRecommendationProperties - Properties for modern reservation recommendation
+ *ModernReservationRecommendationProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+ // Location - READ-ONLY; Resource location
+ Location *string `json:"location,omitempty"`
+ // Sku - READ-ONLY; Resource sku
+ Sku *string `json:"sku,omitempty"`
+ // Kind - Possible values include: 'KindBasicReservationRecommendationKindReservationRecommendation', 'KindBasicReservationRecommendationKindLegacy', 'KindBasicReservationRecommendationKindModern'
+ Kind KindBasicReservationRecommendation `json:"kind,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ModernReservationRecommendation.
+func (mrr ModernReservationRecommendation) MarshalJSON() ([]byte, error) {
+ mrr.Kind = KindBasicReservationRecommendationKindModern
+ objectMap := make(map[string]interface{})
+ if mrr.ModernReservationRecommendationProperties != nil {
+ objectMap["properties"] = mrr.ModernReservationRecommendationProperties
+ }
+ if mrr.Kind != "" {
+ objectMap["kind"] = mrr.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyReservationRecommendation is the BasicReservationRecommendation implementation for ModernReservationRecommendation.
+func (mrr ModernReservationRecommendation) AsLegacyReservationRecommendation() (*LegacyReservationRecommendation, bool) {
+ return nil, false
+}
+
+// AsModernReservationRecommendation is the BasicReservationRecommendation implementation for ModernReservationRecommendation.
+func (mrr ModernReservationRecommendation) AsModernReservationRecommendation() (*ModernReservationRecommendation, bool) {
+ return &mrr, true
+}
+
+// AsReservationRecommendation is the BasicReservationRecommendation implementation for ModernReservationRecommendation.
+func (mrr ModernReservationRecommendation) AsReservationRecommendation() (*ReservationRecommendation, bool) {
+ return nil, false
+}
+
+// AsBasicReservationRecommendation is the BasicReservationRecommendation implementation for ModernReservationRecommendation.
+func (mrr ModernReservationRecommendation) AsBasicReservationRecommendation() (BasicReservationRecommendation, bool) {
+ return &mrr, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for ModernReservationRecommendation struct.
+func (mrr *ModernReservationRecommendation) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var modernReservationRecommendationProperties ModernReservationRecommendationProperties
+ err = json.Unmarshal(*v, &modernReservationRecommendationProperties)
+ if err != nil {
+ return err
+ }
+ mrr.ModernReservationRecommendationProperties = &modernReservationRecommendationProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ mrr.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ mrr.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ mrr.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ mrr.Tags = tags
+ }
+ case "location":
+ if v != nil {
+ var location string
+ err = json.Unmarshal(*v, &location)
+ if err != nil {
+ return err
+ }
+ mrr.Location = &location
+ }
+ case "sku":
+ if v != nil {
+ var sku string
+ err = json.Unmarshal(*v, &sku)
+ if err != nil {
+ return err
+ }
+ mrr.Sku = &sku
+ }
+ case "kind":
+ if v != nil {
+ var kind KindBasicReservationRecommendation
+ err = json.Unmarshal(*v, &kind)
+ if err != nil {
+ return err
+ }
+ mrr.Kind = kind
+ }
+ }
+ }
+
+ return nil
+}
+
+// ModernReservationRecommendationProperties the properties of the reservation recommendation.
+type ModernReservationRecommendationProperties struct {
+ // LookBackPeriod - READ-ONLY; The number of days of usage to look back for recommendation.
+ LookBackPeriod *string `json:"lookBackPeriod,omitempty"`
+ // InstanceFlexibilityRatio - READ-ONLY; The instance Flexibility Ratio.
+ InstanceFlexibilityRatio *int32 `json:"instanceFlexibilityRatio,omitempty"`
+ // InstanceFlexibilityGroup - READ-ONLY; The instance Flexibility Group.
+ InstanceFlexibilityGroup *string `json:"instanceFlexibilityGroup,omitempty"`
+ // NormalizedSize - READ-ONLY; The normalized Size.
+ NormalizedSize *string `json:"normalizedSize,omitempty"`
+ // RecommendedQuantityNormalized - READ-ONLY; The recommended Quantity Normalized.
+ RecommendedQuantityNormalized *float64 `json:"recommendedQuantityNormalized,omitempty"`
+ // MeterID - READ-ONLY; The meter id (GUID)
+ MeterID *uuid.UUID `json:"meterId,omitempty"`
+ // Term - READ-ONLY; RI recommendations in one or three year terms.
+ Term *string `json:"term,omitempty"`
+ // CostWithNoReservedInstances - READ-ONLY; The total amount of cost without reserved instances.
+ CostWithNoReservedInstances *Amount `json:"costWithNoReservedInstances,omitempty"`
+ // RecommendedQuantity - READ-ONLY; Recommended quantity for reserved instances.
+ RecommendedQuantity *decimal.Decimal `json:"recommendedQuantity,omitempty"`
+ // TotalCostWithReservedInstances - READ-ONLY; The total amount of cost with reserved instances.
+ TotalCostWithReservedInstances *Amount `json:"totalCostWithReservedInstances,omitempty"`
+ // NetSavings - READ-ONLY; Total estimated savings with reserved instances.
+ NetSavings *Amount `json:"netSavings,omitempty"`
+ // FirstUsageDate - READ-ONLY; The usage date for looking back.
+ FirstUsageDate *date.Time `json:"firstUsageDate,omitempty"`
+ // Scope - READ-ONLY; Shared or single recommendation.
+ Scope *string `json:"scope,omitempty"`
+ // SkuProperties - READ-ONLY; List of sku properties
+ SkuProperties *[]SkuProperty `json:"skuProperties,omitempty"`
+}
+
+// ModernReservationTransaction modern Reservation transaction resource.
+type ModernReservationTransaction struct {
+ *ModernReservationTransactionProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags *[]string `json:"tags,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ModernReservationTransaction.
+func (mrt ModernReservationTransaction) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if mrt.ModernReservationTransactionProperties != nil {
+ objectMap["properties"] = mrt.ModernReservationTransactionProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ModernReservationTransaction struct.
+func (mrt *ModernReservationTransaction) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var modernReservationTransactionProperties ModernReservationTransactionProperties
+ err = json.Unmarshal(*v, &modernReservationTransactionProperties)
+ if err != nil {
+ return err
+ }
+ mrt.ModernReservationTransactionProperties = &modernReservationTransactionProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ mrt.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ mrt.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ mrt.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags []string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ mrt.Tags = &tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ModernReservationTransactionProperties the properties of a modern reservation transaction.
+type ModernReservationTransactionProperties struct {
+ // Amount - READ-ONLY; The charge of the transaction.
+ Amount *decimal.Decimal `json:"amount,omitempty"`
+ // ArmSkuName - READ-ONLY; This is the ARM Sku name. It can be used to join with the serviceType field in additional info in usage records.
+ ArmSkuName *string `json:"armSkuName,omitempty"`
+ // BillingFrequency - READ-ONLY; The billing frequency, which can be either one-time or recurring.
+ BillingFrequency *string `json:"billingFrequency,omitempty"`
+ // BillingProfileID - READ-ONLY; Billing profile Id.
+ BillingProfileID *string `json:"billingProfileId,omitempty"`
+ // BillingProfileName - READ-ONLY; Billing profile name.
+ BillingProfileName *string `json:"billingProfileName,omitempty"`
+ // Currency - READ-ONLY; The ISO currency in which the transaction is charged, for example, USD.
+ Currency *string `json:"currency,omitempty"`
+ // Description - READ-ONLY; The description of the transaction.
+ Description *string `json:"description,omitempty"`
+ // EventDate - READ-ONLY; The date of the transaction
+ EventDate *date.Time `json:"eventDate,omitempty"`
+ // EventType - READ-ONLY; The type of the transaction (Purchase, Cancel, etc.)
+ EventType *string `json:"eventType,omitempty"`
+ // Invoice - READ-ONLY; Invoice Number
+ Invoice *string `json:"invoice,omitempty"`
+ // InvoiceID - READ-ONLY; Invoice Id as on the invoice where the specific transaction appears.
+ InvoiceID *string `json:"invoiceId,omitempty"`
+ // InvoiceSectionID - READ-ONLY; Invoice Section Id
+ InvoiceSectionID *string `json:"invoiceSectionId,omitempty"`
+ // InvoiceSectionName - READ-ONLY; Invoice Section Name.
+ InvoiceSectionName *string `json:"invoiceSectionName,omitempty"`
+ // PurchasingSubscriptionGUID - READ-ONLY; The subscription guid that makes the transaction.
+ PurchasingSubscriptionGUID *uuid.UUID `json:"purchasingSubscriptionGuid,omitempty"`
+ // PurchasingSubscriptionName - READ-ONLY; The subscription name that makes the transaction.
+ PurchasingSubscriptionName *string `json:"purchasingSubscriptionName,omitempty"`
+ // Quantity - READ-ONLY; The quantity of the transaction.
+ Quantity *decimal.Decimal `json:"quantity,omitempty"`
+ // Region - READ-ONLY; The region of the transaction.
+ Region *string `json:"region,omitempty"`
+ // ReservationOrderID - READ-ONLY; The reservation order ID is the identifier for a reservation purchase. Each reservation order ID represents a single purchase transaction. A reservation order contains reservations. The reservation order specifies the VM size and region for the reservations.
+ ReservationOrderID *string `json:"reservationOrderId,omitempty"`
+ // ReservationOrderName - READ-ONLY; The name of the reservation order.
+ ReservationOrderName *string `json:"reservationOrderName,omitempty"`
+ // Term - READ-ONLY; This is the term of the transaction.
+ Term *string `json:"term,omitempty"`
+}
+
+// ModernReservationTransactionsListResult result of listing reservation transactions.
+type ModernReservationTransactionsListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of reservation transactions.
+ Value *[]ModernReservationTransaction `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// ModernReservationTransactionsListResultIterator provides access to a complete listing of
+// ModernReservationTransaction values.
+type ModernReservationTransactionsListResultIterator struct {
+ i int
+ page ModernReservationTransactionsListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *ModernReservationTransactionsListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ModernReservationTransactionsListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *ModernReservationTransactionsListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter ModernReservationTransactionsListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter ModernReservationTransactionsListResultIterator) Response() ModernReservationTransactionsListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter ModernReservationTransactionsListResultIterator) Value() ModernReservationTransaction {
+ if !iter.page.NotDone() {
+ return ModernReservationTransaction{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// Creates a new instance of the ModernReservationTransactionsListResultIterator type.
+func NewModernReservationTransactionsListResultIterator(page ModernReservationTransactionsListResultPage) ModernReservationTransactionsListResultIterator {
+ return ModernReservationTransactionsListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (mrtlr ModernReservationTransactionsListResult) IsEmpty() bool {
+ return mrtlr.Value == nil || len(*mrtlr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (mrtlr ModernReservationTransactionsListResult) hasNextLink() bool {
+ return mrtlr.NextLink != nil && len(*mrtlr.NextLink) != 0
+}
+
+// modernReservationTransactionsListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (mrtlr ModernReservationTransactionsListResult) modernReservationTransactionsListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !mrtlr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(mrtlr.NextLink)))
+}
+
+// ModernReservationTransactionsListResultPage contains a page of ModernReservationTransaction values.
+type ModernReservationTransactionsListResultPage struct {
+ fn func(context.Context, ModernReservationTransactionsListResult) (ModernReservationTransactionsListResult, error)
+ mrtlr ModernReservationTransactionsListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *ModernReservationTransactionsListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ModernReservationTransactionsListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.mrtlr)
+ if err != nil {
+ return err
+ }
+ page.mrtlr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *ModernReservationTransactionsListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page ModernReservationTransactionsListResultPage) NotDone() bool {
+ return !page.mrtlr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page ModernReservationTransactionsListResultPage) Response() ModernReservationTransactionsListResult {
+ return page.mrtlr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page ModernReservationTransactionsListResultPage) Values() []ModernReservationTransaction {
+ if page.mrtlr.IsEmpty() {
+ return nil
+ }
+ return *page.mrtlr.Value
+}
+
+// Creates a new instance of the ModernReservationTransactionsListResultPage type.
+func NewModernReservationTransactionsListResultPage(cur ModernReservationTransactionsListResult, getNextPage func(context.Context, ModernReservationTransactionsListResult) (ModernReservationTransactionsListResult, error)) ModernReservationTransactionsListResultPage {
+ return ModernReservationTransactionsListResultPage{
+ fn: getNextPage,
+ mrtlr: cur,
+ }
+}
+
+// ModernUsageDetail modern usage detail.
+type ModernUsageDetail struct {
+ // ModernUsageDetailProperties - Properties for modern usage details
+ *ModernUsageDetailProperties `json:"properties,omitempty"`
+ // Kind - Possible values include: 'KindUsageDetail', 'KindLegacy', 'KindModern'
+ Kind Kind `json:"kind,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for ModernUsageDetail.
+func (mud ModernUsageDetail) MarshalJSON() ([]byte, error) {
+ mud.Kind = KindModern
+ objectMap := make(map[string]interface{})
+ if mud.ModernUsageDetailProperties != nil {
+ objectMap["properties"] = mud.ModernUsageDetailProperties
+ }
+ if mud.Kind != "" {
+ objectMap["kind"] = mud.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyUsageDetail is the BasicUsageDetail implementation for ModernUsageDetail.
+func (mud ModernUsageDetail) AsLegacyUsageDetail() (*LegacyUsageDetail, bool) {
+ return nil, false
+}
+
+// AsModernUsageDetail is the BasicUsageDetail implementation for ModernUsageDetail.
+func (mud ModernUsageDetail) AsModernUsageDetail() (*ModernUsageDetail, bool) {
+ return &mud, true
+}
+
+// AsUsageDetail is the BasicUsageDetail implementation for ModernUsageDetail.
+func (mud ModernUsageDetail) AsUsageDetail() (*UsageDetail, bool) {
+ return nil, false
+}
+
+// AsBasicUsageDetail is the BasicUsageDetail implementation for ModernUsageDetail.
+func (mud ModernUsageDetail) AsBasicUsageDetail() (BasicUsageDetail, bool) {
+ return &mud, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for ModernUsageDetail struct.
+func (mud *ModernUsageDetail) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var modernUsageDetailProperties ModernUsageDetailProperties
+ err = json.Unmarshal(*v, &modernUsageDetailProperties)
+ if err != nil {
+ return err
+ }
+ mud.ModernUsageDetailProperties = &modernUsageDetailProperties
+ }
+ case "kind":
+ if v != nil {
+ var kind Kind
+ err = json.Unmarshal(*v, &kind)
+ if err != nil {
+ return err
+ }
+ mud.Kind = kind
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ mud.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ mud.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ mud.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ mud.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ModernUsageDetailProperties the properties of the usage detail.
+type ModernUsageDetailProperties struct {
+ // BillingAccountID - READ-ONLY; Billing Account identifier.
+ BillingAccountID *string `json:"billingAccountId,omitempty"`
+ // BillingAccountName - READ-ONLY; Name of the Billing Account.
+ BillingAccountName *string `json:"billingAccountName,omitempty"`
+ // BillingPeriodStartDate - READ-ONLY; Billing Period Start Date as in the invoice.
+ BillingPeriodStartDate *date.Time `json:"billingPeriodStartDate,omitempty"`
+ // BillingPeriodEndDate - READ-ONLY; Billing Period End Date as in the invoice.
+ BillingPeriodEndDate *date.Time `json:"billingPeriodEndDate,omitempty"`
+ // BillingProfileID - READ-ONLY; Identifier for the billing profile that groups costs across invoices in the a singular billing currency across across the customers who have onboarded the Microsoft customer agreement and the customers in CSP who have made entitlement purchases like SaaS, Marketplace, RI, etc.
+ BillingProfileID *string `json:"billingProfileId,omitempty"`
+ // BillingProfileName - READ-ONLY; Name of the billing profile that groups costs across invoices in a singular billing currency across the customers who have onboarded the Microsoft customer agreement and the customers in CSP who have made entitlement purchases like SaaS, Marketplace, RI, etc.
+ BillingProfileName *string `json:"billingProfileName,omitempty"`
+ // SubscriptionGUID - READ-ONLY; Unique Microsoft generated identifier for the Azure Subscription.
+ SubscriptionGUID *string `json:"subscriptionGuid,omitempty"`
+ // SubscriptionName - READ-ONLY; Name of the Azure Subscription.
+ SubscriptionName *string `json:"subscriptionName,omitempty"`
+ // Date - READ-ONLY; Date for the usage record.
+ Date *date.Time `json:"date,omitempty"`
+ // Product - READ-ONLY; Name of the product that has accrued charges by consumption or purchase as listed in the invoice. Not available for Marketplace.
+ Product *string `json:"product,omitempty"`
+ // MeterID - READ-ONLY; The meter id (GUID). Not available for marketplace. For reserved instances this represents the primary meter for which the reservation was purchased. For the actual VM Size for which the reservation is purchased see productOrderName.
+ MeterID *uuid.UUID `json:"meterId,omitempty"`
+ // MeterName - READ-ONLY; Identifies the name of the meter against which consumption is measured.
+ MeterName *string `json:"meterName,omitempty"`
+ // MeterRegion - READ-ONLY; Identifies the location of the datacenter for certain services that are priced based on datacenter location.
+ MeterRegion *string `json:"meterRegion,omitempty"`
+ // MeterCategory - READ-ONLY; Identifies the top-level service for the usage.
+ MeterCategory *string `json:"meterCategory,omitempty"`
+ // MeterSubCategory - READ-ONLY; Defines the type or sub-category of Azure service that can affect the rate.
+ MeterSubCategory *string `json:"meterSubCategory,omitempty"`
+ // ServiceFamily - READ-ONLY; List the service family for the product purchased or charged (Example: Storage ; Compute).
+ ServiceFamily *string `json:"serviceFamily,omitempty"`
+ // Quantity - READ-ONLY; Measures the quantity purchased or consumed. The amount of the meter used during the billing period.
+ Quantity *decimal.Decimal `json:"quantity,omitempty"`
+ // UnitOfMeasure - READ-ONLY; Identifies the Unit that the service is charged in. For example, GB, hours, 10,000 s.
+ UnitOfMeasure *string `json:"unitOfMeasure,omitempty"`
+ // InstanceName - READ-ONLY; Instance Name.
+ InstanceName *string `json:"instanceName,omitempty"`
+ // CostInUSD - READ-ONLY; Estimated extendedCost or blended cost before tax in USD.
+ CostInUSD *decimal.Decimal `json:"costInUSD,omitempty"`
+ // UnitPrice - READ-ONLY; Unit Price is the price applicable to you. (your EA or other contract price).
+ UnitPrice *decimal.Decimal `json:"unitPrice,omitempty"`
+ // BillingCurrencyCode - READ-ONLY; The currency defining the billed cost.
+ BillingCurrencyCode *string `json:"billingCurrencyCode,omitempty"`
+ // ResourceLocation - READ-ONLY; Name of the resource location.
+ ResourceLocation *string `json:"resourceLocation,omitempty"`
+ // ConsumedService - READ-ONLY; Consumed service name. Name of the Azure resource provider that emits the usage or was purchased. This value is not provided for marketplace usage.
+ ConsumedService *string `json:"consumedService,omitempty"`
+ // ServiceInfo1 - READ-ONLY; Service Info 1.
+ ServiceInfo1 *string `json:"serviceInfo1,omitempty"`
+ // ServiceInfo2 - READ-ONLY; Service Info 2.
+ ServiceInfo2 *string `json:"serviceInfo2,omitempty"`
+ // AdditionalInfo - READ-ONLY; Additional details of this usage item. Use this field to get usage line item specific details such as the actual VM Size (ServiceType) or the ratio in which the reservation discount is applied.
+ AdditionalInfo *string `json:"additionalInfo,omitempty"`
+ // InvoiceSectionID - READ-ONLY; Identifier of the project that is being charged in the invoice. Not applicable for Microsoft Customer Agreements onboarded by partners.
+ InvoiceSectionID *string `json:"invoiceSectionId,omitempty"`
+ // InvoiceSectionName - READ-ONLY; Name of the project that is being charged in the invoice. Not applicable for Microsoft Customer Agreements onboarded by partners.
+ InvoiceSectionName *string `json:"invoiceSectionName,omitempty"`
+ // CostCenter - READ-ONLY; The cost center of this department if it is a department and a cost center is provided.
+ CostCenter *string `json:"costCenter,omitempty"`
+ // ResourceGroup - READ-ONLY; Name of the Azure resource group used for cohesive lifecycle management of resources.
+ ResourceGroup *string `json:"resourceGroup,omitempty"`
+ // ReservationID - READ-ONLY; ARM resource id of the reservation. Only applies to records relevant to reservations.
+ ReservationID *string `json:"reservationId,omitempty"`
+ // ReservationName - READ-ONLY; User provided display name of the reservation. Last known name for a particular day is populated in the daily data. Only applies to records relevant to reservations.
+ ReservationName *string `json:"reservationName,omitempty"`
+ // ProductOrderID - READ-ONLY; The identifier for the asset or Azure plan name that the subscription belongs to. For example: Azure Plan. For reservations this is the Reservation Order ID.
+ ProductOrderID *string `json:"productOrderId,omitempty"`
+ // ProductOrderName - READ-ONLY; Product Order Name. For reservations this is the SKU that was purchased.
+ ProductOrderName *string `json:"productOrderName,omitempty"`
+ // IsAzureCreditEligible - READ-ONLY; Determines if the cost is eligible to be paid for using Azure credits.
+ IsAzureCreditEligible *bool `json:"isAzureCreditEligible,omitempty"`
+ // Term - READ-ONLY; Term (in months). Displays the term for the validity of the offer. For example, for reserved instances it displays 12 months for the yearly term of the reserved instance. For one-time or recurring purchases, the term displays 1 month. This is not applicable for Azure consumption.
+ Term *string `json:"term,omitempty"`
+ // PublisherName - READ-ONLY; Name of the publisher of the service including Microsoft or Third Party publishers.
+ PublisherName *string `json:"publisherName,omitempty"`
+ // PublisherType - READ-ONLY; Type of publisher that identifies if the publisher is first party, third party reseller or third party agency.
+ PublisherType *string `json:"publisherType,omitempty"`
+ // ChargeType - READ-ONLY; Indicates a charge represents credits, usage, a Marketplace purchase, a reservation fee, or a refund.
+ ChargeType *string `json:"chargeType,omitempty"`
+ // Frequency - READ-ONLY; Indicates how frequently this charge will occur. OneTime for purchases which only happen once, Monthly for fees which recur every month, and UsageBased for charges based on how much a service is used.
+ Frequency *string `json:"frequency,omitempty"`
+ // CostInBillingCurrency - READ-ONLY; ExtendedCost or blended cost before tax in billed currency.
+ CostInBillingCurrency *decimal.Decimal `json:"costInBillingCurrency,omitempty"`
+ // CostInPricingCurrency - READ-ONLY; ExtendedCost or blended cost before tax in pricing currency to correlate with prices.
+ CostInPricingCurrency *decimal.Decimal `json:"costInPricingCurrency,omitempty"`
+ // ExchangeRate - READ-ONLY; Exchange rate used in conversion from pricing currency to billing currency.
+ ExchangeRate *string `json:"exchangeRate,omitempty"`
+ // ExchangeRateDate - READ-ONLY; Date of the exchange rate used in conversion from pricing currency to billing currency.
+ ExchangeRateDate *date.Time `json:"exchangeRateDate,omitempty"`
+ // InvoiceID - READ-ONLY; Invoice ID as on the invoice where the specific transaction appears.
+ InvoiceID *string `json:"invoiceId,omitempty"`
+ // PreviousInvoiceID - READ-ONLY; Reference to the original invoice when there is a refund (negative cost). This is populated only when there is a refund.
+ PreviousInvoiceID *string `json:"previousInvoiceId,omitempty"`
+ // PricingCurrencyCode - READ-ONLY; Pricing Billing Currency.
+ PricingCurrencyCode *string `json:"pricingCurrencyCode,omitempty"`
+ // ProductIdentifier - READ-ONLY; Identifier for the product that has accrued charges by consumption or purchase. This is the concatenated key of productId and skuId in Partner Center.
+ ProductIdentifier *string `json:"productIdentifier,omitempty"`
+ // ResourceLocationNormalized - READ-ONLY; Resource Location Normalized.
+ ResourceLocationNormalized *string `json:"resourceLocationNormalized,omitempty"`
+ // ServicePeriodStartDate - READ-ONLY; Start date for the rating period when the service usage was rated for charges. The prices for Azure services are determined for the rating period.
+ ServicePeriodStartDate *date.Time `json:"servicePeriodStartDate,omitempty"`
+ // ServicePeriodEndDate - READ-ONLY; End date for the period when the service usage was rated for charges. The prices for Azure services are determined based on the rating period.
+ ServicePeriodEndDate *date.Time `json:"servicePeriodEndDate,omitempty"`
+ // CustomerTenantID - READ-ONLY; Identifier of the customer's AAD tenant.
+ CustomerTenantID *string `json:"customerTenantId,omitempty"`
+ // CustomerName - READ-ONLY; Name of the customer's AAD tenant.
+ CustomerName *string `json:"customerName,omitempty"`
+ // PartnerTenantID - READ-ONLY; Identifier for the partner's AAD tenant.
+ PartnerTenantID *string `json:"partnerTenantId,omitempty"`
+ // PartnerName - READ-ONLY; Name of the partner's AAD tenant.
+ PartnerName *string `json:"partnerName,omitempty"`
+ // ResellerMpnID - READ-ONLY; MPNId for the reseller associated with the subscription.
+ ResellerMpnID *string `json:"resellerMpnId,omitempty"`
+ // ResellerName - READ-ONLY; Reseller Name.
+ ResellerName *string `json:"resellerName,omitempty"`
+ // PublisherID - READ-ONLY; Publisher Id.
+ PublisherID *string `json:"publisherId,omitempty"`
+ // MarketPrice - READ-ONLY; Market Price that's charged for the usage.
+ MarketPrice *decimal.Decimal `json:"marketPrice,omitempty"`
+ // ExchangeRatePricingToBilling - READ-ONLY; Exchange Rate from pricing currency to billing currency.
+ ExchangeRatePricingToBilling *decimal.Decimal `json:"exchangeRatePricingToBilling,omitempty"`
+ // PaygCostInBillingCurrency - READ-ONLY; The amount of PayG cost before tax in billing currency.
+ PaygCostInBillingCurrency *decimal.Decimal `json:"paygCostInBillingCurrency,omitempty"`
+ // PaygCostInUSD - READ-ONLY; The amount of PayG cost before tax in US Dollar currency.
+ PaygCostInUSD *decimal.Decimal `json:"paygCostInUSD,omitempty"`
+ // PartnerEarnedCreditRate - READ-ONLY; Rate of discount applied if there is a partner earned credit (PEC) based on partner admin link access.
+ PartnerEarnedCreditRate *decimal.Decimal `json:"partnerEarnedCreditRate,omitempty"`
+ // PartnerEarnedCreditApplied - READ-ONLY; Flag to indicate if partner earned credit has been applied or not.
+ PartnerEarnedCreditApplied *string `json:"partnerEarnedCreditApplied,omitempty"`
+}
+
+// Notification the notification associated with a budget.
+type Notification struct {
+ // Enabled - The notification is enabled or not.
+ Enabled *bool `json:"enabled,omitempty"`
+ // Operator - The comparison operator. Possible values include: 'EqualTo', 'GreaterThan', 'GreaterThanOrEqualTo'
+ Operator OperatorType `json:"operator,omitempty"`
+ // Threshold - Threshold value associated with a notification. A notification is sent when the cost exceeds the threshold. It is always a percentage and has to be between 0 and 1000.
+ Threshold *decimal.Decimal `json:"threshold,omitempty"`
+ // ContactEmails - Email addresses to send the budget notification to when the threshold is exceeded.
+ ContactEmails *[]string `json:"contactEmails,omitempty"`
+ // ContactRoles - Contact roles to send the budget notification to when the threshold is exceeded.
+ ContactRoles *[]string `json:"contactRoles,omitempty"`
+ // ContactGroups - Action groups to send the budget notification to when the threshold is exceeded.
+ ContactGroups *[]string `json:"contactGroups,omitempty"`
+ // ThresholdType - The type of threshold. Possible values include: 'Actual'
+ ThresholdType ThresholdType `json:"thresholdType,omitempty"`
+}
+
+// Operation a Consumption REST API operation.
+type Operation struct {
+ // Name - READ-ONLY; Operation name: {provider}/{resource}/{operation}.
+ Name *string `json:"name,omitempty"`
+ // Display - The object that represents the operation.
+ Display *OperationDisplay `json:"display,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for Operation.
+func (o Operation) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if o.Display != nil {
+ objectMap["display"] = o.Display
+ }
+ return json.Marshal(objectMap)
+}
+
+// OperationDisplay the object that represents the operation.
+type OperationDisplay struct {
+ // Provider - READ-ONLY; Service provider: Microsoft.Consumption.
+ Provider *string `json:"provider,omitempty"`
+ // Resource - READ-ONLY; Resource on which the operation is performed: UsageDetail, etc.
+ Resource *string `json:"resource,omitempty"`
+ // Operation - READ-ONLY; Operation type: Read, write, delete, etc.
+ Operation *string `json:"operation,omitempty"`
+}
+
+// OperationListResult result of listing consumption operations. It contains a list of operations and a URL
+// link to get the next set of results.
+type OperationListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; List of consumption operations supported by the Microsoft.Consumption resource provider.
+ Value *[]Operation `json:"value,omitempty"`
+ // NextLink - READ-ONLY; URL to get the next set of operation list results if there are any.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// OperationListResultIterator provides access to a complete listing of Operation values.
+type OperationListResultIterator struct {
+ i int
+ page OperationListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *OperationListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/OperationListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *OperationListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter OperationListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter OperationListResultIterator) Response() OperationListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter OperationListResultIterator) Value() Operation {
+ if !iter.page.NotDone() {
+ return Operation{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+ // NewOperationListResultIterator creates a new instance of the OperationListResultIterator type.
+func NewOperationListResultIterator(page OperationListResultPage) OperationListResultIterator {
+ return OperationListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (olr OperationListResult) IsEmpty() bool {
+ return olr.Value == nil || len(*olr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (olr OperationListResult) hasNextLink() bool {
+ return olr.NextLink != nil && len(*olr.NextLink) != 0
+}
+
+// operationListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (olr OperationListResult) operationListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !olr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(olr.NextLink)))
+}
+
+// OperationListResultPage contains a page of Operation values.
+type OperationListResultPage struct {
+ fn func(context.Context, OperationListResult) (OperationListResult, error)
+ olr OperationListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *OperationListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/OperationListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.olr)
+ if err != nil {
+ return err
+ }
+ page.olr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *OperationListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page OperationListResultPage) NotDone() bool {
+ return !page.olr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page OperationListResultPage) Response() OperationListResult {
+ return page.olr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page OperationListResultPage) Values() []Operation {
+ if page.olr.IsEmpty() {
+ return nil
+ }
+ return *page.olr.Value
+}
+
+ // NewOperationListResultPage creates a new instance of the OperationListResultPage type.
+func NewOperationListResultPage(cur OperationListResult, getNextPage func(context.Context, OperationListResult) (OperationListResult, error)) OperationListResultPage {
+ return OperationListResultPage{
+ fn: getNextPage,
+ olr: cur,
+ }
+}
+
+ // PriceSheetModel price sheet result. It contains the price sheet associated with a billing period.
+type PriceSheetModel struct {
+ // Pricesheets - READ-ONLY; Price sheet
+ Pricesheets *[]PriceSheetProperties `json:"pricesheets,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// PriceSheetProperties the properties of the price sheet.
+type PriceSheetProperties struct {
+ // BillingPeriodID - READ-ONLY; The id of the billing period resource that the usage belongs to.
+ BillingPeriodID *string `json:"billingPeriodId,omitempty"`
+ // MeterID - READ-ONLY; The meter id (GUID)
+ MeterID *uuid.UUID `json:"meterId,omitempty"`
+ // MeterDetails - READ-ONLY; The details about the meter. By default this is not populated, unless it's specified in $expand.
+ MeterDetails *MeterDetails `json:"meterDetails,omitempty"`
+ // UnitOfMeasure - READ-ONLY; Unit of measure
+ UnitOfMeasure *string `json:"unitOfMeasure,omitempty"`
+ // IncludedQuantity - READ-ONLY; Included quantity for an offer
+ IncludedQuantity *decimal.Decimal `json:"includedQuantity,omitempty"`
+ // PartNumber - READ-ONLY; Part Number
+ PartNumber *string `json:"partNumber,omitempty"`
+ // UnitPrice - READ-ONLY; Unit Price
+ UnitPrice *decimal.Decimal `json:"unitPrice,omitempty"`
+ // CurrencyCode - READ-ONLY; Currency Code
+ CurrencyCode *string `json:"currencyCode,omitempty"`
+ // OfferID - READ-ONLY; Offer Id
+ OfferID *string `json:"offerId,omitempty"`
+}
+
+ // PriceSheetResult a price sheet resource.
+type PriceSheetResult struct {
+ autorest.Response `json:"-"`
+ *PriceSheetModel `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for PriceSheetResult.
+func (psr PriceSheetResult) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if psr.PriceSheetModel != nil {
+ objectMap["properties"] = psr.PriceSheetModel
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for PriceSheetResult struct.
+func (psr *PriceSheetResult) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var priceSheetModel PriceSheetModel
+ err = json.Unmarshal(*v, &priceSheetModel)
+ if err != nil {
+ return err
+ }
+ psr.PriceSheetModel = &priceSheetModel
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ psr.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ psr.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ psr.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ psr.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ProxyResource the Resource model definition.
+type ProxyResource struct {
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // ETag - eTag of the resource. To handle concurrent update scenarios, this field will be used to determine whether the user is updating the latest version or not.
+ ETag *string `json:"eTag,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ProxyResource.
+func (pr ProxyResource) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if pr.ETag != nil {
+ objectMap["eTag"] = pr.ETag
+ }
+ return json.Marshal(objectMap)
+}
+
+// ReservationDetail reservation detail resource.
+type ReservationDetail struct {
+ *ReservationDetailProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for ReservationDetail.
+func (rd ReservationDetail) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if rd.ReservationDetailProperties != nil {
+ objectMap["properties"] = rd.ReservationDetailProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ReservationDetail struct.
+func (rd *ReservationDetail) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var reservationDetailProperties ReservationDetailProperties
+ err = json.Unmarshal(*v, &reservationDetailProperties)
+ if err != nil {
+ return err
+ }
+ rd.ReservationDetailProperties = &reservationDetailProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ rd.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ rd.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ rd.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ rd.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ReservationDetailProperties the properties of the reservation detail.
+type ReservationDetailProperties struct {
+ // ReservationOrderID - READ-ONLY; The reservation order ID is the identifier for a reservation purchase. Each reservation order ID represents a single purchase transaction. A reservation order contains reservations. The reservation order specifies the VM size and region for the reservations.
+ ReservationOrderID *string `json:"reservationOrderId,omitempty"`
+ // InstanceFlexibilityRatio - READ-ONLY; The instance Flexibility Ratio.
+ InstanceFlexibilityRatio *string `json:"instanceFlexibilityRatio,omitempty"`
+ // InstanceFlexibilityGroup - READ-ONLY; The instance Flexibility Group.
+ InstanceFlexibilityGroup *string `json:"instanceFlexibilityGroup,omitempty"`
+ // ReservationID - READ-ONLY; The reservation ID is the identifier of a reservation within a reservation order. Each reservation is the grouping for applying the benefit scope and also specifies the number of instances to which the reservation benefit can be applied.
+ ReservationID *string `json:"reservationId,omitempty"`
+ // SkuName - READ-ONLY; This is the ARM Sku name. It can be used to join with the serviceType field in additional info in usage records.
+ SkuName *string `json:"skuName,omitempty"`
+ // ReservedHours - READ-ONLY; This is the total hours reserved for the day. E.g. if a reservation for 1 instance was made at 1 PM, this will be 11 hours for that day and 24 hours for subsequent days.
+ ReservedHours *decimal.Decimal `json:"reservedHours,omitempty"`
+ // UsageDate - READ-ONLY; The date on which consumption occurred.
+ UsageDate *date.Time `json:"usageDate,omitempty"`
+ // UsedHours - READ-ONLY; This is the total hours used by the instance.
+ UsedHours *decimal.Decimal `json:"usedHours,omitempty"`
+ // InstanceID - READ-ONLY; This identifier is the name of the resource or the fully qualified Resource ID.
+ InstanceID *string `json:"instanceId,omitempty"`
+ // TotalReservedQuantity - READ-ONLY; This is the total count of instances that are reserved for the reservationId.
+ TotalReservedQuantity *decimal.Decimal `json:"totalReservedQuantity,omitempty"`
+ // Kind - READ-ONLY; The reservation kind.
+ Kind *string `json:"kind,omitempty"`
+}
+
+// ReservationDetailsListResult result of listing reservation details.
+type ReservationDetailsListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of reservation details.
+ Value *[]ReservationDetail `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// ReservationDetailsListResultIterator provides access to a complete listing of ReservationDetail values.
+type ReservationDetailsListResultIterator struct {
+ i int
+ page ReservationDetailsListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *ReservationDetailsListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationDetailsListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *ReservationDetailsListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter ReservationDetailsListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter ReservationDetailsListResultIterator) Response() ReservationDetailsListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter ReservationDetailsListResultIterator) Value() ReservationDetail {
+ if !iter.page.NotDone() {
+ return ReservationDetail{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+ // NewReservationDetailsListResultIterator creates a new instance of the ReservationDetailsListResultIterator type.
+func NewReservationDetailsListResultIterator(page ReservationDetailsListResultPage) ReservationDetailsListResultIterator {
+ return ReservationDetailsListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (rdlr ReservationDetailsListResult) IsEmpty() bool {
+ return rdlr.Value == nil || len(*rdlr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (rdlr ReservationDetailsListResult) hasNextLink() bool {
+ return rdlr.NextLink != nil && len(*rdlr.NextLink) != 0
+}
+
+// reservationDetailsListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (rdlr ReservationDetailsListResult) reservationDetailsListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !rdlr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(rdlr.NextLink)))
+}
+
+// ReservationDetailsListResultPage contains a page of ReservationDetail values.
+type ReservationDetailsListResultPage struct {
+ fn func(context.Context, ReservationDetailsListResult) (ReservationDetailsListResult, error)
+ rdlr ReservationDetailsListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *ReservationDetailsListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationDetailsListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.rdlr)
+ if err != nil {
+ return err
+ }
+ page.rdlr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *ReservationDetailsListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page ReservationDetailsListResultPage) NotDone() bool {
+ return !page.rdlr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page ReservationDetailsListResultPage) Response() ReservationDetailsListResult {
+ return page.rdlr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page ReservationDetailsListResultPage) Values() []ReservationDetail {
+ if page.rdlr.IsEmpty() {
+ return nil
+ }
+ return *page.rdlr.Value
+}
+
+// NewReservationDetailsListResultPage creates a new instance of the ReservationDetailsListResultPage type.
+func NewReservationDetailsListResultPage(cur ReservationDetailsListResult, getNextPage func(context.Context, ReservationDetailsListResult) (ReservationDetailsListResult, error)) ReservationDetailsListResultPage {
+ return ReservationDetailsListResultPage{
+ fn: getNextPage,
+ rdlr: cur,
+ }
+}
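Every `*ListResultPage` in this file follows the same shape: a fetch function plus the current result, with `NextWithContext` looping past pages that are empty but still carry a next link. A minimal standalone sketch of that pattern (hypothetical `listResult`/`page`/`collectAll` names, stdlib only, independent of the autorest types):

```go
package main

import (
	"context"
	"fmt"
)

// listResult mimics the generated *ListResult shape: one page of values
// plus an optional link to the next page.
type listResult struct {
	Values   []string
	NextLink string
}

func (lr listResult) hasNextLink() bool { return lr.NextLink != "" }
func (lr listResult) isEmpty() bool     { return len(lr.Values) == 0 }

// page mirrors the generated *ListResultPage: a fetch function plus the
// current result. NextWithContext keeps fetching while a page is empty
// but still has a next link, the same loop the generated code uses.
type page struct {
	fn  func(context.Context, listResult) (listResult, error)
	cur listResult
}

func (p *page) NextWithContext(ctx context.Context) error {
	for {
		next, err := p.fn(ctx, p.cur)
		if err != nil {
			return err
		}
		p.cur = next
		if !next.hasNextLink() || !next.isEmpty() {
			break
		}
	}
	return nil
}

func (p page) NotDone() bool    { return !p.cur.isEmpty() }
func (p page) Values() []string { return p.cur.Values }

// collectAll drains a canned two-page listing into one slice.
func collectAll() []string {
	fetch := func(_ context.Context, cur listResult) (listResult, error) {
		if cur.NextLink == "page2" {
			return listResult{Values: []string{"c"}}, nil // final page, no link
		}
		return listResult{}, nil // past the end: empty, no link
	}
	p := page{fn: fetch, cur: listResult{Values: []string{"a", "b"}, NextLink: "page2"}}

	var all []string
	for p.NotDone() {
		all = append(all, p.Values()...)
		if err := p.NextWithContext(context.Background()); err != nil {
			break
		}
	}
	return all
}

func main() {
	fmt.Println(collectAll())
}
```

The consumption loop (`NotDone` / `Values` / `NextWithContext`) is the same shape callers use against the real `ReservationDetailsListResultPage`.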
+
+// BasicReservationRecommendation a reservation recommendation resource.
+type BasicReservationRecommendation interface {
+ AsLegacyReservationRecommendation() (*LegacyReservationRecommendation, bool)
+ AsModernReservationRecommendation() (*ModernReservationRecommendation, bool)
+ AsReservationRecommendation() (*ReservationRecommendation, bool)
+}
+
+// ReservationRecommendation a reservation recommendation resource.
+type ReservationRecommendation struct {
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+ // Location - READ-ONLY; Resource location
+ Location *string `json:"location,omitempty"`
+ // Sku - READ-ONLY; Resource sku
+ Sku *string `json:"sku,omitempty"`
+ // Kind - Possible values include: 'KindBasicReservationRecommendationKindReservationRecommendation', 'KindBasicReservationRecommendationKindLegacy', 'KindBasicReservationRecommendationKindModern'
+ Kind KindBasicReservationRecommendation `json:"kind,omitempty"`
+}
+
+func unmarshalBasicReservationRecommendation(body []byte) (BasicReservationRecommendation, error) {
+ var m map[string]interface{}
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return nil, err
+ }
+
+ switch m["kind"] {
+ case string(KindBasicReservationRecommendationKindLegacy):
+ var lrr LegacyReservationRecommendation
+ err := json.Unmarshal(body, &lrr)
+ return lrr, err
+ case string(KindBasicReservationRecommendationKindModern):
+ var mrr ModernReservationRecommendation
+ err := json.Unmarshal(body, &mrr)
+ return mrr, err
+ default:
+ var rr ReservationRecommendation
+ err := json.Unmarshal(body, &rr)
+ return rr, err
+ }
+}
+
+func unmarshalBasicReservationRecommendationArray(body []byte) ([]BasicReservationRecommendation, error) {
+ var rawMessages []*json.RawMessage
+ err := json.Unmarshal(body, &rawMessages)
+ if err != nil {
+ return nil, err
+ }
+
+ rrArray := make([]BasicReservationRecommendation, len(rawMessages))
+
+ for index, rawMessage := range rawMessages {
+ rr, err := unmarshalBasicReservationRecommendation(*rawMessage)
+ if err != nil {
+ return nil, err
+ }
+ rrArray[index] = rr
+ }
+ return rrArray, nil
+}
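`unmarshalBasicReservationRecommendation` is a two-pass decode: first peek at the `kind` discriminator in a generic map, then unmarshal the full body into the matching concrete type. The technique in isolation, with hypothetical `shape`/`circle`/`square` types and only `encoding/json`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// shape is the polymorphic interface; circle and square are concrete kinds.
type shape interface{ area() float64 }

type circle struct {
	Kind   string  `json:"kind"`
	Radius float64 `json:"radius"`
}

func (c circle) area() float64 { return 3.14159 * c.Radius * c.Radius }

type square struct {
	Kind string  `json:"kind"`
	Side float64 `json:"side"`
}

func (s square) area() float64 { return s.Side * s.Side }

// unmarshalShape reads the discriminator first, then unmarshals the full
// body into the matching concrete type -- the same two-pass dispatch the
// generated unmarshalBasicReservationRecommendation performs on "kind".
func unmarshalShape(body []byte) (shape, error) {
	var m map[string]interface{}
	if err := json.Unmarshal(body, &m); err != nil {
		return nil, err
	}
	switch m["kind"] {
	case "circle":
		var c circle
		err := json.Unmarshal(body, &c)
		return c, err
	default: // unknown kinds fall back to the base type
		var s square
		err := json.Unmarshal(body, &s)
		return s, err
	}
}

func main() {
	s, err := unmarshalShape([]byte(`{"kind":"square","side":3}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(s.area())
}
```

The array variant simply decodes into `[]*json.RawMessage` and runs this dispatch per element, as the generated code does above.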
+
+// MarshalJSON is the custom marshaler for ReservationRecommendation.
+func (rr ReservationRecommendation) MarshalJSON() ([]byte, error) {
+ rr.Kind = KindBasicReservationRecommendationKindReservationRecommendation
+ objectMap := make(map[string]interface{})
+ if rr.Kind != "" {
+ objectMap["kind"] = rr.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyReservationRecommendation is the BasicReservationRecommendation implementation for ReservationRecommendation.
+func (rr ReservationRecommendation) AsLegacyReservationRecommendation() (*LegacyReservationRecommendation, bool) {
+ return nil, false
+}
+
+// AsModernReservationRecommendation is the BasicReservationRecommendation implementation for ReservationRecommendation.
+func (rr ReservationRecommendation) AsModernReservationRecommendation() (*ModernReservationRecommendation, bool) {
+ return nil, false
+}
+
+// AsReservationRecommendation is the BasicReservationRecommendation implementation for ReservationRecommendation.
+func (rr ReservationRecommendation) AsReservationRecommendation() (*ReservationRecommendation, bool) {
+ return &rr, true
+}
+
+// AsBasicReservationRecommendation is the BasicReservationRecommendation implementation for ReservationRecommendation.
+func (rr ReservationRecommendation) AsBasicReservationRecommendation() (BasicReservationRecommendation, bool) {
+ return &rr, true
+}
+
+// ReservationRecommendationDetailsCalculatedSavingsProperties details of estimated savings.
+type ReservationRecommendationDetailsCalculatedSavingsProperties struct {
+ // OnDemandCost - READ-ONLY; The cost without reservation.
+ OnDemandCost *float64 `json:"onDemandCost,omitempty"`
+ // OverageCost - READ-ONLY; The difference between total reservation cost and reservation cost.
+ OverageCost *float64 `json:"overageCost,omitempty"`
+ // Quantity - READ-ONLY; The quantity for calculated savings.
+ Quantity *float64 `json:"quantity,omitempty"`
+ // ReservationCost - READ-ONLY; The exact cost of the estimated usage using reservation.
+ ReservationCost *float64 `json:"reservationCost,omitempty"`
+ // TotalReservationCost - READ-ONLY; The cost of the suggested quantity.
+ TotalReservationCost *float64 `json:"totalReservationCost,omitempty"`
+ // ReservedUnitCount - The number of reserved units used to calculate savings. Always 1 for virtual machines.
+ ReservedUnitCount *float64 `json:"reservedUnitCount,omitempty"`
+ // Savings - READ-ONLY; The amount saved by purchasing the recommended quantity of reservation.
+ Savings *float64 `json:"savings,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ReservationRecommendationDetailsCalculatedSavingsProperties.
+func (rrdcsp ReservationRecommendationDetailsCalculatedSavingsProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if rrdcsp.ReservedUnitCount != nil {
+ objectMap["reservedUnitCount"] = rrdcsp.ReservedUnitCount
+ }
+ return json.Marshal(objectMap)
+}
+
+// ReservationRecommendationDetailsModel reservation recommendation details.
+type ReservationRecommendationDetailsModel struct {
+ autorest.Response `json:"-"`
+ // Location - Resource Location.
+ Location *string `json:"location,omitempty"`
+ // Sku - Resource sku
+ Sku *string `json:"sku,omitempty"`
+ *ReservationRecommendationDetailsProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for ReservationRecommendationDetailsModel.
+func (rrdm ReservationRecommendationDetailsModel) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if rrdm.Location != nil {
+ objectMap["location"] = rrdm.Location
+ }
+ if rrdm.Sku != nil {
+ objectMap["sku"] = rrdm.Sku
+ }
+ if rrdm.ReservationRecommendationDetailsProperties != nil {
+ objectMap["properties"] = rrdm.ReservationRecommendationDetailsProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ReservationRecommendationDetailsModel struct.
+func (rrdm *ReservationRecommendationDetailsModel) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "location":
+ if v != nil {
+ var location string
+ err = json.Unmarshal(*v, &location)
+ if err != nil {
+ return err
+ }
+ rrdm.Location = &location
+ }
+ case "sku":
+ if v != nil {
+ var sku string
+ err = json.Unmarshal(*v, &sku)
+ if err != nil {
+ return err
+ }
+ rrdm.Sku = &sku
+ }
+ case "properties":
+ if v != nil {
+ var reservationRecommendationDetailsProperties ReservationRecommendationDetailsProperties
+ err = json.Unmarshal(*v, &reservationRecommendationDetailsProperties)
+ if err != nil {
+ return err
+ }
+ rrdm.ReservationRecommendationDetailsProperties = &reservationRecommendationDetailsProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ rrdm.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ rrdm.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ rrdm.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ rrdm.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ReservationRecommendationDetailsProperties the properties of the reservation recommendation.
+type ReservationRecommendationDetailsProperties struct {
+ // Currency - READ-ONLY; An ISO 4217 currency code identifier for the costs and savings
+ Currency *string `json:"currency,omitempty"`
+ // Resource - READ-ONLY; Resource specific properties.
+ Resource *ReservationRecommendationDetailsResourceProperties `json:"resource,omitempty"`
+ // ResourceGroup - READ-ONLY; Resource Group.
+ ResourceGroup *string `json:"resourceGroup,omitempty"`
+ // Savings - READ-ONLY; Savings information for the recommendation.
+ Savings *ReservationRecommendationDetailsSavingsProperties `json:"savings,omitempty"`
+ // Scope - READ-ONLY; Scope of the reservation, ex: Single or Shared.
+ Scope *string `json:"scope,omitempty"`
+ // Usage - READ-ONLY; Historical usage details used to calculate the estimated savings.
+ Usage *ReservationRecommendationDetailsUsageProperties `json:"usage,omitempty"`
+}
+
+// ReservationRecommendationDetailsResourceProperties details of the resource.
+type ReservationRecommendationDetailsResourceProperties struct {
+ // AppliedScopes - READ-ONLY; List of subscriptions for which the reservation is applied.
+ AppliedScopes *[]string `json:"appliedScopes,omitempty"`
+ // OnDemandRate - READ-ONLY; On demand rate of the resource.
+ OnDemandRate *float64 `json:"onDemandRate,omitempty"`
+ // Product - READ-ONLY; Azure product, ex: Standard_E8s_v3.
+ Product *string `json:"product,omitempty"`
+ // Region - READ-ONLY; Azure resource region, ex: EastUS, WestUS.
+ Region *string `json:"region,omitempty"`
+ // ReservationRate - READ-ONLY; Reservation rate of the resource.
+ ReservationRate *float64 `json:"reservationRate,omitempty"`
+ // ResourceType - READ-ONLY; The Azure resource type.
+ ResourceType *string `json:"resourceType,omitempty"`
+}
+
+// ReservationRecommendationDetailsSavingsProperties details of the estimated savings.
+type ReservationRecommendationDetailsSavingsProperties struct {
+ // CalculatedSavings - List of calculated savings.
+ CalculatedSavings *[]ReservationRecommendationDetailsCalculatedSavingsProperties `json:"calculatedSavings,omitempty"`
+ // LookBackPeriod - READ-ONLY; The number of days of past usage evaluated for computing the recommendation.
+ LookBackPeriod *int32 `json:"lookBackPeriod,omitempty"`
+ // RecommendedQuantity - READ-ONLY; Number of recommended units of the resource.
+ RecommendedQuantity *float64 `json:"recommendedQuantity,omitempty"`
+ // ReservationOrderTerm - READ-ONLY; Term period of the reservation, ex: P1Y or P3Y.
+ ReservationOrderTerm *string `json:"reservationOrderTerm,omitempty"`
+ // SavingsType - READ-ONLY; Type of savings, ex: instance.
+ SavingsType *string `json:"savingsType,omitempty"`
+ // UnitOfMeasure - READ-ONLY; Measurement unit ex: hour etc.
+ UnitOfMeasure *string `json:"unitOfMeasure,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ReservationRecommendationDetailsSavingsProperties.
+func (rrdsp ReservationRecommendationDetailsSavingsProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if rrdsp.CalculatedSavings != nil {
+ objectMap["calculatedSavings"] = rrdsp.CalculatedSavings
+ }
+ return json.Marshal(objectMap)
+}
+
+// ReservationRecommendationDetailsUsageProperties details about historical usage data that has been used
+// for computing the recommendation.
+type ReservationRecommendationDetailsUsageProperties struct {
+ // FirstConsumptionDate - READ-ONLY; The first usage date used for looking back for computing the recommendation.
+ FirstConsumptionDate *string `json:"firstConsumptionDate,omitempty"`
+ // LastConsumptionDate - READ-ONLY; The last usage date used for looking back for computing the recommendation.
+ LastConsumptionDate *string `json:"lastConsumptionDate,omitempty"`
+ // LookBackUnitType - READ-ONLY; What the usage data values represent ex: virtual machine instance.
+ LookBackUnitType *string `json:"lookBackUnitType,omitempty"`
+ // UsageData - READ-ONLY; The breakdown of historical resource usage. The values are in the order of usage between the firstConsumptionDate and the lastConsumptionDate.
+ UsageData *[]float64 `json:"usageData,omitempty"`
+ // UsageGrain - READ-ONLY; The grain of the values represented in the usage data ex: hourly.
+ UsageGrain *string `json:"usageGrain,omitempty"`
+}
+
+// ReservationRecommendationsListResult result of listing reservation recommendations.
+type ReservationRecommendationsListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of reservation recommendations.
+ Value *[]BasicReservationRecommendation `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// UnmarshalJSON is the custom unmarshaler for ReservationRecommendationsListResult struct.
+func (rrlr *ReservationRecommendationsListResult) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "value":
+ if v != nil {
+ value, err := unmarshalBasicReservationRecommendationArray(*v)
+ if err != nil {
+ return err
+ }
+ rrlr.Value = &value
+ }
+ case "nextLink":
+ if v != nil {
+ var nextLink string
+ err = json.Unmarshal(*v, &nextLink)
+ if err != nil {
+ return err
+ }
+ rrlr.NextLink = &nextLink
+ }
+ }
+ }
+
+ return nil
+}
+
+// ReservationRecommendationsListResultIterator provides access to a complete listing of
+// ReservationRecommendation values.
+type ReservationRecommendationsListResultIterator struct {
+ i int
+ page ReservationRecommendationsListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *ReservationRecommendationsListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationRecommendationsListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *ReservationRecommendationsListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter ReservationRecommendationsListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter ReservationRecommendationsListResultIterator) Response() ReservationRecommendationsListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter ReservationRecommendationsListResultIterator) Value() BasicReservationRecommendation {
+ if !iter.page.NotDone() {
+ return ReservationRecommendation{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// NewReservationRecommendationsListResultIterator creates a new instance of the ReservationRecommendationsListResultIterator type.
+func NewReservationRecommendationsListResultIterator(page ReservationRecommendationsListResultPage) ReservationRecommendationsListResultIterator {
+ return ReservationRecommendationsListResultIterator{page: page}
+}
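The iterator layers a flat index over the pager: `NextWithContext` advances the index within the current page and only fetches a new page once the index runs past the end. A standalone sketch of that layering (hypothetical `iter` type over canned in-memory pages, no HTTP):

```go
package main

import "fmt"

// iter flattens a canned sequence of pages, standing in for the
// generated iterator's index-over-pager layering.
type iter struct {
	pages [][]int // remaining pages; pages[0] is the current one
	i     int     // index within the current page
}

// next mirrors NextWithContext: advance i, and when the current page is
// exhausted, move to the next page and reset i to zero.
func (it *iter) next() {
	it.i++
	if it.i < len(it.pages[0]) {
		return
	}
	it.pages = it.pages[1:]
	it.i = 0
}

func (it iter) notDone() bool {
	return len(it.pages) > 0 && it.i < len(it.pages[0])
}

func (it iter) value() int { return it.pages[0][it.i] }

func main() {
	it := iter{pages: [][]int{{1, 2}, {3}}}
	var all []int
	for it.notDone() {
		all = append(all, it.value())
		it.next()
	}
	fmt.Println(all)
}
```

The real iterator's error path (decrement the index and return when the page fetch fails) keeps the same position-preserving contract.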
+
+// IsEmpty returns true if the ListResult contains no values.
+func (rrlr ReservationRecommendationsListResult) IsEmpty() bool {
+ return rrlr.Value == nil || len(*rrlr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (rrlr ReservationRecommendationsListResult) hasNextLink() bool {
+ return rrlr.NextLink != nil && len(*rrlr.NextLink) != 0
+}
+
+// reservationRecommendationsListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (rrlr ReservationRecommendationsListResult) reservationRecommendationsListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !rrlr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(rrlr.NextLink)))
+}
+
+// ReservationRecommendationsListResultPage contains a page of BasicReservationRecommendation values.
+type ReservationRecommendationsListResultPage struct {
+ fn func(context.Context, ReservationRecommendationsListResult) (ReservationRecommendationsListResult, error)
+ rrlr ReservationRecommendationsListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *ReservationRecommendationsListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationRecommendationsListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.rrlr)
+ if err != nil {
+ return err
+ }
+ page.rrlr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *ReservationRecommendationsListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page ReservationRecommendationsListResultPage) NotDone() bool {
+ return !page.rrlr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page ReservationRecommendationsListResultPage) Response() ReservationRecommendationsListResult {
+ return page.rrlr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page ReservationRecommendationsListResultPage) Values() []BasicReservationRecommendation {
+ if page.rrlr.IsEmpty() {
+ return nil
+ }
+ return *page.rrlr.Value
+}
+
+// NewReservationRecommendationsListResultPage creates a new instance of the ReservationRecommendationsListResultPage type.
+func NewReservationRecommendationsListResultPage(cur ReservationRecommendationsListResult, getNextPage func(context.Context, ReservationRecommendationsListResult) (ReservationRecommendationsListResult, error)) ReservationRecommendationsListResultPage {
+ return ReservationRecommendationsListResultPage{
+ fn: getNextPage,
+ rrlr: cur,
+ }
+}
+
+// ReservationSummariesListResult result of listing reservation summaries.
+type ReservationSummariesListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of reservation summaries.
+ Value *[]ReservationSummary `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// ReservationSummariesListResultIterator provides access to a complete listing of ReservationSummary
+// values.
+type ReservationSummariesListResultIterator struct {
+ i int
+ page ReservationSummariesListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *ReservationSummariesListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationSummariesListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *ReservationSummariesListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter ReservationSummariesListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter ReservationSummariesListResultIterator) Response() ReservationSummariesListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter ReservationSummariesListResultIterator) Value() ReservationSummary {
+ if !iter.page.NotDone() {
+ return ReservationSummary{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// NewReservationSummariesListResultIterator creates a new instance of the ReservationSummariesListResultIterator type.
+func NewReservationSummariesListResultIterator(page ReservationSummariesListResultPage) ReservationSummariesListResultIterator {
+ return ReservationSummariesListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (rslr ReservationSummariesListResult) IsEmpty() bool {
+ return rslr.Value == nil || len(*rslr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (rslr ReservationSummariesListResult) hasNextLink() bool {
+ return rslr.NextLink != nil && len(*rslr.NextLink) != 0
+}
+
+// reservationSummariesListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (rslr ReservationSummariesListResult) reservationSummariesListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !rslr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(rslr.NextLink)))
+}
+
+// ReservationSummariesListResultPage contains a page of ReservationSummary values.
+type ReservationSummariesListResultPage struct {
+ fn func(context.Context, ReservationSummariesListResult) (ReservationSummariesListResult, error)
+ rslr ReservationSummariesListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *ReservationSummariesListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationSummariesListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.rslr)
+ if err != nil {
+ return err
+ }
+ page.rslr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *ReservationSummariesListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page ReservationSummariesListResultPage) NotDone() bool {
+ return !page.rslr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page ReservationSummariesListResultPage) Response() ReservationSummariesListResult {
+ return page.rslr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page ReservationSummariesListResultPage) Values() []ReservationSummary {
+ if page.rslr.IsEmpty() {
+ return nil
+ }
+ return *page.rslr.Value
+}
+
+// NewReservationSummariesListResultPage creates a new instance of the ReservationSummariesListResultPage type.
+func NewReservationSummariesListResultPage(cur ReservationSummariesListResult, getNextPage func(context.Context, ReservationSummariesListResult) (ReservationSummariesListResult, error)) ReservationSummariesListResultPage {
+ return ReservationSummariesListResultPage{
+ fn: getNextPage,
+ rslr: cur,
+ }
+}
+
+// ReservationSummary reservation summary resource.
+type ReservationSummary struct {
+ *ReservationSummaryProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for ReservationSummary.
+func (rs ReservationSummary) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if rs.ReservationSummaryProperties != nil {
+ objectMap["properties"] = rs.ReservationSummaryProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ReservationSummary struct.
+func (rs *ReservationSummary) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var reservationSummaryProperties ReservationSummaryProperties
+ err = json.Unmarshal(*v, &reservationSummaryProperties)
+ if err != nil {
+ return err
+ }
+ rs.ReservationSummaryProperties = &reservationSummaryProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ rs.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ rs.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ rs.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ rs.Tags = tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ReservationSummaryProperties the properties of the reservation summary.
+type ReservationSummaryProperties struct {
+ // ReservationOrderID - READ-ONLY; The reservation order ID is the identifier for a reservation purchase. Each reservation order ID represents a single purchase transaction. A reservation order contains reservations. The reservation order specifies the VM size and region for the reservations.
+ ReservationOrderID *string `json:"reservationOrderId,omitempty"`
+ // ReservationID - READ-ONLY; The reservation ID is the identifier of a reservation within a reservation order. Each reservation is the grouping used to apply the benefit scope and also specifies the number of instances to which the reservation benefit can be applied.
+ ReservationID *string `json:"reservationId,omitempty"`
+ // SkuName - READ-ONLY; This is the ARM Sku name. It can be used to join with the serviceType field in additional info in usage records.
+ SkuName *string `json:"skuName,omitempty"`
+ // ReservedHours - READ-ONLY; This is the total hours reserved. E.g. if a reservation for 1 instance was made at 1 PM, this will be 11 hours for that day and 24 hours for subsequent days.
+ ReservedHours *decimal.Decimal `json:"reservedHours,omitempty"`
+ // UsageDate - READ-ONLY; The date corresponding to the utilization record. If the grain of data is monthly, it will be the first day of the month.
+ UsageDate *date.Time `json:"usageDate,omitempty"`
+ // UsedHours - READ-ONLY; Total used hours by the reservation
+ UsedHours *decimal.Decimal `json:"usedHours,omitempty"`
+ // MinUtilizationPercentage - READ-ONLY; This is the minimum hourly utilization in the usage time (day or month). E.g. if the usage record corresponds to 12/10/2017 and utilization was 10% during hours 4 and 5, this field will return 10% for that day.
+ MinUtilizationPercentage *decimal.Decimal `json:"minUtilizationPercentage,omitempty"`
+ // AvgUtilizationPercentage - READ-ONLY; This is the average utilization for the entire time range (day or month, depending on the grain).
+ AvgUtilizationPercentage *decimal.Decimal `json:"avgUtilizationPercentage,omitempty"`
+ // MaxUtilizationPercentage - READ-ONLY; This is the maximum hourly utilization in the usage time (day or month). E.g. if the usage record corresponds to 12/10/2017 and utilization was 100% during hours 4 and 5, this field will return 100% for that day.
+ MaxUtilizationPercentage *decimal.Decimal `json:"maxUtilizationPercentage,omitempty"`
+ // Kind - READ-ONLY; The reservation kind.
+ Kind *string `json:"kind,omitempty"`
+ // PurchasedQuantity - READ-ONLY; This is the purchased quantity for the reservationId.
+ PurchasedQuantity *decimal.Decimal `json:"purchasedQuantity,omitempty"`
+ // RemainingQuantity - READ-ONLY; This is the remaining quantity for the reservationId.
+ RemainingQuantity *decimal.Decimal `json:"remainingQuantity,omitempty"`
+ // TotalReservedQuantity - READ-ONLY; This is the total count of instances that are reserved for the reservationId.
+ TotalReservedQuantity *decimal.Decimal `json:"totalReservedQuantity,omitempty"`
+ // UsedQuantity - READ-ONLY; This is the used quantity for the reservationId.
+ UsedQuantity *decimal.Decimal `json:"usedQuantity,omitempty"`
+ // UtilizedPercentage - READ-ONLY; This is the utilized percentage for the reservationId.
+ UtilizedPercentage *decimal.Decimal `json:"utilizedPercentage,omitempty"`
+}
+
+// ReservationTransaction reservation transaction resource.
+type ReservationTransaction struct {
+ *LegacyReservationTransactionProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags *[]string `json:"tags,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ReservationTransaction.
+func (rt ReservationTransaction) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if rt.LegacyReservationTransactionProperties != nil {
+ objectMap["properties"] = rt.LegacyReservationTransactionProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ReservationTransaction struct.
+func (rt *ReservationTransaction) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var legacyReservationTransactionProperties LegacyReservationTransactionProperties
+ err = json.Unmarshal(*v, &legacyReservationTransactionProperties)
+ if err != nil {
+ return err
+ }
+ rt.LegacyReservationTransactionProperties = &legacyReservationTransactionProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ rt.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ rt.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ rt.Type = &typeVar
+ }
+ case "tags":
+ if v != nil {
+ var tags []string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ rt.Tags = &tags
+ }
+ }
+ }
+
+ return nil
+}
+
+// ReservationTransactionResource the Resource model definition.
+type ReservationTransactionResource struct {
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags *[]string `json:"tags,omitempty"`
+}
+
+// ReservationTransactionsListResult result of listing reservation transactions.
+type ReservationTransactionsListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of reservation transactions.
+ Value *[]ReservationTransaction `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// ReservationTransactionsListResultIterator provides access to a complete listing of
+// ReservationTransaction values.
+type ReservationTransactionsListResultIterator struct {
+ i int
+ page ReservationTransactionsListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *ReservationTransactionsListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationTransactionsListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *ReservationTransactionsListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter ReservationTransactionsListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter ReservationTransactionsListResultIterator) Response() ReservationTransactionsListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter ReservationTransactionsListResultIterator) Value() ReservationTransaction {
+ if !iter.page.NotDone() {
+ return ReservationTransaction{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// Creates a new instance of the ReservationTransactionsListResultIterator type.
+func NewReservationTransactionsListResultIterator(page ReservationTransactionsListResultPage) ReservationTransactionsListResultIterator {
+ return ReservationTransactionsListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (rtlr ReservationTransactionsListResult) IsEmpty() bool {
+ return rtlr.Value == nil || len(*rtlr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (rtlr ReservationTransactionsListResult) hasNextLink() bool {
+ return rtlr.NextLink != nil && len(*rtlr.NextLink) != 0
+}
+
+// reservationTransactionsListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (rtlr ReservationTransactionsListResult) reservationTransactionsListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !rtlr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(rtlr.NextLink)))
+}
+
+// ReservationTransactionsListResultPage contains a page of ReservationTransaction values.
+type ReservationTransactionsListResultPage struct {
+ fn func(context.Context, ReservationTransactionsListResult) (ReservationTransactionsListResult, error)
+ rtlr ReservationTransactionsListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *ReservationTransactionsListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationTransactionsListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.rtlr)
+ if err != nil {
+ return err
+ }
+ page.rtlr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *ReservationTransactionsListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page ReservationTransactionsListResultPage) NotDone() bool {
+ return !page.rtlr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page ReservationTransactionsListResultPage) Response() ReservationTransactionsListResult {
+ return page.rtlr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page ReservationTransactionsListResultPage) Values() []ReservationTransaction {
+ if page.rtlr.IsEmpty() {
+ return nil
+ }
+ return *page.rtlr.Value
+}
+
+// Creates a new instance of the ReservationTransactionsListResultPage type.
+func NewReservationTransactionsListResultPage(cur ReservationTransactionsListResult, getNextPage func(context.Context, ReservationTransactionsListResult) (ReservationTransactionsListResult, error)) ReservationTransactionsListResultPage {
+ return ReservationTransactionsListResultPage{
+ fn: getNextPage,
+ rtlr: cur,
+ }
+}
+
+// Resource the Resource model definition.
+type Resource struct {
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for Resource.
+func (r Resource) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
+// ResourceAttributes the Resource model definition.
+type ResourceAttributes struct {
+ // Location - READ-ONLY; Resource location
+ Location *string `json:"location,omitempty"`
+ // Sku - READ-ONLY; Resource sku
+ Sku *string `json:"sku,omitempty"`
+}
+
+// SkuProperty the Sku property
+type SkuProperty struct {
+ // Name - READ-ONLY; The name of sku property.
+ Name *string `json:"name,omitempty"`
+ // Value - READ-ONLY; The value of sku property.
+ Value *string `json:"value,omitempty"`
+}
+
+// Tag the tag resource.
+type Tag struct {
+ // Key - Tag key.
+ Key *string `json:"key,omitempty"`
+}
+
+// TagProperties the properties of the tag.
+type TagProperties struct {
+ // Tags - A list of Tag.
+ Tags *[]Tag `json:"tags,omitempty"`
+}
+
+// TagsResult a resource listing all tags.
+type TagsResult struct {
+ autorest.Response `json:"-"`
+ *TagProperties `json:"properties,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // ETag - eTag of the resource. To handle the concurrent update scenario, this field will be used to determine whether the user is updating the latest version or not.
+ ETag *string `json:"eTag,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for TagsResult.
+func (tr TagsResult) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if tr.TagProperties != nil {
+ objectMap["properties"] = tr.TagProperties
+ }
+ if tr.ETag != nil {
+ objectMap["eTag"] = tr.ETag
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for TagsResult struct.
+func (tr *TagsResult) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var tagProperties TagProperties
+ err = json.Unmarshal(*v, &tagProperties)
+ if err != nil {
+ return err
+ }
+ tr.TagProperties = &tagProperties
+ }
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ tr.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ tr.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ tr.Type = &typeVar
+ }
+ case "eTag":
+ if v != nil {
+ var eTag string
+ err = json.Unmarshal(*v, &eTag)
+ if err != nil {
+ return err
+ }
+ tr.ETag = &eTag
+ }
+ }
+ }
+
+ return nil
+}
+
+// BasicUsageDetail a usage detail resource.
+type BasicUsageDetail interface {
+ AsLegacyUsageDetail() (*LegacyUsageDetail, bool)
+ AsModernUsageDetail() (*ModernUsageDetail, bool)
+ AsUsageDetail() (*UsageDetail, bool)
+}
+
+// UsageDetail a usage detail resource.
+type UsageDetail struct {
+ // Kind - Possible values include: 'KindUsageDetail', 'KindLegacy', 'KindModern'
+ Kind Kind `json:"kind,omitempty"`
+ // ID - READ-ONLY; Resource Id.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Resource type.
+ Type *string `json:"type,omitempty"`
+ // Tags - READ-ONLY; Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+func unmarshalBasicUsageDetail(body []byte) (BasicUsageDetail, error) {
+ var m map[string]interface{}
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return nil, err
+ }
+
+ switch m["kind"] {
+ case string(KindLegacy):
+ var lud LegacyUsageDetail
+ err := json.Unmarshal(body, &lud)
+ return lud, err
+ case string(KindModern):
+ var mud ModernUsageDetail
+ err := json.Unmarshal(body, &mud)
+ return mud, err
+ default:
+ var ud UsageDetail
+ err := json.Unmarshal(body, &ud)
+ return ud, err
+ }
+}
+func unmarshalBasicUsageDetailArray(body []byte) ([]BasicUsageDetail, error) {
+ var rawMessages []*json.RawMessage
+ err := json.Unmarshal(body, &rawMessages)
+ if err != nil {
+ return nil, err
+ }
+
+ udArray := make([]BasicUsageDetail, len(rawMessages))
+
+ for index, rawMessage := range rawMessages {
+ ud, err := unmarshalBasicUsageDetail(*rawMessage)
+ if err != nil {
+ return nil, err
+ }
+ udArray[index] = ud
+ }
+ return udArray, nil
+}
+
+// MarshalJSON is the custom marshaler for UsageDetail.
+func (ud UsageDetail) MarshalJSON() ([]byte, error) {
+ ud.Kind = KindUsageDetail
+ objectMap := make(map[string]interface{})
+ if ud.Kind != "" {
+ objectMap["kind"] = ud.Kind
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsLegacyUsageDetail is the BasicUsageDetail implementation for UsageDetail.
+func (ud UsageDetail) AsLegacyUsageDetail() (*LegacyUsageDetail, bool) {
+ return nil, false
+}
+
+// AsModernUsageDetail is the BasicUsageDetail implementation for UsageDetail.
+func (ud UsageDetail) AsModernUsageDetail() (*ModernUsageDetail, bool) {
+ return nil, false
+}
+
+// AsUsageDetail is the BasicUsageDetail implementation for UsageDetail.
+func (ud UsageDetail) AsUsageDetail() (*UsageDetail, bool) {
+ return &ud, true
+}
+
+// AsBasicUsageDetail is the BasicUsageDetail implementation for UsageDetail.
+func (ud UsageDetail) AsBasicUsageDetail() (BasicUsageDetail, bool) {
+ return &ud, true
+}
+
+// UsageDetailsListResult result of listing usage details. It contains a list of available usage details in
+// reverse chronological order by billing period.
+type UsageDetailsListResult struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of usage details.
+ Value *[]BasicUsageDetail `json:"value,omitempty"`
+ // NextLink - READ-ONLY; The link (url) to the next page of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// UnmarshalJSON is the custom unmarshaler for UsageDetailsListResult struct.
+func (udlr *UsageDetailsListResult) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "value":
+ if v != nil {
+ value, err := unmarshalBasicUsageDetailArray(*v)
+ if err != nil {
+ return err
+ }
+ udlr.Value = &value
+ }
+ case "nextLink":
+ if v != nil {
+ var nextLink string
+ err = json.Unmarshal(*v, &nextLink)
+ if err != nil {
+ return err
+ }
+ udlr.NextLink = &nextLink
+ }
+ }
+ }
+
+ return nil
+}
+
+// UsageDetailsListResultIterator provides access to a complete listing of UsageDetail values.
+type UsageDetailsListResultIterator struct {
+ i int
+ page UsageDetailsListResultPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *UsageDetailsListResultIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/UsageDetailsListResultIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *UsageDetailsListResultIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter UsageDetailsListResultIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter UsageDetailsListResultIterator) Response() UsageDetailsListResult {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter UsageDetailsListResultIterator) Value() BasicUsageDetail {
+ if !iter.page.NotDone() {
+ return UsageDetail{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// Creates a new instance of the UsageDetailsListResultIterator type.
+func NewUsageDetailsListResultIterator(page UsageDetailsListResultPage) UsageDetailsListResultIterator {
+ return UsageDetailsListResultIterator{page: page}
+}
+
+// IsEmpty returns true if the ListResult contains no values.
+func (udlr UsageDetailsListResult) IsEmpty() bool {
+ return udlr.Value == nil || len(*udlr.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (udlr UsageDetailsListResult) hasNextLink() bool {
+ return udlr.NextLink != nil && len(*udlr.NextLink) != 0
+}
+
+// usageDetailsListResultPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (udlr UsageDetailsListResult) usageDetailsListResultPreparer(ctx context.Context) (*http.Request, error) {
+ if !udlr.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(udlr.NextLink)))
+}
+
+// UsageDetailsListResultPage contains a page of BasicUsageDetail values.
+type UsageDetailsListResultPage struct {
+ fn func(context.Context, UsageDetailsListResult) (UsageDetailsListResult, error)
+ udlr UsageDetailsListResult
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *UsageDetailsListResultPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/UsageDetailsListResultPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.udlr)
+ if err != nil {
+ return err
+ }
+ page.udlr = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *UsageDetailsListResultPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page UsageDetailsListResultPage) NotDone() bool {
+ return !page.udlr.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page UsageDetailsListResultPage) Response() UsageDetailsListResult {
+ return page.udlr
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page UsageDetailsListResultPage) Values() []BasicUsageDetail {
+ if page.udlr.IsEmpty() {
+ return nil
+ }
+ return *page.udlr.Value
+}
+
+// Creates a new instance of the UsageDetailsListResultPage type.
+func NewUsageDetailsListResultPage(cur UsageDetailsListResult, getNextPage func(context.Context, UsageDetailsListResult) (UsageDetailsListResult, error)) UsageDetailsListResultPage {
+ return UsageDetailsListResultPage{
+ fn: getNextPage,
+ udlr: cur,
+ }
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/operations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/operations.go
new file mode 100644
index 0000000000000..edc859c889c46
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/operations.go
@@ -0,0 +1,141 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// OperationsClient is the consumption management client that provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type OperationsClient struct {
+ BaseClient
+}
+
+// NewOperationsClient creates an instance of the OperationsClient client.
+func NewOperationsClient(subscriptionID string) OperationsClient {
+ return NewOperationsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewOperationsClientWithBaseURI creates an instance of the OperationsClient client using a custom endpoint. Use this
+// when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewOperationsClientWithBaseURI(baseURI string, subscriptionID string) OperationsClient {
+ return OperationsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists all of the available consumption REST API operations.
+func (client OperationsClient) List(ctx context.Context) (result OperationListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/OperationsClient.List")
+ defer func() {
+ sc := -1
+ if result.olr.Response.Response != nil {
+ sc = result.olr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.OperationsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.olr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.OperationsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.olr, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.OperationsClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.olr.hasNextLink() && result.olr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client OperationsClient) ListPreparer(ctx context.Context) (*http.Request, error) {
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPath("/providers/Microsoft.Consumption/operations"),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client OperationsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client OperationsClient) ListResponder(resp *http.Response) (result OperationListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client OperationsClient) listNextResults(ctx context.Context, lastResults OperationListResult) (result OperationListResult, err error) {
+ req, err := lastResults.operationListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.OperationsClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.OperationsClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.OperationsClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client OperationsClient) ListComplete(ctx context.Context) (result OperationListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/OperationsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/pricesheet.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/pricesheet.go
new file mode 100644
index 0000000000000..05503cada92b8
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/pricesheet.go
@@ -0,0 +1,229 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/autorest/validation"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// PriceSheetClient is the consumption management client that provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type PriceSheetClient struct {
+ BaseClient
+}
+
+// NewPriceSheetClient creates an instance of the PriceSheetClient client.
+func NewPriceSheetClient(subscriptionID string) PriceSheetClient {
+ return NewPriceSheetClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewPriceSheetClientWithBaseURI creates an instance of the PriceSheetClient client using a custom endpoint. Use this
+// when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewPriceSheetClientWithBaseURI(baseURI string, subscriptionID string) PriceSheetClient {
+ return PriceSheetClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// Get gets the price sheet for a scope by subscriptionId. Price sheet is available via this API only for May 1, 2014
+// or later.
+// Parameters:
+// expand - may be used to expand the properties/meterDetails within a price sheet. By default, these fields
+// are not included when returning the price sheet.
+// skiptoken - skiptoken is only used if a previous operation returned a partial result. If a previous response
+// contains a nextLink element, the value of the nextLink element will include a skiptoken parameter that
+// specifies a starting point to use for subsequent calls.
+// top - may be used to limit the number of results to the top N results.
+func (client PriceSheetClient) Get(ctx context.Context, expand string, skiptoken string, top *int32) (result PriceSheetResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/PriceSheetClient.Get")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: top,
+ Constraints: []validation.Constraint{{Target: "top", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "top", Name: validation.InclusiveMaximum, Rule: int64(1000), Chain: nil},
+ {Target: "top", Name: validation.InclusiveMinimum, Rule: int64(1), Chain: nil},
+ }}}}}); err != nil {
+ return result, validation.NewError("consumption.PriceSheetClient", "Get", err.Error())
+ }
+
+ req, err := client.GetPreparer(ctx, expand, skiptoken, top)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.PriceSheetClient", "Get", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.PriceSheetClient", "Get", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.PriceSheetClient", "Get", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetPreparer prepares the Get request.
+func (client PriceSheetClient) GetPreparer(ctx context.Context, expand string, skiptoken string, top *int32) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(expand) > 0 {
+ queryParameters["$expand"] = autorest.Encode("query", expand)
+ }
+ if len(skiptoken) > 0 {
+ queryParameters["$skiptoken"] = autorest.Encode("query", skiptoken)
+ }
+ if top != nil {
+ queryParameters["$top"] = autorest.Encode("query", *top)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Consumption/pricesheets/default", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client PriceSheetClient) GetSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// GetResponder handles the response to the Get request. The method always
+// closes the http.Response Body.
+func (client PriceSheetClient) GetResponder(resp *http.Response) (result PriceSheetResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// GetByBillingPeriod get the price sheet for a scope by subscriptionId and billing period. Price sheet is available
+// via this API only for May 1, 2014 or later.
+// Parameters:
+// billingPeriodName - billing Period Name.
+// expand - may be used to expand the properties/meterDetails within a price sheet. By default, these fields
+// are not included when returning the price sheet.
+// skiptoken - skiptoken is only used if a previous operation returned a partial result. If a previous response
+// contains a nextLink element, the value of the nextLink element will include a skiptoken parameter that
+// specifies a starting point to use for subsequent calls.
+// top - may be used to limit the number of results to the top N results.
+func (client PriceSheetClient) GetByBillingPeriod(ctx context.Context, billingPeriodName string, expand string, skiptoken string, top *int32) (result PriceSheetResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/PriceSheetClient.GetByBillingPeriod")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: top,
+ Constraints: []validation.Constraint{{Target: "top", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "top", Name: validation.InclusiveMaximum, Rule: int64(1000), Chain: nil},
+ {Target: "top", Name: validation.InclusiveMinimum, Rule: int64(1), Chain: nil},
+ }}}}}); err != nil {
+ return result, validation.NewError("consumption.PriceSheetClient", "GetByBillingPeriod", err.Error())
+ }
+
+ req, err := client.GetByBillingPeriodPreparer(ctx, billingPeriodName, expand, skiptoken, top)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.PriceSheetClient", "GetByBillingPeriod", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetByBillingPeriodSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.PriceSheetClient", "GetByBillingPeriod", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetByBillingPeriodResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.PriceSheetClient", "GetByBillingPeriod", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetByBillingPeriodPreparer prepares the GetByBillingPeriod request.
+func (client PriceSheetClient) GetByBillingPeriodPreparer(ctx context.Context, billingPeriodName string, expand string, skiptoken string, top *int32) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingPeriodName": autorest.Encode("path", billingPeriodName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(expand) > 0 {
+ queryParameters["$expand"] = autorest.Encode("query", expand)
+ }
+ if len(skiptoken) > 0 {
+ queryParameters["$skiptoken"] = autorest.Encode("query", skiptoken)
+ }
+ if top != nil {
+ queryParameters["$top"] = autorest.Encode("query", *top)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}/providers/Microsoft.Consumption/pricesheets/default", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetByBillingPeriodSender sends the GetByBillingPeriod request. The method will close the
+// http.Response Body if it receives an error.
+func (client PriceSheetClient) GetByBillingPeriodSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// GetByBillingPeriodResponder handles the response to the GetByBillingPeriod request. The method always
+// closes the http.Response Body.
+func (client PriceSheetClient) GetByBillingPeriodResponder(resp *http.Response) (result PriceSheetResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationrecommendationdetails.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationrecommendationdetails.go
new file mode 100644
index 0000000000000..f6d286a65ad47
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationrecommendationdetails.go
@@ -0,0 +1,122 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// ReservationRecommendationDetailsClient is the consumption management client provides access to consumption resources
+// for Azure Enterprise Subscriptions.
+type ReservationRecommendationDetailsClient struct {
+ BaseClient
+}
+
+// NewReservationRecommendationDetailsClient creates an instance of the ReservationRecommendationDetailsClient client.
+func NewReservationRecommendationDetailsClient(subscriptionID string) ReservationRecommendationDetailsClient {
+ return NewReservationRecommendationDetailsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewReservationRecommendationDetailsClientWithBaseURI creates an instance of the
+// ReservationRecommendationDetailsClient client using a custom endpoint. Use this when interacting with an Azure
+// cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewReservationRecommendationDetailsClientWithBaseURI(baseURI string, subscriptionID string) ReservationRecommendationDetailsClient {
+ return ReservationRecommendationDetailsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// Get details of a reservation recommendation for what-if analysis of reserved instances.
+// Parameters:
+// billingScope - the scope associated with reservation recommendation details operations. This includes
+// '/subscriptions/{subscriptionId}/' for subscription scope,
+// '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for resource group scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for BillingAccount scope, and
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// billingProfile scope
+// scope - scope of the reservation.
+// region - used to select the region the recommendation should be generated for.
+// term - specify length of reservation recommendation term.
+// lookBackPeriod - filter the time period on which reservation recommendation results are based.
+// product - filter the products for which reservation recommendation results are generated. Examples:
+// Standard_DS1_v2 (for VM), Premium_SSD_Managed_Disks_P30 (for Managed Disks)
+func (client ReservationRecommendationDetailsClient) Get(ctx context.Context, billingScope string, scope Scope11, region string, term Term, lookBackPeriod LookBackPeriod, product string) (result ReservationRecommendationDetailsModel, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationRecommendationDetailsClient.Get")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetPreparer(ctx, billingScope, scope, region, term, lookBackPeriod, product)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationRecommendationDetailsClient", "Get", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationRecommendationDetailsClient", "Get", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationRecommendationDetailsClient", "Get", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetPreparer prepares the Get request.
+func (client ReservationRecommendationDetailsClient) GetPreparer(ctx context.Context, billingScope string, scope Scope11, region string, term Term, lookBackPeriod LookBackPeriod, product string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingScope": billingScope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ "lookBackPeriod": autorest.Encode("query", lookBackPeriod),
+ "product": autorest.Encode("query", product),
+ "region": autorest.Encode("query", region),
+ "scope": autorest.Encode("query", scope),
+ "term": autorest.Encode("query", term),
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{billingScope}/providers/Microsoft.Consumption/reservationRecommendationDetails", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationRecommendationDetailsClient) GetSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetResponder handles the response to the Get request. The method always
+// closes the http.Response Body.
+func (client ReservationRecommendationDetailsClient) GetResponder(resp *http.Response) (result ReservationRecommendationDetailsModel, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationrecommendations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationrecommendations.go
new file mode 100644
index 0000000000000..46173fbc9a495
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationrecommendations.go
@@ -0,0 +1,162 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// ReservationRecommendationsClient is the consumption management client provides access to consumption resources for
+// Azure Enterprise Subscriptions.
+type ReservationRecommendationsClient struct {
+ BaseClient
+}
+
+// NewReservationRecommendationsClient creates an instance of the ReservationRecommendationsClient client.
+func NewReservationRecommendationsClient(subscriptionID string) ReservationRecommendationsClient {
+ return NewReservationRecommendationsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewReservationRecommendationsClientWithBaseURI creates an instance of the ReservationRecommendationsClient client
+// using a custom endpoint. Use this when interacting with an Azure cloud that uses a non-standard base URI (sovereign
+// clouds, Azure stack).
+func NewReservationRecommendationsClientWithBaseURI(baseURI string, subscriptionID string) ReservationRecommendationsClient {
+ return ReservationRecommendationsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List list of recommendations for purchasing reserved instances.
+// Parameters:
+// scope - the scope associated with reservation recommendations operations. This includes
+// '/subscriptions/{subscriptionId}/' for subscription scope,
+// '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for resource group scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for BillingAccount scope, and
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// billingProfile scope
+// filter - may be used to filter reservationRecommendations by: properties/scope with allowed values
+// ['Single', 'Shared'] and default value 'Single'; properties/resourceType with allowed values
+// ['VirtualMachines', 'SQLDatabases', 'PostgreSQL', 'ManagedDisk', 'MySQL', 'RedHat', 'MariaDB', 'RedisCache',
+// 'CosmosDB', 'SqlDataWarehouse', 'SUSELinux', 'AppService', 'BlockBlob', 'AzureDataExplorer',
+// 'VMwareCloudSimple'] and default value 'VirtualMachines'; and properties/lookBackPeriod with allowed values
+// ['Last7Days', 'Last30Days', 'Last60Days'] and default value 'Last7Days'.
+func (client ReservationRecommendationsClient) List(ctx context.Context, scope string, filter string) (result ReservationRecommendationsListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationRecommendationsClient.List")
+ defer func() {
+ sc := -1
+ if result.rrlr.Response.Response != nil {
+ sc = result.rrlr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, scope, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationRecommendationsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.rrlr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationRecommendationsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.rrlr, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationRecommendationsClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.rrlr.hasNextLink() && result.rrlr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client ReservationRecommendationsClient) ListPreparer(ctx context.Context, scope string, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/reservationRecommendations", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationRecommendationsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client ReservationRecommendationsClient) ListResponder(resp *http.Response) (result ReservationRecommendationsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client ReservationRecommendationsClient) listNextResults(ctx context.Context, lastResults ReservationRecommendationsListResult) (result ReservationRecommendationsListResult, err error) {
+ req, err := lastResults.reservationRecommendationsListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationRecommendationsClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationRecommendationsClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationRecommendationsClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationRecommendationsClient) ListComplete(ctx context.Context, scope string, filter string) (result ReservationRecommendationsListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationRecommendationsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, scope, filter)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationsdetails.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationsdetails.go
new file mode 100644
index 0000000000000..73a88bb6e418d
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationsdetails.go
@@ -0,0 +1,412 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// ReservationsDetailsClient is the consumption management client provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type ReservationsDetailsClient struct {
+ BaseClient
+}
+
+// NewReservationsDetailsClient creates an instance of the ReservationsDetailsClient client.
+func NewReservationsDetailsClient(subscriptionID string) ReservationsDetailsClient {
+ return NewReservationsDetailsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewReservationsDetailsClientWithBaseURI creates an instance of the ReservationsDetailsClient client using a custom
+// endpoint. Use this when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure
+// stack).
+func NewReservationsDetailsClientWithBaseURI(baseURI string, subscriptionID string) ReservationsDetailsClient {
+ return ReservationsDetailsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists the reservations details for the defined scope and provided date range.
+// Parameters:
+// scope - the scope associated with reservations details operations. This includes
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for BillingAccount scope (legacy), and
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// BillingProfile scope (modern).
+// startDate - start date. Only applicable when querying with billing profile
+// endDate - end date. Only applicable when querying with billing profile
+// filter - filter reservation details by date range. The properties/UsageDate for start date and end date. The
+// filter supports 'le' and 'ge'. Not applicable when querying with billing profile
+// reservationID - reservation Id GUID. Only valid if reservationOrderId is also provided. Filter to a specific
+// reservation
+// reservationOrderID - reservation Order Id GUID. Required if reservationId is provided. Filter to a specific
+// reservation order
+func (client ReservationsDetailsClient) List(ctx context.Context, scope string, startDate string, endDate string, filter string, reservationID string, reservationOrderID string) (result ReservationDetailsListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsDetailsClient.List")
+ defer func() {
+ sc := -1
+ if result.rdlr.Response.Response != nil {
+ sc = result.rdlr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, scope, startDate, endDate, filter, reservationID, reservationOrderID)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.rdlr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.rdlr, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.rdlr.hasNextLink() && result.rdlr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client ReservationsDetailsClient) ListPreparer(ctx context.Context, scope string, startDate string, endDate string, filter string, reservationID string, reservationOrderID string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(startDate) > 0 {
+ queryParameters["startDate"] = autorest.Encode("query", startDate)
+ }
+ if len(endDate) > 0 {
+ queryParameters["endDate"] = autorest.Encode("query", endDate)
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+ if len(reservationID) > 0 {
+ queryParameters["reservationId"] = autorest.Encode("query", reservationID)
+ }
+ if len(reservationOrderID) > 0 {
+ queryParameters["reservationOrderId"] = autorest.Encode("query", reservationOrderID)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/reservationDetails", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationsDetailsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client ReservationsDetailsClient) ListResponder(resp *http.Response) (result ReservationDetailsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client ReservationsDetailsClient) listNextResults(ctx context.Context, lastResults ReservationDetailsListResult) (result ReservationDetailsListResult, err error) {
+ req, err := lastResults.reservationDetailsListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationsDetailsClient) ListComplete(ctx context.Context, scope string, startDate string, endDate string, filter string, reservationID string, reservationOrderID string) (result ReservationDetailsListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsDetailsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, scope, startDate, endDate, filter, reservationID, reservationOrderID)
+ return
+}
+
+// ListByReservationOrder lists the reservations details for provided date range.
+// Parameters:
+// reservationOrderID - order Id of the reservation
+// filter - filter reservation details by date range. The properties/UsageDate for start date and end date. The
+// filter supports 'le' and 'ge'
+func (client ReservationsDetailsClient) ListByReservationOrder(ctx context.Context, reservationOrderID string, filter string) (result ReservationDetailsListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsDetailsClient.ListByReservationOrder")
+ defer func() {
+ sc := -1
+ if result.rdlr.Response.Response != nil {
+ sc = result.rdlr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listByReservationOrderNextResults
+ req, err := client.ListByReservationOrderPreparer(ctx, reservationOrderID, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "ListByReservationOrder", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListByReservationOrderSender(req)
+ if err != nil {
+ result.rdlr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "ListByReservationOrder", resp, "Failure sending request")
+ return
+ }
+
+ result.rdlr, err = client.ListByReservationOrderResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "ListByReservationOrder", resp, "Failure responding to request")
+ return
+ }
+ if result.rdlr.hasNextLink() && result.rdlr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListByReservationOrderPreparer prepares the ListByReservationOrder request.
+func (client ReservationsDetailsClient) ListByReservationOrderPreparer(ctx context.Context, reservationOrderID string, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "reservationOrderId": autorest.Encode("path", reservationOrderID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "$filter": autorest.Encode("query", filter),
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Capacity/reservationorders/{reservationOrderId}/providers/Microsoft.Consumption/reservationDetails", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListByReservationOrderSender sends the ListByReservationOrder request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationsDetailsClient) ListByReservationOrderSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListByReservationOrderResponder handles the response to the ListByReservationOrder request. The method always
+// closes the http.Response Body.
+func (client ReservationsDetailsClient) ListByReservationOrderResponder(resp *http.Response) (result ReservationDetailsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listByReservationOrderNextResults retrieves the next set of results, if any.
+func (client ReservationsDetailsClient) listByReservationOrderNextResults(ctx context.Context, lastResults ReservationDetailsListResult) (result ReservationDetailsListResult, err error) {
+ req, err := lastResults.reservationDetailsListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listByReservationOrderNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListByReservationOrderSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listByReservationOrderNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListByReservationOrderResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listByReservationOrderNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListByReservationOrderComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationsDetailsClient) ListByReservationOrderComplete(ctx context.Context, reservationOrderID string, filter string) (result ReservationDetailsListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsDetailsClient.ListByReservationOrder")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.ListByReservationOrder(ctx, reservationOrderID, filter)
+ return
+}
+
+// ListByReservationOrderAndReservation lists the reservations details for provided date range.
+// Parameters:
+// reservationOrderID - order Id of the reservation
+// reservationID - id of the reservation
+// filter - filter reservation details by date range. The properties/UsageDate for start date and end date. The
+// filter supports 'le' and 'ge'
+func (client ReservationsDetailsClient) ListByReservationOrderAndReservation(ctx context.Context, reservationOrderID string, reservationID string, filter string) (result ReservationDetailsListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsDetailsClient.ListByReservationOrderAndReservation")
+ defer func() {
+ sc := -1
+ if result.rdlr.Response.Response != nil {
+ sc = result.rdlr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listByReservationOrderAndReservationNextResults
+ req, err := client.ListByReservationOrderAndReservationPreparer(ctx, reservationOrderID, reservationID, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "ListByReservationOrderAndReservation", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListByReservationOrderAndReservationSender(req)
+ if err != nil {
+ result.rdlr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "ListByReservationOrderAndReservation", resp, "Failure sending request")
+ return
+ }
+
+ result.rdlr, err = client.ListByReservationOrderAndReservationResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "ListByReservationOrderAndReservation", resp, "Failure responding to request")
+ return
+ }
+ if result.rdlr.hasNextLink() && result.rdlr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListByReservationOrderAndReservationPreparer prepares the ListByReservationOrderAndReservation request.
+func (client ReservationsDetailsClient) ListByReservationOrderAndReservationPreparer(ctx context.Context, reservationOrderID string, reservationID string, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "reservationId": autorest.Encode("path", reservationID),
+ "reservationOrderId": autorest.Encode("path", reservationOrderID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "$filter": autorest.Encode("query", filter),
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Capacity/reservationorders/{reservationOrderId}/reservations/{reservationId}/providers/Microsoft.Consumption/reservationDetails", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListByReservationOrderAndReservationSender sends the ListByReservationOrderAndReservation request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationsDetailsClient) ListByReservationOrderAndReservationSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListByReservationOrderAndReservationResponder handles the response to the ListByReservationOrderAndReservation request. The method always
+// closes the http.Response Body.
+func (client ReservationsDetailsClient) ListByReservationOrderAndReservationResponder(resp *http.Response) (result ReservationDetailsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listByReservationOrderAndReservationNextResults retrieves the next set of results, if any.
+func (client ReservationsDetailsClient) listByReservationOrderAndReservationNextResults(ctx context.Context, lastResults ReservationDetailsListResult) (result ReservationDetailsListResult, err error) {
+ req, err := lastResults.reservationDetailsListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listByReservationOrderAndReservationNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListByReservationOrderAndReservationSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listByReservationOrderAndReservationNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListByReservationOrderAndReservationResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsDetailsClient", "listByReservationOrderAndReservationNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListByReservationOrderAndReservationComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationsDetailsClient) ListByReservationOrderAndReservationComplete(ctx context.Context, reservationOrderID string, reservationID string, filter string) (result ReservationDetailsListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsDetailsClient.ListByReservationOrderAndReservation")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.ListByReservationOrderAndReservation(ctx, reservationOrderID, reservationID, filter)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationssummaries.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationssummaries.go
new file mode 100644
index 0000000000000..6b3c510a4fb3e
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationssummaries.go
@@ -0,0 +1,422 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// ReservationsSummariesClient is the consumption management client provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type ReservationsSummariesClient struct {
+ BaseClient
+}
+
+// NewReservationsSummariesClient creates an instance of the ReservationsSummariesClient client.
+func NewReservationsSummariesClient(subscriptionID string) ReservationsSummariesClient {
+ return NewReservationsSummariesClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewReservationsSummariesClientWithBaseURI creates an instance of the ReservationsSummariesClient client using a
+// custom endpoint. Use this when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds,
+// Azure stack).
+func NewReservationsSummariesClientWithBaseURI(baseURI string, subscriptionID string) ReservationsSummariesClient {
+ return ReservationsSummariesClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists the reservations summaries for the defined scope daily or monthly grain.
+// Parameters:
+// scope - the scope associated with reservations summaries operations. This includes
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for BillingAccount scope (legacy), and
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// BillingProfile scope (modern).
+// grain - can be daily or monthly
+// startDate - start date. Only applicable when querying with billing profile
+// endDate - end date. Only applicable when querying with billing profile
+// filter - required only for daily grain. The properties/UsageDate for start date and end date. The filter
+// supports 'le' and 'ge'. Not applicable when querying with billing profile
+// reservationID - reservation Id GUID. Only valid if reservationOrderId is also provided. Filter to a specific
+// reservation
+// reservationOrderID - reservation Order Id GUID. Required if reservationId is provided. Filter to a specific
+// reservation order
+func (client ReservationsSummariesClient) List(ctx context.Context, scope string, grain Datagrain, startDate string, endDate string, filter string, reservationID string, reservationOrderID string) (result ReservationSummariesListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsSummariesClient.List")
+ defer func() {
+ sc := -1
+ if result.rslr.Response.Response != nil {
+ sc = result.rslr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, scope, grain, startDate, endDate, filter, reservationID, reservationOrderID)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.rslr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.rslr, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.rslr.hasNextLink() && result.rslr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client ReservationsSummariesClient) ListPreparer(ctx context.Context, scope string, grain Datagrain, startDate string, endDate string, filter string, reservationID string, reservationOrderID string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ "grain": autorest.Encode("query", grain),
+ }
+ if len(startDate) > 0 {
+ queryParameters["startDate"] = autorest.Encode("query", startDate)
+ }
+ if len(endDate) > 0 {
+ queryParameters["endDate"] = autorest.Encode("query", endDate)
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+ if len(reservationID) > 0 {
+ queryParameters["reservationId"] = autorest.Encode("query", reservationID)
+ }
+ if len(reservationOrderID) > 0 {
+ queryParameters["reservationOrderId"] = autorest.Encode("query", reservationOrderID)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/reservationSummaries", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationsSummariesClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client ReservationsSummariesClient) ListResponder(resp *http.Response) (result ReservationSummariesListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client ReservationsSummariesClient) listNextResults(ctx context.Context, lastResults ReservationSummariesListResult) (result ReservationSummariesListResult, err error) {
+ req, err := lastResults.reservationSummariesListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationsSummariesClient) ListComplete(ctx context.Context, scope string, grain Datagrain, startDate string, endDate string, filter string, reservationID string, reservationOrderID string) (result ReservationSummariesListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsSummariesClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, scope, grain, startDate, endDate, filter, reservationID, reservationOrderID)
+ return
+}
+
+// ListByReservationOrder lists the reservations summaries for daily or monthly grain.
+// Parameters:
+// reservationOrderID - order Id of the reservation
+// grain - can be daily or monthly
+// filter - required only for daily grain. The properties/UsageDate for start date and end date. The filter
+// supports 'le' and 'ge'
+func (client ReservationsSummariesClient) ListByReservationOrder(ctx context.Context, reservationOrderID string, grain Datagrain, filter string) (result ReservationSummariesListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsSummariesClient.ListByReservationOrder")
+ defer func() {
+ sc := -1
+ if result.rslr.Response.Response != nil {
+ sc = result.rslr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listByReservationOrderNextResults
+ req, err := client.ListByReservationOrderPreparer(ctx, reservationOrderID, grain, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "ListByReservationOrder", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListByReservationOrderSender(req)
+ if err != nil {
+ result.rslr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "ListByReservationOrder", resp, "Failure sending request")
+ return
+ }
+
+ result.rslr, err = client.ListByReservationOrderResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "ListByReservationOrder", resp, "Failure responding to request")
+ return
+ }
+ if result.rslr.hasNextLink() && result.rslr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListByReservationOrderPreparer prepares the ListByReservationOrder request.
+func (client ReservationsSummariesClient) ListByReservationOrderPreparer(ctx context.Context, reservationOrderID string, grain Datagrain, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "reservationOrderId": autorest.Encode("path", reservationOrderID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ "grain": autorest.Encode("query", grain),
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Capacity/reservationorders/{reservationOrderId}/providers/Microsoft.Consumption/reservationSummaries", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListByReservationOrderSender sends the ListByReservationOrder request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationsSummariesClient) ListByReservationOrderSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListByReservationOrderResponder handles the response to the ListByReservationOrder request. The method always
+// closes the http.Response Body.
+func (client ReservationsSummariesClient) ListByReservationOrderResponder(resp *http.Response) (result ReservationSummariesListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listByReservationOrderNextResults retrieves the next set of results, if any.
+func (client ReservationsSummariesClient) listByReservationOrderNextResults(ctx context.Context, lastResults ReservationSummariesListResult) (result ReservationSummariesListResult, err error) {
+ req, err := lastResults.reservationSummariesListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listByReservationOrderNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListByReservationOrderSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listByReservationOrderNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListByReservationOrderResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listByReservationOrderNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListByReservationOrderComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationsSummariesClient) ListByReservationOrderComplete(ctx context.Context, reservationOrderID string, grain Datagrain, filter string) (result ReservationSummariesListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsSummariesClient.ListByReservationOrder")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.ListByReservationOrder(ctx, reservationOrderID, grain, filter)
+ return
+}
+
+// ListByReservationOrderAndReservation lists the reservations summaries for daily or monthly grain.
+// Parameters:
+// reservationOrderID - order Id of the reservation
+// reservationID - id of the reservation
+// grain - can be daily or monthly
+// filter - required only for daily grain. The properties/UsageDate for start date and end date. The filter
+// supports 'le' and 'ge'
+func (client ReservationsSummariesClient) ListByReservationOrderAndReservation(ctx context.Context, reservationOrderID string, reservationID string, grain Datagrain, filter string) (result ReservationSummariesListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsSummariesClient.ListByReservationOrderAndReservation")
+ defer func() {
+ sc := -1
+ if result.rslr.Response.Response != nil {
+ sc = result.rslr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listByReservationOrderAndReservationNextResults
+ req, err := client.ListByReservationOrderAndReservationPreparer(ctx, reservationOrderID, reservationID, grain, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "ListByReservationOrderAndReservation", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListByReservationOrderAndReservationSender(req)
+ if err != nil {
+ result.rslr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "ListByReservationOrderAndReservation", resp, "Failure sending request")
+ return
+ }
+
+ result.rslr, err = client.ListByReservationOrderAndReservationResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "ListByReservationOrderAndReservation", resp, "Failure responding to request")
+ return
+ }
+ if result.rslr.hasNextLink() && result.rslr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListByReservationOrderAndReservationPreparer prepares the ListByReservationOrderAndReservation request.
+func (client ReservationsSummariesClient) ListByReservationOrderAndReservationPreparer(ctx context.Context, reservationOrderID string, reservationID string, grain Datagrain, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "reservationId": autorest.Encode("path", reservationID),
+ "reservationOrderId": autorest.Encode("path", reservationOrderID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ "grain": autorest.Encode("query", grain),
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Capacity/reservationorders/{reservationOrderId}/reservations/{reservationId}/providers/Microsoft.Consumption/reservationSummaries", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListByReservationOrderAndReservationSender sends the ListByReservationOrderAndReservation request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationsSummariesClient) ListByReservationOrderAndReservationSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListByReservationOrderAndReservationResponder handles the response to the ListByReservationOrderAndReservation request. The method always
+// closes the http.Response Body.
+func (client ReservationsSummariesClient) ListByReservationOrderAndReservationResponder(resp *http.Response) (result ReservationSummariesListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listByReservationOrderAndReservationNextResults retrieves the next set of results, if any.
+func (client ReservationsSummariesClient) listByReservationOrderAndReservationNextResults(ctx context.Context, lastResults ReservationSummariesListResult) (result ReservationSummariesListResult, err error) {
+ req, err := lastResults.reservationSummariesListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listByReservationOrderAndReservationNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListByReservationOrderAndReservationSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listByReservationOrderAndReservationNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListByReservationOrderAndReservationResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationsSummariesClient", "listByReservationOrderAndReservationNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListByReservationOrderAndReservationComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationsSummariesClient) ListByReservationOrderAndReservationComplete(ctx context.Context, reservationOrderID string, reservationID string, grain Datagrain, filter string) (result ReservationSummariesListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationsSummariesClient.ListByReservationOrderAndReservation")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.ListByReservationOrderAndReservation(ctx, reservationOrderID, reservationID, grain, filter)
+ return
+}
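One detail worth noting in the summaries client's `ListPreparer` above: the `scope` path parameter is inserted without `autorest.Encode`, so a scope containing slashes (a billing-account or billing-profile resource ID) expands into a nested URL path, while `startDate`, `endDate`, `$filter`, and the reservation IDs are appended as query parameters only when non-empty. A rough, self-contained approximation of the resulting URL (`buildListURL` is a hypothetical helper for illustration, not the SDK's preparer):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildListURL approximates what the generated ListPreparer assembles for
// the reservation-summaries List call: the scope is spliced into the path
// unescaped, and optional query parameters are added only when set.
func buildListURL(base, scope, grain, filter string) string {
	path := strings.Replace(
		"/{scope}/providers/Microsoft.Consumption/reservationSummaries",
		"{scope}", strings.Trim(scope, "/"), 1)
	q := url.Values{}
	q.Set("api-version", "2019-10-01")
	q.Set("grain", grain)
	if filter != "" {
		q.Set("$filter", filter)
	}
	return base + path + "?" + q.Encode()
}

func main() {
	// Hypothetical billing-account scope; the slashes stay literal in the path.
	u := buildListURL(
		"https://management.azure.com",
		"providers/Microsoft.Billing/billingAccounts/12345",
		"monthly",
		"",
	)
	fmt.Println(u)
}
```

The real preparer additionally escapes query values via `autorest.Encode("query", ...)`; `url.Values.Encode` plays that role (and sorts keys) in this sketch.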
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationtransactions.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationtransactions.go
new file mode 100644
index 0000000000000..e781954a65d5c
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/reservationtransactions.go
@@ -0,0 +1,275 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// ReservationTransactionsClient is the consumption management client provides access to consumption resources for
+// Azure Enterprise Subscriptions.
+type ReservationTransactionsClient struct {
+ BaseClient
+}
+
+// NewReservationTransactionsClient creates an instance of the ReservationTransactionsClient client.
+func NewReservationTransactionsClient(subscriptionID string) ReservationTransactionsClient {
+ return NewReservationTransactionsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewReservationTransactionsClientWithBaseURI creates an instance of the ReservationTransactionsClient client using a
+// custom endpoint. Use this when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds,
+// Azure stack).
+func NewReservationTransactionsClientWithBaseURI(baseURI string, subscriptionID string) ReservationTransactionsClient {
+ return ReservationTransactionsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List list of transactions for reserved instances on billing account scope
+// Parameters:
+// billingAccountID - billingAccount ID
+// filter - filter reservation transactions by date range. The properties/EventDate for start date and end
+// date. The filter supports 'le' and 'ge'
+func (client ReservationTransactionsClient) List(ctx context.Context, billingAccountID string, filter string) (result ReservationTransactionsListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationTransactionsClient.List")
+ defer func() {
+ sc := -1
+ if result.rtlr.Response.Response != nil {
+ sc = result.rtlr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, billingAccountID, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.rtlr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.rtlr, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.rtlr.hasNextLink() && result.rtlr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client ReservationTransactionsClient) ListPreparer(ctx context.Context, billingAccountID string, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingAccountId": autorest.Encode("path", billingAccountID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/providers/Microsoft.Consumption/reservationTransactions", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationTransactionsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client ReservationTransactionsClient) ListResponder(resp *http.Response) (result ReservationTransactionsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client ReservationTransactionsClient) listNextResults(ctx context.Context, lastResults ReservationTransactionsListResult) (result ReservationTransactionsListResult, err error) {
+ req, err := lastResults.reservationTransactionsListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationTransactionsClient) ListComplete(ctx context.Context, billingAccountID string, filter string) (result ReservationTransactionsListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationTransactionsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, billingAccountID, filter)
+ return
+}
+
+// ListByBillingProfile lists the transactions for reserved instances on the billing account scope.
+// Parameters:
+// billingAccountID - billingAccount ID
+// billingProfileID - azure Billing Profile ID.
+// filter - filter reservation transactions by date range, using properties/EventDate for the start and end
+// dates. The filter supports 'le' and 'ge'.
+func (client ReservationTransactionsClient) ListByBillingProfile(ctx context.Context, billingAccountID string, billingProfileID string, filter string) (result ModernReservationTransactionsListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationTransactionsClient.ListByBillingProfile")
+ defer func() {
+ sc := -1
+ if result.mrtlr.Response.Response != nil {
+ sc = result.mrtlr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listByBillingProfileNextResults
+ req, err := client.ListByBillingProfilePreparer(ctx, billingAccountID, billingProfileID, filter)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "ListByBillingProfile", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListByBillingProfileSender(req)
+ if err != nil {
+ result.mrtlr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "ListByBillingProfile", resp, "Failure sending request")
+ return
+ }
+
+ result.mrtlr, err = client.ListByBillingProfileResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "ListByBillingProfile", resp, "Failure responding to request")
+ return
+ }
+ if result.mrtlr.hasNextLink() && result.mrtlr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListByBillingProfilePreparer prepares the ListByBillingProfile request.
+func (client ReservationTransactionsClient) ListByBillingProfilePreparer(ctx context.Context, billingAccountID string, billingProfileID string, filter string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "billingAccountId": autorest.Encode("path", billingAccountID),
+ "billingProfileId": autorest.Encode("path", billingProfileID),
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/providers/Microsoft.Consumption/reservationTransactions", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListByBillingProfileSender sends the ListByBillingProfile request. The method will close the
+// http.Response Body if it receives an error.
+func (client ReservationTransactionsClient) ListByBillingProfileSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListByBillingProfileResponder handles the response to the ListByBillingProfile request. The method always
+// closes the http.Response Body.
+func (client ReservationTransactionsClient) ListByBillingProfileResponder(resp *http.Response) (result ModernReservationTransactionsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listByBillingProfileNextResults retrieves the next set of results, if any.
+func (client ReservationTransactionsClient) listByBillingProfileNextResults(ctx context.Context, lastResults ModernReservationTransactionsListResult) (result ModernReservationTransactionsListResult, err error) {
+ req, err := lastResults.modernReservationTransactionsListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "listByBillingProfileNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListByBillingProfileSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "listByBillingProfileNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListByBillingProfileResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.ReservationTransactionsClient", "listByBillingProfileNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListByBillingProfileComplete enumerates all values, automatically crossing page boundaries as required.
+func (client ReservationTransactionsClient) ListByBillingProfileComplete(ctx context.Context, billingAccountID string, billingProfileID string, filter string) (result ModernReservationTransactionsListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ReservationTransactionsClient.ListByBillingProfile")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.ListByBillingProfile(ctx, billingAccountID, billingProfileID, filter)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/tags.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/tags.go
new file mode 100644
index 0000000000000..9f26eb4a91be0
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/tags.go
@@ -0,0 +1,112 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// TagsClient is the consumption management client provides access to consumption resources for Azure Enterprise
+// Subscriptions.
+type TagsClient struct {
+ BaseClient
+}
+
+// NewTagsClient creates an instance of the TagsClient client.
+func NewTagsClient(subscriptionID string) TagsClient {
+ return NewTagsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewTagsClientWithBaseURI creates an instance of the TagsClient client using a custom endpoint. Use this when
+// interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewTagsClientWithBaseURI(baseURI string, subscriptionID string) TagsClient {
+ return TagsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// Get gets all available tag keys for the defined scope.
+// Parameters:
+// scope - the scope associated with tags operations. This includes '/subscriptions/{subscriptionId}/' for
+// subscription scope, '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for resourceGroup
+// scope, '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for Billing Account scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/departments/{departmentId}' for Department
+// scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/enrollmentAccounts/{enrollmentAccountId}'
+// for EnrollmentAccount scope and '/providers/Microsoft.Management/managementGroups/{managementGroupId}' for
+// Management Group scope.
+func (client TagsClient) Get(ctx context.Context, scope string) (result TagsResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/TagsClient.Get")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetPreparer(ctx, scope)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.TagsClient", "Get", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.TagsClient", "Get", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.TagsClient", "Get", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetPreparer prepares the Get request.
+func (client TagsClient) GetPreparer(ctx context.Context, scope string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/tags", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client TagsClient) GetSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetResponder handles the response to the Get request. The method always
+// closes the http.Response Body.
+func (client TagsClient) GetResponder(resp *http.Response) (result TagsResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/usagedetails.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/usagedetails.go
new file mode 100644
index 0000000000000..3ef3eb43f3c84
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/usagedetails.go
@@ -0,0 +1,201 @@
+package consumption
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/autorest/validation"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// UsageDetailsClient is the consumption management client provides access to consumption resources for Azure
+// Enterprise Subscriptions.
+type UsageDetailsClient struct {
+ BaseClient
+}
+
+// NewUsageDetailsClient creates an instance of the UsageDetailsClient client.
+func NewUsageDetailsClient(subscriptionID string) UsageDetailsClient {
+ return NewUsageDetailsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewUsageDetailsClientWithBaseURI creates an instance of the UsageDetailsClient client using a custom endpoint. Use
+// this when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewUsageDetailsClientWithBaseURI(baseURI string, subscriptionID string) UsageDetailsClient {
+ return UsageDetailsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// List lists the usage details for the defined scope. Usage details are available via this API only for May 1, 2014 or
+// later.
+// Parameters:
+// scope - the scope associated with usage details operations. This includes '/subscriptions/{subscriptionId}/'
+// for subscription scope, '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}' for Billing
+// Account scope, '/providers/Microsoft.Billing/departments/{departmentId}' for Department scope,
+// '/providers/Microsoft.Billing/enrollmentAccounts/{enrollmentAccountId}' for EnrollmentAccount scope and
+// '/providers/Microsoft.Management/managementGroups/{managementGroupId}' for Management Group scope. For
+// subscription, billing account, department, enrollment account and management group, you can also add billing
+// period to the scope using '/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}'. For example, to
+// specify billing period at department scope use
+// '/providers/Microsoft.Billing/departments/{departmentId}/providers/Microsoft.Billing/billingPeriods/{billingPeriodName}'.
+// Also, Modern Commerce Account scopes are '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}'
+// for billingAccount scope,
+// '/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}' for
+// billingProfile scope,
+// 'providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}/invoiceSections/{invoiceSectionId}'
+// for invoiceSection scope, and
+// 'providers/Microsoft.Billing/billingAccounts/{billingAccountId}/customers/{customerId}' specific for
+// partners.
+// expand - may be used to expand the properties/additionalInfo or properties/meterDetails within a list of
+// usage details. By default, these fields are not included when listing usage details.
+// filter - may be used to filter usageDetails by properties/resourceGroup, properties/resourceName,
+// properties/resourceId, properties/chargeType, properties/reservationId, properties/publisherType or tags.
+// The filter supports 'eq', 'lt', 'gt', 'le', 'ge', and 'and'. It does not currently support 'ne', 'or', or
+// 'not'. The tag filter is a key-value pair string in which the key and value are separated by a colon (:). The
+// PublisherType filter accepts two values, 'azure' and 'marketplace', and is currently supported for the Web
+// Direct offer type.
+// skiptoken - skiptoken is only used if a previous operation returned a partial result. If a previous response
+// contains a nextLink element, the value of the nextLink element will include a skiptoken parameter that
+// specifies a starting point to use for subsequent calls.
+// top - may be used to limit the number of results to the most recent N usageDetails.
+// metric - allows selecting a different type of cost/usage records.
+func (client UsageDetailsClient) List(ctx context.Context, scope string, expand string, filter string, skiptoken string, top *int32, metric Metrictype) (result UsageDetailsListResultPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/UsageDetailsClient.List")
+ defer func() {
+ sc := -1
+ if result.udlr.Response.Response != nil {
+ sc = result.udlr.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: top,
+ Constraints: []validation.Constraint{{Target: "top", Name: validation.Null, Rule: false,
+ Chain: []validation.Constraint{{Target: "top", Name: validation.InclusiveMaximum, Rule: int64(1000), Chain: nil},
+ {Target: "top", Name: validation.InclusiveMinimum, Rule: int64(1), Chain: nil},
+ }}}}}); err != nil {
+ return result, validation.NewError("consumption.UsageDetailsClient", "List", err.Error())
+ }
+
+ result.fn = client.listNextResults
+ req, err := client.ListPreparer(ctx, scope, expand, filter, skiptoken, top, metric)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.UsageDetailsClient", "List", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.udlr.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "consumption.UsageDetailsClient", "List", resp, "Failure sending request")
+ return
+ }
+
+ result.udlr, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.UsageDetailsClient", "List", resp, "Failure responding to request")
+ return
+ }
+ if result.udlr.hasNextLink() && result.udlr.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListPreparer prepares the List request.
+func (client UsageDetailsClient) ListPreparer(ctx context.Context, scope string, expand string, filter string, skiptoken string, top *int32, metric Metrictype) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "scope": scope,
+ }
+
+ const APIVersion = "2019-10-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(expand) > 0 {
+ queryParameters["$expand"] = autorest.Encode("query", expand)
+ }
+ if len(filter) > 0 {
+ queryParameters["$filter"] = autorest.Encode("query", filter)
+ }
+ if len(skiptoken) > 0 {
+ queryParameters["$skiptoken"] = autorest.Encode("query", skiptoken)
+ }
+ if top != nil {
+ queryParameters["$top"] = autorest.Encode("query", *top)
+ }
+ if len(string(metric)) > 0 {
+ queryParameters["metric"] = autorest.Encode("query", metric)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.Consumption/usageDetails", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListSender sends the List request. The method will close the
+// http.Response Body if it receives an error.
+func (client UsageDetailsClient) ListSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListResponder handles the response to the List request. The method always
+// closes the http.Response Body.
+func (client UsageDetailsClient) ListResponder(resp *http.Response) (result UsageDetailsListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listNextResults retrieves the next set of results, if any.
+func (client UsageDetailsClient) listNextResults(ctx context.Context, lastResults UsageDetailsListResult) (result UsageDetailsListResult, err error) {
+ req, err := lastResults.usageDetailsListResultPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "consumption.UsageDetailsClient", "listNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "consumption.UsageDetailsClient", "listNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "consumption.UsageDetailsClient", "listNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListComplete enumerates all values, automatically crossing page boundaries as required.
+func (client UsageDetailsClient) ListComplete(ctx context.Context, scope string, expand string, filter string, skiptoken string, top *int32, metric Metrictype) (result UsageDetailsListResultIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/UsageDetailsClient.List")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.List(ctx, scope, expand, filter, skiptoken, top, metric)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/version.go
new file mode 100644
index 0000000000000..11f277728a880
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/consumption/mgmt/2019-10-01/consumption/version.go
@@ -0,0 +1,19 @@
+package consumption
+
+import "github.com/Azure/azure-sdk-for-go/version"
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+// UserAgent returns the UserAgent string to use when sending http.Requests.
+func UserAgent() string {
+ return "Azure-SDK-For-Go/" + Version() + " consumption/2019-10-01"
+}
+
+// Version returns the semantic version (see http://semver.org) of the client.
+func Version() string {
+ return version.Number
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/enums.go
deleted file mode 100644
index 4ccd12d1179aa..0000000000000
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/enums.go
+++ /dev/null
@@ -1,777 +0,0 @@
-package containerservice
-
-// Copyright (c) Microsoft Corporation. All rights reserved.
-// Licensed under the MIT License. See License.txt in the project root for license information.
-//
-// Code generated by Microsoft (R) AutoRest Code Generator.
-// Changes may cause incorrect behavior and will be lost if the code is regenerated.
-
-// AgentPoolMode enumerates the values for agent pool mode.
-type AgentPoolMode string
-
-const (
- // System ...
- System AgentPoolMode = "System"
- // User ...
- User AgentPoolMode = "User"
-)
-
-// PossibleAgentPoolModeValues returns an array of possible values for the AgentPoolMode const type.
-func PossibleAgentPoolModeValues() []AgentPoolMode {
- return []AgentPoolMode{System, User}
-}
-
-// AgentPoolType enumerates the values for agent pool type.
-type AgentPoolType string
-
-const (
- // AvailabilitySet ...
- AvailabilitySet AgentPoolType = "AvailabilitySet"
- // VirtualMachineScaleSets ...
- VirtualMachineScaleSets AgentPoolType = "VirtualMachineScaleSets"
-)
-
-// PossibleAgentPoolTypeValues returns an array of possible values for the AgentPoolType const type.
-func PossibleAgentPoolTypeValues() []AgentPoolType {
- return []AgentPoolType{AvailabilitySet, VirtualMachineScaleSets}
-}
-
-// Code enumerates the values for code.
-type Code string
-
-const (
- // Running ...
- Running Code = "Running"
- // Stopped ...
- Stopped Code = "Stopped"
-)
-
-// PossibleCodeValues returns an array of possible values for the Code const type.
-func PossibleCodeValues() []Code {
- return []Code{Running, Stopped}
-}
-
-// ConnectionStatus enumerates the values for connection status.
-type ConnectionStatus string
-
-const (
- // Approved ...
- Approved ConnectionStatus = "Approved"
- // Disconnected ...
- Disconnected ConnectionStatus = "Disconnected"
- // Pending ...
- Pending ConnectionStatus = "Pending"
- // Rejected ...
- Rejected ConnectionStatus = "Rejected"
-)
-
-// PossibleConnectionStatusValues returns an array of possible values for the ConnectionStatus const type.
-func PossibleConnectionStatusValues() []ConnectionStatus {
- return []ConnectionStatus{Approved, Disconnected, Pending, Rejected}
-}
-
-// CreatedByType enumerates the values for created by type.
-type CreatedByType string
-
-const (
- // CreatedByTypeApplication ...
- CreatedByTypeApplication CreatedByType = "Application"
- // CreatedByTypeKey ...
- CreatedByTypeKey CreatedByType = "Key"
- // CreatedByTypeManagedIdentity ...
- CreatedByTypeManagedIdentity CreatedByType = "ManagedIdentity"
- // CreatedByTypeUser ...
- CreatedByTypeUser CreatedByType = "User"
-)
-
-// PossibleCreatedByTypeValues returns an array of possible values for the CreatedByType const type.
-func PossibleCreatedByTypeValues() []CreatedByType {
- return []CreatedByType{CreatedByTypeApplication, CreatedByTypeKey, CreatedByTypeManagedIdentity, CreatedByTypeUser}
-}
-
-// Expander enumerates the values for expander.
-type Expander string
-
-const (
- // LeastWaste ...
- LeastWaste Expander = "least-waste"
- // MostPods ...
- MostPods Expander = "most-pods"
- // Priority ...
- Priority Expander = "priority"
- // Random ...
- Random Expander = "random"
-)
-
-// PossibleExpanderValues returns an array of possible values for the Expander const type.
-func PossibleExpanderValues() []Expander {
- return []Expander{LeastWaste, MostPods, Priority, Random}
-}
-
-// KubeletDiskType enumerates the values for kubelet disk type.
-type KubeletDiskType string
-
-const (
- // OS ...
- OS KubeletDiskType = "OS"
- // Temporary ...
- Temporary KubeletDiskType = "Temporary"
-)
-
-// PossibleKubeletDiskTypeValues returns an array of possible values for the KubeletDiskType const type.
-func PossibleKubeletDiskTypeValues() []KubeletDiskType {
- return []KubeletDiskType{OS, Temporary}
-}
-
-// LicenseType enumerates the values for license type.
-type LicenseType string
-
-const (
- // None ...
- None LicenseType = "None"
- // WindowsServer ...
- WindowsServer LicenseType = "Windows_Server"
-)
-
-// PossibleLicenseTypeValues returns an array of possible values for the LicenseType const type.
-func PossibleLicenseTypeValues() []LicenseType {
- return []LicenseType{None, WindowsServer}
-}
-
-// LoadBalancerSku enumerates the values for load balancer sku.
-type LoadBalancerSku string
-
-const (
- // Basic ...
- Basic LoadBalancerSku = "basic"
- // Standard ...
- Standard LoadBalancerSku = "standard"
-)
-
-// PossibleLoadBalancerSkuValues returns an array of possible values for the LoadBalancerSku const type.
-func PossibleLoadBalancerSkuValues() []LoadBalancerSku {
- return []LoadBalancerSku{Basic, Standard}
-}
-
-// ManagedClusterPodIdentityProvisioningState enumerates the values for managed cluster pod identity
-// provisioning state.
-type ManagedClusterPodIdentityProvisioningState string
-
-const (
- // Assigned ...
- Assigned ManagedClusterPodIdentityProvisioningState = "Assigned"
- // Deleting ...
- Deleting ManagedClusterPodIdentityProvisioningState = "Deleting"
- // Failed ...
- Failed ManagedClusterPodIdentityProvisioningState = "Failed"
- // Updating ...
- Updating ManagedClusterPodIdentityProvisioningState = "Updating"
-)
-
-// PossibleManagedClusterPodIdentityProvisioningStateValues returns an array of possible values for the ManagedClusterPodIdentityProvisioningState const type.
-func PossibleManagedClusterPodIdentityProvisioningStateValues() []ManagedClusterPodIdentityProvisioningState {
- return []ManagedClusterPodIdentityProvisioningState{Assigned, Deleting, Failed, Updating}
-}
-
-// ManagedClusterSKUName enumerates the values for managed cluster sku name.
-type ManagedClusterSKUName string
-
-const (
- // ManagedClusterSKUNameBasic ...
- ManagedClusterSKUNameBasic ManagedClusterSKUName = "Basic"
-)
-
-// PossibleManagedClusterSKUNameValues returns an array of possible values for the ManagedClusterSKUName const type.
-func PossibleManagedClusterSKUNameValues() []ManagedClusterSKUName {
- return []ManagedClusterSKUName{ManagedClusterSKUNameBasic}
-}
-
-// ManagedClusterSKUTier enumerates the values for managed cluster sku tier.
-type ManagedClusterSKUTier string
-
-const (
- // Free ...
- Free ManagedClusterSKUTier = "Free"
- // Paid ...
- Paid ManagedClusterSKUTier = "Paid"
-)
-
-// PossibleManagedClusterSKUTierValues returns an array of possible values for the ManagedClusterSKUTier const type.
-func PossibleManagedClusterSKUTierValues() []ManagedClusterSKUTier {
- return []ManagedClusterSKUTier{Free, Paid}
-}
-
-// NetworkMode enumerates the values for network mode.
-type NetworkMode string
-
-const (
- // Bridge ...
- Bridge NetworkMode = "bridge"
- // Transparent ...
- Transparent NetworkMode = "transparent"
-)
-
-// PossibleNetworkModeValues returns an array of possible values for the NetworkMode const type.
-func PossibleNetworkModeValues() []NetworkMode {
- return []NetworkMode{Bridge, Transparent}
-}
-
-// NetworkPlugin enumerates the values for network plugin.
-type NetworkPlugin string
-
-const (
- // Azure ...
- Azure NetworkPlugin = "azure"
- // Kubenet ...
- Kubenet NetworkPlugin = "kubenet"
-)
-
-// PossibleNetworkPluginValues returns an array of possible values for the NetworkPlugin const type.
-func PossibleNetworkPluginValues() []NetworkPlugin {
- return []NetworkPlugin{Azure, Kubenet}
-}
-
-// NetworkPolicy enumerates the values for network policy.
-type NetworkPolicy string
-
-const (
- // NetworkPolicyAzure ...
- NetworkPolicyAzure NetworkPolicy = "azure"
- // NetworkPolicyCalico ...
- NetworkPolicyCalico NetworkPolicy = "calico"
-)
-
-// PossibleNetworkPolicyValues returns an array of possible values for the NetworkPolicy const type.
-func PossibleNetworkPolicyValues() []NetworkPolicy {
- return []NetworkPolicy{NetworkPolicyAzure, NetworkPolicyCalico}
-}
-
-// OSDiskType enumerates the values for os disk type.
-type OSDiskType string
-
-const (
- // Ephemeral ...
- Ephemeral OSDiskType = "Ephemeral"
- // Managed ...
- Managed OSDiskType = "Managed"
-)
-
-// PossibleOSDiskTypeValues returns an array of possible values for the OSDiskType const type.
-func PossibleOSDiskTypeValues() []OSDiskType {
- return []OSDiskType{Ephemeral, Managed}
-}
-
-// OSType enumerates the values for os type.
-type OSType string
-
-const (
- // Linux ...
- Linux OSType = "Linux"
- // Windows ...
- Windows OSType = "Windows"
-)
-
-// PossibleOSTypeValues returns an array of possible values for the OSType const type.
-func PossibleOSTypeValues() []OSType {
- return []OSType{Linux, Windows}
-}
-
-// OutboundType enumerates the values for outbound type.
-type OutboundType string
-
-const (
- // LoadBalancer ...
- LoadBalancer OutboundType = "loadBalancer"
- // UserDefinedRouting ...
- UserDefinedRouting OutboundType = "userDefinedRouting"
-)
-
-// PossibleOutboundTypeValues returns an array of possible values for the OutboundType const type.
-func PossibleOutboundTypeValues() []OutboundType {
- return []OutboundType{LoadBalancer, UserDefinedRouting}
-}
-
-// PrivateEndpointConnectionProvisioningState enumerates the values for private endpoint connection
-// provisioning state.
-type PrivateEndpointConnectionProvisioningState string
-
-const (
- // PrivateEndpointConnectionProvisioningStateCreating ...
- PrivateEndpointConnectionProvisioningStateCreating PrivateEndpointConnectionProvisioningState = "Creating"
- // PrivateEndpointConnectionProvisioningStateDeleting ...
- PrivateEndpointConnectionProvisioningStateDeleting PrivateEndpointConnectionProvisioningState = "Deleting"
- // PrivateEndpointConnectionProvisioningStateFailed ...
- PrivateEndpointConnectionProvisioningStateFailed PrivateEndpointConnectionProvisioningState = "Failed"
- // PrivateEndpointConnectionProvisioningStateSucceeded ...
- PrivateEndpointConnectionProvisioningStateSucceeded PrivateEndpointConnectionProvisioningState = "Succeeded"
-)
-
-// PossiblePrivateEndpointConnectionProvisioningStateValues returns an array of possible values for the PrivateEndpointConnectionProvisioningState const type.
-func PossiblePrivateEndpointConnectionProvisioningStateValues() []PrivateEndpointConnectionProvisioningState {
- return []PrivateEndpointConnectionProvisioningState{PrivateEndpointConnectionProvisioningStateCreating, PrivateEndpointConnectionProvisioningStateDeleting, PrivateEndpointConnectionProvisioningStateFailed, PrivateEndpointConnectionProvisioningStateSucceeded}
-}
-
-// ResourceIdentityType enumerates the values for resource identity type.
-type ResourceIdentityType string
-
-const (
- // ResourceIdentityTypeNone ...
- ResourceIdentityTypeNone ResourceIdentityType = "None"
- // ResourceIdentityTypeSystemAssigned ...
- ResourceIdentityTypeSystemAssigned ResourceIdentityType = "SystemAssigned"
- // ResourceIdentityTypeUserAssigned ...
- ResourceIdentityTypeUserAssigned ResourceIdentityType = "UserAssigned"
-)
-
-// PossibleResourceIdentityTypeValues returns an array of possible values for the ResourceIdentityType const type.
-func PossibleResourceIdentityTypeValues() []ResourceIdentityType {
- return []ResourceIdentityType{ResourceIdentityTypeNone, ResourceIdentityTypeSystemAssigned, ResourceIdentityTypeUserAssigned}
-}
-
-// ScaleSetEvictionPolicy enumerates the values for scale set eviction policy.
-type ScaleSetEvictionPolicy string
-
-const (
- // Deallocate ...
- Deallocate ScaleSetEvictionPolicy = "Deallocate"
- // Delete ...
- Delete ScaleSetEvictionPolicy = "Delete"
-)
-
-// PossibleScaleSetEvictionPolicyValues returns an array of possible values for the ScaleSetEvictionPolicy const type.
-func PossibleScaleSetEvictionPolicyValues() []ScaleSetEvictionPolicy {
- return []ScaleSetEvictionPolicy{Deallocate, Delete}
-}
-
-// ScaleSetPriority enumerates the values for scale set priority.
-type ScaleSetPriority string
-
-const (
- // Regular ...
- Regular ScaleSetPriority = "Regular"
- // Spot ...
- Spot ScaleSetPriority = "Spot"
-)
-
-// PossibleScaleSetPriorityValues returns an array of possible values for the ScaleSetPriority const type.
-func PossibleScaleSetPriorityValues() []ScaleSetPriority {
- return []ScaleSetPriority{Regular, Spot}
-}
-
-// StorageProfileTypes enumerates the values for storage profile types.
-type StorageProfileTypes string
-
-const (
- // ManagedDisks ...
- ManagedDisks StorageProfileTypes = "ManagedDisks"
- // StorageAccount ...
- StorageAccount StorageProfileTypes = "StorageAccount"
-)
-
-// PossibleStorageProfileTypesValues returns an array of possible values for the StorageProfileTypes const type.
-func PossibleStorageProfileTypesValues() []StorageProfileTypes {
- return []StorageProfileTypes{ManagedDisks, StorageAccount}
-}
-
-// UpgradeChannel enumerates the values for upgrade channel.
-type UpgradeChannel string
-
-const (
- // UpgradeChannelNone ...
- UpgradeChannelNone UpgradeChannel = "none"
- // UpgradeChannelPatch ...
- UpgradeChannelPatch UpgradeChannel = "patch"
- // UpgradeChannelRapid ...
- UpgradeChannelRapid UpgradeChannel = "rapid"
- // UpgradeChannelStable ...
- UpgradeChannelStable UpgradeChannel = "stable"
-)
-
-// PossibleUpgradeChannelValues returns an array of possible values for the UpgradeChannel const type.
-func PossibleUpgradeChannelValues() []UpgradeChannel {
- return []UpgradeChannel{UpgradeChannelNone, UpgradeChannelPatch, UpgradeChannelRapid, UpgradeChannelStable}
-}
-
-// VMSizeTypes enumerates the values for vm size types.
-type VMSizeTypes string
-
-const (
- // StandardA1 ...
- StandardA1 VMSizeTypes = "Standard_A1"
- // StandardA10 ...
- StandardA10 VMSizeTypes = "Standard_A10"
- // StandardA11 ...
- StandardA11 VMSizeTypes = "Standard_A11"
- // StandardA1V2 ...
- StandardA1V2 VMSizeTypes = "Standard_A1_v2"
- // StandardA2 ...
- StandardA2 VMSizeTypes = "Standard_A2"
- // StandardA2mV2 ...
- StandardA2mV2 VMSizeTypes = "Standard_A2m_v2"
- // StandardA2V2 ...
- StandardA2V2 VMSizeTypes = "Standard_A2_v2"
- // StandardA3 ...
- StandardA3 VMSizeTypes = "Standard_A3"
- // StandardA4 ...
- StandardA4 VMSizeTypes = "Standard_A4"
- // StandardA4mV2 ...
- StandardA4mV2 VMSizeTypes = "Standard_A4m_v2"
- // StandardA4V2 ...
- StandardA4V2 VMSizeTypes = "Standard_A4_v2"
- // StandardA5 ...
- StandardA5 VMSizeTypes = "Standard_A5"
- // StandardA6 ...
- StandardA6 VMSizeTypes = "Standard_A6"
- // StandardA7 ...
- StandardA7 VMSizeTypes = "Standard_A7"
- // StandardA8 ...
- StandardA8 VMSizeTypes = "Standard_A8"
- // StandardA8mV2 ...
- StandardA8mV2 VMSizeTypes = "Standard_A8m_v2"
- // StandardA8V2 ...
- StandardA8V2 VMSizeTypes = "Standard_A8_v2"
- // StandardA9 ...
- StandardA9 VMSizeTypes = "Standard_A9"
- // StandardB2ms ...
- StandardB2ms VMSizeTypes = "Standard_B2ms"
- // StandardB2s ...
- StandardB2s VMSizeTypes = "Standard_B2s"
- // StandardB4ms ...
- StandardB4ms VMSizeTypes = "Standard_B4ms"
- // StandardB8ms ...
- StandardB8ms VMSizeTypes = "Standard_B8ms"
- // StandardD1 ...
- StandardD1 VMSizeTypes = "Standard_D1"
- // StandardD11 ...
- StandardD11 VMSizeTypes = "Standard_D11"
- // StandardD11V2 ...
- StandardD11V2 VMSizeTypes = "Standard_D11_v2"
- // StandardD11V2Promo ...
- StandardD11V2Promo VMSizeTypes = "Standard_D11_v2_Promo"
- // StandardD12 ...
- StandardD12 VMSizeTypes = "Standard_D12"
- // StandardD12V2 ...
- StandardD12V2 VMSizeTypes = "Standard_D12_v2"
- // StandardD12V2Promo ...
- StandardD12V2Promo VMSizeTypes = "Standard_D12_v2_Promo"
- // StandardD13 ...
- StandardD13 VMSizeTypes = "Standard_D13"
- // StandardD13V2 ...
- StandardD13V2 VMSizeTypes = "Standard_D13_v2"
- // StandardD13V2Promo ...
- StandardD13V2Promo VMSizeTypes = "Standard_D13_v2_Promo"
- // StandardD14 ...
- StandardD14 VMSizeTypes = "Standard_D14"
- // StandardD14V2 ...
- StandardD14V2 VMSizeTypes = "Standard_D14_v2"
- // StandardD14V2Promo ...
- StandardD14V2Promo VMSizeTypes = "Standard_D14_v2_Promo"
- // StandardD15V2 ...
- StandardD15V2 VMSizeTypes = "Standard_D15_v2"
- // StandardD16sV3 ...
- StandardD16sV3 VMSizeTypes = "Standard_D16s_v3"
- // StandardD16V3 ...
- StandardD16V3 VMSizeTypes = "Standard_D16_v3"
- // StandardD1V2 ...
- StandardD1V2 VMSizeTypes = "Standard_D1_v2"
- // StandardD2 ...
- StandardD2 VMSizeTypes = "Standard_D2"
- // StandardD2sV3 ...
- StandardD2sV3 VMSizeTypes = "Standard_D2s_v3"
- // StandardD2V2 ...
- StandardD2V2 VMSizeTypes = "Standard_D2_v2"
- // StandardD2V2Promo ...
- StandardD2V2Promo VMSizeTypes = "Standard_D2_v2_Promo"
- // StandardD2V3 ...
- StandardD2V3 VMSizeTypes = "Standard_D2_v3"
- // StandardD3 ...
- StandardD3 VMSizeTypes = "Standard_D3"
- // StandardD32sV3 ...
- StandardD32sV3 VMSizeTypes = "Standard_D32s_v3"
- // StandardD32V3 ...
- StandardD32V3 VMSizeTypes = "Standard_D32_v3"
- // StandardD3V2 ...
- StandardD3V2 VMSizeTypes = "Standard_D3_v2"
- // StandardD3V2Promo ...
- StandardD3V2Promo VMSizeTypes = "Standard_D3_v2_Promo"
- // StandardD4 ...
- StandardD4 VMSizeTypes = "Standard_D4"
- // StandardD4sV3 ...
- StandardD4sV3 VMSizeTypes = "Standard_D4s_v3"
- // StandardD4V2 ...
- StandardD4V2 VMSizeTypes = "Standard_D4_v2"
- // StandardD4V2Promo ...
- StandardD4V2Promo VMSizeTypes = "Standard_D4_v2_Promo"
- // StandardD4V3 ...
- StandardD4V3 VMSizeTypes = "Standard_D4_v3"
- // StandardD5V2 ...
- StandardD5V2 VMSizeTypes = "Standard_D5_v2"
- // StandardD5V2Promo ...
- StandardD5V2Promo VMSizeTypes = "Standard_D5_v2_Promo"
- // StandardD64sV3 ...
- StandardD64sV3 VMSizeTypes = "Standard_D64s_v3"
- // StandardD64V3 ...
- StandardD64V3 VMSizeTypes = "Standard_D64_v3"
- // StandardD8sV3 ...
- StandardD8sV3 VMSizeTypes = "Standard_D8s_v3"
- // StandardD8V3 ...
- StandardD8V3 VMSizeTypes = "Standard_D8_v3"
- // StandardDS1 ...
- StandardDS1 VMSizeTypes = "Standard_DS1"
- // StandardDS11 ...
- StandardDS11 VMSizeTypes = "Standard_DS11"
- // StandardDS11V2 ...
- StandardDS11V2 VMSizeTypes = "Standard_DS11_v2"
- // StandardDS11V2Promo ...
- StandardDS11V2Promo VMSizeTypes = "Standard_DS11_v2_Promo"
- // StandardDS12 ...
- StandardDS12 VMSizeTypes = "Standard_DS12"
- // StandardDS12V2 ...
- StandardDS12V2 VMSizeTypes = "Standard_DS12_v2"
- // StandardDS12V2Promo ...
- StandardDS12V2Promo VMSizeTypes = "Standard_DS12_v2_Promo"
- // StandardDS13 ...
- StandardDS13 VMSizeTypes = "Standard_DS13"
- // StandardDS132V2 ...
- StandardDS132V2 VMSizeTypes = "Standard_DS13-2_v2"
- // StandardDS134V2 ...
- StandardDS134V2 VMSizeTypes = "Standard_DS13-4_v2"
- // StandardDS13V2 ...
- StandardDS13V2 VMSizeTypes = "Standard_DS13_v2"
- // StandardDS13V2Promo ...
- StandardDS13V2Promo VMSizeTypes = "Standard_DS13_v2_Promo"
- // StandardDS14 ...
- StandardDS14 VMSizeTypes = "Standard_DS14"
- // StandardDS144V2 ...
- StandardDS144V2 VMSizeTypes = "Standard_DS14-4_v2"
- // StandardDS148V2 ...
- StandardDS148V2 VMSizeTypes = "Standard_DS14-8_v2"
- // StandardDS14V2 ...
- StandardDS14V2 VMSizeTypes = "Standard_DS14_v2"
- // StandardDS14V2Promo ...
- StandardDS14V2Promo VMSizeTypes = "Standard_DS14_v2_Promo"
- // StandardDS15V2 ...
- StandardDS15V2 VMSizeTypes = "Standard_DS15_v2"
- // StandardDS1V2 ...
- StandardDS1V2 VMSizeTypes = "Standard_DS1_v2"
- // StandardDS2 ...
- StandardDS2 VMSizeTypes = "Standard_DS2"
- // StandardDS2V2 ...
- StandardDS2V2 VMSizeTypes = "Standard_DS2_v2"
- // StandardDS2V2Promo ...
- StandardDS2V2Promo VMSizeTypes = "Standard_DS2_v2_Promo"
- // StandardDS3 ...
- StandardDS3 VMSizeTypes = "Standard_DS3"
- // StandardDS3V2 ...
- StandardDS3V2 VMSizeTypes = "Standard_DS3_v2"
- // StandardDS3V2Promo ...
- StandardDS3V2Promo VMSizeTypes = "Standard_DS3_v2_Promo"
- // StandardDS4 ...
- StandardDS4 VMSizeTypes = "Standard_DS4"
- // StandardDS4V2 ...
- StandardDS4V2 VMSizeTypes = "Standard_DS4_v2"
- // StandardDS4V2Promo ...
- StandardDS4V2Promo VMSizeTypes = "Standard_DS4_v2_Promo"
- // StandardDS5V2 ...
- StandardDS5V2 VMSizeTypes = "Standard_DS5_v2"
- // StandardDS5V2Promo ...
- StandardDS5V2Promo VMSizeTypes = "Standard_DS5_v2_Promo"
- // StandardE16sV3 ...
- StandardE16sV3 VMSizeTypes = "Standard_E16s_v3"
- // StandardE16V3 ...
- StandardE16V3 VMSizeTypes = "Standard_E16_v3"
- // StandardE2sV3 ...
- StandardE2sV3 VMSizeTypes = "Standard_E2s_v3"
- // StandardE2V3 ...
- StandardE2V3 VMSizeTypes = "Standard_E2_v3"
- // StandardE3216sV3 ...
- StandardE3216sV3 VMSizeTypes = "Standard_E32-16s_v3"
- // StandardE328sV3 ...
- StandardE328sV3 VMSizeTypes = "Standard_E32-8s_v3"
- // StandardE32sV3 ...
- StandardE32sV3 VMSizeTypes = "Standard_E32s_v3"
- // StandardE32V3 ...
- StandardE32V3 VMSizeTypes = "Standard_E32_v3"
- // StandardE4sV3 ...
- StandardE4sV3 VMSizeTypes = "Standard_E4s_v3"
- // StandardE4V3 ...
- StandardE4V3 VMSizeTypes = "Standard_E4_v3"
- // StandardE6416sV3 ...
- StandardE6416sV3 VMSizeTypes = "Standard_E64-16s_v3"
- // StandardE6432sV3 ...
- StandardE6432sV3 VMSizeTypes = "Standard_E64-32s_v3"
- // StandardE64sV3 ...
- StandardE64sV3 VMSizeTypes = "Standard_E64s_v3"
- // StandardE64V3 ...
- StandardE64V3 VMSizeTypes = "Standard_E64_v3"
- // StandardE8sV3 ...
- StandardE8sV3 VMSizeTypes = "Standard_E8s_v3"
- // StandardE8V3 ...
- StandardE8V3 VMSizeTypes = "Standard_E8_v3"
- // StandardF1 ...
- StandardF1 VMSizeTypes = "Standard_F1"
- // StandardF16 ...
- StandardF16 VMSizeTypes = "Standard_F16"
- // StandardF16s ...
- StandardF16s VMSizeTypes = "Standard_F16s"
- // StandardF16sV2 ...
- StandardF16sV2 VMSizeTypes = "Standard_F16s_v2"
- // StandardF1s ...
- StandardF1s VMSizeTypes = "Standard_F1s"
- // StandardF2 ...
- StandardF2 VMSizeTypes = "Standard_F2"
- // StandardF2s ...
- StandardF2s VMSizeTypes = "Standard_F2s"
- // StandardF2sV2 ...
- StandardF2sV2 VMSizeTypes = "Standard_F2s_v2"
- // StandardF32sV2 ...
- StandardF32sV2 VMSizeTypes = "Standard_F32s_v2"
- // StandardF4 ...
- StandardF4 VMSizeTypes = "Standard_F4"
- // StandardF4s ...
- StandardF4s VMSizeTypes = "Standard_F4s"
- // StandardF4sV2 ...
- StandardF4sV2 VMSizeTypes = "Standard_F4s_v2"
- // StandardF64sV2 ...
- StandardF64sV2 VMSizeTypes = "Standard_F64s_v2"
- // StandardF72sV2 ...
- StandardF72sV2 VMSizeTypes = "Standard_F72s_v2"
- // StandardF8 ...
- StandardF8 VMSizeTypes = "Standard_F8"
- // StandardF8s ...
- StandardF8s VMSizeTypes = "Standard_F8s"
- // StandardF8sV2 ...
- StandardF8sV2 VMSizeTypes = "Standard_F8s_v2"
- // StandardG1 ...
- StandardG1 VMSizeTypes = "Standard_G1"
- // StandardG2 ...
- StandardG2 VMSizeTypes = "Standard_G2"
- // StandardG3 ...
- StandardG3 VMSizeTypes = "Standard_G3"
- // StandardG4 ...
- StandardG4 VMSizeTypes = "Standard_G4"
- // StandardG5 ...
- StandardG5 VMSizeTypes = "Standard_G5"
- // StandardGS1 ...
- StandardGS1 VMSizeTypes = "Standard_GS1"
- // StandardGS2 ...
- StandardGS2 VMSizeTypes = "Standard_GS2"
- // StandardGS3 ...
- StandardGS3 VMSizeTypes = "Standard_GS3"
- // StandardGS4 ...
- StandardGS4 VMSizeTypes = "Standard_GS4"
- // StandardGS44 ...
- StandardGS44 VMSizeTypes = "Standard_GS4-4"
- // StandardGS48 ...
- StandardGS48 VMSizeTypes = "Standard_GS4-8"
- // StandardGS5 ...
- StandardGS5 VMSizeTypes = "Standard_GS5"
- // StandardGS516 ...
- StandardGS516 VMSizeTypes = "Standard_GS5-16"
- // StandardGS58 ...
- StandardGS58 VMSizeTypes = "Standard_GS5-8"
- // StandardH16 ...
- StandardH16 VMSizeTypes = "Standard_H16"
- // StandardH16m ...
- StandardH16m VMSizeTypes = "Standard_H16m"
- // StandardH16mr ...
- StandardH16mr VMSizeTypes = "Standard_H16mr"
- // StandardH16r ...
- StandardH16r VMSizeTypes = "Standard_H16r"
- // StandardH8 ...
- StandardH8 VMSizeTypes = "Standard_H8"
- // StandardH8m ...
- StandardH8m VMSizeTypes = "Standard_H8m"
- // StandardL16s ...
- StandardL16s VMSizeTypes = "Standard_L16s"
- // StandardL32s ...
- StandardL32s VMSizeTypes = "Standard_L32s"
- // StandardL4s ...
- StandardL4s VMSizeTypes = "Standard_L4s"
- // StandardL8s ...
- StandardL8s VMSizeTypes = "Standard_L8s"
- // StandardM12832ms ...
- StandardM12832ms VMSizeTypes = "Standard_M128-32ms"
- // StandardM12864ms ...
- StandardM12864ms VMSizeTypes = "Standard_M128-64ms"
- // StandardM128ms ...
- StandardM128ms VMSizeTypes = "Standard_M128ms"
- // StandardM128s ...
- StandardM128s VMSizeTypes = "Standard_M128s"
- // StandardM6416ms ...
- StandardM6416ms VMSizeTypes = "Standard_M64-16ms"
- // StandardM6432ms ...
- StandardM6432ms VMSizeTypes = "Standard_M64-32ms"
- // StandardM64ms ...
- StandardM64ms VMSizeTypes = "Standard_M64ms"
- // StandardM64s ...
- StandardM64s VMSizeTypes = "Standard_M64s"
- // StandardNC12 ...
- StandardNC12 VMSizeTypes = "Standard_NC12"
- // StandardNC12sV2 ...
- StandardNC12sV2 VMSizeTypes = "Standard_NC12s_v2"
- // StandardNC12sV3 ...
- StandardNC12sV3 VMSizeTypes = "Standard_NC12s_v3"
- // StandardNC24 ...
- StandardNC24 VMSizeTypes = "Standard_NC24"
- // StandardNC24r ...
- StandardNC24r VMSizeTypes = "Standard_NC24r"
- // StandardNC24rsV2 ...
- StandardNC24rsV2 VMSizeTypes = "Standard_NC24rs_v2"
- // StandardNC24rsV3 ...
- StandardNC24rsV3 VMSizeTypes = "Standard_NC24rs_v3"
- // StandardNC24sV2 ...
- StandardNC24sV2 VMSizeTypes = "Standard_NC24s_v2"
- // StandardNC24sV3 ...
- StandardNC24sV3 VMSizeTypes = "Standard_NC24s_v3"
- // StandardNC6 ...
- StandardNC6 VMSizeTypes = "Standard_NC6"
- // StandardNC6sV2 ...
- StandardNC6sV2 VMSizeTypes = "Standard_NC6s_v2"
- // StandardNC6sV3 ...
- StandardNC6sV3 VMSizeTypes = "Standard_NC6s_v3"
- // StandardND12s ...
- StandardND12s VMSizeTypes = "Standard_ND12s"
- // StandardND24rs ...
- StandardND24rs VMSizeTypes = "Standard_ND24rs"
- // StandardND24s ...
- StandardND24s VMSizeTypes = "Standard_ND24s"
- // StandardND6s ...
- StandardND6s VMSizeTypes = "Standard_ND6s"
- // StandardNV12 ...
- StandardNV12 VMSizeTypes = "Standard_NV12"
- // StandardNV24 ...
- StandardNV24 VMSizeTypes = "Standard_NV24"
- // StandardNV6 ...
- StandardNV6 VMSizeTypes = "Standard_NV6"
-)
-
-// PossibleVMSizeTypesValues returns an array of possible values for the VMSizeTypes const type.
-func PossibleVMSizeTypesValues() []VMSizeTypes {
-	return []VMSizeTypes{StandardA1, StandardA10, StandardA11, StandardA1V2, StandardA2, StandardA2mV2, StandardA2V2, StandardA3, StandardA4, StandardA4mV2, StandardA4V2, StandardA5, StandardA6, StandardA7, StandardA8, StandardA8mV2, StandardA8V2, StandardA9, StandardB2ms, StandardB2s, StandardB4ms, StandardB8ms, StandardD1, StandardD11, StandardD11V2, StandardD11V2Promo, StandardD12, StandardD12V2, StandardD12V2Promo, StandardD13, StandardD13V2, StandardD13V2Promo, StandardD14, StandardD14V2, StandardD14V2Promo, StandardD15V2, StandardD16sV3, StandardD16V3, StandardD1V2, StandardD2, StandardD2sV3, StandardD2V2, StandardD2V2Promo, StandardD2V3, StandardD3, StandardD32sV3, StandardD32V3, StandardD3V2, StandardD3V2Promo, StandardD4, StandardD4sV3, StandardD4V2, StandardD4V2Promo, StandardD4V3, StandardD5V2, StandardD5V2Promo, StandardD64sV3, StandardD64V3, StandardD8sV3, StandardD8V3, StandardDS1, StandardDS11, StandardDS11V2, StandardDS11V2Promo, StandardDS12, StandardDS12V2, StandardDS12V2Promo, StandardDS13, StandardDS132V2, StandardDS134V2, StandardDS13V2, StandardDS13V2Promo, StandardDS14, StandardDS144V2, StandardDS148V2, StandardDS14V2, StandardDS14V2Promo, StandardDS15V2, StandardDS1V2, StandardDS2, StandardDS2V2, StandardDS2V2Promo, StandardDS3, StandardDS3V2, StandardDS3V2Promo, StandardDS4, StandardDS4V2, StandardDS4V2Promo, StandardDS5V2, StandardDS5V2Promo, StandardE16sV3, StandardE16V3, StandardE2sV3, StandardE2V3, StandardE3216sV3, StandardE328sV3, StandardE32sV3, StandardE32V3, StandardE4sV3, StandardE4V3, StandardE6416sV3, StandardE6432sV3, StandardE64sV3, StandardE64V3, StandardE8sV3, StandardE8V3, StandardF1, StandardF16, StandardF16s, StandardF16sV2, StandardF1s, StandardF2, StandardF2s, StandardF2sV2, StandardF32sV2, StandardF4, StandardF4s, StandardF4sV2, StandardF64sV2, StandardF72sV2, StandardF8, StandardF8s, StandardF8sV2, StandardG1, StandardG2, StandardG3, StandardG4, StandardG5, StandardGS1, StandardGS2, StandardGS3, StandardGS4, StandardGS44, StandardGS48, StandardGS5, StandardGS516, StandardGS58, StandardH16, StandardH16m, StandardH16mr, StandardH16r, StandardH8, StandardH8m, StandardL16s, StandardL32s, StandardL4s, StandardL8s, StandardM12832ms, StandardM12864ms, StandardM128ms, StandardM128s, StandardM6416ms, StandardM6432ms, StandardM64ms, StandardM64s, StandardNC12, StandardNC12sV2, StandardNC12sV3, StandardNC24, StandardNC24r, StandardNC24rsV2, StandardNC24rsV3, StandardNC24sV2, StandardNC24sV3, StandardNC6, StandardNC6sV2, StandardNC6sV3, StandardND12s, StandardND24rs, StandardND24s, StandardND6s, StandardNV12, StandardNV24, StandardNV6}
-}
-
-// WeekDay enumerates the values for week day.
-type WeekDay string
-
-const (
- // Friday ...
- Friday WeekDay = "Friday"
- // Monday ...
- Monday WeekDay = "Monday"
- // Saturday ...
- Saturday WeekDay = "Saturday"
- // Sunday ...
- Sunday WeekDay = "Sunday"
- // Thursday ...
- Thursday WeekDay = "Thursday"
- // Tuesday ...
- Tuesday WeekDay = "Tuesday"
- // Wednesday ...
- Wednesday WeekDay = "Wednesday"
-)
-
-// PossibleWeekDayValues returns an array of possible values for the WeekDay const type.
-func PossibleWeekDayValues() []WeekDay {
- return []WeekDay{Friday, Monday, Saturday, Sunday, Thursday, Tuesday, Wednesday}
-}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/CHANGELOG.md b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/CHANGELOG.md
new file mode 100644
index 0000000000000..11a98d4b7b27a
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/CHANGELOG.md
@@ -0,0 +1,473 @@
+# Change History
+
+## Breaking Changes
+
+### Removed Constants
+
+1. AgentPoolMode.System
+1. AgentPoolMode.User
+1. AgentPoolType.AvailabilitySet
+1. AgentPoolType.VirtualMachineScaleSets
+1. Code.Running
+1. Code.Stopped
+1. ConnectionStatus.Approved
+1. ConnectionStatus.Disconnected
+1. ConnectionStatus.Pending
+1. ConnectionStatus.Rejected
+1. Expander.LeastWaste
+1. Expander.MostPods
+1. Expander.Priority
+1. Expander.Random
+1. ExtendedLocationTypes.EdgeZone
+1. GPUInstanceProfile.MIG1g
+1. GPUInstanceProfile.MIG2g
+1. GPUInstanceProfile.MIG3g
+1. GPUInstanceProfile.MIG4g
+1. GPUInstanceProfile.MIG7g
+1. KubeletDiskType.OS
+1. KubeletDiskType.Temporary
+1. LicenseType.None
+1. LicenseType.WindowsServer
+1. LoadBalancerSku.Basic
+1. LoadBalancerSku.Standard
+1. ManagedClusterPodIdentityProvisioningState.Assigned
+1. ManagedClusterPodIdentityProvisioningState.Deleting
+1. ManagedClusterPodIdentityProvisioningState.Failed
+1. ManagedClusterPodIdentityProvisioningState.Updating
+1. ManagedClusterSKUTier.Free
+1. ManagedClusterSKUTier.Paid
+1. NetworkMode.Bridge
+1. NetworkMode.Transparent
+1. NetworkPlugin.Azure
+1. NetworkPlugin.Kubenet
+1. OSDiskType.Ephemeral
+1. OSDiskType.Managed
+1. OSSKU.CBLMariner
+1. OSSKU.Ubuntu
+1. OSType.Linux
+1. OSType.Windows
+1. OutboundType.LoadBalancer
+1. OutboundType.UserDefinedRouting
+1. ScaleSetEvictionPolicy.Deallocate
+1. ScaleSetEvictionPolicy.Delete
+1. ScaleSetPriority.Regular
+1. ScaleSetPriority.Spot
+1. StorageProfileTypes.ManagedDisks
+1. StorageProfileTypes.StorageAccount
+1. VMSizeTypes.StandardA1
+1. VMSizeTypes.StandardA10
+1. VMSizeTypes.StandardA11
+1. VMSizeTypes.StandardA1V2
+1. VMSizeTypes.StandardA2
+1. VMSizeTypes.StandardA2V2
+1. VMSizeTypes.StandardA2mV2
+1. VMSizeTypes.StandardA3
+1. VMSizeTypes.StandardA4
+1. VMSizeTypes.StandardA4V2
+1. VMSizeTypes.StandardA4mV2
+1. VMSizeTypes.StandardA5
+1. VMSizeTypes.StandardA6
+1. VMSizeTypes.StandardA7
+1. VMSizeTypes.StandardA8
+1. VMSizeTypes.StandardA8V2
+1. VMSizeTypes.StandardA8mV2
+1. VMSizeTypes.StandardA9
+1. VMSizeTypes.StandardB2ms
+1. VMSizeTypes.StandardB2s
+1. VMSizeTypes.StandardB4ms
+1. VMSizeTypes.StandardB8ms
+1. VMSizeTypes.StandardD1
+1. VMSizeTypes.StandardD11
+1. VMSizeTypes.StandardD11V2
+1. VMSizeTypes.StandardD11V2Promo
+1. VMSizeTypes.StandardD12
+1. VMSizeTypes.StandardD12V2
+1. VMSizeTypes.StandardD12V2Promo
+1. VMSizeTypes.StandardD13
+1. VMSizeTypes.StandardD13V2
+1. VMSizeTypes.StandardD13V2Promo
+1. VMSizeTypes.StandardD14
+1. VMSizeTypes.StandardD14V2
+1. VMSizeTypes.StandardD14V2Promo
+1. VMSizeTypes.StandardD15V2
+1. VMSizeTypes.StandardD16V3
+1. VMSizeTypes.StandardD16sV3
+1. VMSizeTypes.StandardD1V2
+1. VMSizeTypes.StandardD2
+1. VMSizeTypes.StandardD2V2
+1. VMSizeTypes.StandardD2V2Promo
+1. VMSizeTypes.StandardD2V3
+1. VMSizeTypes.StandardD2sV3
+1. VMSizeTypes.StandardD3
+1. VMSizeTypes.StandardD32V3
+1. VMSizeTypes.StandardD32sV3
+1. VMSizeTypes.StandardD3V2
+1. VMSizeTypes.StandardD3V2Promo
+1. VMSizeTypes.StandardD4
+1. VMSizeTypes.StandardD4V2
+1. VMSizeTypes.StandardD4V2Promo
+1. VMSizeTypes.StandardD4V3
+1. VMSizeTypes.StandardD4sV3
+1. VMSizeTypes.StandardD5V2
+1. VMSizeTypes.StandardD5V2Promo
+1. VMSizeTypes.StandardD64V3
+1. VMSizeTypes.StandardD64sV3
+1. VMSizeTypes.StandardD8V3
+1. VMSizeTypes.StandardD8sV3
+1. VMSizeTypes.StandardDS1
+1. VMSizeTypes.StandardDS11
+1. VMSizeTypes.StandardDS11V2
+1. VMSizeTypes.StandardDS11V2Promo
+1. VMSizeTypes.StandardDS12
+1. VMSizeTypes.StandardDS12V2
+1. VMSizeTypes.StandardDS12V2Promo
+1. VMSizeTypes.StandardDS13
+1. VMSizeTypes.StandardDS132V2
+1. VMSizeTypes.StandardDS134V2
+1. VMSizeTypes.StandardDS13V2
+1. VMSizeTypes.StandardDS13V2Promo
+1. VMSizeTypes.StandardDS14
+1. VMSizeTypes.StandardDS144V2
+1. VMSizeTypes.StandardDS148V2
+1. VMSizeTypes.StandardDS14V2
+1. VMSizeTypes.StandardDS14V2Promo
+1. VMSizeTypes.StandardDS15V2
+1. VMSizeTypes.StandardDS1V2
+1. VMSizeTypes.StandardDS2
+1. VMSizeTypes.StandardDS2V2
+1. VMSizeTypes.StandardDS2V2Promo
+1. VMSizeTypes.StandardDS3
+1. VMSizeTypes.StandardDS3V2
+1. VMSizeTypes.StandardDS3V2Promo
+1. VMSizeTypes.StandardDS4
+1. VMSizeTypes.StandardDS4V2
+1. VMSizeTypes.StandardDS4V2Promo
+1. VMSizeTypes.StandardDS5V2
+1. VMSizeTypes.StandardDS5V2Promo
+1. VMSizeTypes.StandardE16V3
+1. VMSizeTypes.StandardE16sV3
+1. VMSizeTypes.StandardE2V3
+1. VMSizeTypes.StandardE2sV3
+1. VMSizeTypes.StandardE3216sV3
+1. VMSizeTypes.StandardE328sV3
+1. VMSizeTypes.StandardE32V3
+1. VMSizeTypes.StandardE32sV3
+1. VMSizeTypes.StandardE4V3
+1. VMSizeTypes.StandardE4sV3
+1. VMSizeTypes.StandardE6416sV3
+1. VMSizeTypes.StandardE6432sV3
+1. VMSizeTypes.StandardE64V3
+1. VMSizeTypes.StandardE64sV3
+1. VMSizeTypes.StandardE8V3
+1. VMSizeTypes.StandardE8sV3
+1. VMSizeTypes.StandardF1
+1. VMSizeTypes.StandardF16
+1. VMSizeTypes.StandardF16s
+1. VMSizeTypes.StandardF16sV2
+1. VMSizeTypes.StandardF1s
+1. VMSizeTypes.StandardF2
+1. VMSizeTypes.StandardF2s
+1. VMSizeTypes.StandardF2sV2
+1. VMSizeTypes.StandardF32sV2
+1. VMSizeTypes.StandardF4
+1. VMSizeTypes.StandardF4s
+1. VMSizeTypes.StandardF4sV2
+1. VMSizeTypes.StandardF64sV2
+1. VMSizeTypes.StandardF72sV2
+1. VMSizeTypes.StandardF8
+1. VMSizeTypes.StandardF8s
+1. VMSizeTypes.StandardF8sV2
+1. VMSizeTypes.StandardG1
+1. VMSizeTypes.StandardG2
+1. VMSizeTypes.StandardG3
+1. VMSizeTypes.StandardG4
+1. VMSizeTypes.StandardG5
+1. VMSizeTypes.StandardGS1
+1. VMSizeTypes.StandardGS2
+1. VMSizeTypes.StandardGS3
+1. VMSizeTypes.StandardGS4
+1. VMSizeTypes.StandardGS44
+1. VMSizeTypes.StandardGS48
+1. VMSizeTypes.StandardGS5
+1. VMSizeTypes.StandardGS516
+1. VMSizeTypes.StandardGS58
+1. VMSizeTypes.StandardH16
+1. VMSizeTypes.StandardH16m
+1. VMSizeTypes.StandardH16mr
+1. VMSizeTypes.StandardH16r
+1. VMSizeTypes.StandardH8
+1. VMSizeTypes.StandardH8m
+1. VMSizeTypes.StandardL16s
+1. VMSizeTypes.StandardL32s
+1. VMSizeTypes.StandardL4s
+1. VMSizeTypes.StandardL8s
+1. VMSizeTypes.StandardM12832ms
+1. VMSizeTypes.StandardM12864ms
+1. VMSizeTypes.StandardM128ms
+1. VMSizeTypes.StandardM128s
+1. VMSizeTypes.StandardM6416ms
+1. VMSizeTypes.StandardM6432ms
+1. VMSizeTypes.StandardM64ms
+1. VMSizeTypes.StandardM64s
+1. VMSizeTypes.StandardNC12
+1. VMSizeTypes.StandardNC12sV2
+1. VMSizeTypes.StandardNC12sV3
+1. VMSizeTypes.StandardNC24
+1. VMSizeTypes.StandardNC24r
+1. VMSizeTypes.StandardNC24rsV2
+1. VMSizeTypes.StandardNC24rsV3
+1. VMSizeTypes.StandardNC24sV2
+1. VMSizeTypes.StandardNC24sV3
+1. VMSizeTypes.StandardNC6
+1. VMSizeTypes.StandardNC6sV2
+1. VMSizeTypes.StandardNC6sV3
+1. VMSizeTypes.StandardND12s
+1. VMSizeTypes.StandardND24rs
+1. VMSizeTypes.StandardND24s
+1. VMSizeTypes.StandardND6s
+1. VMSizeTypes.StandardNV12
+1. VMSizeTypes.StandardNV24
+1. VMSizeTypes.StandardNV6
+1. WeekDay.Friday
+1. WeekDay.Monday
+1. WeekDay.Saturday
+1. WeekDay.Sunday
+1. WeekDay.Thursday
+1. WeekDay.Tuesday
+1. WeekDay.Wednesday
+
+## Additive Changes
+
+### New Constants
+
+1. AgentPoolMode.AgentPoolModeSystem
+1. AgentPoolMode.AgentPoolModeUser
+1. AgentPoolType.AgentPoolTypeAvailabilitySet
+1. AgentPoolType.AgentPoolTypeVirtualMachineScaleSets
+1. Code.CodeRunning
+1. Code.CodeStopped
+1. ConnectionStatus.ConnectionStatusApproved
+1. ConnectionStatus.ConnectionStatusDisconnected
+1. ConnectionStatus.ConnectionStatusPending
+1. ConnectionStatus.ConnectionStatusRejected
+1. Expander.ExpanderLeastWaste
+1. Expander.ExpanderMostPods
+1. Expander.ExpanderPriority
+1. Expander.ExpanderRandom
+1. ExtendedLocationTypes.ExtendedLocationTypesEdgeZone
+1. GPUInstanceProfile.GPUInstanceProfileMIG1g
+1. GPUInstanceProfile.GPUInstanceProfileMIG2g
+1. GPUInstanceProfile.GPUInstanceProfileMIG3g
+1. GPUInstanceProfile.GPUInstanceProfileMIG4g
+1. GPUInstanceProfile.GPUInstanceProfileMIG7g
+1. KubeletDiskType.KubeletDiskTypeOS
+1. KubeletDiskType.KubeletDiskTypeTemporary
+1. LicenseType.LicenseTypeNone
+1. LicenseType.LicenseTypeWindowsServer
+1. LoadBalancerSku.LoadBalancerSkuBasic
+1. LoadBalancerSku.LoadBalancerSkuStandard
+1. ManagedClusterPodIdentityProvisioningState.ManagedClusterPodIdentityProvisioningStateAssigned
+1. ManagedClusterPodIdentityProvisioningState.ManagedClusterPodIdentityProvisioningStateDeleting
+1. ManagedClusterPodIdentityProvisioningState.ManagedClusterPodIdentityProvisioningStateFailed
+1. ManagedClusterPodIdentityProvisioningState.ManagedClusterPodIdentityProvisioningStateUpdating
+1. ManagedClusterSKUTier.ManagedClusterSKUTierFree
+1. ManagedClusterSKUTier.ManagedClusterSKUTierPaid
+1. NetworkMode.NetworkModeBridge
+1. NetworkMode.NetworkModeTransparent
+1. NetworkPlugin.NetworkPluginAzure
+1. NetworkPlugin.NetworkPluginKubenet
+1. OSDiskType.OSDiskTypeEphemeral
+1. OSDiskType.OSDiskTypeManaged
+1. OSSKU.OSSKUCBLMariner
+1. OSSKU.OSSKUUbuntu
+1. OSType.OSTypeLinux
+1. OSType.OSTypeWindows
+1. OutboundType.OutboundTypeLoadBalancer
+1. OutboundType.OutboundTypeUserDefinedRouting
+1. ScaleSetEvictionPolicy.ScaleSetEvictionPolicyDeallocate
+1. ScaleSetEvictionPolicy.ScaleSetEvictionPolicyDelete
+1. ScaleSetPriority.ScaleSetPriorityRegular
+1. ScaleSetPriority.ScaleSetPrioritySpot
+1. StorageProfileTypes.StorageProfileTypesManagedDisks
+1. StorageProfileTypes.StorageProfileTypesStorageAccount
+1. VMSizeTypes.VMSizeTypesStandardA1
+1. VMSizeTypes.VMSizeTypesStandardA10
+1. VMSizeTypes.VMSizeTypesStandardA11
+1. VMSizeTypes.VMSizeTypesStandardA1V2
+1. VMSizeTypes.VMSizeTypesStandardA2
+1. VMSizeTypes.VMSizeTypesStandardA2V2
+1. VMSizeTypes.VMSizeTypesStandardA2mV2
+1. VMSizeTypes.VMSizeTypesStandardA3
+1. VMSizeTypes.VMSizeTypesStandardA4
+1. VMSizeTypes.VMSizeTypesStandardA4V2
+1. VMSizeTypes.VMSizeTypesStandardA4mV2
+1. VMSizeTypes.VMSizeTypesStandardA5
+1. VMSizeTypes.VMSizeTypesStandardA6
+1. VMSizeTypes.VMSizeTypesStandardA7
+1. VMSizeTypes.VMSizeTypesStandardA8
+1. VMSizeTypes.VMSizeTypesStandardA8V2
+1. VMSizeTypes.VMSizeTypesStandardA8mV2
+1. VMSizeTypes.VMSizeTypesStandardA9
+1. VMSizeTypes.VMSizeTypesStandardB2ms
+1. VMSizeTypes.VMSizeTypesStandardB2s
+1. VMSizeTypes.VMSizeTypesStandardB4ms
+1. VMSizeTypes.VMSizeTypesStandardB8ms
+1. VMSizeTypes.VMSizeTypesStandardD1
+1. VMSizeTypes.VMSizeTypesStandardD11
+1. VMSizeTypes.VMSizeTypesStandardD11V2
+1. VMSizeTypes.VMSizeTypesStandardD11V2Promo
+1. VMSizeTypes.VMSizeTypesStandardD12
+1. VMSizeTypes.VMSizeTypesStandardD12V2
+1. VMSizeTypes.VMSizeTypesStandardD12V2Promo
+1. VMSizeTypes.VMSizeTypesStandardD13
+1. VMSizeTypes.VMSizeTypesStandardD13V2
+1. VMSizeTypes.VMSizeTypesStandardD13V2Promo
+1. VMSizeTypes.VMSizeTypesStandardD14
+1. VMSizeTypes.VMSizeTypesStandardD14V2
+1. VMSizeTypes.VMSizeTypesStandardD14V2Promo
+1. VMSizeTypes.VMSizeTypesStandardD15V2
+1. VMSizeTypes.VMSizeTypesStandardD16V3
+1. VMSizeTypes.VMSizeTypesStandardD16sV3
+1. VMSizeTypes.VMSizeTypesStandardD1V2
+1. VMSizeTypes.VMSizeTypesStandardD2
+1. VMSizeTypes.VMSizeTypesStandardD2V2
+1. VMSizeTypes.VMSizeTypesStandardD2V2Promo
+1. VMSizeTypes.VMSizeTypesStandardD2V3
+1. VMSizeTypes.VMSizeTypesStandardD2sV3
+1. VMSizeTypes.VMSizeTypesStandardD3
+1. VMSizeTypes.VMSizeTypesStandardD32V3
+1. VMSizeTypes.VMSizeTypesStandardD32sV3
+1. VMSizeTypes.VMSizeTypesStandardD3V2
+1. VMSizeTypes.VMSizeTypesStandardD3V2Promo
+1. VMSizeTypes.VMSizeTypesStandardD4
+1. VMSizeTypes.VMSizeTypesStandardD4V2
+1. VMSizeTypes.VMSizeTypesStandardD4V2Promo
+1. VMSizeTypes.VMSizeTypesStandardD4V3
+1. VMSizeTypes.VMSizeTypesStandardD4sV3
+1. VMSizeTypes.VMSizeTypesStandardD5V2
+1. VMSizeTypes.VMSizeTypesStandardD5V2Promo
+1. VMSizeTypes.VMSizeTypesStandardD64V3
+1. VMSizeTypes.VMSizeTypesStandardD64sV3
+1. VMSizeTypes.VMSizeTypesStandardD8V3
+1. VMSizeTypes.VMSizeTypesStandardD8sV3
+1. VMSizeTypes.VMSizeTypesStandardDS1
+1. VMSizeTypes.VMSizeTypesStandardDS11
+1. VMSizeTypes.VMSizeTypesStandardDS11V2
+1. VMSizeTypes.VMSizeTypesStandardDS11V2Promo
+1. VMSizeTypes.VMSizeTypesStandardDS12
+1. VMSizeTypes.VMSizeTypesStandardDS12V2
+1. VMSizeTypes.VMSizeTypesStandardDS12V2Promo
+1. VMSizeTypes.VMSizeTypesStandardDS13
+1. VMSizeTypes.VMSizeTypesStandardDS132V2
+1. VMSizeTypes.VMSizeTypesStandardDS134V2
+1. VMSizeTypes.VMSizeTypesStandardDS13V2
+1. VMSizeTypes.VMSizeTypesStandardDS13V2Promo
+1. VMSizeTypes.VMSizeTypesStandardDS14
+1. VMSizeTypes.VMSizeTypesStandardDS144V2
+1. VMSizeTypes.VMSizeTypesStandardDS148V2
+1. VMSizeTypes.VMSizeTypesStandardDS14V2
+1. VMSizeTypes.VMSizeTypesStandardDS14V2Promo
+1. VMSizeTypes.VMSizeTypesStandardDS15V2
+1. VMSizeTypes.VMSizeTypesStandardDS1V2
+1. VMSizeTypes.VMSizeTypesStandardDS2
+1. VMSizeTypes.VMSizeTypesStandardDS2V2
+1. VMSizeTypes.VMSizeTypesStandardDS2V2Promo
+1. VMSizeTypes.VMSizeTypesStandardDS3
+1. VMSizeTypes.VMSizeTypesStandardDS3V2
+1. VMSizeTypes.VMSizeTypesStandardDS3V2Promo
+1. VMSizeTypes.VMSizeTypesStandardDS4
+1. VMSizeTypes.VMSizeTypesStandardDS4V2
+1. VMSizeTypes.VMSizeTypesStandardDS4V2Promo
+1. VMSizeTypes.VMSizeTypesStandardDS5V2
+1. VMSizeTypes.VMSizeTypesStandardDS5V2Promo
+1. VMSizeTypes.VMSizeTypesStandardE16V3
+1. VMSizeTypes.VMSizeTypesStandardE16sV3
+1. VMSizeTypes.VMSizeTypesStandardE2V3
+1. VMSizeTypes.VMSizeTypesStandardE2sV3
+1. VMSizeTypes.VMSizeTypesStandardE3216sV3
+1. VMSizeTypes.VMSizeTypesStandardE328sV3
+1. VMSizeTypes.VMSizeTypesStandardE32V3
+1. VMSizeTypes.VMSizeTypesStandardE32sV3
+1. VMSizeTypes.VMSizeTypesStandardE4V3
+1. VMSizeTypes.VMSizeTypesStandardE4sV3
+1. VMSizeTypes.VMSizeTypesStandardE6416sV3
+1. VMSizeTypes.VMSizeTypesStandardE6432sV3
+1. VMSizeTypes.VMSizeTypesStandardE64V3
+1. VMSizeTypes.VMSizeTypesStandardE64sV3
+1. VMSizeTypes.VMSizeTypesStandardE8V3
+1. VMSizeTypes.VMSizeTypesStandardE8sV3
+1. VMSizeTypes.VMSizeTypesStandardF1
+1. VMSizeTypes.VMSizeTypesStandardF16
+1. VMSizeTypes.VMSizeTypesStandardF16s
+1. VMSizeTypes.VMSizeTypesStandardF16sV2
+1. VMSizeTypes.VMSizeTypesStandardF1s
+1. VMSizeTypes.VMSizeTypesStandardF2
+1. VMSizeTypes.VMSizeTypesStandardF2s
+1. VMSizeTypes.VMSizeTypesStandardF2sV2
+1. VMSizeTypes.VMSizeTypesStandardF32sV2
+1. VMSizeTypes.VMSizeTypesStandardF4
+1. VMSizeTypes.VMSizeTypesStandardF4s
+1. VMSizeTypes.VMSizeTypesStandardF4sV2
+1. VMSizeTypes.VMSizeTypesStandardF64sV2
+1. VMSizeTypes.VMSizeTypesStandardF72sV2
+1. VMSizeTypes.VMSizeTypesStandardF8
+1. VMSizeTypes.VMSizeTypesStandardF8s
+1. VMSizeTypes.VMSizeTypesStandardF8sV2
+1. VMSizeTypes.VMSizeTypesStandardG1
+1. VMSizeTypes.VMSizeTypesStandardG2
+1. VMSizeTypes.VMSizeTypesStandardG3
+1. VMSizeTypes.VMSizeTypesStandardG4
+1. VMSizeTypes.VMSizeTypesStandardG5
+1. VMSizeTypes.VMSizeTypesStandardGS1
+1. VMSizeTypes.VMSizeTypesStandardGS2
+1. VMSizeTypes.VMSizeTypesStandardGS3
+1. VMSizeTypes.VMSizeTypesStandardGS4
+1. VMSizeTypes.VMSizeTypesStandardGS44
+1. VMSizeTypes.VMSizeTypesStandardGS48
+1. VMSizeTypes.VMSizeTypesStandardGS5
+1. VMSizeTypes.VMSizeTypesStandardGS516
+1. VMSizeTypes.VMSizeTypesStandardGS58
+1. VMSizeTypes.VMSizeTypesStandardH16
+1. VMSizeTypes.VMSizeTypesStandardH16m
+1. VMSizeTypes.VMSizeTypesStandardH16mr
+1. VMSizeTypes.VMSizeTypesStandardH16r
+1. VMSizeTypes.VMSizeTypesStandardH8
+1. VMSizeTypes.VMSizeTypesStandardH8m
+1. VMSizeTypes.VMSizeTypesStandardL16s
+1. VMSizeTypes.VMSizeTypesStandardL32s
+1. VMSizeTypes.VMSizeTypesStandardL4s
+1. VMSizeTypes.VMSizeTypesStandardL8s
+1. VMSizeTypes.VMSizeTypesStandardM12832ms
+1. VMSizeTypes.VMSizeTypesStandardM12864ms
+1. VMSizeTypes.VMSizeTypesStandardM128ms
+1. VMSizeTypes.VMSizeTypesStandardM128s
+1. VMSizeTypes.VMSizeTypesStandardM6416ms
+1. VMSizeTypes.VMSizeTypesStandardM6432ms
+1. VMSizeTypes.VMSizeTypesStandardM64ms
+1. VMSizeTypes.VMSizeTypesStandardM64s
+1. VMSizeTypes.VMSizeTypesStandardNC12
+1. VMSizeTypes.VMSizeTypesStandardNC12sV2
+1. VMSizeTypes.VMSizeTypesStandardNC12sV3
+1. VMSizeTypes.VMSizeTypesStandardNC24
+1. VMSizeTypes.VMSizeTypesStandardNC24r
+1. VMSizeTypes.VMSizeTypesStandardNC24rsV2
+1. VMSizeTypes.VMSizeTypesStandardNC24rsV3
+1. VMSizeTypes.VMSizeTypesStandardNC24sV2
+1. VMSizeTypes.VMSizeTypesStandardNC24sV3
+1. VMSizeTypes.VMSizeTypesStandardNC6
+1. VMSizeTypes.VMSizeTypesStandardNC6sV2
+1. VMSizeTypes.VMSizeTypesStandardNC6sV3
+1. VMSizeTypes.VMSizeTypesStandardND12s
+1. VMSizeTypes.VMSizeTypesStandardND24rs
+1. VMSizeTypes.VMSizeTypesStandardND24s
+1. VMSizeTypes.VMSizeTypesStandardND6s
+1. VMSizeTypes.VMSizeTypesStandardNV12
+1. VMSizeTypes.VMSizeTypesStandardNV24
+1. VMSizeTypes.VMSizeTypesStandardNV6
+1. WeekDay.WeekDayFriday
+1. WeekDay.WeekDayMonday
+1. WeekDay.WeekDaySaturday
+1. WeekDay.WeekDaySunday
+1. WeekDay.WeekDayThursday
+1. WeekDay.WeekDayTuesday
+1. WeekDay.WeekDayWednesday
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/_meta.json b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/_meta.json
new file mode 100644
index 0000000000000..f875f9e4c0100
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/_meta.json
@@ -0,0 +1,11 @@
+{
+ "commit": "5d89c9807d3e84a5890b381a68a308198f9ef141",
+ "readme": "/_/azure-rest-api-specs/specification/containerservice/resource-manager/readme.md",
+ "tag": "package-2021-03",
+ "use": "@microsoft.azure/autorest.go@2.1.180",
+ "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
+ "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2021-03 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION --enum-prefix /_/azure-rest-api-specs/specification/containerservice/resource-manager/readme.md",
+ "additional_properties": {
+ "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION --enum-prefix"
+ }
+}
\ No newline at end of file
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/agentpools.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/agentpools.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/agentpools.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/agentpools.go
index 219d7b2954559..d441927755cb7 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/agentpools.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/agentpools.go
@@ -89,7 +89,7 @@ func (client AgentPoolsClient) CreateOrUpdatePreparer(ctx context.Context, resou
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -181,7 +181,7 @@ func (client AgentPoolsClient) DeletePreparer(ctx context.Context, resourceGroup
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -277,7 +277,7 @@ func (client AgentPoolsClient) GetPreparer(ctx context.Context, resourceGroupNam
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -363,7 +363,7 @@ func (client AgentPoolsClient) GetAvailableAgentPoolVersionsPreparer(ctx context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -452,7 +452,7 @@ func (client AgentPoolsClient) GetUpgradeProfilePreparer(ctx context.Context, re
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -544,7 +544,7 @@ func (client AgentPoolsClient) ListPreparer(ctx context.Context, resourceGroupNa
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -662,7 +662,7 @@ func (client AgentPoolsClient) UpgradeNodeImageVersionPreparer(ctx context.Conte
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/client.go
similarity index 97%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/client.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/client.go
index 65ebe7fd7fc9a..e2e91a94955d4 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/client.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/client.go
@@ -1,4 +1,4 @@
-// Package containerservice implements the Azure ARM Containerservice service API version 2021-02-01.
+// Package containerservice implements the Azure ARM Containerservice service API version 2021-03-01.
//
// The Container Service Client.
package containerservice
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/enums.go
new file mode 100644
index 0000000000000..f7b27ec02d006
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/enums.go
@@ -0,0 +1,828 @@
+package containerservice
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+// AgentPoolMode enumerates the values for agent pool mode.
+type AgentPoolMode string
+
+const (
+ // AgentPoolModeSystem ...
+ AgentPoolModeSystem AgentPoolMode = "System"
+ // AgentPoolModeUser ...
+ AgentPoolModeUser AgentPoolMode = "User"
+)
+
+// PossibleAgentPoolModeValues returns an array of possible values for the AgentPoolMode const type.
+func PossibleAgentPoolModeValues() []AgentPoolMode {
+ return []AgentPoolMode{AgentPoolModeSystem, AgentPoolModeUser}
+}
+
+// AgentPoolType enumerates the values for agent pool type.
+type AgentPoolType string
+
+const (
+ // AgentPoolTypeAvailabilitySet ...
+ AgentPoolTypeAvailabilitySet AgentPoolType = "AvailabilitySet"
+ // AgentPoolTypeVirtualMachineScaleSets ...
+ AgentPoolTypeVirtualMachineScaleSets AgentPoolType = "VirtualMachineScaleSets"
+)
+
+// PossibleAgentPoolTypeValues returns an array of possible values for the AgentPoolType const type.
+func PossibleAgentPoolTypeValues() []AgentPoolType {
+ return []AgentPoolType{AgentPoolTypeAvailabilitySet, AgentPoolTypeVirtualMachineScaleSets}
+}
+
+// Code enumerates the values for code.
+type Code string
+
+const (
+ // CodeRunning ...
+ CodeRunning Code = "Running"
+ // CodeStopped ...
+ CodeStopped Code = "Stopped"
+)
+
+// PossibleCodeValues returns an array of possible values for the Code const type.
+func PossibleCodeValues() []Code {
+ return []Code{CodeRunning, CodeStopped}
+}
+
+// ConnectionStatus enumerates the values for connection status.
+type ConnectionStatus string
+
+const (
+ // ConnectionStatusApproved ...
+ ConnectionStatusApproved ConnectionStatus = "Approved"
+ // ConnectionStatusDisconnected ...
+ ConnectionStatusDisconnected ConnectionStatus = "Disconnected"
+ // ConnectionStatusPending ...
+ ConnectionStatusPending ConnectionStatus = "Pending"
+ // ConnectionStatusRejected ...
+ ConnectionStatusRejected ConnectionStatus = "Rejected"
+)
+
+// PossibleConnectionStatusValues returns an array of possible values for the ConnectionStatus const type.
+func PossibleConnectionStatusValues() []ConnectionStatus {
+ return []ConnectionStatus{ConnectionStatusApproved, ConnectionStatusDisconnected, ConnectionStatusPending, ConnectionStatusRejected}
+}
+
+// CreatedByType enumerates the values for created by type.
+type CreatedByType string
+
+const (
+ // CreatedByTypeApplication ...
+ CreatedByTypeApplication CreatedByType = "Application"
+ // CreatedByTypeKey ...
+ CreatedByTypeKey CreatedByType = "Key"
+ // CreatedByTypeManagedIdentity ...
+ CreatedByTypeManagedIdentity CreatedByType = "ManagedIdentity"
+ // CreatedByTypeUser ...
+ CreatedByTypeUser CreatedByType = "User"
+)
+
+// PossibleCreatedByTypeValues returns an array of possible values for the CreatedByType const type.
+func PossibleCreatedByTypeValues() []CreatedByType {
+ return []CreatedByType{CreatedByTypeApplication, CreatedByTypeKey, CreatedByTypeManagedIdentity, CreatedByTypeUser}
+}
+
+// Expander enumerates the values for expander.
+type Expander string
+
+const (
+ // ExpanderLeastWaste ...
+ ExpanderLeastWaste Expander = "least-waste"
+ // ExpanderMostPods ...
+ ExpanderMostPods Expander = "most-pods"
+ // ExpanderPriority ...
+ ExpanderPriority Expander = "priority"
+ // ExpanderRandom ...
+ ExpanderRandom Expander = "random"
+)
+
+// PossibleExpanderValues returns an array of possible values for the Expander const type.
+func PossibleExpanderValues() []Expander {
+ return []Expander{ExpanderLeastWaste, ExpanderMostPods, ExpanderPriority, ExpanderRandom}
+}
+
+// ExtendedLocationTypes enumerates the values for extended location types.
+type ExtendedLocationTypes string
+
+const (
+ // ExtendedLocationTypesEdgeZone ...
+ ExtendedLocationTypesEdgeZone ExtendedLocationTypes = "EdgeZone"
+)
+
+// PossibleExtendedLocationTypesValues returns an array of possible values for the ExtendedLocationTypes const type.
+func PossibleExtendedLocationTypesValues() []ExtendedLocationTypes {
+ return []ExtendedLocationTypes{ExtendedLocationTypesEdgeZone}
+}
+
+// GPUInstanceProfile enumerates the values for gpu instance profile.
+type GPUInstanceProfile string
+
+const (
+ // GPUInstanceProfileMIG1g ...
+ GPUInstanceProfileMIG1g GPUInstanceProfile = "MIG1g"
+ // GPUInstanceProfileMIG2g ...
+ GPUInstanceProfileMIG2g GPUInstanceProfile = "MIG2g"
+ // GPUInstanceProfileMIG3g ...
+ GPUInstanceProfileMIG3g GPUInstanceProfile = "MIG3g"
+ // GPUInstanceProfileMIG4g ...
+ GPUInstanceProfileMIG4g GPUInstanceProfile = "MIG4g"
+ // GPUInstanceProfileMIG7g ...
+ GPUInstanceProfileMIG7g GPUInstanceProfile = "MIG7g"
+)
+
+// PossibleGPUInstanceProfileValues returns an array of possible values for the GPUInstanceProfile const type.
+func PossibleGPUInstanceProfileValues() []GPUInstanceProfile {
+ return []GPUInstanceProfile{GPUInstanceProfileMIG1g, GPUInstanceProfileMIG2g, GPUInstanceProfileMIG3g, GPUInstanceProfileMIG4g, GPUInstanceProfileMIG7g}
+}
+
+// KubeletDiskType enumerates the values for kubelet disk type.
+type KubeletDiskType string
+
+const (
+ // KubeletDiskTypeOS ...
+ KubeletDiskTypeOS KubeletDiskType = "OS"
+ // KubeletDiskTypeTemporary ...
+ KubeletDiskTypeTemporary KubeletDiskType = "Temporary"
+)
+
+// PossibleKubeletDiskTypeValues returns an array of possible values for the KubeletDiskType const type.
+func PossibleKubeletDiskTypeValues() []KubeletDiskType {
+ return []KubeletDiskType{KubeletDiskTypeOS, KubeletDiskTypeTemporary}
+}
+
+// LicenseType enumerates the values for license type.
+type LicenseType string
+
+const (
+ // LicenseTypeNone ...
+ LicenseTypeNone LicenseType = "None"
+ // LicenseTypeWindowsServer ...
+ LicenseTypeWindowsServer LicenseType = "Windows_Server"
+)
+
+// PossibleLicenseTypeValues returns an array of possible values for the LicenseType const type.
+func PossibleLicenseTypeValues() []LicenseType {
+ return []LicenseType{LicenseTypeNone, LicenseTypeWindowsServer}
+}
+
+// LoadBalancerSku enumerates the values for load balancer sku.
+type LoadBalancerSku string
+
+const (
+ // LoadBalancerSkuBasic ...
+ LoadBalancerSkuBasic LoadBalancerSku = "basic"
+ // LoadBalancerSkuStandard ...
+ LoadBalancerSkuStandard LoadBalancerSku = "standard"
+)
+
+// PossibleLoadBalancerSkuValues returns an array of possible values for the LoadBalancerSku const type.
+func PossibleLoadBalancerSkuValues() []LoadBalancerSku {
+ return []LoadBalancerSku{LoadBalancerSkuBasic, LoadBalancerSkuStandard}
+}
+
+// ManagedClusterPodIdentityProvisioningState enumerates the values for managed cluster pod identity
+// provisioning state.
+type ManagedClusterPodIdentityProvisioningState string
+
+const (
+ // ManagedClusterPodIdentityProvisioningStateAssigned ...
+ ManagedClusterPodIdentityProvisioningStateAssigned ManagedClusterPodIdentityProvisioningState = "Assigned"
+ // ManagedClusterPodIdentityProvisioningStateDeleting ...
+ ManagedClusterPodIdentityProvisioningStateDeleting ManagedClusterPodIdentityProvisioningState = "Deleting"
+ // ManagedClusterPodIdentityProvisioningStateFailed ...
+ ManagedClusterPodIdentityProvisioningStateFailed ManagedClusterPodIdentityProvisioningState = "Failed"
+ // ManagedClusterPodIdentityProvisioningStateUpdating ...
+ ManagedClusterPodIdentityProvisioningStateUpdating ManagedClusterPodIdentityProvisioningState = "Updating"
+)
+
+// PossibleManagedClusterPodIdentityProvisioningStateValues returns an array of possible values for the ManagedClusterPodIdentityProvisioningState const type.
+func PossibleManagedClusterPodIdentityProvisioningStateValues() []ManagedClusterPodIdentityProvisioningState {
+ return []ManagedClusterPodIdentityProvisioningState{ManagedClusterPodIdentityProvisioningStateAssigned, ManagedClusterPodIdentityProvisioningStateDeleting, ManagedClusterPodIdentityProvisioningStateFailed, ManagedClusterPodIdentityProvisioningStateUpdating}
+}
+
+// ManagedClusterSKUName enumerates the values for managed cluster sku name.
+type ManagedClusterSKUName string
+
+const (
+ // ManagedClusterSKUNameBasic ...
+ ManagedClusterSKUNameBasic ManagedClusterSKUName = "Basic"
+)
+
+// PossibleManagedClusterSKUNameValues returns an array of possible values for the ManagedClusterSKUName const type.
+func PossibleManagedClusterSKUNameValues() []ManagedClusterSKUName {
+ return []ManagedClusterSKUName{ManagedClusterSKUNameBasic}
+}
+
+// ManagedClusterSKUTier enumerates the values for managed cluster sku tier.
+type ManagedClusterSKUTier string
+
+const (
+ // ManagedClusterSKUTierFree ...
+ ManagedClusterSKUTierFree ManagedClusterSKUTier = "Free"
+ // ManagedClusterSKUTierPaid ...
+ ManagedClusterSKUTierPaid ManagedClusterSKUTier = "Paid"
+)
+
+// PossibleManagedClusterSKUTierValues returns an array of possible values for the ManagedClusterSKUTier const type.
+func PossibleManagedClusterSKUTierValues() []ManagedClusterSKUTier {
+ return []ManagedClusterSKUTier{ManagedClusterSKUTierFree, ManagedClusterSKUTierPaid}
+}
+
+// NetworkMode enumerates the values for network mode.
+type NetworkMode string
+
+const (
+ // NetworkModeBridge ...
+ NetworkModeBridge NetworkMode = "bridge"
+ // NetworkModeTransparent ...
+ NetworkModeTransparent NetworkMode = "transparent"
+)
+
+// PossibleNetworkModeValues returns an array of possible values for the NetworkMode const type.
+func PossibleNetworkModeValues() []NetworkMode {
+ return []NetworkMode{NetworkModeBridge, NetworkModeTransparent}
+}
+
+// NetworkPlugin enumerates the values for network plugin.
+type NetworkPlugin string
+
+const (
+ // NetworkPluginAzure ...
+ NetworkPluginAzure NetworkPlugin = "azure"
+ // NetworkPluginKubenet ...
+ NetworkPluginKubenet NetworkPlugin = "kubenet"
+)
+
+// PossibleNetworkPluginValues returns an array of possible values for the NetworkPlugin const type.
+func PossibleNetworkPluginValues() []NetworkPlugin {
+ return []NetworkPlugin{NetworkPluginAzure, NetworkPluginKubenet}
+}
+
+// NetworkPolicy enumerates the values for network policy.
+type NetworkPolicy string
+
+const (
+ // NetworkPolicyAzure ...
+ NetworkPolicyAzure NetworkPolicy = "azure"
+ // NetworkPolicyCalico ...
+ NetworkPolicyCalico NetworkPolicy = "calico"
+)
+
+// PossibleNetworkPolicyValues returns an array of possible values for the NetworkPolicy const type.
+func PossibleNetworkPolicyValues() []NetworkPolicy {
+ return []NetworkPolicy{NetworkPolicyAzure, NetworkPolicyCalico}
+}
+
+// OSDiskType enumerates the values for os disk type.
+type OSDiskType string
+
+const (
+ // OSDiskTypeEphemeral ...
+ OSDiskTypeEphemeral OSDiskType = "Ephemeral"
+ // OSDiskTypeManaged ...
+ OSDiskTypeManaged OSDiskType = "Managed"
+)
+
+// PossibleOSDiskTypeValues returns an array of possible values for the OSDiskType const type.
+func PossibleOSDiskTypeValues() []OSDiskType {
+ return []OSDiskType{OSDiskTypeEphemeral, OSDiskTypeManaged}
+}
+
+// OSSKU enumerates the values for ossku.
+type OSSKU string
+
+const (
+ // OSSKUCBLMariner ...
+ OSSKUCBLMariner OSSKU = "CBLMariner"
+ // OSSKUUbuntu ...
+ OSSKUUbuntu OSSKU = "Ubuntu"
+)
+
+// PossibleOSSKUValues returns an array of possible values for the OSSKU const type.
+func PossibleOSSKUValues() []OSSKU {
+ return []OSSKU{OSSKUCBLMariner, OSSKUUbuntu}
+}
+
+// OSType enumerates the values for os type.
+type OSType string
+
+const (
+ // OSTypeLinux ...
+ OSTypeLinux OSType = "Linux"
+ // OSTypeWindows ...
+ OSTypeWindows OSType = "Windows"
+)
+
+// PossibleOSTypeValues returns an array of possible values for the OSType const type.
+func PossibleOSTypeValues() []OSType {
+ return []OSType{OSTypeLinux, OSTypeWindows}
+}
+
+// OutboundType enumerates the values for outbound type.
+type OutboundType string
+
+const (
+ // OutboundTypeLoadBalancer ...
+ OutboundTypeLoadBalancer OutboundType = "loadBalancer"
+ // OutboundTypeUserDefinedRouting ...
+ OutboundTypeUserDefinedRouting OutboundType = "userDefinedRouting"
+)
+
+// PossibleOutboundTypeValues returns an array of possible values for the OutboundType const type.
+func PossibleOutboundTypeValues() []OutboundType {
+ return []OutboundType{OutboundTypeLoadBalancer, OutboundTypeUserDefinedRouting}
+}
+
+// PrivateEndpointConnectionProvisioningState enumerates the values for private endpoint connection
+// provisioning state.
+type PrivateEndpointConnectionProvisioningState string
+
+const (
+ // PrivateEndpointConnectionProvisioningStateCreating ...
+ PrivateEndpointConnectionProvisioningStateCreating PrivateEndpointConnectionProvisioningState = "Creating"
+ // PrivateEndpointConnectionProvisioningStateDeleting ...
+ PrivateEndpointConnectionProvisioningStateDeleting PrivateEndpointConnectionProvisioningState = "Deleting"
+ // PrivateEndpointConnectionProvisioningStateFailed ...
+ PrivateEndpointConnectionProvisioningStateFailed PrivateEndpointConnectionProvisioningState = "Failed"
+ // PrivateEndpointConnectionProvisioningStateSucceeded ...
+ PrivateEndpointConnectionProvisioningStateSucceeded PrivateEndpointConnectionProvisioningState = "Succeeded"
+)
+
+// PossiblePrivateEndpointConnectionProvisioningStateValues returns an array of possible values for the PrivateEndpointConnectionProvisioningState const type.
+func PossiblePrivateEndpointConnectionProvisioningStateValues() []PrivateEndpointConnectionProvisioningState {
+ return []PrivateEndpointConnectionProvisioningState{PrivateEndpointConnectionProvisioningStateCreating, PrivateEndpointConnectionProvisioningStateDeleting, PrivateEndpointConnectionProvisioningStateFailed, PrivateEndpointConnectionProvisioningStateSucceeded}
+}
+
+// ResourceIdentityType enumerates the values for resource identity type.
+type ResourceIdentityType string
+
+const (
+ // ResourceIdentityTypeNone ...
+ ResourceIdentityTypeNone ResourceIdentityType = "None"
+ // ResourceIdentityTypeSystemAssigned ...
+ ResourceIdentityTypeSystemAssigned ResourceIdentityType = "SystemAssigned"
+ // ResourceIdentityTypeUserAssigned ...
+ ResourceIdentityTypeUserAssigned ResourceIdentityType = "UserAssigned"
+)
+
+// PossibleResourceIdentityTypeValues returns an array of possible values for the ResourceIdentityType const type.
+func PossibleResourceIdentityTypeValues() []ResourceIdentityType {
+ return []ResourceIdentityType{ResourceIdentityTypeNone, ResourceIdentityTypeSystemAssigned, ResourceIdentityTypeUserAssigned}
+}
+
+// ScaleSetEvictionPolicy enumerates the values for scale set eviction policy.
+type ScaleSetEvictionPolicy string
+
+const (
+ // ScaleSetEvictionPolicyDeallocate ...
+ ScaleSetEvictionPolicyDeallocate ScaleSetEvictionPolicy = "Deallocate"
+ // ScaleSetEvictionPolicyDelete ...
+ ScaleSetEvictionPolicyDelete ScaleSetEvictionPolicy = "Delete"
+)
+
+// PossibleScaleSetEvictionPolicyValues returns an array of possible values for the ScaleSetEvictionPolicy const type.
+func PossibleScaleSetEvictionPolicyValues() []ScaleSetEvictionPolicy {
+ return []ScaleSetEvictionPolicy{ScaleSetEvictionPolicyDeallocate, ScaleSetEvictionPolicyDelete}
+}
+
+// ScaleSetPriority enumerates the values for scale set priority.
+type ScaleSetPriority string
+
+const (
+ // ScaleSetPriorityRegular ...
+ ScaleSetPriorityRegular ScaleSetPriority = "Regular"
+ // ScaleSetPrioritySpot ...
+ ScaleSetPrioritySpot ScaleSetPriority = "Spot"
+)
+
+// PossibleScaleSetPriorityValues returns an array of possible values for the ScaleSetPriority const type.
+func PossibleScaleSetPriorityValues() []ScaleSetPriority {
+ return []ScaleSetPriority{ScaleSetPriorityRegular, ScaleSetPrioritySpot}
+}
+
+// StorageProfileTypes enumerates the values for storage profile types.
+type StorageProfileTypes string
+
+const (
+ // StorageProfileTypesManagedDisks ...
+ StorageProfileTypesManagedDisks StorageProfileTypes = "ManagedDisks"
+ // StorageProfileTypesStorageAccount ...
+ StorageProfileTypesStorageAccount StorageProfileTypes = "StorageAccount"
+)
+
+// PossibleStorageProfileTypesValues returns an array of possible values for the StorageProfileTypes const type.
+func PossibleStorageProfileTypesValues() []StorageProfileTypes {
+ return []StorageProfileTypes{StorageProfileTypesManagedDisks, StorageProfileTypesStorageAccount}
+}
+
+// UpgradeChannel enumerates the values for upgrade channel.
+type UpgradeChannel string
+
+const (
+ // UpgradeChannelNodeImage ...
+ UpgradeChannelNodeImage UpgradeChannel = "node-image"
+ // UpgradeChannelNone ...
+ UpgradeChannelNone UpgradeChannel = "none"
+ // UpgradeChannelPatch ...
+ UpgradeChannelPatch UpgradeChannel = "patch"
+ // UpgradeChannelRapid ...
+ UpgradeChannelRapid UpgradeChannel = "rapid"
+ // UpgradeChannelStable ...
+ UpgradeChannelStable UpgradeChannel = "stable"
+)
+
+// PossibleUpgradeChannelValues returns an array of possible values for the UpgradeChannel const type.
+func PossibleUpgradeChannelValues() []UpgradeChannel {
+ return []UpgradeChannel{UpgradeChannelNodeImage, UpgradeChannelNone, UpgradeChannelPatch, UpgradeChannelRapid, UpgradeChannelStable}
+}
+
+// VMSizeTypes enumerates the values for vm size types.
+type VMSizeTypes string
+
+const (
+ // VMSizeTypesStandardA1 ...
+ VMSizeTypesStandardA1 VMSizeTypes = "Standard_A1"
+ // VMSizeTypesStandardA10 ...
+ VMSizeTypesStandardA10 VMSizeTypes = "Standard_A10"
+ // VMSizeTypesStandardA11 ...
+ VMSizeTypesStandardA11 VMSizeTypes = "Standard_A11"
+ // VMSizeTypesStandardA1V2 ...
+ VMSizeTypesStandardA1V2 VMSizeTypes = "Standard_A1_v2"
+ // VMSizeTypesStandardA2 ...
+ VMSizeTypesStandardA2 VMSizeTypes = "Standard_A2"
+ // VMSizeTypesStandardA2mV2 ...
+ VMSizeTypesStandardA2mV2 VMSizeTypes = "Standard_A2m_v2"
+ // VMSizeTypesStandardA2V2 ...
+ VMSizeTypesStandardA2V2 VMSizeTypes = "Standard_A2_v2"
+ // VMSizeTypesStandardA3 ...
+ VMSizeTypesStandardA3 VMSizeTypes = "Standard_A3"
+ // VMSizeTypesStandardA4 ...
+ VMSizeTypesStandardA4 VMSizeTypes = "Standard_A4"
+ // VMSizeTypesStandardA4mV2 ...
+ VMSizeTypesStandardA4mV2 VMSizeTypes = "Standard_A4m_v2"
+ // VMSizeTypesStandardA4V2 ...
+ VMSizeTypesStandardA4V2 VMSizeTypes = "Standard_A4_v2"
+ // VMSizeTypesStandardA5 ...
+ VMSizeTypesStandardA5 VMSizeTypes = "Standard_A5"
+ // VMSizeTypesStandardA6 ...
+ VMSizeTypesStandardA6 VMSizeTypes = "Standard_A6"
+ // VMSizeTypesStandardA7 ...
+ VMSizeTypesStandardA7 VMSizeTypes = "Standard_A7"
+ // VMSizeTypesStandardA8 ...
+ VMSizeTypesStandardA8 VMSizeTypes = "Standard_A8"
+ // VMSizeTypesStandardA8mV2 ...
+ VMSizeTypesStandardA8mV2 VMSizeTypes = "Standard_A8m_v2"
+ // VMSizeTypesStandardA8V2 ...
+ VMSizeTypesStandardA8V2 VMSizeTypes = "Standard_A8_v2"
+ // VMSizeTypesStandardA9 ...
+ VMSizeTypesStandardA9 VMSizeTypes = "Standard_A9"
+ // VMSizeTypesStandardB2ms ...
+ VMSizeTypesStandardB2ms VMSizeTypes = "Standard_B2ms"
+ // VMSizeTypesStandardB2s ...
+ VMSizeTypesStandardB2s VMSizeTypes = "Standard_B2s"
+ // VMSizeTypesStandardB4ms ...
+ VMSizeTypesStandardB4ms VMSizeTypes = "Standard_B4ms"
+ // VMSizeTypesStandardB8ms ...
+ VMSizeTypesStandardB8ms VMSizeTypes = "Standard_B8ms"
+ // VMSizeTypesStandardD1 ...
+ VMSizeTypesStandardD1 VMSizeTypes = "Standard_D1"
+ // VMSizeTypesStandardD11 ...
+ VMSizeTypesStandardD11 VMSizeTypes = "Standard_D11"
+ // VMSizeTypesStandardD11V2 ...
+ VMSizeTypesStandardD11V2 VMSizeTypes = "Standard_D11_v2"
+ // VMSizeTypesStandardD11V2Promo ...
+ VMSizeTypesStandardD11V2Promo VMSizeTypes = "Standard_D11_v2_Promo"
+ // VMSizeTypesStandardD12 ...
+ VMSizeTypesStandardD12 VMSizeTypes = "Standard_D12"
+ // VMSizeTypesStandardD12V2 ...
+ VMSizeTypesStandardD12V2 VMSizeTypes = "Standard_D12_v2"
+ // VMSizeTypesStandardD12V2Promo ...
+ VMSizeTypesStandardD12V2Promo VMSizeTypes = "Standard_D12_v2_Promo"
+ // VMSizeTypesStandardD13 ...
+ VMSizeTypesStandardD13 VMSizeTypes = "Standard_D13"
+ // VMSizeTypesStandardD13V2 ...
+ VMSizeTypesStandardD13V2 VMSizeTypes = "Standard_D13_v2"
+ // VMSizeTypesStandardD13V2Promo ...
+ VMSizeTypesStandardD13V2Promo VMSizeTypes = "Standard_D13_v2_Promo"
+ // VMSizeTypesStandardD14 ...
+ VMSizeTypesStandardD14 VMSizeTypes = "Standard_D14"
+ // VMSizeTypesStandardD14V2 ...
+ VMSizeTypesStandardD14V2 VMSizeTypes = "Standard_D14_v2"
+ // VMSizeTypesStandardD14V2Promo ...
+ VMSizeTypesStandardD14V2Promo VMSizeTypes = "Standard_D14_v2_Promo"
+ // VMSizeTypesStandardD15V2 ...
+ VMSizeTypesStandardD15V2 VMSizeTypes = "Standard_D15_v2"
+ // VMSizeTypesStandardD16sV3 ...
+ VMSizeTypesStandardD16sV3 VMSizeTypes = "Standard_D16s_v3"
+ // VMSizeTypesStandardD16V3 ...
+ VMSizeTypesStandardD16V3 VMSizeTypes = "Standard_D16_v3"
+ // VMSizeTypesStandardD1V2 ...
+ VMSizeTypesStandardD1V2 VMSizeTypes = "Standard_D1_v2"
+ // VMSizeTypesStandardD2 ...
+ VMSizeTypesStandardD2 VMSizeTypes = "Standard_D2"
+ // VMSizeTypesStandardD2sV3 ...
+ VMSizeTypesStandardD2sV3 VMSizeTypes = "Standard_D2s_v3"
+ // VMSizeTypesStandardD2V2 ...
+ VMSizeTypesStandardD2V2 VMSizeTypes = "Standard_D2_v2"
+ // VMSizeTypesStandardD2V2Promo ...
+ VMSizeTypesStandardD2V2Promo VMSizeTypes = "Standard_D2_v2_Promo"
+ // VMSizeTypesStandardD2V3 ...
+ VMSizeTypesStandardD2V3 VMSizeTypes = "Standard_D2_v3"
+ // VMSizeTypesStandardD3 ...
+ VMSizeTypesStandardD3 VMSizeTypes = "Standard_D3"
+ // VMSizeTypesStandardD32sV3 ...
+ VMSizeTypesStandardD32sV3 VMSizeTypes = "Standard_D32s_v3"
+ // VMSizeTypesStandardD32V3 ...
+ VMSizeTypesStandardD32V3 VMSizeTypes = "Standard_D32_v3"
+ // VMSizeTypesStandardD3V2 ...
+ VMSizeTypesStandardD3V2 VMSizeTypes = "Standard_D3_v2"
+ // VMSizeTypesStandardD3V2Promo ...
+ VMSizeTypesStandardD3V2Promo VMSizeTypes = "Standard_D3_v2_Promo"
+ // VMSizeTypesStandardD4 ...
+ VMSizeTypesStandardD4 VMSizeTypes = "Standard_D4"
+ // VMSizeTypesStandardD4sV3 ...
+ VMSizeTypesStandardD4sV3 VMSizeTypes = "Standard_D4s_v3"
+ // VMSizeTypesStandardD4V2 ...
+ VMSizeTypesStandardD4V2 VMSizeTypes = "Standard_D4_v2"
+ // VMSizeTypesStandardD4V2Promo ...
+ VMSizeTypesStandardD4V2Promo VMSizeTypes = "Standard_D4_v2_Promo"
+ // VMSizeTypesStandardD4V3 ...
+ VMSizeTypesStandardD4V3 VMSizeTypes = "Standard_D4_v3"
+ // VMSizeTypesStandardD5V2 ...
+ VMSizeTypesStandardD5V2 VMSizeTypes = "Standard_D5_v2"
+ // VMSizeTypesStandardD5V2Promo ...
+ VMSizeTypesStandardD5V2Promo VMSizeTypes = "Standard_D5_v2_Promo"
+ // VMSizeTypesStandardD64sV3 ...
+ VMSizeTypesStandardD64sV3 VMSizeTypes = "Standard_D64s_v3"
+ // VMSizeTypesStandardD64V3 ...
+ VMSizeTypesStandardD64V3 VMSizeTypes = "Standard_D64_v3"
+ // VMSizeTypesStandardD8sV3 ...
+ VMSizeTypesStandardD8sV3 VMSizeTypes = "Standard_D8s_v3"
+ // VMSizeTypesStandardD8V3 ...
+ VMSizeTypesStandardD8V3 VMSizeTypes = "Standard_D8_v3"
+ // VMSizeTypesStandardDS1 ...
+ VMSizeTypesStandardDS1 VMSizeTypes = "Standard_DS1"
+ // VMSizeTypesStandardDS11 ...
+ VMSizeTypesStandardDS11 VMSizeTypes = "Standard_DS11"
+ // VMSizeTypesStandardDS11V2 ...
+ VMSizeTypesStandardDS11V2 VMSizeTypes = "Standard_DS11_v2"
+ // VMSizeTypesStandardDS11V2Promo ...
+ VMSizeTypesStandardDS11V2Promo VMSizeTypes = "Standard_DS11_v2_Promo"
+ // VMSizeTypesStandardDS12 ...
+ VMSizeTypesStandardDS12 VMSizeTypes = "Standard_DS12"
+ // VMSizeTypesStandardDS12V2 ...
+ VMSizeTypesStandardDS12V2 VMSizeTypes = "Standard_DS12_v2"
+ // VMSizeTypesStandardDS12V2Promo ...
+ VMSizeTypesStandardDS12V2Promo VMSizeTypes = "Standard_DS12_v2_Promo"
+ // VMSizeTypesStandardDS13 ...
+ VMSizeTypesStandardDS13 VMSizeTypes = "Standard_DS13"
+ // VMSizeTypesStandardDS132V2 ...
+ VMSizeTypesStandardDS132V2 VMSizeTypes = "Standard_DS13-2_v2"
+ // VMSizeTypesStandardDS134V2 ...
+ VMSizeTypesStandardDS134V2 VMSizeTypes = "Standard_DS13-4_v2"
+ // VMSizeTypesStandardDS13V2 ...
+ VMSizeTypesStandardDS13V2 VMSizeTypes = "Standard_DS13_v2"
+ // VMSizeTypesStandardDS13V2Promo ...
+ VMSizeTypesStandardDS13V2Promo VMSizeTypes = "Standard_DS13_v2_Promo"
+ // VMSizeTypesStandardDS14 ...
+ VMSizeTypesStandardDS14 VMSizeTypes = "Standard_DS14"
+ // VMSizeTypesStandardDS144V2 ...
+ VMSizeTypesStandardDS144V2 VMSizeTypes = "Standard_DS14-4_v2"
+ // VMSizeTypesStandardDS148V2 ...
+ VMSizeTypesStandardDS148V2 VMSizeTypes = "Standard_DS14-8_v2"
+ // VMSizeTypesStandardDS14V2 ...
+ VMSizeTypesStandardDS14V2 VMSizeTypes = "Standard_DS14_v2"
+ // VMSizeTypesStandardDS14V2Promo ...
+ VMSizeTypesStandardDS14V2Promo VMSizeTypes = "Standard_DS14_v2_Promo"
+ // VMSizeTypesStandardDS15V2 ...
+ VMSizeTypesStandardDS15V2 VMSizeTypes = "Standard_DS15_v2"
+ // VMSizeTypesStandardDS1V2 ...
+ VMSizeTypesStandardDS1V2 VMSizeTypes = "Standard_DS1_v2"
+ // VMSizeTypesStandardDS2 ...
+ VMSizeTypesStandardDS2 VMSizeTypes = "Standard_DS2"
+ // VMSizeTypesStandardDS2V2 ...
+ VMSizeTypesStandardDS2V2 VMSizeTypes = "Standard_DS2_v2"
+ // VMSizeTypesStandardDS2V2Promo ...
+ VMSizeTypesStandardDS2V2Promo VMSizeTypes = "Standard_DS2_v2_Promo"
+ // VMSizeTypesStandardDS3 ...
+ VMSizeTypesStandardDS3 VMSizeTypes = "Standard_DS3"
+ // VMSizeTypesStandardDS3V2 ...
+ VMSizeTypesStandardDS3V2 VMSizeTypes = "Standard_DS3_v2"
+ // VMSizeTypesStandardDS3V2Promo ...
+ VMSizeTypesStandardDS3V2Promo VMSizeTypes = "Standard_DS3_v2_Promo"
+ // VMSizeTypesStandardDS4 ...
+ VMSizeTypesStandardDS4 VMSizeTypes = "Standard_DS4"
+ // VMSizeTypesStandardDS4V2 ...
+ VMSizeTypesStandardDS4V2 VMSizeTypes = "Standard_DS4_v2"
+ // VMSizeTypesStandardDS4V2Promo ...
+ VMSizeTypesStandardDS4V2Promo VMSizeTypes = "Standard_DS4_v2_Promo"
+ // VMSizeTypesStandardDS5V2 ...
+ VMSizeTypesStandardDS5V2 VMSizeTypes = "Standard_DS5_v2"
+ // VMSizeTypesStandardDS5V2Promo ...
+ VMSizeTypesStandardDS5V2Promo VMSizeTypes = "Standard_DS5_v2_Promo"
+ // VMSizeTypesStandardE16sV3 ...
+ VMSizeTypesStandardE16sV3 VMSizeTypes = "Standard_E16s_v3"
+ // VMSizeTypesStandardE16V3 ...
+ VMSizeTypesStandardE16V3 VMSizeTypes = "Standard_E16_v3"
+ // VMSizeTypesStandardE2sV3 ...
+ VMSizeTypesStandardE2sV3 VMSizeTypes = "Standard_E2s_v3"
+ // VMSizeTypesStandardE2V3 ...
+ VMSizeTypesStandardE2V3 VMSizeTypes = "Standard_E2_v3"
+ // VMSizeTypesStandardE3216sV3 ...
+ VMSizeTypesStandardE3216sV3 VMSizeTypes = "Standard_E32-16s_v3"
+ // VMSizeTypesStandardE328sV3 ...
+ VMSizeTypesStandardE328sV3 VMSizeTypes = "Standard_E32-8s_v3"
+ // VMSizeTypesStandardE32sV3 ...
+ VMSizeTypesStandardE32sV3 VMSizeTypes = "Standard_E32s_v3"
+ // VMSizeTypesStandardE32V3 ...
+ VMSizeTypesStandardE32V3 VMSizeTypes = "Standard_E32_v3"
+ // VMSizeTypesStandardE4sV3 ...
+ VMSizeTypesStandardE4sV3 VMSizeTypes = "Standard_E4s_v3"
+ // VMSizeTypesStandardE4V3 ...
+ VMSizeTypesStandardE4V3 VMSizeTypes = "Standard_E4_v3"
+ // VMSizeTypesStandardE6416sV3 ...
+ VMSizeTypesStandardE6416sV3 VMSizeTypes = "Standard_E64-16s_v3"
+ // VMSizeTypesStandardE6432sV3 ...
+ VMSizeTypesStandardE6432sV3 VMSizeTypes = "Standard_E64-32s_v3"
+ // VMSizeTypesStandardE64sV3 ...
+ VMSizeTypesStandardE64sV3 VMSizeTypes = "Standard_E64s_v3"
+ // VMSizeTypesStandardE64V3 ...
+ VMSizeTypesStandardE64V3 VMSizeTypes = "Standard_E64_v3"
+ // VMSizeTypesStandardE8sV3 ...
+ VMSizeTypesStandardE8sV3 VMSizeTypes = "Standard_E8s_v3"
+ // VMSizeTypesStandardE8V3 ...
+ VMSizeTypesStandardE8V3 VMSizeTypes = "Standard_E8_v3"
+ // VMSizeTypesStandardF1 ...
+ VMSizeTypesStandardF1 VMSizeTypes = "Standard_F1"
+ // VMSizeTypesStandardF16 ...
+ VMSizeTypesStandardF16 VMSizeTypes = "Standard_F16"
+ // VMSizeTypesStandardF16s ...
+ VMSizeTypesStandardF16s VMSizeTypes = "Standard_F16s"
+ // VMSizeTypesStandardF16sV2 ...
+ VMSizeTypesStandardF16sV2 VMSizeTypes = "Standard_F16s_v2"
+ // VMSizeTypesStandardF1s ...
+ VMSizeTypesStandardF1s VMSizeTypes = "Standard_F1s"
+ // VMSizeTypesStandardF2 ...
+ VMSizeTypesStandardF2 VMSizeTypes = "Standard_F2"
+ // VMSizeTypesStandardF2s ...
+ VMSizeTypesStandardF2s VMSizeTypes = "Standard_F2s"
+ // VMSizeTypesStandardF2sV2 ...
+ VMSizeTypesStandardF2sV2 VMSizeTypes = "Standard_F2s_v2"
+ // VMSizeTypesStandardF32sV2 ...
+ VMSizeTypesStandardF32sV2 VMSizeTypes = "Standard_F32s_v2"
+ // VMSizeTypesStandardF4 ...
+ VMSizeTypesStandardF4 VMSizeTypes = "Standard_F4"
+ // VMSizeTypesStandardF4s ...
+ VMSizeTypesStandardF4s VMSizeTypes = "Standard_F4s"
+ // VMSizeTypesStandardF4sV2 ...
+ VMSizeTypesStandardF4sV2 VMSizeTypes = "Standard_F4s_v2"
+ // VMSizeTypesStandardF64sV2 ...
+ VMSizeTypesStandardF64sV2 VMSizeTypes = "Standard_F64s_v2"
+ // VMSizeTypesStandardF72sV2 ...
+ VMSizeTypesStandardF72sV2 VMSizeTypes = "Standard_F72s_v2"
+ // VMSizeTypesStandardF8 ...
+ VMSizeTypesStandardF8 VMSizeTypes = "Standard_F8"
+ // VMSizeTypesStandardF8s ...
+ VMSizeTypesStandardF8s VMSizeTypes = "Standard_F8s"
+ // VMSizeTypesStandardF8sV2 ...
+ VMSizeTypesStandardF8sV2 VMSizeTypes = "Standard_F8s_v2"
+ // VMSizeTypesStandardG1 ...
+ VMSizeTypesStandardG1 VMSizeTypes = "Standard_G1"
+ // VMSizeTypesStandardG2 ...
+ VMSizeTypesStandardG2 VMSizeTypes = "Standard_G2"
+ // VMSizeTypesStandardG3 ...
+ VMSizeTypesStandardG3 VMSizeTypes = "Standard_G3"
+ // VMSizeTypesStandardG4 ...
+ VMSizeTypesStandardG4 VMSizeTypes = "Standard_G4"
+ // VMSizeTypesStandardG5 ...
+ VMSizeTypesStandardG5 VMSizeTypes = "Standard_G5"
+ // VMSizeTypesStandardGS1 ...
+ VMSizeTypesStandardGS1 VMSizeTypes = "Standard_GS1"
+ // VMSizeTypesStandardGS2 ...
+ VMSizeTypesStandardGS2 VMSizeTypes = "Standard_GS2"
+ // VMSizeTypesStandardGS3 ...
+ VMSizeTypesStandardGS3 VMSizeTypes = "Standard_GS3"
+ // VMSizeTypesStandardGS4 ...
+ VMSizeTypesStandardGS4 VMSizeTypes = "Standard_GS4"
+ // VMSizeTypesStandardGS44 ...
+ VMSizeTypesStandardGS44 VMSizeTypes = "Standard_GS4-4"
+ // VMSizeTypesStandardGS48 ...
+ VMSizeTypesStandardGS48 VMSizeTypes = "Standard_GS4-8"
+ // VMSizeTypesStandardGS5 ...
+ VMSizeTypesStandardGS5 VMSizeTypes = "Standard_GS5"
+ // VMSizeTypesStandardGS516 ...
+ VMSizeTypesStandardGS516 VMSizeTypes = "Standard_GS5-16"
+ // VMSizeTypesStandardGS58 ...
+ VMSizeTypesStandardGS58 VMSizeTypes = "Standard_GS5-8"
+ // VMSizeTypesStandardH16 ...
+ VMSizeTypesStandardH16 VMSizeTypes = "Standard_H16"
+ // VMSizeTypesStandardH16m ...
+ VMSizeTypesStandardH16m VMSizeTypes = "Standard_H16m"
+ // VMSizeTypesStandardH16mr ...
+ VMSizeTypesStandardH16mr VMSizeTypes = "Standard_H16mr"
+ // VMSizeTypesStandardH16r ...
+ VMSizeTypesStandardH16r VMSizeTypes = "Standard_H16r"
+ // VMSizeTypesStandardH8 ...
+ VMSizeTypesStandardH8 VMSizeTypes = "Standard_H8"
+ // VMSizeTypesStandardH8m ...
+ VMSizeTypesStandardH8m VMSizeTypes = "Standard_H8m"
+ // VMSizeTypesStandardL16s ...
+ VMSizeTypesStandardL16s VMSizeTypes = "Standard_L16s"
+ // VMSizeTypesStandardL32s ...
+ VMSizeTypesStandardL32s VMSizeTypes = "Standard_L32s"
+ // VMSizeTypesStandardL4s ...
+ VMSizeTypesStandardL4s VMSizeTypes = "Standard_L4s"
+ // VMSizeTypesStandardL8s ...
+ VMSizeTypesStandardL8s VMSizeTypes = "Standard_L8s"
+ // VMSizeTypesStandardM12832ms ...
+ VMSizeTypesStandardM12832ms VMSizeTypes = "Standard_M128-32ms"
+ // VMSizeTypesStandardM12864ms ...
+ VMSizeTypesStandardM12864ms VMSizeTypes = "Standard_M128-64ms"
+ // VMSizeTypesStandardM128ms ...
+ VMSizeTypesStandardM128ms VMSizeTypes = "Standard_M128ms"
+ // VMSizeTypesStandardM128s ...
+ VMSizeTypesStandardM128s VMSizeTypes = "Standard_M128s"
+ // VMSizeTypesStandardM6416ms ...
+ VMSizeTypesStandardM6416ms VMSizeTypes = "Standard_M64-16ms"
+ // VMSizeTypesStandardM6432ms ...
+ VMSizeTypesStandardM6432ms VMSizeTypes = "Standard_M64-32ms"
+ // VMSizeTypesStandardM64ms ...
+ VMSizeTypesStandardM64ms VMSizeTypes = "Standard_M64ms"
+ // VMSizeTypesStandardM64s ...
+ VMSizeTypesStandardM64s VMSizeTypes = "Standard_M64s"
+ // VMSizeTypesStandardNC12 ...
+ VMSizeTypesStandardNC12 VMSizeTypes = "Standard_NC12"
+ // VMSizeTypesStandardNC12sV2 ...
+ VMSizeTypesStandardNC12sV2 VMSizeTypes = "Standard_NC12s_v2"
+ // VMSizeTypesStandardNC12sV3 ...
+ VMSizeTypesStandardNC12sV3 VMSizeTypes = "Standard_NC12s_v3"
+ // VMSizeTypesStandardNC24 ...
+ VMSizeTypesStandardNC24 VMSizeTypes = "Standard_NC24"
+ // VMSizeTypesStandardNC24r ...
+ VMSizeTypesStandardNC24r VMSizeTypes = "Standard_NC24r"
+ // VMSizeTypesStandardNC24rsV2 ...
+ VMSizeTypesStandardNC24rsV2 VMSizeTypes = "Standard_NC24rs_v2"
+ // VMSizeTypesStandardNC24rsV3 ...
+ VMSizeTypesStandardNC24rsV3 VMSizeTypes = "Standard_NC24rs_v3"
+ // VMSizeTypesStandardNC24sV2 ...
+ VMSizeTypesStandardNC24sV2 VMSizeTypes = "Standard_NC24s_v2"
+ // VMSizeTypesStandardNC24sV3 ...
+ VMSizeTypesStandardNC24sV3 VMSizeTypes = "Standard_NC24s_v3"
+ // VMSizeTypesStandardNC6 ...
+ VMSizeTypesStandardNC6 VMSizeTypes = "Standard_NC6"
+ // VMSizeTypesStandardNC6sV2 ...
+ VMSizeTypesStandardNC6sV2 VMSizeTypes = "Standard_NC6s_v2"
+ // VMSizeTypesStandardNC6sV3 ...
+ VMSizeTypesStandardNC6sV3 VMSizeTypes = "Standard_NC6s_v3"
+ // VMSizeTypesStandardND12s ...
+ VMSizeTypesStandardND12s VMSizeTypes = "Standard_ND12s"
+ // VMSizeTypesStandardND24rs ...
+ VMSizeTypesStandardND24rs VMSizeTypes = "Standard_ND24rs"
+ // VMSizeTypesStandardND24s ...
+ VMSizeTypesStandardND24s VMSizeTypes = "Standard_ND24s"
+ // VMSizeTypesStandardND6s ...
+ VMSizeTypesStandardND6s VMSizeTypes = "Standard_ND6s"
+ // VMSizeTypesStandardNV12 ...
+ VMSizeTypesStandardNV12 VMSizeTypes = "Standard_NV12"
+ // VMSizeTypesStandardNV24 ...
+ VMSizeTypesStandardNV24 VMSizeTypes = "Standard_NV24"
+ // VMSizeTypesStandardNV6 ...
+ VMSizeTypesStandardNV6 VMSizeTypes = "Standard_NV6"
+)
+
+// PossibleVMSizeTypesValues returns an array of possible values for the VMSizeTypes const type.
+func PossibleVMSizeTypesValues() []VMSizeTypes {
+ return []VMSizeTypes{VMSizeTypesStandardA1, VMSizeTypesStandardA10, VMSizeTypesStandardA11, VMSizeTypesStandardA1V2, VMSizeTypesStandardA2, VMSizeTypesStandardA2mV2, VMSizeTypesStandardA2V2, VMSizeTypesStandardA3, VMSizeTypesStandardA4, VMSizeTypesStandardA4mV2, VMSizeTypesStandardA4V2, VMSizeTypesStandardA5, VMSizeTypesStandardA6, VMSizeTypesStandardA7, VMSizeTypesStandardA8, VMSizeTypesStandardA8mV2, VMSizeTypesStandardA8V2, VMSizeTypesStandardA9, VMSizeTypesStandardB2ms, VMSizeTypesStandardB2s, VMSizeTypesStandardB4ms, VMSizeTypesStandardB8ms, VMSizeTypesStandardD1, VMSizeTypesStandardD11, VMSizeTypesStandardD11V2, VMSizeTypesStandardD11V2Promo, VMSizeTypesStandardD12, VMSizeTypesStandardD12V2, VMSizeTypesStandardD12V2Promo, VMSizeTypesStandardD13, VMSizeTypesStandardD13V2, VMSizeTypesStandardD13V2Promo, VMSizeTypesStandardD14, VMSizeTypesStandardD14V2, VMSizeTypesStandardD14V2Promo, VMSizeTypesStandardD15V2, VMSizeTypesStandardD16sV3, VMSizeTypesStandardD16V3, VMSizeTypesStandardD1V2, VMSizeTypesStandardD2, VMSizeTypesStandardD2sV3, VMSizeTypesStandardD2V2, VMSizeTypesStandardD2V2Promo, VMSizeTypesStandardD2V3, VMSizeTypesStandardD3, VMSizeTypesStandardD32sV3, VMSizeTypesStandardD32V3, VMSizeTypesStandardD3V2, VMSizeTypesStandardD3V2Promo, VMSizeTypesStandardD4, VMSizeTypesStandardD4sV3, VMSizeTypesStandardD4V2, VMSizeTypesStandardD4V2Promo, VMSizeTypesStandardD4V3, VMSizeTypesStandardD5V2, VMSizeTypesStandardD5V2Promo, VMSizeTypesStandardD64sV3, VMSizeTypesStandardD64V3, VMSizeTypesStandardD8sV3, VMSizeTypesStandardD8V3, VMSizeTypesStandardDS1, VMSizeTypesStandardDS11, VMSizeTypesStandardDS11V2, VMSizeTypesStandardDS11V2Promo, VMSizeTypesStandardDS12, VMSizeTypesStandardDS12V2, VMSizeTypesStandardDS12V2Promo, VMSizeTypesStandardDS13, VMSizeTypesStandardDS132V2, VMSizeTypesStandardDS134V2, VMSizeTypesStandardDS13V2, VMSizeTypesStandardDS13V2Promo, VMSizeTypesStandardDS14, VMSizeTypesStandardDS144V2, VMSizeTypesStandardDS148V2, VMSizeTypesStandardDS14V2, VMSizeTypesStandardDS14V2Promo, VMSizeTypesStandardDS15V2, VMSizeTypesStandardDS1V2, VMSizeTypesStandardDS2, VMSizeTypesStandardDS2V2, VMSizeTypesStandardDS2V2Promo, VMSizeTypesStandardDS3, VMSizeTypesStandardDS3V2, VMSizeTypesStandardDS3V2Promo, VMSizeTypesStandardDS4, VMSizeTypesStandardDS4V2, VMSizeTypesStandardDS4V2Promo, VMSizeTypesStandardDS5V2, VMSizeTypesStandardDS5V2Promo, VMSizeTypesStandardE16sV3, VMSizeTypesStandardE16V3, VMSizeTypesStandardE2sV3, VMSizeTypesStandardE2V3, VMSizeTypesStandardE3216sV3, VMSizeTypesStandardE328sV3, VMSizeTypesStandardE32sV3, VMSizeTypesStandardE32V3, VMSizeTypesStandardE4sV3, VMSizeTypesStandardE4V3, VMSizeTypesStandardE6416sV3, VMSizeTypesStandardE6432sV3, VMSizeTypesStandardE64sV3, VMSizeTypesStandardE64V3, VMSizeTypesStandardE8sV3, VMSizeTypesStandardE8V3, VMSizeTypesStandardF1, VMSizeTypesStandardF16, VMSizeTypesStandardF16s, VMSizeTypesStandardF16sV2, VMSizeTypesStandardF1s, VMSizeTypesStandardF2, VMSizeTypesStandardF2s, VMSizeTypesStandardF2sV2, VMSizeTypesStandardF32sV2, VMSizeTypesStandardF4, VMSizeTypesStandardF4s, VMSizeTypesStandardF4sV2, VMSizeTypesStandardF64sV2, VMSizeTypesStandardF72sV2, VMSizeTypesStandardF8, VMSizeTypesStandardF8s, VMSizeTypesStandardF8sV2, VMSizeTypesStandardG1, VMSizeTypesStandardG2, VMSizeTypesStandardG3, VMSizeTypesStandardG4, VMSizeTypesStandardG5, VMSizeTypesStandardGS1, VMSizeTypesStandardGS2, VMSizeTypesStandardGS3, VMSizeTypesStandardGS4, VMSizeTypesStandardGS44, VMSizeTypesStandardGS48, VMSizeTypesStandardGS5, VMSizeTypesStandardGS516, VMSizeTypesStandardGS58, VMSizeTypesStandardH16, VMSizeTypesStandardH16m, VMSizeTypesStandardH16mr, VMSizeTypesStandardH16r, VMSizeTypesStandardH8, VMSizeTypesStandardH8m, VMSizeTypesStandardL16s, VMSizeTypesStandardL32s, VMSizeTypesStandardL4s, VMSizeTypesStandardL8s, VMSizeTypesStandardM12832ms, VMSizeTypesStandardM12864ms, VMSizeTypesStandardM128ms, VMSizeTypesStandardM128s, VMSizeTypesStandardM6416ms, VMSizeTypesStandardM6432ms, VMSizeTypesStandardM64ms, VMSizeTypesStandardM64s, VMSizeTypesStandardNC12, VMSizeTypesStandardNC12sV2, VMSizeTypesStandardNC12sV3, VMSizeTypesStandardNC24, VMSizeTypesStandardNC24r, VMSizeTypesStandardNC24rsV2, VMSizeTypesStandardNC24rsV3, VMSizeTypesStandardNC24sV2, VMSizeTypesStandardNC24sV3, VMSizeTypesStandardNC6, VMSizeTypesStandardNC6sV2, VMSizeTypesStandardNC6sV3, VMSizeTypesStandardND12s, VMSizeTypesStandardND24rs, VMSizeTypesStandardND24s, VMSizeTypesStandardND6s, VMSizeTypesStandardNV12, VMSizeTypesStandardNV24, VMSizeTypesStandardNV6}
+}
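The generated `Possible...Values` helpers are typically used to validate a raw API string against the enum before use. A minimal self-contained sketch, using a hypothetical two-value subset of the enum rather than the full generated list:

```go
package main

import "fmt"

// VMSizeTypes mirrors the generated string-enum type; only a trimmed
// subset of the constants is declared here so the sketch compiles alone.
type VMSizeTypes string

const (
	VMSizeTypesStandardD2V3 VMSizeTypes = "Standard_D2_v3"
	VMSizeTypesStandardD4V3 VMSizeTypes = "Standard_D4_v3"
)

// PossibleVMSizeTypesValues mirrors the generated helper for the subset above.
func PossibleVMSizeTypesValues() []VMSizeTypes {
	return []VMSizeTypes{VMSizeTypesStandardD2V3, VMSizeTypesStandardD4V3}
}

// isValidVMSize reports whether a raw string matches one of the known enum
// values — the typical consumer-side use of a Possible...Values helper.
func isValidVMSize(s string) bool {
	for _, v := range PossibleVMSizeTypesValues() {
		if string(v) == s {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isValidVMSize("Standard_D2_v3"), isValidVMSize("Standard_Z9"))
}
```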
+
+// WeekDay enumerates the values for week day.
+type WeekDay string
+
+const (
+ // WeekDayFriday ...
+ WeekDayFriday WeekDay = "Friday"
+ // WeekDayMonday ...
+ WeekDayMonday WeekDay = "Monday"
+ // WeekDaySaturday ...
+ WeekDaySaturday WeekDay = "Saturday"
+ // WeekDaySunday ...
+ WeekDaySunday WeekDay = "Sunday"
+ // WeekDayThursday ...
+ WeekDayThursday WeekDay = "Thursday"
+ // WeekDayTuesday ...
+ WeekDayTuesday WeekDay = "Tuesday"
+ // WeekDayWednesday ...
+ WeekDayWednesday WeekDay = "Wednesday"
+)
+
+// PossibleWeekDayValues returns an array of possible values for the WeekDay const type.
+func PossibleWeekDayValues() []WeekDay {
+ return []WeekDay{WeekDayFriday, WeekDayMonday, WeekDaySaturday, WeekDaySunday, WeekDayThursday, WeekDayTuesday, WeekDayWednesday}
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/maintenanceconfigurations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/maintenanceconfigurations.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/maintenanceconfigurations.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/maintenanceconfigurations.go
index 4b96f1d3e7c1d..6fa3730346bf7 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/maintenanceconfigurations.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/maintenanceconfigurations.go
@@ -90,7 +90,7 @@ func (client MaintenanceConfigurationsClient) CreateOrUpdatePreparer(ctx context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -181,7 +181,7 @@ func (client MaintenanceConfigurationsClient) DeletePreparer(ctx context.Context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -268,7 +268,7 @@ func (client MaintenanceConfigurationsClient) GetPreparer(ctx context.Context, r
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -360,7 +360,7 @@ func (client MaintenanceConfigurationsClient) ListByManagedClusterPreparer(ctx c
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/managedclusters.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/managedclusters.go
similarity index 85%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/managedclusters.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/managedclusters.go
index acfd35aef930a..ab9f04a71d25c 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/managedclusters.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/managedclusters.go
@@ -120,7 +120,7 @@ func (client ManagedClustersClient) CreateOrUpdatePreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -210,7 +210,7 @@ func (client ManagedClustersClient) DeletePreparer(ctx context.Context, resource
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -304,7 +304,7 @@ func (client ManagedClustersClient) GetPreparer(ctx context.Context, resourceGro
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -397,7 +397,7 @@ func (client ManagedClustersClient) GetAccessProfilePreparer(ctx context.Context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -428,6 +428,172 @@ func (client ManagedClustersClient) GetAccessProfileResponder(resp *http.Respons
return
}
+// GetCommandResult gets the command result from a previous runCommand invocation.
+// Parameters:
+// resourceGroupName - the name of the resource group.
+// resourceName - the name of the managed cluster resource.
+// commandID - the ID of the command request.
+func (client ManagedClustersClient) GetCommandResult(ctx context.Context, resourceGroupName string, resourceName string, commandID string) (result RunCommandResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ManagedClustersClient.GetCommandResult")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceName,
+ Constraints: []validation.Constraint{{Target: "resourceName", Name: validation.MaxLength, Rule: 63, Chain: nil},
+ {Target: "resourceName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceName", Name: validation.Pattern, Rule: `^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("containerservice.ManagedClustersClient", "GetCommandResult", err.Error())
+ }
+
+ req, err := client.GetCommandResultPreparer(ctx, resourceGroupName, resourceName, commandID)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersClient", "GetCommandResult", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetCommandResultSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersClient", "GetCommandResult", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetCommandResultResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersClient", "GetCommandResult", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetCommandResultPreparer prepares the GetCommandResult request.
+func (client ManagedClustersClient) GetCommandResultPreparer(ctx context.Context, resourceGroupName string, resourceName string, commandID string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "commandId": autorest.Encode("path", commandID),
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "resourceName": autorest.Encode("path", resourceName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2021-03-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/commandResults/{commandId}", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetCommandResultSender sends the GetCommandResult request. The method will close the
+// http.Response Body if it receives an error.
+func (client ManagedClustersClient) GetCommandResultSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// GetCommandResultResponder handles the response to the GetCommandResult request. The method always
+// closes the http.Response Body.
+func (client ManagedClustersClient) GetCommandResultResponder(resp *http.Response) (result RunCommandResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// GetOSOptions gets supported OS options in the specified subscription.
+// Parameters:
+// location - the name of a supported Azure region.
+// resourceType - the resource type for which the OS options need to be returned.
+func (client ManagedClustersClient) GetOSOptions(ctx context.Context, location string, resourceType string) (result OSOptionProfile, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ManagedClustersClient.GetOSOptions")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetOSOptionsPreparer(ctx, location, resourceType)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersClient", "GetOSOptions", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetOSOptionsSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersClient", "GetOSOptions", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetOSOptionsResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersClient", "GetOSOptions", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetOSOptionsPreparer prepares the GetOSOptions request.
+func (client ManagedClustersClient) GetOSOptionsPreparer(ctx context.Context, location string, resourceType string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "location": autorest.Encode("path", location),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2021-03-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+ if len(resourceType) > 0 {
+ queryParameters["resource-type"] = autorest.Encode("query", resourceType)
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/osOptions/default", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetOSOptionsSender sends the GetOSOptions request. The method will close the
+// http.Response Body if it receives an error.
+func (client ManagedClustersClient) GetOSOptionsSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// GetOSOptionsResponder handles the response to the GetOSOptions request. The method always
+// closes the http.Response Body.
+func (client ManagedClustersClient) GetOSOptionsResponder(resp *http.Response) (result OSOptionProfile, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
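The preparer/sender/responder triple above ultimately just assembles a GET against a templated route. A minimal sketch of the URL the `GetOSOptionsPreparer` builds, without autorest — path parameters are escaped into the route and the optional resource-type joins api-version in the query string:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildOSOptionsURL assembles the osOptions/default request URL the same way
// the generated preparer does: escaped path parameters plus api-version and
// an optional resource-type query parameter.
func buildOSOptionsURL(baseURI, subscriptionID, location, resourceType string) string {
	path := fmt.Sprintf(
		"/subscriptions/%s/providers/Microsoft.ContainerService/locations/%s/osOptions/default",
		url.PathEscape(subscriptionID), url.PathEscape(location))
	q := url.Values{"api-version": {"2021-03-01"}}
	if len(resourceType) > 0 {
		q.Set("resource-type", resourceType)
	}
	return strings.TrimSuffix(baseURI, "/") + path + "?" + q.Encode()
}

func main() {
	fmt.Println(buildOSOptionsURL("https://management.azure.com", "sub1", "eastus", ""))
}
```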
+
// GetUpgradeProfile gets the details of the upgrade profile for a managed cluster with a specified resource group and
// name.
// Parameters:
@@ -484,7 +650,7 @@ func (client ManagedClustersClient) GetUpgradeProfilePreparer(ctx context.Contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -561,7 +727,7 @@ func (client ManagedClustersClient) ListPreparer(ctx context.Context) (*http.Req
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -684,7 +850,7 @@ func (client ManagedClustersClient) ListByResourceGroupPreparer(ctx context.Cont
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -808,7 +974,7 @@ func (client ManagedClustersClient) ListClusterAdminCredentialsPreparer(ctx cont
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -895,7 +1061,7 @@ func (client ManagedClustersClient) ListClusterMonitoringUserCredentialsPreparer
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -982,7 +1148,7 @@ func (client ManagedClustersClient) ListClusterUserCredentialsPreparer(ctx conte
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1062,7 +1228,7 @@ func (client ManagedClustersClient) ResetAADProfilePreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1154,7 +1320,7 @@ func (client ManagedClustersClient) ResetServicePrincipalProfilePreparer(ctx con
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1243,7 +1409,7 @@ func (client ManagedClustersClient) RotateClusterCertificatesPreparer(ctx contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1282,6 +1448,99 @@ func (client ManagedClustersClient) RotateClusterCertificatesResponder(resp *htt
return
}
+// RunCommand submits a command to run against the managed Kubernetes service; it will create a pod to run the command.
+// Parameters:
+// resourceGroupName - the name of the resource group.
+// resourceName - the name of the managed cluster resource.
+// requestPayload - parameters supplied to the RunCommand operation.
+func (client ManagedClustersClient) RunCommand(ctx context.Context, resourceGroupName string, resourceName string, requestPayload RunCommandRequest) (result ManagedClustersRunCommandFuture, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/ManagedClustersClient.RunCommand")
+ defer func() {
+ sc := -1
+ if result.FutureAPI != nil && result.FutureAPI.Response() != nil {
+ sc = result.FutureAPI.Response().StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceName,
+ Constraints: []validation.Constraint{{Target: "resourceName", Name: validation.MaxLength, Rule: 63, Chain: nil},
+ {Target: "resourceName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceName", Name: validation.Pattern, Rule: `^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$`, Chain: nil}}},
+ {TargetValue: requestPayload,
+ Constraints: []validation.Constraint{{Target: "requestPayload.Command", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("containerservice.ManagedClustersClient", "RunCommand", err.Error())
+ }
+
+ req, err := client.RunCommandPreparer(ctx, resourceGroupName, resourceName, requestPayload)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersClient", "RunCommand", nil, "Failure preparing request")
+ return
+ }
+
+ result, err = client.RunCommandSender(req)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersClient", "RunCommand", nil, "Failure sending request")
+ return
+ }
+
+ return
+}
+
+// RunCommandPreparer prepares the RunCommand request.
+func (client ManagedClustersClient) RunCommandPreparer(ctx context.Context, resourceGroupName string, resourceName string, requestPayload RunCommandRequest) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "resourceName": autorest.Encode("path", resourceName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2021-03-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsContentType("application/json; charset=utf-8"),
+ autorest.AsPost(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/runCommand", pathParameters),
+ autorest.WithJSON(requestPayload),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// RunCommandSender sends the RunCommand request. The method will close the
+// http.Response Body if it receives an error.
+func (client ManagedClustersClient) RunCommandSender(req *http.Request) (future ManagedClustersRunCommandFuture, err error) {
+ var resp *http.Response
+ resp, err = client.Send(req, azure.DoRetryWithRegistration(client.Client))
+ if err != nil {
+ return
+ }
+ var azf azure.Future
+ azf, err = azure.NewFutureFromResponse(resp)
+ future.FutureAPI = &azf
+ future.Result = future.result
+ return
+}
+
+// RunCommandResponder handles the response to the RunCommand request. The method always
+// closes the http.Response Body.
+func (client ManagedClustersClient) RunCommandResponder(resp *http.Response) (result RunCommandResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
// Start starts a Stopped Managed Cluster
// Parameters:
// resourceGroupName - the name of the resource group.
@@ -1330,7 +1589,7 @@ func (client ManagedClustersClient) StartPreparer(ctx context.Context, resourceG
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1417,7 +1676,7 @@ func (client ManagedClustersClient) StopPreparer(ctx context.Context, resourceGr
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1505,7 +1764,7 @@ func (client ManagedClustersClient) UpdateTagsPreparer(ctx context.Context, reso
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
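The new RunCommand client above rejects invalid managed-cluster names client-side (MinLength 1, MaxLength 63, and the pattern in the validation constraints) before any HTTP request is prepared. A self-contained sketch of that check, assuming an illustrative helper name (`validClusterName`) that is not part of the SDK:

```go
package main

import (
	"fmt"
	"regexp"
)

// resourceNamePattern mirrors the constraint the generated client enforces in
// RunCommand: 1-63 chars, alphanumeric first and last, '-' and '_' allowed inside.
var resourceNamePattern = regexp.MustCompile(`^[a-zA-Z0-9]$|^[a-zA-Z0-9][-_a-zA-Z0-9]{0,61}[a-zA-Z0-9]$`)

// validClusterName reports whether a name would pass the client-side
// MinLength/MaxLength/Pattern validation.
func validClusterName(name string) bool {
	return len(name) >= 1 && len(name) <= 63 && resourceNamePattern.MatchString(name)
}

func main() {
	fmt.Println(validClusterName("aks-prod_01")) // true
	fmt.Println(validClusterName("a"))           // true: single alphanumeric char
	fmt.Println(validClusterName("-leading"))    // false: must start alphanumeric
	fmt.Println(validClusterName(""))            // false: below MinLength 1
}
```

Failing this check returns a `validation.NewError` locally instead of a 400 from the service.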
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/models.go
similarity index 85%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/models.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/models.go
index 006fbc21b7b4e..54b1488561315 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/models.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/models.go
@@ -18,7 +18,7 @@ import (
)
// The package's fully qualified name.
-const fqdn = "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice"
+const fqdn = "github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice"
// AccessProfile profile for enabling a user to access a managed cluster.
type AccessProfile struct {
@@ -556,7 +556,7 @@ func (apup *AgentPoolUpgradeProfile) UnmarshalJSON(body []byte) error {
type AgentPoolUpgradeProfileProperties struct {
// KubernetesVersion - Kubernetes version (major, minor, patch).
KubernetesVersion *string `json:"kubernetesVersion,omitempty"`
- // OsType - OsType to be used to specify os type. Choose from Linux and Windows. Default to Linux. Possible values include: 'Linux', 'Windows'
+ // OsType - OsType to be used to specify os type. Choose from Linux and Windows. Default to Linux. Possible values include: 'OSTypeLinux', 'OSTypeWindows'
OsType OSType `json:"osType,omitempty"`
// Upgrades - List of orchestrator types and versions available for upgrade.
Upgrades *[]AgentPoolUpgradeProfilePropertiesUpgradesItem `json:"upgrades,omitempty"`
@@ -596,6 +596,22 @@ type CloudErrorBody struct {
Details *[]CloudErrorBody `json:"details,omitempty"`
}
+// CommandResultProperties ...
+type CommandResultProperties struct {
+ // ProvisioningState - READ-ONLY; provisioning State
+ ProvisioningState *string `json:"provisioningState,omitempty"`
+ // ExitCode - READ-ONLY; exit code of the command
+ ExitCode *int32 `json:"exitCode,omitempty"`
+ // StartedAt - READ-ONLY; time when the command started.
+ StartedAt *date.Time `json:"startedAt,omitempty"`
+ // FinishedAt - READ-ONLY; time when the command finished.
+ FinishedAt *date.Time `json:"finishedAt,omitempty"`
+ // Logs - READ-ONLY; command output.
+ Logs *string `json:"logs,omitempty"`
+ // Reason - READ-ONLY; explain why provisioningState is set to failed (if so).
+ Reason *string `json:"reason,omitempty"`
+}
+
// CredentialResult the credential result response.
type CredentialResult struct {
// Name - READ-ONLY; The name of the credential.
@@ -617,6 +633,14 @@ type DiagnosticsProfile struct {
VMDiagnostics *VMDiagnostics `json:"vmDiagnostics,omitempty"`
}
+// ExtendedLocation the complex type of the extended location.
+type ExtendedLocation struct {
+ // Name - The name of the extended location.
+ Name *string `json:"name,omitempty"`
+ // Type - The type of the extended location. Possible values include: 'ExtendedLocationTypesEdgeZone'
+ Type ExtendedLocationTypes `json:"type,omitempty"`
+}
+
// KubeletConfig kubelet configurations of agent nodes.
type KubeletConfig struct {
// CPUManagerPolicy - CPU Manager policy to use.
@@ -933,6 +957,8 @@ type ManagedCluster struct {
Identity *ManagedClusterIdentity `json:"identity,omitempty"`
// Sku - The managed cluster SKU.
Sku *ManagedClusterSKU `json:"sku,omitempty"`
+ // ExtendedLocation - The extended location of the Virtual Machine.
+ ExtendedLocation *ExtendedLocation `json:"extendedLocation,omitempty"`
// ID - READ-ONLY; Resource Id
ID *string `json:"id,omitempty"`
// Name - READ-ONLY; Resource name
@@ -957,6 +983,9 @@ func (mc ManagedCluster) MarshalJSON() ([]byte, error) {
if mc.Sku != nil {
objectMap["sku"] = mc.Sku
}
+ if mc.ExtendedLocation != nil {
+ objectMap["extendedLocation"] = mc.ExtendedLocation
+ }
if mc.Location != nil {
objectMap["location"] = mc.Location
}
@@ -1002,6 +1031,15 @@ func (mc *ManagedCluster) UnmarshalJSON(body []byte) error {
}
mc.Sku = &sku
}
+ case "extendedLocation":
+ if v != nil {
+ var extendedLocation ExtendedLocation
+ err = json.Unmarshal(*v, &extendedLocation)
+ if err != nil {
+ return err
+ }
+ mc.ExtendedLocation = &extendedLocation
+ }
case "id":
if v != nil {
var ID string
@@ -1210,13 +1248,13 @@ type ManagedClusterAgentPoolProfile struct {
Name *string `json:"name,omitempty"`
// Count - Number of agents (VMs) to host docker containers. Allowed values must be in the range of 0 to 100 (inclusive) for user pools and in the range of 1 to 100 (inclusive) for system pools. The default value is 1.
Count *int32 `json:"count,omitempty"`
- // VMSize - Size of agent VMs. Possible values include: 'StandardA1', 'StandardA10', 'StandardA11', 'StandardA1V2', 'StandardA2', 'StandardA2V2', 'StandardA2mV2', 'StandardA3', 'StandardA4', 'StandardA4V2', 'StandardA4mV2', 'StandardA5', 'StandardA6', 'StandardA7', 'StandardA8', 'StandardA8V2', 'StandardA8mV2', 'StandardA9', 'StandardB2ms', 'StandardB2s', 'StandardB4ms', 'StandardB8ms', 'StandardD1', 'StandardD11', 'StandardD11V2', 'StandardD11V2Promo', 'StandardD12', 'StandardD12V2', 'StandardD12V2Promo', 'StandardD13', 'StandardD13V2', 'StandardD13V2Promo', 'StandardD14', 'StandardD14V2', 'StandardD14V2Promo', 'StandardD15V2', 'StandardD16V3', 'StandardD16sV3', 'StandardD1V2', 'StandardD2', 'StandardD2V2', 'StandardD2V2Promo', 'StandardD2V3', 'StandardD2sV3', 'StandardD3', 'StandardD32V3', 'StandardD32sV3', 'StandardD3V2', 'StandardD3V2Promo', 'StandardD4', 'StandardD4V2', 'StandardD4V2Promo', 'StandardD4V3', 'StandardD4sV3', 'StandardD5V2', 'StandardD5V2Promo', 'StandardD64V3', 'StandardD64sV3', 'StandardD8V3', 'StandardD8sV3', 'StandardDS1', 'StandardDS11', 'StandardDS11V2', 'StandardDS11V2Promo', 'StandardDS12', 'StandardDS12V2', 'StandardDS12V2Promo', 'StandardDS13', 'StandardDS132V2', 'StandardDS134V2', 'StandardDS13V2', 'StandardDS13V2Promo', 'StandardDS14', 'StandardDS144V2', 'StandardDS148V2', 'StandardDS14V2', 'StandardDS14V2Promo', 'StandardDS15V2', 'StandardDS1V2', 'StandardDS2', 'StandardDS2V2', 'StandardDS2V2Promo', 'StandardDS3', 'StandardDS3V2', 'StandardDS3V2Promo', 'StandardDS4', 'StandardDS4V2', 'StandardDS4V2Promo', 'StandardDS5V2', 'StandardDS5V2Promo', 'StandardE16V3', 'StandardE16sV3', 'StandardE2V3', 'StandardE2sV3', 'StandardE3216sV3', 'StandardE328sV3', 'StandardE32V3', 'StandardE32sV3', 'StandardE4V3', 'StandardE4sV3', 'StandardE6416sV3', 'StandardE6432sV3', 'StandardE64V3', 'StandardE64sV3', 'StandardE8V3', 'StandardE8sV3', 'StandardF1', 'StandardF16', 'StandardF16s', 'StandardF16sV2', 'StandardF1s', 'StandardF2', 'StandardF2s', 'StandardF2sV2', 'StandardF32sV2', 'StandardF4', 'StandardF4s', 'StandardF4sV2', 'StandardF64sV2', 'StandardF72sV2', 'StandardF8', 'StandardF8s', 'StandardF8sV2', 'StandardG1', 'StandardG2', 'StandardG3', 'StandardG4', 'StandardG5', 'StandardGS1', 'StandardGS2', 'StandardGS3', 'StandardGS4', 'StandardGS44', 'StandardGS48', 'StandardGS5', 'StandardGS516', 'StandardGS58', 'StandardH16', 'StandardH16m', 'StandardH16mr', 'StandardH16r', 'StandardH8', 'StandardH8m', 'StandardL16s', 'StandardL32s', 'StandardL4s', 'StandardL8s', 'StandardM12832ms', 'StandardM12864ms', 'StandardM128ms', 'StandardM128s', 'StandardM6416ms', 'StandardM6432ms', 'StandardM64ms', 'StandardM64s', 'StandardNC12', 'StandardNC12sV2', 'StandardNC12sV3', 'StandardNC24', 'StandardNC24r', 'StandardNC24rsV2', 'StandardNC24rsV3', 'StandardNC24sV2', 'StandardNC24sV3', 'StandardNC6', 'StandardNC6sV2', 'StandardNC6sV3', 'StandardND12s', 'StandardND24rs', 'StandardND24s', 'StandardND6s', 'StandardNV12', 'StandardNV24', 'StandardNV6'
- VMSize VMSizeTypes `json:"vmSize,omitempty"`
+ // VMSize - Size of agent VMs.
+ VMSize *string `json:"vmSize,omitempty"`
// OsDiskSizeGB - OS Disk Size in GB to be used to specify the disk size for every machine in this master/agent pool. If you specify 0, it will apply the default osDisk size according to the vmSize specified.
OsDiskSizeGB *int32 `json:"osDiskSizeGB,omitempty"`
- // OsDiskType - OS disk type to be used for machines in a given agent pool. Allowed values are 'Ephemeral' and 'Managed'. Defaults to 'Managed'. May not be changed after creation. Possible values include: 'Managed', 'Ephemeral'
+ // OsDiskType - OS disk type to be used for machines in a given agent pool. Allowed values are 'Ephemeral' and 'Managed'. If unspecified, defaults to 'Ephemeral' when the VM supports ephemeral OS and has a cache disk larger than the requested OSDiskSizeGB. Otherwise, defaults to 'Managed'. May not be changed after creation. Possible values include: 'OSDiskTypeManaged', 'OSDiskTypeEphemeral'
OsDiskType OSDiskType `json:"osDiskType,omitempty"`
- // KubeletDiskType - KubeletDiskType determines the placement of emptyDir volumes, container runtime data root, and Kubelet ephemeral storage. Currently allows one value, OS, resulting in Kubelet using the OS disk for data. Possible values include: 'OS', 'Temporary'
+ // KubeletDiskType - KubeletDiskType determines the placement of emptyDir volumes, container runtime data root, and Kubelet ephemeral storage. Currently allows one value, OS, resulting in Kubelet using the OS disk for data. Possible values include: 'KubeletDiskTypeOS', 'KubeletDiskTypeTemporary'
KubeletDiskType KubeletDiskType `json:"kubeletDiskType,omitempty"`
// VnetSubnetID - VNet SubnetID specifies the VNet's subnet identifier for nodes and maybe pods
VnetSubnetID *string `json:"vnetSubnetID,omitempty"`
@@ -1224,17 +1262,19 @@ type ManagedClusterAgentPoolProfile struct {
PodSubnetID *string `json:"podSubnetID,omitempty"`
// MaxPods - Maximum number of pods that can run on a node.
MaxPods *int32 `json:"maxPods,omitempty"`
- // OsType - OsType to be used to specify os type. Choose from Linux and Windows. Default to Linux. Possible values include: 'Linux', 'Windows'
+ // OsType - OsType to be used to specify os type. Choose from Linux and Windows. Default to Linux. Possible values include: 'OSTypeLinux', 'OSTypeWindows'
OsType OSType `json:"osType,omitempty"`
+ // OsSKU - OsSKU to be used to specify os sku. Choose from Ubuntu(default) and CBLMariner for Linux OSType. Not applicable to Windows OSType. Possible values include: 'OSSKUUbuntu', 'OSSKUCBLMariner'
+ OsSKU OSSKU `json:"osSKU,omitempty"`
// MaxCount - Maximum number of nodes for auto-scaling
MaxCount *int32 `json:"maxCount,omitempty"`
// MinCount - Minimum number of nodes for auto-scaling
MinCount *int32 `json:"minCount,omitempty"`
// EnableAutoScaling - Whether to enable auto-scaler
EnableAutoScaling *bool `json:"enableAutoScaling,omitempty"`
- // Type - AgentPoolType represents types of an agent pool. Possible values include: 'VirtualMachineScaleSets', 'AvailabilitySet'
+ // Type - AgentPoolType represents types of an agent pool. Possible values include: 'AgentPoolTypeVirtualMachineScaleSets', 'AgentPoolTypeAvailabilitySet'
Type AgentPoolType `json:"type,omitempty"`
- // Mode - AgentPoolMode represents mode of an agent pool. Possible values include: 'System', 'User'
+ // Mode - AgentPoolMode represents mode of an agent pool. Possible values include: 'AgentPoolModeSystem', 'AgentPoolModeUser'
Mode AgentPoolMode `json:"mode,omitempty"`
// OrchestratorVersion - Version of orchestrator specified when creating the managed cluster.
OrchestratorVersion *string `json:"orchestratorVersion,omitempty"`
@@ -1252,9 +1292,9 @@ type ManagedClusterAgentPoolProfile struct {
EnableNodePublicIP *bool `json:"enableNodePublicIP,omitempty"`
// NodePublicIPPrefixID - Public IP Prefix ID. VM nodes use IPs assigned from this Public IP Prefix.
NodePublicIPPrefixID *string `json:"nodePublicIPPrefixID,omitempty"`
- // ScaleSetPriority - ScaleSetPriority to be used to specify virtual machine scale set priority. Default to regular. Possible values include: 'Spot', 'Regular'
+ // ScaleSetPriority - ScaleSetPriority to be used to specify virtual machine scale set priority. Default to regular. Possible values include: 'ScaleSetPrioritySpot', 'ScaleSetPriorityRegular'
ScaleSetPriority ScaleSetPriority `json:"scaleSetPriority,omitempty"`
- // ScaleSetEvictionPolicy - ScaleSetEvictionPolicy to be used to specify eviction policy for Spot virtual machine scale set. Default to Delete. Possible values include: 'Delete', 'Deallocate'
+ // ScaleSetEvictionPolicy - ScaleSetEvictionPolicy to be used to specify eviction policy for Spot virtual machine scale set. Default to Delete. Possible values include: 'ScaleSetEvictionPolicyDelete', 'ScaleSetEvictionPolicyDeallocate'
ScaleSetEvictionPolicy ScaleSetEvictionPolicy `json:"scaleSetEvictionPolicy,omitempty"`
// SpotMaxPrice - SpotMaxPrice to be used to specify the maximum price you are willing to pay in US Dollars. Possible values are any decimal value greater than zero or -1 which indicates default price to be up-to on-demand.
SpotMaxPrice *float64 `json:"spotMaxPrice,omitempty"`
@@ -1272,6 +1312,10 @@ type ManagedClusterAgentPoolProfile struct {
LinuxOSConfig *LinuxOSConfig `json:"linuxOSConfig,omitempty"`
// EnableEncryptionAtHost - Whether to enable EncryptionAtHost
EnableEncryptionAtHost *bool `json:"enableEncryptionAtHost,omitempty"`
+ // EnableFIPS - Whether to use FIPS enabled OS
+ EnableFIPS *bool `json:"enableFIPS,omitempty"`
+ // GpuInstanceProfile - GPUInstanceProfile to be used to specify GPU MIG instance profile for supported GPU VM SKU. Supported values are MIG1g, MIG2g, MIG3g, MIG4g and MIG7g. Possible values include: 'GPUInstanceProfileMIG1g', 'GPUInstanceProfileMIG2g', 'GPUInstanceProfileMIG3g', 'GPUInstanceProfileMIG4g', 'GPUInstanceProfileMIG7g'
+ GpuInstanceProfile GPUInstanceProfile `json:"gpuInstanceProfile,omitempty"`
}
// MarshalJSON is the custom marshaler for ManagedClusterAgentPoolProfile.
@@ -1283,7 +1327,7 @@ func (mcapp ManagedClusterAgentPoolProfile) MarshalJSON() ([]byte, error) {
if mcapp.Count != nil {
objectMap["count"] = mcapp.Count
}
- if mcapp.VMSize != "" {
+ if mcapp.VMSize != nil {
objectMap["vmSize"] = mcapp.VMSize
}
if mcapp.OsDiskSizeGB != nil {
@@ -1307,6 +1351,9 @@ func (mcapp ManagedClusterAgentPoolProfile) MarshalJSON() ([]byte, error) {
if mcapp.OsType != "" {
objectMap["osType"] = mcapp.OsType
}
+ if mcapp.OsSKU != "" {
+ objectMap["osSKU"] = mcapp.OsSKU
+ }
if mcapp.MaxCount != nil {
objectMap["maxCount"] = mcapp.MaxCount
}
@@ -1367,6 +1414,12 @@ func (mcapp ManagedClusterAgentPoolProfile) MarshalJSON() ([]byte, error) {
if mcapp.EnableEncryptionAtHost != nil {
objectMap["enableEncryptionAtHost"] = mcapp.EnableEncryptionAtHost
}
+ if mcapp.EnableFIPS != nil {
+ objectMap["enableFIPS"] = mcapp.EnableFIPS
+ }
+ if mcapp.GpuInstanceProfile != "" {
+ objectMap["gpuInstanceProfile"] = mcapp.GpuInstanceProfile
+ }
return json.Marshal(objectMap)
}
@@ -1374,13 +1427,13 @@ func (mcapp ManagedClusterAgentPoolProfile) MarshalJSON() ([]byte, error) {
type ManagedClusterAgentPoolProfileProperties struct {
// Count - Number of agents (VMs) to host docker containers. Allowed values must be in the range of 0 to 100 (inclusive) for user pools and in the range of 1 to 100 (inclusive) for system pools. The default value is 1.
Count *int32 `json:"count,omitempty"`
- // VMSize - Size of agent VMs. Possible values include: 'StandardA1', 'StandardA10', 'StandardA11', 'StandardA1V2', 'StandardA2', 'StandardA2V2', 'StandardA2mV2', 'StandardA3', 'StandardA4', 'StandardA4V2', 'StandardA4mV2', 'StandardA5', 'StandardA6', 'StandardA7', 'StandardA8', 'StandardA8V2', 'StandardA8mV2', 'StandardA9', 'StandardB2ms', 'StandardB2s', 'StandardB4ms', 'StandardB8ms', 'StandardD1', 'StandardD11', 'StandardD11V2', 'StandardD11V2Promo', 'StandardD12', 'StandardD12V2', 'StandardD12V2Promo', 'StandardD13', 'StandardD13V2', 'StandardD13V2Promo', 'StandardD14', 'StandardD14V2', 'StandardD14V2Promo', 'StandardD15V2', 'StandardD16V3', 'StandardD16sV3', 'StandardD1V2', 'StandardD2', 'StandardD2V2', 'StandardD2V2Promo', 'StandardD2V3', 'StandardD2sV3', 'StandardD3', 'StandardD32V3', 'StandardD32sV3', 'StandardD3V2', 'StandardD3V2Promo', 'StandardD4', 'StandardD4V2', 'StandardD4V2Promo', 'StandardD4V3', 'StandardD4sV3', 'StandardD5V2', 'StandardD5V2Promo', 'StandardD64V3', 'StandardD64sV3', 'StandardD8V3', 'StandardD8sV3', 'StandardDS1', 'StandardDS11', 'StandardDS11V2', 'StandardDS11V2Promo', 'StandardDS12', 'StandardDS12V2', 'StandardDS12V2Promo', 'StandardDS13', 'StandardDS132V2', 'StandardDS134V2', 'StandardDS13V2', 'StandardDS13V2Promo', 'StandardDS14', 'StandardDS144V2', 'StandardDS148V2', 'StandardDS14V2', 'StandardDS14V2Promo', 'StandardDS15V2', 'StandardDS1V2', 'StandardDS2', 'StandardDS2V2', 'StandardDS2V2Promo', 'StandardDS3', 'StandardDS3V2', 'StandardDS3V2Promo', 'StandardDS4', 'StandardDS4V2', 'StandardDS4V2Promo', 'StandardDS5V2', 'StandardDS5V2Promo', 'StandardE16V3', 'StandardE16sV3', 'StandardE2V3', 'StandardE2sV3', 'StandardE3216sV3', 'StandardE328sV3', 'StandardE32V3', 'StandardE32sV3', 'StandardE4V3', 'StandardE4sV3', 'StandardE6416sV3', 'StandardE6432sV3', 'StandardE64V3', 'StandardE64sV3', 'StandardE8V3', 'StandardE8sV3', 'StandardF1', 'StandardF16', 'StandardF16s', 'StandardF16sV2', 'StandardF1s', 'StandardF2', 'StandardF2s', 'StandardF2sV2', 'StandardF32sV2', 'StandardF4', 'StandardF4s', 'StandardF4sV2', 'StandardF64sV2', 'StandardF72sV2', 'StandardF8', 'StandardF8s', 'StandardF8sV2', 'StandardG1', 'StandardG2', 'StandardG3', 'StandardG4', 'StandardG5', 'StandardGS1', 'StandardGS2', 'StandardGS3', 'StandardGS4', 'StandardGS44', 'StandardGS48', 'StandardGS5', 'StandardGS516', 'StandardGS58', 'StandardH16', 'StandardH16m', 'StandardH16mr', 'StandardH16r', 'StandardH8', 'StandardH8m', 'StandardL16s', 'StandardL32s', 'StandardL4s', 'StandardL8s', 'StandardM12832ms', 'StandardM12864ms', 'StandardM128ms', 'StandardM128s', 'StandardM6416ms', 'StandardM6432ms', 'StandardM64ms', 'StandardM64s', 'StandardNC12', 'StandardNC12sV2', 'StandardNC12sV3', 'StandardNC24', 'StandardNC24r', 'StandardNC24rsV2', 'StandardNC24rsV3', 'StandardNC24sV2', 'StandardNC24sV3', 'StandardNC6', 'StandardNC6sV2', 'StandardNC6sV3', 'StandardND12s', 'StandardND24rs', 'StandardND24s', 'StandardND6s', 'StandardNV12', 'StandardNV24', 'StandardNV6'
- VMSize VMSizeTypes `json:"vmSize,omitempty"`
+ // VMSize - Size of agent VMs.
+ VMSize *string `json:"vmSize,omitempty"`
// OsDiskSizeGB - OS Disk Size in GB to be used to specify the disk size for every machine in this master/agent pool. If you specify 0, it will apply the default osDisk size according to the vmSize specified.
OsDiskSizeGB *int32 `json:"osDiskSizeGB,omitempty"`
- // OsDiskType - OS disk type to be used for machines in a given agent pool. Allowed values are 'Ephemeral' and 'Managed'. Defaults to 'Managed'. May not be changed after creation. Possible values include: 'Managed', 'Ephemeral'
+ // OsDiskType - OS disk type to be used for machines in a given agent pool. Allowed values are 'Ephemeral' and 'Managed'. If unspecified, defaults to 'Ephemeral' when the VM supports ephemeral OS and has a cache disk larger than the requested OSDiskSizeGB. Otherwise, defaults to 'Managed'. May not be changed after creation. Possible values include: 'OSDiskTypeManaged', 'OSDiskTypeEphemeral'
OsDiskType OSDiskType `json:"osDiskType,omitempty"`
- // KubeletDiskType - KubeletDiskType determines the placement of emptyDir volumes, container runtime data root, and Kubelet ephemeral storage. Currently allows one value, OS, resulting in Kubelet using the OS disk for data. Possible values include: 'OS', 'Temporary'
+ // KubeletDiskType - KubeletDiskType determines the placement of emptyDir volumes, container runtime data root, and Kubelet ephemeral storage. Currently allows one value, OS, resulting in Kubelet using the OS disk for data. Possible values include: 'KubeletDiskTypeOS', 'KubeletDiskTypeTemporary'
KubeletDiskType KubeletDiskType `json:"kubeletDiskType,omitempty"`
// VnetSubnetID - VNet SubnetID specifies the VNet's subnet identifier for nodes and maybe pods
VnetSubnetID *string `json:"vnetSubnetID,omitempty"`
@@ -1388,17 +1441,19 @@ type ManagedClusterAgentPoolProfileProperties struct {
PodSubnetID *string `json:"podSubnetID,omitempty"`
// MaxPods - Maximum number of pods that can run on a node.
MaxPods *int32 `json:"maxPods,omitempty"`
- // OsType - OsType to be used to specify os type. Choose from Linux and Windows. Default to Linux. Possible values include: 'Linux', 'Windows'
+ // OsType - OsType to be used to specify os type. Choose from Linux and Windows. Default to Linux. Possible values include: 'OSTypeLinux', 'OSTypeWindows'
OsType OSType `json:"osType,omitempty"`
+ // OsSKU - OsSKU to be used to specify os sku. Choose from Ubuntu(default) and CBLMariner for Linux OSType. Not applicable to Windows OSType. Possible values include: 'OSSKUUbuntu', 'OSSKUCBLMariner'
+ OsSKU OSSKU `json:"osSKU,omitempty"`
// MaxCount - Maximum number of nodes for auto-scaling
MaxCount *int32 `json:"maxCount,omitempty"`
// MinCount - Minimum number of nodes for auto-scaling
MinCount *int32 `json:"minCount,omitempty"`
// EnableAutoScaling - Whether to enable auto-scaler
EnableAutoScaling *bool `json:"enableAutoScaling,omitempty"`
- // Type - AgentPoolType represents types of an agent pool. Possible values include: 'VirtualMachineScaleSets', 'AvailabilitySet'
+ // Type - AgentPoolType represents types of an agent pool. Possible values include: 'AgentPoolTypeVirtualMachineScaleSets', 'AgentPoolTypeAvailabilitySet'
Type AgentPoolType `json:"type,omitempty"`
- // Mode - AgentPoolMode represents mode of an agent pool. Possible values include: 'System', 'User'
+ // Mode - AgentPoolMode represents mode of an agent pool. Possible values include: 'AgentPoolModeSystem', 'AgentPoolModeUser'
Mode AgentPoolMode `json:"mode,omitempty"`
// OrchestratorVersion - Version of orchestrator specified when creating the managed cluster.
OrchestratorVersion *string `json:"orchestratorVersion,omitempty"`
@@ -1416,9 +1471,9 @@ type ManagedClusterAgentPoolProfileProperties struct {
EnableNodePublicIP *bool `json:"enableNodePublicIP,omitempty"`
// NodePublicIPPrefixID - Public IP Prefix ID. VM nodes use IPs assigned from this Public IP Prefix.
NodePublicIPPrefixID *string `json:"nodePublicIPPrefixID,omitempty"`
- // ScaleSetPriority - ScaleSetPriority to be used to specify virtual machine scale set priority. Default to regular. Possible values include: 'Spot', 'Regular'
+ // ScaleSetPriority - ScaleSetPriority to be used to specify virtual machine scale set priority. Default to regular. Possible values include: 'ScaleSetPrioritySpot', 'ScaleSetPriorityRegular'
ScaleSetPriority ScaleSetPriority `json:"scaleSetPriority,omitempty"`
- // ScaleSetEvictionPolicy - ScaleSetEvictionPolicy to be used to specify eviction policy for Spot virtual machine scale set. Default to Delete. Possible values include: 'Delete', 'Deallocate'
+ // ScaleSetEvictionPolicy - ScaleSetEvictionPolicy to be used to specify eviction policy for Spot virtual machine scale set. Default to Delete. Possible values include: 'ScaleSetEvictionPolicyDelete', 'ScaleSetEvictionPolicyDeallocate'
ScaleSetEvictionPolicy ScaleSetEvictionPolicy `json:"scaleSetEvictionPolicy,omitempty"`
// SpotMaxPrice - SpotMaxPrice to be used to specify the maximum price you are willing to pay in US Dollars. Possible values are any decimal value greater than zero or -1 which indicates default price to be up-to on-demand.
SpotMaxPrice *float64 `json:"spotMaxPrice,omitempty"`
@@ -1436,6 +1491,10 @@ type ManagedClusterAgentPoolProfileProperties struct {
LinuxOSConfig *LinuxOSConfig `json:"linuxOSConfig,omitempty"`
// EnableEncryptionAtHost - Whether to enable EncryptionAtHost
EnableEncryptionAtHost *bool `json:"enableEncryptionAtHost,omitempty"`
+ // EnableFIPS - Whether to use FIPS enabled OS
+ EnableFIPS *bool `json:"enableFIPS,omitempty"`
+ // GpuInstanceProfile - GPUInstanceProfile to be used to specify GPU MIG instance profile for supported GPU VM SKU. Supported values are MIG1g, MIG2g, MIG3g, MIG4g and MIG7g. Possible values include: 'GPUInstanceProfileMIG1g', 'GPUInstanceProfileMIG2g', 'GPUInstanceProfileMIG3g', 'GPUInstanceProfileMIG4g', 'GPUInstanceProfileMIG7g'
+ GpuInstanceProfile GPUInstanceProfile `json:"gpuInstanceProfile,omitempty"`
}
// MarshalJSON is the custom marshaler for ManagedClusterAgentPoolProfileProperties.
@@ -1444,7 +1503,7 @@ func (mcappp ManagedClusterAgentPoolProfileProperties) MarshalJSON() ([]byte, er
if mcappp.Count != nil {
objectMap["count"] = mcappp.Count
}
- if mcappp.VMSize != "" {
+ if mcappp.VMSize != nil {
objectMap["vmSize"] = mcappp.VMSize
}
if mcappp.OsDiskSizeGB != nil {
@@ -1468,6 +1527,9 @@ func (mcappp ManagedClusterAgentPoolProfileProperties) MarshalJSON() ([]byte, er
if mcappp.OsType != "" {
objectMap["osType"] = mcappp.OsType
}
+ if mcappp.OsSKU != "" {
+ objectMap["osSKU"] = mcappp.OsSKU
+ }
if mcappp.MaxCount != nil {
objectMap["maxCount"] = mcappp.MaxCount
}
@@ -1528,6 +1590,12 @@ func (mcappp ManagedClusterAgentPoolProfileProperties) MarshalJSON() ([]byte, er
if mcappp.EnableEncryptionAtHost != nil {
objectMap["enableEncryptionAtHost"] = mcappp.EnableEncryptionAtHost
}
+ if mcappp.EnableFIPS != nil {
+ objectMap["enableFIPS"] = mcappp.EnableFIPS
+ }
+ if mcappp.GpuInstanceProfile != "" {
+ objectMap["gpuInstanceProfile"] = mcappp.GpuInstanceProfile
+ }
return json.Marshal(objectMap)
}
@@ -1543,10 +1611,22 @@ type ManagedClusterAPIServerAccessProfile struct {
// ManagedClusterAutoUpgradeProfile auto upgrade profile for a managed cluster.
type ManagedClusterAutoUpgradeProfile struct {
- // UpgradeChannel - upgrade channel for auto upgrade. Possible values include: 'UpgradeChannelRapid', 'UpgradeChannelStable', 'UpgradeChannelPatch', 'UpgradeChannelNone'
+ // UpgradeChannel - upgrade channel for auto upgrade. Possible values include: 'UpgradeChannelRapid', 'UpgradeChannelStable', 'UpgradeChannelPatch', 'UpgradeChannelNodeImage', 'UpgradeChannelNone'
UpgradeChannel UpgradeChannel `json:"upgradeChannel,omitempty"`
}
+// ManagedClusterHTTPProxyConfig configurations for provisioning the cluster with HTTP proxy servers.
+type ManagedClusterHTTPProxyConfig struct {
+ // HTTPProxy - HTTP proxy server endpoint to use.
+ HTTPProxy *string `json:"httpProxy,omitempty"`
+ // HTTPSProxy - HTTPS proxy server endpoint to use.
+ HTTPSProxy *string `json:"httpsProxy,omitempty"`
+ // NoProxy - Endpoints that should not go through proxy.
+ NoProxy *[]string `json:"noProxy,omitempty"`
+ // TrustedCa - Alternative CA cert to use for connecting to proxy servers.
+ TrustedCa *string `json:"trustedCa,omitempty"`
+}
+
// ManagedClusterIdentity identity for the managed cluster.
type ManagedClusterIdentity struct {
// PrincipalID - READ-ONLY; The principal id of the system assigned identity which is used by master components.
@@ -1790,9 +1870,11 @@ type ManagedClusterPodIdentity struct {
Name *string `json:"name,omitempty"`
// Namespace - Namespace of the pod identity.
Namespace *string `json:"namespace,omitempty"`
+ // BindingSelector - Binding selector to use for the AzureIdentityBinding resource.
+ BindingSelector *string `json:"bindingSelector,omitempty"`
// Identity - Information of the user assigned identity.
Identity *UserAssignedIdentity `json:"identity,omitempty"`
- // ProvisioningState - READ-ONLY; The current provisioning state of the pod identity. Possible values include: 'Assigned', 'Updating', 'Deleting', 'Failed'
+ // ProvisioningState - READ-ONLY; The current provisioning state of the pod identity. Possible values include: 'ManagedClusterPodIdentityProvisioningStateAssigned', 'ManagedClusterPodIdentityProvisioningStateUpdating', 'ManagedClusterPodIdentityProvisioningStateDeleting', 'ManagedClusterPodIdentityProvisioningStateFailed'
ProvisioningState ManagedClusterPodIdentityProvisioningState `json:"provisioningState,omitempty"`
// ProvisioningInfo - READ-ONLY
ProvisioningInfo *ManagedClusterPodIdentityProvisioningInfo `json:"provisioningInfo,omitempty"`
@@ -1807,6 +1889,9 @@ func (mcpi ManagedClusterPodIdentity) MarshalJSON() ([]byte, error) {
if mcpi.Namespace != nil {
objectMap["namespace"] = mcpi.Namespace
}
+ if mcpi.BindingSelector != nil {
+ objectMap["bindingSelector"] = mcpi.BindingSelector
+ }
if mcpi.Identity != nil {
objectMap["identity"] = mcpi.Identity
}
@@ -1862,7 +1947,7 @@ type ManagedClusterPoolUpgradeProfile struct {
KubernetesVersion *string `json:"kubernetesVersion,omitempty"`
// Name - Pool name.
Name *string `json:"name,omitempty"`
- // OsType - OsType to be used to specify os type. Choose from Linux and Windows. Default to Linux. Possible values include: 'Linux', 'Windows'
+ // OsType - OsType to be used to specify os type. Choose from Linux and Windows. Default to Linux. Possible values include: 'OSTypeLinux', 'OSTypeWindows'
OsType OSType `json:"osType,omitempty"`
// Upgrades - List of orchestrator types and versions available for upgrade.
Upgrades *[]ManagedClusterPoolUpgradeProfileUpgradesItem `json:"upgrades,omitempty"`
@@ -1928,6 +2013,12 @@ type ManagedClusterProperties struct {
DiskEncryptionSetID *string `json:"diskEncryptionSetID,omitempty"`
// IdentityProfile - Identities associated with the cluster.
IdentityProfile map[string]*ManagedClusterPropertiesIdentityProfileValue `json:"identityProfile"`
+ // PrivateLinkResources - Private link resources associated with the cluster.
+ PrivateLinkResources *[]PrivateLinkResource `json:"privateLinkResources,omitempty"`
+ // DisableLocalAccounts - If set to true, getting static credentials will be disabled for this cluster. Expected to only be used for AAD clusters.
+ DisableLocalAccounts *bool `json:"disableLocalAccounts,omitempty"`
+ // HTTPProxyConfig - Configurations for provisioning the cluster with HTTP proxy servers.
+ HTTPProxyConfig *ManagedClusterHTTPProxyConfig `json:"httpProxyConfig,omitempty"`
}
// MarshalJSON is the custom marshaler for ManagedClusterProperties.
@@ -1990,6 +2081,15 @@ func (mcp ManagedClusterProperties) MarshalJSON() ([]byte, error) {
if mcp.IdentityProfile != nil {
objectMap["identityProfile"] = mcp.IdentityProfile
}
+ if mcp.PrivateLinkResources != nil {
+ objectMap["privateLinkResources"] = mcp.PrivateLinkResources
+ }
+ if mcp.DisableLocalAccounts != nil {
+ objectMap["disableLocalAccounts"] = mcp.DisableLocalAccounts
+ }
+ if mcp.HTTPProxyConfig != nil {
+ objectMap["httpProxyConfig"] = mcp.HTTPProxyConfig
+ }
return json.Marshal(objectMap)
}
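All of the custom marshalers added in this file follow the same nil-guard shape: copy each non-nil pointer field into an `objectMap` and marshal the map, so unset optional fields never appear on the wire. The snippet below is a minimal, self-contained sketch of that pattern; the `ProxyConfig` type and `marshalExample` helper are hypothetical stand-ins, using only the standard library.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ProxyConfig is a hypothetical stand-in for a generated type such as
// ManagedClusterHTTPProxyConfig: optional fields are pointers.
type ProxyConfig struct {
	HTTPProxy *string   `json:"httpProxy,omitempty"`
	NoProxy   *[]string `json:"noProxy,omitempty"`
}

// MarshalJSON mirrors the generated pattern: copy only non-nil fields
// into an object map, so unset fields are omitted from the payload.
func (p ProxyConfig) MarshalJSON() ([]byte, error) {
	objectMap := make(map[string]interface{})
	if p.HTTPProxy != nil {
		objectMap["httpProxy"] = p.HTTPProxy
	}
	if p.NoProxy != nil {
		objectMap["noProxy"] = p.NoProxy
	}
	return json.Marshal(objectMap)
}

// marshalExample marshals a config with only one field set; the unset
// NoProxy field is absent from the resulting JSON.
func marshalExample() string {
	proxy := "http://proxy.internal:3128"
	b, _ := json.Marshal(ProxyConfig{HTTPProxy: &proxy})
	return string(b)
}

func main() {
	fmt.Println(marshalExample())
}
```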
@@ -1997,7 +2097,7 @@ func (mcp ManagedClusterProperties) MarshalJSON() ([]byte, error) {
// enabled
type ManagedClusterPropertiesAutoScalerProfile struct {
BalanceSimilarNodeGroups *string `json:"balance-similar-node-groups,omitempty"`
- // Expander - Possible values include: 'LeastWaste', 'MostPods', 'Priority', 'Random'
+ // Expander - Possible values include: 'ExpanderLeastWaste', 'ExpanderMostPods', 'ExpanderPriority', 'ExpanderRandom'
Expander Expander `json:"expander,omitempty"`
MaxEmptyBulkDelete *string `json:"max-empty-bulk-delete,omitempty"`
MaxGracefulTerminationSec *string `json:"max-graceful-termination-sec,omitempty"`
@@ -2119,7 +2219,7 @@ type ManagedClusterServicePrincipalProfile struct {
type ManagedClusterSKU struct {
// Name - Name of a managed cluster SKU. Possible values include: 'ManagedClusterSKUNameBasic'
Name ManagedClusterSKUName `json:"name,omitempty"`
- // Tier - Tier of a managed cluster SKU. Possible values include: 'Paid', 'Free'
+ // Tier - Tier of a managed cluster SKU. Possible values include: 'ManagedClusterSKUTierPaid', 'ManagedClusterSKUTierFree'
Tier ManagedClusterSKUTier `json:"tier,omitempty"`
}
@@ -2234,6 +2334,49 @@ func (future *ManagedClustersRotateClusterCertificatesFuture) result(client Mana
return
}
+// ManagedClustersRunCommandFuture an abstraction for monitoring and retrieving the results of a
+// long-running operation.
+type ManagedClustersRunCommandFuture struct {
+ azure.FutureAPI
+ // Result returns the result of the asynchronous operation.
+ // If the operation has not completed it will return an error.
+ Result func(ManagedClustersClient) (RunCommandResult, error)
+}
+
+// UnmarshalJSON is the custom unmarshaller for ManagedClustersRunCommandFuture.
+func (future *ManagedClustersRunCommandFuture) UnmarshalJSON(body []byte) error {
+ var azFuture azure.Future
+ if err := json.Unmarshal(body, &azFuture); err != nil {
+ return err
+ }
+ future.FutureAPI = &azFuture
+ future.Result = future.result
+ return nil
+}
+
+// result is the default implementation for ManagedClustersRunCommandFuture.Result.
+func (future *ManagedClustersRunCommandFuture) result(client ManagedClustersClient) (rcr RunCommandResult, err error) {
+ var done bool
+ done, err = future.DoneWithContext(context.Background(), client)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersRunCommandFuture", "Result", future.Response(), "Polling failure")
+ return
+ }
+ if !done {
+ rcr.Response.Response = future.Response()
+ err = azure.NewAsyncOpIncompleteError("containerservice.ManagedClustersRunCommandFuture")
+ return
+ }
+ sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+ if rcr.Response.Response, err = future.GetResult(sender); err == nil && rcr.Response.Response.StatusCode != http.StatusNoContent {
+ rcr, err = client.RunCommandResponder(rcr.Response.Response)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "containerservice.ManagedClustersRunCommandFuture", "Result", rcr.Response.Response, "Failure responding to request")
+ }
+ }
+ return
+}
+
// ManagedClustersStartFuture an abstraction for monitoring and retrieving the results of a long-running
// operation.
type ManagedClustersStartFuture struct {
@@ -2438,8 +2581,10 @@ type ManagedClusterWindowsProfile struct {
AdminUsername *string `json:"adminUsername,omitempty"`
// AdminPassword - Specifies the password of the administrator account.
// **Minimum-length:** 8 characters
// **Max-length:** 123 characters
// **Complexity requirements:** 3 out of 4 conditions below need to be fulfilled
// Has lower characters
// Has upper characters
// Has a digit
// Has a special character (Regex match [\W_])
// **Disallowed values:** "abc@123", "P@$$w0rd", "P@ssw0rd", "P@ssword123", "Pa$$word", "pass@word1", "Password!", "Password1", "Password22", "iloveyou!"
AdminPassword *string `json:"adminPassword,omitempty"`
- // LicenseType - The licenseType to use for Windows VMs. Windows_Server is used to enable Azure Hybrid User Benefits for Windows VMs. Possible values include: 'None', 'WindowsServer'
+ // LicenseType - The licenseType to use for Windows VMs. Windows_Server is used to enable Azure Hybrid User Benefits for Windows VMs. Possible values include: 'LicenseTypeNone', 'LicenseTypeWindowsServer'
LicenseType LicenseType `json:"licenseType,omitempty"`
+ // EnableCSIProxy - Whether to enable CSI proxy.
+ EnableCSIProxy *bool `json:"enableCSIProxy,omitempty"`
}
// MasterProfile profile for the container service master.
@@ -2448,7 +2593,7 @@ type MasterProfile struct {
Count *int32 `json:"count,omitempty"`
// DNSPrefix - DNS prefix to be used to create the FQDN for the master pool.
DNSPrefix *string `json:"dnsPrefix,omitempty"`
- // VMSize - Size of agent VMs. Possible values include: 'StandardA1', 'StandardA10', 'StandardA11', 'StandardA1V2', 'StandardA2', 'StandardA2V2', 'StandardA2mV2', 'StandardA3', 'StandardA4', 'StandardA4V2', 'StandardA4mV2', 'StandardA5', 'StandardA6', 'StandardA7', 'StandardA8', 'StandardA8V2', 'StandardA8mV2', 'StandardA9', 'StandardB2ms', 'StandardB2s', 'StandardB4ms', 'StandardB8ms', 'StandardD1', 'StandardD11', 'StandardD11V2', 'StandardD11V2Promo', 'StandardD12', 'StandardD12V2', 'StandardD12V2Promo', 'StandardD13', 'StandardD13V2', 'StandardD13V2Promo', 'StandardD14', 'StandardD14V2', 'StandardD14V2Promo', 'StandardD15V2', 'StandardD16V3', 'StandardD16sV3', 'StandardD1V2', 'StandardD2', 'StandardD2V2', 'StandardD2V2Promo', 'StandardD2V3', 'StandardD2sV3', 'StandardD3', 'StandardD32V3', 'StandardD32sV3', 'StandardD3V2', 'StandardD3V2Promo', 'StandardD4', 'StandardD4V2', 'StandardD4V2Promo', 'StandardD4V3', 'StandardD4sV3', 'StandardD5V2', 'StandardD5V2Promo', 'StandardD64V3', 'StandardD64sV3', 'StandardD8V3', 'StandardD8sV3', 'StandardDS1', 'StandardDS11', 'StandardDS11V2', 'StandardDS11V2Promo', 'StandardDS12', 'StandardDS12V2', 'StandardDS12V2Promo', 'StandardDS13', 'StandardDS132V2', 'StandardDS134V2', 'StandardDS13V2', 'StandardDS13V2Promo', 'StandardDS14', 'StandardDS144V2', 'StandardDS148V2', 'StandardDS14V2', 'StandardDS14V2Promo', 'StandardDS15V2', 'StandardDS1V2', 'StandardDS2', 'StandardDS2V2', 'StandardDS2V2Promo', 'StandardDS3', 'StandardDS3V2', 'StandardDS3V2Promo', 'StandardDS4', 'StandardDS4V2', 'StandardDS4V2Promo', 'StandardDS5V2', 'StandardDS5V2Promo', 'StandardE16V3', 'StandardE16sV3', 'StandardE2V3', 'StandardE2sV3', 'StandardE3216sV3', 'StandardE328sV3', 'StandardE32V3', 'StandardE32sV3', 'StandardE4V3', 'StandardE4sV3', 'StandardE6416sV3', 'StandardE6432sV3', 'StandardE64V3', 'StandardE64sV3', 'StandardE8V3', 'StandardE8sV3', 'StandardF1', 'StandardF16', 'StandardF16s', 'StandardF16sV2', 'StandardF1s', 'StandardF2', 'StandardF2s', 'StandardF2sV2', 'StandardF32sV2', 'StandardF4', 'StandardF4s', 'StandardF4sV2', 'StandardF64sV2', 'StandardF72sV2', 'StandardF8', 'StandardF8s', 'StandardF8sV2', 'StandardG1', 'StandardG2', 'StandardG3', 'StandardG4', 'StandardG5', 'StandardGS1', 'StandardGS2', 'StandardGS3', 'StandardGS4', 'StandardGS44', 'StandardGS48', 'StandardGS5', 'StandardGS516', 'StandardGS58', 'StandardH16', 'StandardH16m', 'StandardH16mr', 'StandardH16r', 'StandardH8', 'StandardH8m', 'StandardL16s', 'StandardL32s', 'StandardL4s', 'StandardL8s', 'StandardM12832ms', 'StandardM12864ms', 'StandardM128ms', 'StandardM128s', 'StandardM6416ms', 'StandardM6432ms', 'StandardM64ms', 'StandardM64s', 'StandardNC12', 'StandardNC12sV2', 'StandardNC12sV3', 'StandardNC24', 'StandardNC24r', 'StandardNC24rsV2', 'StandardNC24rsV3', 'StandardNC24sV2', 'StandardNC24sV3', 'StandardNC6', 'StandardNC6sV2', 'StandardNC6sV3', 'StandardND12s', 'StandardND24rs', 'StandardND24s', 'StandardND6s', 'StandardNV12', 'StandardNV24', 'StandardNV6'
+ // VMSize - Size of agent VMs. Possible values include: 'VMSizeTypesStandardA1', 'VMSizeTypesStandardA10', 'VMSizeTypesStandardA11', 'VMSizeTypesStandardA1V2', 'VMSizeTypesStandardA2', 'VMSizeTypesStandardA2V2', 'VMSizeTypesStandardA2mV2', 'VMSizeTypesStandardA3', 'VMSizeTypesStandardA4', 'VMSizeTypesStandardA4V2', 'VMSizeTypesStandardA4mV2', 'VMSizeTypesStandardA5', 'VMSizeTypesStandardA6', 'VMSizeTypesStandardA7', 'VMSizeTypesStandardA8', 'VMSizeTypesStandardA8V2', 'VMSizeTypesStandardA8mV2', 'VMSizeTypesStandardA9', 'VMSizeTypesStandardB2ms', 'VMSizeTypesStandardB2s', 'VMSizeTypesStandardB4ms', 'VMSizeTypesStandardB8ms', 'VMSizeTypesStandardD1', 'VMSizeTypesStandardD11', 'VMSizeTypesStandardD11V2', 'VMSizeTypesStandardD11V2Promo', 'VMSizeTypesStandardD12', 'VMSizeTypesStandardD12V2', 'VMSizeTypesStandardD12V2Promo', 'VMSizeTypesStandardD13', 'VMSizeTypesStandardD13V2', 'VMSizeTypesStandardD13V2Promo', 'VMSizeTypesStandardD14', 'VMSizeTypesStandardD14V2', 'VMSizeTypesStandardD14V2Promo', 'VMSizeTypesStandardD15V2', 'VMSizeTypesStandardD16V3', 'VMSizeTypesStandardD16sV3', 'VMSizeTypesStandardD1V2', 'VMSizeTypesStandardD2', 'VMSizeTypesStandardD2V2', 'VMSizeTypesStandardD2V2Promo', 'VMSizeTypesStandardD2V3', 'VMSizeTypesStandardD2sV3', 'VMSizeTypesStandardD3', 'VMSizeTypesStandardD32V3', 'VMSizeTypesStandardD32sV3', 'VMSizeTypesStandardD3V2', 'VMSizeTypesStandardD3V2Promo', 'VMSizeTypesStandardD4', 'VMSizeTypesStandardD4V2', 'VMSizeTypesStandardD4V2Promo', 'VMSizeTypesStandardD4V3', 'VMSizeTypesStandardD4sV3', 'VMSizeTypesStandardD5V2', 'VMSizeTypesStandardD5V2Promo', 'VMSizeTypesStandardD64V3', 'VMSizeTypesStandardD64sV3', 'VMSizeTypesStandardD8V3', 'VMSizeTypesStandardD8sV3', 'VMSizeTypesStandardDS1', 'VMSizeTypesStandardDS11', 'VMSizeTypesStandardDS11V2', 'VMSizeTypesStandardDS11V2Promo', 'VMSizeTypesStandardDS12', 'VMSizeTypesStandardDS12V2', 'VMSizeTypesStandardDS12V2Promo', 'VMSizeTypesStandardDS13', 'VMSizeTypesStandardDS132V2', 'VMSizeTypesStandardDS134V2', 'VMSizeTypesStandardDS13V2', 'VMSizeTypesStandardDS13V2Promo', 'VMSizeTypesStandardDS14', 'VMSizeTypesStandardDS144V2', 'VMSizeTypesStandardDS148V2', 'VMSizeTypesStandardDS14V2', 'VMSizeTypesStandardDS14V2Promo', 'VMSizeTypesStandardDS15V2', 'VMSizeTypesStandardDS1V2', 'VMSizeTypesStandardDS2', 'VMSizeTypesStandardDS2V2', 'VMSizeTypesStandardDS2V2Promo', 'VMSizeTypesStandardDS3', 'VMSizeTypesStandardDS3V2', 'VMSizeTypesStandardDS3V2Promo', 'VMSizeTypesStandardDS4', 'VMSizeTypesStandardDS4V2', 'VMSizeTypesStandardDS4V2Promo', 'VMSizeTypesStandardDS5V2', 'VMSizeTypesStandardDS5V2Promo', 'VMSizeTypesStandardE16V3', 'VMSizeTypesStandardE16sV3', 'VMSizeTypesStandardE2V3', 'VMSizeTypesStandardE2sV3', 'VMSizeTypesStandardE3216sV3', 'VMSizeTypesStandardE328sV3', 'VMSizeTypesStandardE32V3', 'VMSizeTypesStandardE32sV3', 'VMSizeTypesStandardE4V3', 'VMSizeTypesStandardE4sV3', 'VMSizeTypesStandardE6416sV3', 'VMSizeTypesStandardE6432sV3', 'VMSizeTypesStandardE64V3', 'VMSizeTypesStandardE64sV3', 'VMSizeTypesStandardE8V3', 'VMSizeTypesStandardE8sV3', 'VMSizeTypesStandardF1', 'VMSizeTypesStandardF16', 'VMSizeTypesStandardF16s', 'VMSizeTypesStandardF16sV2', 'VMSizeTypesStandardF1s', 'VMSizeTypesStandardF2', 'VMSizeTypesStandardF2s', 'VMSizeTypesStandardF2sV2', 'VMSizeTypesStandardF32sV2', 'VMSizeTypesStandardF4', 'VMSizeTypesStandardF4s', 'VMSizeTypesStandardF4sV2', 'VMSizeTypesStandardF64sV2', 'VMSizeTypesStandardF72sV2', 'VMSizeTypesStandardF8', 'VMSizeTypesStandardF8s', 'VMSizeTypesStandardF8sV2', 'VMSizeTypesStandardG1', 'VMSizeTypesStandardG2', 'VMSizeTypesStandardG3', 'VMSizeTypesStandardG4', 'VMSizeTypesStandardG5', 'VMSizeTypesStandardGS1', 'VMSizeTypesStandardGS2', 'VMSizeTypesStandardGS3', 'VMSizeTypesStandardGS4', 'VMSizeTypesStandardGS44', 'VMSizeTypesStandardGS48', 'VMSizeTypesStandardGS5', 'VMSizeTypesStandardGS516', 'VMSizeTypesStandardGS58', 'VMSizeTypesStandardH16', 'VMSizeTypesStandardH16m', 'VMSizeTypesStandardH16mr', 'VMSizeTypesStandardH16r', 'VMSizeTypesStandardH8', 'VMSizeTypesStandardH8m', 'VMSizeTypesStandardL16s', 'VMSizeTypesStandardL32s', 'VMSizeTypesStandardL4s', 'VMSizeTypesStandardL8s', 'VMSizeTypesStandardM12832ms', 'VMSizeTypesStandardM12864ms', 'VMSizeTypesStandardM128ms', 'VMSizeTypesStandardM128s', 'VMSizeTypesStandardM6416ms', 'VMSizeTypesStandardM6432ms', 'VMSizeTypesStandardM64ms', 'VMSizeTypesStandardM64s', 'VMSizeTypesStandardNC12', 'VMSizeTypesStandardNC12sV2', 'VMSizeTypesStandardNC12sV3', 'VMSizeTypesStandardNC24', 'VMSizeTypesStandardNC24r', 'VMSizeTypesStandardNC24rsV2', 'VMSizeTypesStandardNC24rsV3', 'VMSizeTypesStandardNC24sV2', 'VMSizeTypesStandardNC24sV3', 'VMSizeTypesStandardNC6', 'VMSizeTypesStandardNC6sV2', 'VMSizeTypesStandardNC6sV3', 'VMSizeTypesStandardND12s', 'VMSizeTypesStandardND24rs', 'VMSizeTypesStandardND24s', 'VMSizeTypesStandardND6s', 'VMSizeTypesStandardNV12', 'VMSizeTypesStandardNV24', 'VMSizeTypesStandardNV6'
VMSize VMSizeTypes `json:"vmSize,omitempty"`
// OsDiskSizeGB - OS Disk Size in GB to be used to specify the disk size for every machine in this master/agent pool. If you specify 0, it will apply the default osDisk size according to the vmSize specified.
OsDiskSizeGB *int32 `json:"osDiskSizeGB,omitempty"`
@@ -2456,7 +2601,7 @@ type MasterProfile struct {
VnetSubnetID *string `json:"vnetSubnetID,omitempty"`
// FirstConsecutiveStaticIP - FirstConsecutiveStaticIP used to specify the first static ip of masters.
FirstConsecutiveStaticIP *string `json:"firstConsecutiveStaticIP,omitempty"`
- // StorageProfile - Storage profile specifies what kind of storage used. Choose from StorageAccount and ManagedDisks. Leave it empty, we will choose for you based on the orchestrator choice. Possible values include: 'StorageAccount', 'ManagedDisks'
+ // StorageProfile - Storage profile specifies what kind of storage used. Choose from StorageAccount and ManagedDisks. Leave it empty, we will choose for you based on the orchestrator choice. Possible values include: 'StorageProfileTypesStorageAccount', 'StorageProfileTypesManagedDisks'
StorageProfile StorageProfileTypes `json:"storageProfile,omitempty"`
// Fqdn - READ-ONLY; FQDN for the master pool.
Fqdn *string `json:"fqdn,omitempty"`
@@ -2491,11 +2636,11 @@ func (mp MasterProfile) MarshalJSON() ([]byte, error) {
// NetworkProfile profile of network configuration.
type NetworkProfile struct {
- // NetworkPlugin - Network plugin used for building Kubernetes network. Possible values include: 'Azure', 'Kubenet'
+ // NetworkPlugin - Network plugin used for building Kubernetes network. Possible values include: 'NetworkPluginAzure', 'NetworkPluginKubenet'
NetworkPlugin NetworkPlugin `json:"networkPlugin,omitempty"`
// NetworkPolicy - Network policy used for building Kubernetes network. Possible values include: 'NetworkPolicyCalico', 'NetworkPolicyAzure'
NetworkPolicy NetworkPolicy `json:"networkPolicy,omitempty"`
- // NetworkMode - Network mode used for building Kubernetes network. Possible values include: 'Transparent', 'Bridge'
+ // NetworkMode - Network mode used for building Kubernetes network. Possible values include: 'NetworkModeTransparent', 'NetworkModeBridge'
NetworkMode NetworkMode `json:"networkMode,omitempty"`
// PodCidr - A CIDR notation IP range from which to assign pod IPs when kubenet is used.
PodCidr *string `json:"podCidr,omitempty"`
@@ -2505,9 +2650,9 @@ type NetworkProfile struct {
DNSServiceIP *string `json:"dnsServiceIP,omitempty"`
// DockerBridgeCidr - A CIDR notation IP range assigned to the Docker bridge network. It must not overlap with any Subnet IP ranges or the Kubernetes service address range.
DockerBridgeCidr *string `json:"dockerBridgeCidr,omitempty"`
- // OutboundType - The outbound (egress) routing method. Possible values include: 'LoadBalancer', 'UserDefinedRouting'
+ // OutboundType - The outbound (egress) routing method. Possible values include: 'OutboundTypeLoadBalancer', 'OutboundTypeUserDefinedRouting'
OutboundType OutboundType `json:"outboundType,omitempty"`
- // LoadBalancerSku - The load balancer sku for the managed cluster. Possible values include: 'Standard', 'Basic'
+ // LoadBalancerSku - The load balancer sku for the managed cluster. Possible values include: 'LoadBalancerSkuStandard', 'LoadBalancerSkuBasic'
LoadBalancerSku LoadBalancerSku `json:"loadBalancerSku,omitempty"`
// LoadBalancerProfile - Profile of the cluster load balancer.
LoadBalancerProfile *ManagedClusterLoadBalancerProfile `json:"loadBalancerProfile,omitempty"`
@@ -2593,9 +2738,96 @@ type OperationValueDisplay struct {
Provider *string `json:"provider,omitempty"`
}
+// OSOptionProfile the OS option profile.
+type OSOptionProfile struct {
+ autorest.Response `json:"-"`
+ // ID - READ-ONLY; Id of the OS option profile.
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; Name of the OS option profile.
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; Type of the OS option profile.
+ Type *string `json:"type,omitempty"`
+ // OSOptionPropertyList - The list of OS option properties.
+ *OSOptionPropertyList `json:"properties,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for OSOptionProfile.
+func (oop OSOptionProfile) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if oop.OSOptionPropertyList != nil {
+ objectMap["properties"] = oop.OSOptionPropertyList
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for OSOptionProfile struct.
+func (oop *OSOptionProfile) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ oop.ID = &ID
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ oop.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar string
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ oop.Type = &typeVar
+ }
+ case "properties":
+ if v != nil {
+ var oSOptionPropertyList OSOptionPropertyList
+ err = json.Unmarshal(*v, &oSOptionPropertyList)
+ if err != nil {
+ return err
+ }
+ oop.OSOptionPropertyList = &oSOptionPropertyList
+ }
+ }
+ }
+
+ return nil
+}
+
+// OSOptionProperty OS option property.
+type OSOptionProperty struct {
+ // OsType - OS type.
+ OsType *string `json:"os-type,omitempty"`
+ // EnableFipsImage - Whether FIPS image is enabled.
+ EnableFipsImage *bool `json:"enable-fips-image,omitempty"`
+}
+
+// OSOptionPropertyList the list of OS option properties.
+type OSOptionPropertyList struct {
+ // OsOptionPropertyList - The list of OS option properties.
+ OsOptionPropertyList *[]OSOptionProperty `json:"osOptionPropertyList,omitempty"`
+}
+
// PowerState describes the Power State of the cluster
type PowerState struct {
- // Code - Tells whether the cluster is Running or Stopped. Possible values include: 'Running', 'Stopped'
+ // Code - Tells whether the cluster is Running or Stopped. Possible values include: 'CodeRunning', 'CodeStopped'
Code Code `json:"code,omitempty"`
}
@@ -2791,7 +3023,7 @@ type PrivateLinkResourcesListResult struct {
// PrivateLinkServiceConnectionState the state of a private link service connection.
type PrivateLinkServiceConnectionState struct {
- // Status - The private link service connection status. Possible values include: 'Pending', 'Approved', 'Rejected', 'Disconnected'
+ // Status - The private link service connection status. Possible values include: 'ConnectionStatusPending', 'ConnectionStatusApproved', 'ConnectionStatusRejected', 'ConnectionStatusDisconnected'
Status ConnectionStatus `json:"status,omitempty"`
// Description - The private link service connection description.
Description *string `json:"description,omitempty"`
@@ -2829,6 +3061,67 @@ type ResourceReference struct {
ID *string `json:"id,omitempty"`
}
+// RunCommandRequest run command request
+type RunCommandRequest struct {
+ // Command - command to run.
+ Command *string `json:"command,omitempty"`
+ // Context - Base64-encoded zip file containing the files required by the command.
+ Context *string `json:"context,omitempty"`
+ // ClusterToken - AuthToken issued for AKS AAD Server App.
+ ClusterToken *string `json:"clusterToken,omitempty"`
+}
+
+// RunCommandResult run command result.
+type RunCommandResult struct {
+ autorest.Response `json:"-"`
+ // ID - READ-ONLY; command id.
+ ID *string `json:"id,omitempty"`
+ // CommandResultProperties - Properties of command result.
+ *CommandResultProperties `json:"properties,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for RunCommandResult.
+func (rcr RunCommandResult) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if rcr.CommandResultProperties != nil {
+ objectMap["properties"] = rcr.CommandResultProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for RunCommandResult struct.
+func (rcr *RunCommandResult) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "id":
+ if v != nil {
+ var ID string
+ err = json.Unmarshal(*v, &ID)
+ if err != nil {
+ return err
+ }
+ rcr.ID = &ID
+ }
+ case "properties":
+ if v != nil {
+ var commandResultProperties CommandResultProperties
+ err = json.Unmarshal(*v, &commandResultProperties)
+ if err != nil {
+ return err
+ }
+ rcr.CommandResultProperties = &commandResultProperties
+ }
+ }
+ }
+
+ return nil
+}
+
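The custom unmarshalers above (for `RunCommandResult` and `OSOptionProfile`) share one pattern: the embedded `*Properties` pointer is flattened under a `"properties"` key on the wire, so the struct decodes into a `map[string]*json.RawMessage` first and dispatches per key. A minimal self-contained sketch of that pattern, assuming hypothetical `Result`/`Props` types and a `decodeExample` helper (standard library only):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Props is a hypothetical stand-in for CommandResultProperties.
type Props struct {
	ExitCode *int    `json:"exitCode,omitempty"`
	Logs     *string `json:"logs,omitempty"`
}

// Result mirrors the shape of RunCommandResult: the embedded Props
// pointer is flattened under a "properties" key in the JSON payload.
type Result struct {
	ID     *string `json:"id,omitempty"`
	*Props `json:"properties,omitempty"`
}

// UnmarshalJSON applies the generated pattern: decode into a map of
// raw messages, then unmarshal each known key into its target field.
func (r *Result) UnmarshalJSON(body []byte) error {
	var m map[string]*json.RawMessage
	if err := json.Unmarshal(body, &m); err != nil {
		return err
	}
	for k, v := range m {
		if v == nil {
			continue
		}
		switch k {
		case "id":
			var id string
			if err := json.Unmarshal(*v, &id); err != nil {
				return err
			}
			r.ID = &id
		case "properties":
			var p Props
			if err := json.Unmarshal(*v, &p); err != nil {
				return err
			}
			r.Props = &p
		}
	}
	return nil
}

// decodeExample decodes a payload with a nested "properties" object;
// the nested fields are then reachable via field promotion.
func decodeExample() (string, int) {
	var r Result
	_ = json.Unmarshal([]byte(`{"id":"cmd-1","properties":{"exitCode":0,"logs":"done"}}`), &r)
	return *r.ID, *r.ExitCode
}

func main() {
	id, code := decodeExample()
	fmt.Println(id, code)
}
```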
// SSHConfiguration SSH configuration for Linux-based VMs running on Azure.
type SSHConfiguration struct {
// PublicKeys - The list of SSH public keys used to authenticate with Linux-based VMs. Only one key is expected.
@@ -2944,7 +3237,7 @@ func (toVar TagsObject) MarshalJSON() ([]byte, error) {
// TimeInWeek time in a week.
type TimeInWeek struct {
- // Day - A day in a week. Possible values include: 'Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'
+ // Day - A day in a week. Possible values include: 'WeekDaySunday', 'WeekDayMonday', 'WeekDayTuesday', 'WeekDayWednesday', 'WeekDayThursday', 'WeekDayFriday', 'WeekDaySaturday'
Day WeekDay `json:"day,omitempty"`
// HourSlots - hour slots in a day.
HourSlots *[]int32 `json:"hourSlots,omitempty"`
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/operations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/operations.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/operations.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/operations.go
index 5b6185710851f..ac065c9a0db25 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/operations.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/operations.go
@@ -66,7 +66,7 @@ func (client OperationsClient) List(ctx context.Context) (result OperationListRe
// ListPreparer prepares the List request.
func (client OperationsClient) ListPreparer(ctx context.Context) (*http.Request, error) {
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/privateendpointconnections.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/privateendpointconnections.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/privateendpointconnections.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/privateendpointconnections.go
index fa6f82f50152a..99f5919636aca 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/privateendpointconnections.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/privateendpointconnections.go
@@ -82,7 +82,7 @@ func (client PrivateEndpointConnectionsClient) DeletePreparer(ctx context.Contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -178,7 +178,7 @@ func (client PrivateEndpointConnectionsClient) GetPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -265,7 +265,7 @@ func (client PrivateEndpointConnectionsClient) ListPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -357,7 +357,7 @@ func (client PrivateEndpointConnectionsClient) UpdatePreparer(ctx context.Contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/privatelinkresources.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/privatelinkresources.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/privatelinkresources.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/privatelinkresources.go
index 706d71084c830..75034c77181ed 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/privatelinkresources.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/privatelinkresources.go
@@ -88,7 +88,7 @@ func (client PrivateLinkResourcesClient) ListPreparer(ctx context.Context, resou
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/resolveprivatelinkserviceid.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/resolveprivatelinkserviceid.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/resolveprivatelinkserviceid.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/resolveprivatelinkserviceid.go
index c00ae4fb28776..77bc97e441a3b 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/resolveprivatelinkserviceid.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/resolveprivatelinkserviceid.go
@@ -88,7 +88,7 @@ func (client ResolvePrivateLinkServiceIDClient) POSTPreparer(ctx context.Context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2021-02-01"
+ const APIVersion = "2021-03-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/version.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/version.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/version.go
index 2ff78e8b01725..644c452e97e61 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/version.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-03-01/containerservice/version.go
@@ -10,7 +10,7 @@ import "github.com/Azure/azure-sdk-for-go/version"
// UserAgent returns the UserAgent string to use when sending http.Requests.
func UserAgent() string {
- return "Azure-SDK-For-Go/" + Version() + " containerservice/2021-02-01"
+ return "Azure-SDK-For-Go/" + Version() + " containerservice/2021-03-01"
}
// Version returns the semantic version (see http://semver.org) of the client.
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/enums.go
deleted file mode 100644
index 677f73fb7d981..0000000000000
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/enums.go
+++ /dev/null
@@ -1,22 +0,0 @@
-package maps
-
-// Copyright (c) Microsoft Corporation. All rights reserved.
-// Licensed under the MIT License. See License.txt in the project root for license information.
-//
-// Code generated by Microsoft (R) AutoRest Code Generator.
-// Changes may cause incorrect behavior and will be lost if the code is regenerated.
-
-// KeyType enumerates the values for key type.
-type KeyType string
-
-const (
- // Primary ...
- Primary KeyType = "primary"
- // Secondary ...
- Secondary KeyType = "secondary"
-)
-
-// PossibleKeyTypeValues returns an array of possible values for the KeyType const type.
-func PossibleKeyTypeValues() []KeyType {
- return []KeyType{Primary, Secondary}
-}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/models.go
deleted file mode 100644
index 408b7b885d9dd..0000000000000
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/models.go
+++ /dev/null
@@ -1,211 +0,0 @@
-package maps
-
-// Copyright (c) Microsoft Corporation. All rights reserved.
-// Licensed under the MIT License. See License.txt in the project root for license information.
-//
-// Code generated by Microsoft (R) AutoRest Code Generator.
-// Changes may cause incorrect behavior and will be lost if the code is regenerated.
-
-import (
- "encoding/json"
- "github.com/Azure/go-autorest/autorest"
-)
-
-// The package's fully qualified name.
-const fqdn = "github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps"
-
-// Account an Azure resource which represents access to a suite of Maps REST APIs.
-type Account struct {
- autorest.Response `json:"-"`
- // Location - READ-ONLY; The location of the resource.
- Location *string `json:"location,omitempty"`
- // Tags - READ-ONLY; Gets a list of key value pairs that describe the resource. These tags can be used in viewing and grouping this resource (across resource groups). A maximum of 15 tags can be provided for a resource. Each tag must have a key no greater than 128 characters and value no greater than 256 characters.
- Tags map[string]*string `json:"tags"`
- // Sku - READ-ONLY; The SKU of this account.
- Sku *Sku `json:"sku,omitempty"`
- // Properties - READ-ONLY; The map account properties.
- Properties *AccountProperties `json:"properties,omitempty"`
- // ID - READ-ONLY; The fully qualified Maps Account resource identifier.
- ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; The name of the Maps Account, which is unique within a Resource Group.
- Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Azure resource type.
- Type *string `json:"type,omitempty"`
-}
-
-// MarshalJSON is the custom marshaler for Account.
-func (a Account) MarshalJSON() ([]byte, error) {
- objectMap := make(map[string]interface{})
- return json.Marshal(objectMap)
-}
-
-// AccountCreateParameters parameters used to create a new Maps Account.
-type AccountCreateParameters struct {
- // Location - The location of the resource.
- Location *string `json:"location,omitempty"`
- // Tags - Gets or sets a list of key value pairs that describe the resource. These tags can be used in viewing and grouping this resource (across resource groups). A maximum of 15 tags can be provided for a resource. Each tag must have a key no greater than 128 characters and value no greater than 256 characters.
- Tags map[string]*string `json:"tags"`
- // Sku - The SKU of this account.
- Sku *Sku `json:"sku,omitempty"`
-}
-
-// MarshalJSON is the custom marshaler for AccountCreateParameters.
-func (acp AccountCreateParameters) MarshalJSON() ([]byte, error) {
- objectMap := make(map[string]interface{})
- if acp.Location != nil {
- objectMap["location"] = acp.Location
- }
- if acp.Tags != nil {
- objectMap["tags"] = acp.Tags
- }
- if acp.Sku != nil {
- objectMap["sku"] = acp.Sku
- }
- return json.Marshal(objectMap)
-}
-
-// AccountKeys the set of keys which can be used to access the Maps REST APIs. Two keys are provided for
-// key rotation without interruption.
-type AccountKeys struct {
- autorest.Response `json:"-"`
- // ID - READ-ONLY; The full Azure resource identifier of the Maps Account.
- ID *string `json:"id,omitempty"`
- // PrimaryKey - READ-ONLY; The primary key for accessing the Maps REST APIs.
- PrimaryKey *string `json:"primaryKey,omitempty"`
- // SecondaryKey - READ-ONLY; The secondary key for accessing the Maps REST APIs.
- SecondaryKey *string `json:"secondaryKey,omitempty"`
-}
-
-// AccountProperties additional Map account properties
-type AccountProperties struct {
- // XMsClientID - A unique identifier for the maps account
- XMsClientID *string `json:"x-ms-client-id,omitempty"`
-}
-
-// Accounts a list of Maps Accounts.
-type Accounts struct {
- autorest.Response `json:"-"`
- // Value - READ-ONLY; a Maps Account.
- Value *[]Account `json:"value,omitempty"`
-}
-
-// AccountsMoveRequest the description of what resources to move between resource groups.
-type AccountsMoveRequest struct {
- // TargetResourceGroup - The name of the destination resource group.
- TargetResourceGroup *string `json:"targetResourceGroup,omitempty"`
- // ResourceIds - A list of resource names to move from the source resource group.
- ResourceIds *[]string `json:"resourceIds,omitempty"`
-}
-
-// AccountUpdateParameters parameters used to update an existing Maps Account.
-type AccountUpdateParameters struct {
- // Tags - Gets or sets a list of key value pairs that describe the resource. These tags can be used in viewing and grouping this resource (across resource groups). A maximum of 15 tags can be provided for a resource. Each tag must have a key no greater than 128 characters and value no greater than 256 characters.
- Tags map[string]*string `json:"tags"`
- // Sku - The SKU of this account.
- Sku *Sku `json:"sku,omitempty"`
-}
-
-// MarshalJSON is the custom marshaler for AccountUpdateParameters.
-func (aup AccountUpdateParameters) MarshalJSON() ([]byte, error) {
- objectMap := make(map[string]interface{})
- if aup.Tags != nil {
- objectMap["tags"] = aup.Tags
- }
- if aup.Sku != nil {
- objectMap["sku"] = aup.Sku
- }
- return json.Marshal(objectMap)
-}
-
-// Error this object is returned when an error occurs in the Maps API
-type Error struct {
- // Code - READ-ONLY; Error code.
- Code *string `json:"code,omitempty"`
- // Message - READ-ONLY; If available, a human readable description of the error.
- Message *string `json:"message,omitempty"`
- // Target - READ-ONLY; If available, the component generating the error.
- Target *string `json:"target,omitempty"`
- // Details - READ-ONLY; If available, a list of additional details about the error.
- Details *[]ErrorDetailsItem `json:"details,omitempty"`
-}
-
-// ErrorDetailsItem ...
-type ErrorDetailsItem struct {
- // Code - READ-ONLY; Error code.
- Code *string `json:"code,omitempty"`
- // Message - READ-ONLY; If available, a human readable description of the error.
- Message *string `json:"message,omitempty"`
- // Target - READ-ONLY; If available, the component generating the error.
- Target *string `json:"target,omitempty"`
-}
-
-// KeySpecification whether the operation refers to the primary or secondary key.
-type KeySpecification struct {
- // KeyType - Whether the operation refers to the primary or secondary key. Possible values include: 'Primary', 'Secondary'
- KeyType KeyType `json:"keyType,omitempty"`
-}
-
-// Operations the set of operations available for Maps.
-type Operations struct {
- autorest.Response `json:"-"`
- // Value - READ-ONLY; An operation available for Maps.
- Value *[]OperationsValueItem `json:"value,omitempty"`
-}
-
-// OperationsValueItem ...
-type OperationsValueItem struct {
- // Name - READ-ONLY; Operation name: {provider}/{resource}/{operation}.
- Name *string `json:"name,omitempty"`
- // Display - The human-readable description of the operation.
- Display *OperationsValueItemDisplay `json:"display,omitempty"`
- // Origin - READ-ONLY; The origin of the operation.
- Origin *string `json:"origin,omitempty"`
-}
-
-// MarshalJSON is the custom marshaler for OperationsValueItem.
-func (oI OperationsValueItem) MarshalJSON() ([]byte, error) {
- objectMap := make(map[string]interface{})
- if oI.Display != nil {
- objectMap["display"] = oI.Display
- }
- return json.Marshal(objectMap)
-}
-
-// OperationsValueItemDisplay the human-readable description of the operation.
-type OperationsValueItemDisplay struct {
- // Provider - READ-ONLY; Service provider: Microsoft Maps.
- Provider *string `json:"provider,omitempty"`
- // Resource - READ-ONLY; Resource on which the operation is performed.
- Resource *string `json:"resource,omitempty"`
- // Operation - READ-ONLY; The action that users can perform, based on their permission level.
- Operation *string `json:"operation,omitempty"`
- // Description - READ-ONLY; The description of the operation.
- Description *string `json:"description,omitempty"`
-}
-
-// Resource an Azure resource
-type Resource struct {
- // ID - READ-ONLY; The fully qualified Maps Account resource identifier.
- ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; The name of the Maps Account, which is unique within a Resource Group.
- Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Azure resource type.
- Type *string `json:"type,omitempty"`
-}
-
-// Sku the SKU of the Maps Account.
-type Sku struct {
- // Name - The name of the SKU, in standard format (such as S0).
- Name *string `json:"name,omitempty"`
- // Tier - READ-ONLY; Gets the sku tier. This is based on the SKU name.
- Tier *string `json:"tier,omitempty"`
-}
-
-// MarshalJSON is the custom marshaler for Sku.
-func (s Sku) MarshalJSON() ([]byte, error) {
- objectMap := make(map[string]interface{})
- if s.Name != nil {
- objectMap["name"] = s.Name
- }
- return json.Marshal(objectMap)
-}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/CHANGELOG.md b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/CHANGELOG.md
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/CHANGELOG.md
rename to vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/CHANGELOG.md
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/_meta.json b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/_meta.json
similarity index 58%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/_meta.json
rename to vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/_meta.json
index 5e8297aa606e4..423197245e857 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2021-02-01/containerservice/_meta.json
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/_meta.json
@@ -1,11 +1,11 @@
{
- "commit": "80e4e1b77162711ca1123042f50db03ffbf1bb40",
- "readme": "/_/azure-rest-api-specs/specification/containerservice/resource-manager/readme.md",
+ "commit": "c2ea3a3ccd14293b4bd1d17e684ef9129f0dc604",
+ "readme": "/_/azure-rest-api-specs/specification/maps/resource-manager/readme.md",
"tag": "package-2021-02",
"use": "@microsoft.azure/autorest.go@2.1.180",
"repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
- "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2021-02 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/containerservice/resource-manager/readme.md",
+ "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2021-02 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION --enum-prefix /_/azure-rest-api-specs/specification/maps/resource-manager/readme.md",
"additional_properties": {
- "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+ "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION --enum-prefix"
}
}
\ No newline at end of file
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/accounts.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/accounts.go
similarity index 71%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/accounts.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/accounts.go
index 0d1429b27a27d..ddf78f1bf0273 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/accounts.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/accounts.go
@@ -15,7 +15,7 @@ import (
"net/http"
)
-// AccountsClient is the resource Provider
+// AccountsClient is the azure Maps
type AccountsClient struct {
BaseClient
}
@@ -34,10 +34,10 @@ func NewAccountsClientWithBaseURI(baseURI string, subscriptionID string) Account
// CreateOrUpdate create or update a Maps Account. A Maps Account holds the keys which allow access to the Maps REST
// APIs.
// Parameters:
-// resourceGroupName - the name of the Azure Resource Group.
+// resourceGroupName - the name of the resource group. The name is case insensitive.
// accountName - the name of the Maps Account.
-// mapsAccountCreateParameters - the new or updated parameters for the Maps Account.
-func (client AccountsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, accountName string, mapsAccountCreateParameters AccountCreateParameters) (result Account, err error) {
+// mapsAccount - the new or updated parameters for the Maps Account.
+func (client AccountsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, accountName string, mapsAccount Account) (result Account, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.CreateOrUpdate")
defer func() {
@@ -49,14 +49,18 @@ func (client AccountsClient) CreateOrUpdate(ctx context.Context, resourceGroupNa
}()
}
if err := validation.Validate([]validation.Validation{
- {TargetValue: mapsAccountCreateParameters,
- Constraints: []validation.Constraint{{Target: "mapsAccountCreateParameters.Location", Name: validation.Null, Rule: true, Chain: nil},
- {Target: "mapsAccountCreateParameters.Sku", Name: validation.Null, Rule: true,
- Chain: []validation.Constraint{{Target: "mapsAccountCreateParameters.Sku.Name", Name: validation.Null, Rule: true, Chain: nil}}}}}}); err != nil {
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}},
+ {TargetValue: mapsAccount,
+ Constraints: []validation.Constraint{{Target: "mapsAccount.Sku", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil {
return result, validation.NewError("maps.AccountsClient", "CreateOrUpdate", err.Error())
}
- req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, accountName, mapsAccountCreateParameters)
+ req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, accountName, mapsAccount)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "CreateOrUpdate", nil, "Failure preparing request")
return
@@ -79,24 +83,25 @@ func (client AccountsClient) CreateOrUpdate(ctx context.Context, resourceGroupNa
}
// CreateOrUpdatePreparer prepares the CreateOrUpdate request.
-func (client AccountsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, accountName string, mapsAccountCreateParameters AccountCreateParameters) (*http.Request, error) {
+func (client AccountsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, accountName string, mapsAccount Account) (*http.Request, error) {
pathParameters := map[string]interface{}{
"accountName": autorest.Encode("path", accountName),
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2018-05-01"
+ const APIVersion = "2021-02-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
+ mapsAccount.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
autorest.WithBaseURL(client.BaseURI),
autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Maps/accounts/{accountName}", pathParameters),
- autorest.WithJSON(mapsAccountCreateParameters),
+ autorest.WithJSON(mapsAccount),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
@@ -121,7 +126,7 @@ func (client AccountsClient) CreateOrUpdateResponder(resp *http.Response) (resul
// Delete delete a Maps Account.
// Parameters:
-// resourceGroupName - the name of the Azure Resource Group.
+// resourceGroupName - the name of the resource group. The name is case insensitive.
// accountName - the name of the Maps Account.
func (client AccountsClient) Delete(ctx context.Context, resourceGroupName string, accountName string) (result autorest.Response, err error) {
if tracing.IsEnabled() {
@@ -134,6 +139,16 @@ func (client AccountsClient) Delete(ctx context.Context, resourceGroupName strin
tracing.EndSpan(ctx, sc, err)
}()
}
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.AccountsClient", "Delete", err.Error())
+ }
+
req, err := client.DeletePreparer(ctx, resourceGroupName, accountName)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "Delete", nil, "Failure preparing request")
@@ -164,7 +179,7 @@ func (client AccountsClient) DeletePreparer(ctx context.Context, resourceGroupNa
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2018-05-01"
+ const APIVersion = "2021-02-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -196,7 +211,7 @@ func (client AccountsClient) DeleteResponder(resp *http.Response) (result autore
// Get get a Maps Account.
// Parameters:
-// resourceGroupName - the name of the Azure Resource Group.
+// resourceGroupName - the name of the resource group. The name is case insensitive.
// accountName - the name of the Maps Account.
func (client AccountsClient) Get(ctx context.Context, resourceGroupName string, accountName string) (result Account, err error) {
if tracing.IsEnabled() {
@@ -209,6 +224,16 @@ func (client AccountsClient) Get(ctx context.Context, resourceGroupName string,
tracing.EndSpan(ctx, sc, err)
}()
}
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.AccountsClient", "Get", err.Error())
+ }
+
req, err := client.GetPreparer(ctx, resourceGroupName, accountName)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "Get", nil, "Failure preparing request")
@@ -239,7 +264,7 @@ func (client AccountsClient) GetPreparer(ctx context.Context, resourceGroupName
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2018-05-01"
+ const APIVersion = "2021-02-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -263,7 +288,7 @@ func (client AccountsClient) GetSender(req *http.Request) (*http.Response, error
func (client AccountsClient) GetResponder(resp *http.Response) (result Account, err error) {
err = autorest.Respond(
resp,
- azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNotFound),
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
@@ -272,18 +297,29 @@ func (client AccountsClient) GetResponder(resp *http.Response) (result Account,
// ListByResourceGroup get all Maps Accounts in a Resource Group
// Parameters:
-// resourceGroupName - the name of the Azure Resource Group.
-func (client AccountsClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result Accounts, err error) {
+// resourceGroupName - the name of the resource group. The name is case insensitive.
+func (client AccountsClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result AccountsPage, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListByResourceGroup")
defer func() {
sc := -1
- if result.Response.Response != nil {
- sc = result.Response.Response.StatusCode
+ if result.a.Response.Response != nil {
+ sc = result.a.Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.AccountsClient", "ListByResourceGroup", err.Error())
+ }
+
+ result.fn = client.listByResourceGroupNextResults
req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListByResourceGroup", nil, "Failure preparing request")
@@ -292,16 +328,20 @@ func (client AccountsClient) ListByResourceGroup(ctx context.Context, resourceGr
resp, err := client.ListByResourceGroupSender(req)
if err != nil {
- result.Response = autorest.Response{Response: resp}
+ result.a.Response = autorest.Response{Response: resp}
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListByResourceGroup", resp, "Failure sending request")
return
}
- result, err = client.ListByResourceGroupResponder(resp)
+ result.a, err = client.ListByResourceGroupResponder(resp)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListByResourceGroup", resp, "Failure responding to request")
return
}
+ if result.a.hasNextLink() && result.a.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
return
}
@@ -313,7 +353,7 @@ func (client AccountsClient) ListByResourceGroupPreparer(ctx context.Context, re
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2018-05-01"
+ const APIVersion = "2021-02-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -344,18 +384,62 @@ func (client AccountsClient) ListByResourceGroupResponder(resp *http.Response) (
return
}
+// listByResourceGroupNextResults retrieves the next set of results, if any.
+func (client AccountsClient) listByResourceGroupNextResults(ctx context.Context, lastResults Accounts) (result Accounts, err error) {
+ req, err := lastResults.accountsPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "maps.AccountsClient", "listByResourceGroupNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListByResourceGroupSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "maps.AccountsClient", "listByResourceGroupNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListByResourceGroupResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.AccountsClient", "listByResourceGroupNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required.
+func (client AccountsClient) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string) (result AccountsIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListByResourceGroup")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.ListByResourceGroup(ctx, resourceGroupName)
+ return
+}
+
// ListBySubscription get all Maps Accounts in a Subscription
-func (client AccountsClient) ListBySubscription(ctx context.Context) (result Accounts, err error) {
+func (client AccountsClient) ListBySubscription(ctx context.Context) (result AccountsPage, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListBySubscription")
defer func() {
sc := -1
- if result.Response.Response != nil {
- sc = result.Response.Response.StatusCode
+ if result.a.Response.Response != nil {
+ sc = result.a.Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.AccountsClient", "ListBySubscription", err.Error())
+ }
+
+ result.fn = client.listBySubscriptionNextResults
req, err := client.ListBySubscriptionPreparer(ctx)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListBySubscription", nil, "Failure preparing request")
@@ -364,16 +448,20 @@ func (client AccountsClient) ListBySubscription(ctx context.Context) (result Acc
resp, err := client.ListBySubscriptionSender(req)
if err != nil {
- result.Response = autorest.Response{Response: resp}
+ result.a.Response = autorest.Response{Response: resp}
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListBySubscription", resp, "Failure sending request")
return
}
- result, err = client.ListBySubscriptionResponder(resp)
+ result.a, err = client.ListBySubscriptionResponder(resp)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListBySubscription", resp, "Failure responding to request")
return
}
+ if result.a.hasNextLink() && result.a.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
return
}
@@ -384,7 +472,7 @@ func (client AccountsClient) ListBySubscriptionPreparer(ctx context.Context) (*h
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2018-05-01"
+ const APIVersion = "2021-02-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -415,237 +503,134 @@ func (client AccountsClient) ListBySubscriptionResponder(resp *http.Response) (r
return
}
-// ListKeys get the keys to use with the Maps APIs. A key is used to authenticate and authorize access to the Maps REST
-// APIs. Only one key is needed at a time; two are given to provide seamless key regeneration.
-// Parameters:
-// resourceGroupName - the name of the Azure Resource Group.
-// accountName - the name of the Maps Account.
-func (client AccountsClient) ListKeys(ctx context.Context, resourceGroupName string, accountName string) (result AccountKeys, err error) {
- if tracing.IsEnabled() {
- ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListKeys")
- defer func() {
- sc := -1
- if result.Response.Response != nil {
- sc = result.Response.Response.StatusCode
- }
- tracing.EndSpan(ctx, sc, err)
- }()
- }
- req, err := client.ListKeysPreparer(ctx, resourceGroupName, accountName)
+// listBySubscriptionNextResults retrieves the next set of results, if any.
+func (client AccountsClient) listBySubscriptionNextResults(ctx context.Context, lastResults Accounts) (result Accounts, err error) {
+ req, err := lastResults.accountsPreparer(ctx)
if err != nil {
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListKeys", nil, "Failure preparing request")
+ return result, autorest.NewErrorWithError(err, "maps.AccountsClient", "listBySubscriptionNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
return
}
-
- resp, err := client.ListKeysSender(req)
+ resp, err := client.ListBySubscriptionSender(req)
if err != nil {
result.Response = autorest.Response{Response: resp}
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListKeys", resp, "Failure sending request")
- return
+ return result, autorest.NewErrorWithError(err, "maps.AccountsClient", "listBySubscriptionNextResults", resp, "Failure sending next results request")
}
-
- result, err = client.ListKeysResponder(resp)
+ result, err = client.ListBySubscriptionResponder(resp)
if err != nil {
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListKeys", resp, "Failure responding to request")
- return
+ err = autorest.NewErrorWithError(err, "maps.AccountsClient", "listBySubscriptionNextResults", resp, "Failure responding to next results request")
}
-
return
}
-// ListKeysPreparer prepares the ListKeys request.
-func (client AccountsClient) ListKeysPreparer(ctx context.Context, resourceGroupName string, accountName string) (*http.Request, error) {
- pathParameters := map[string]interface{}{
- "accountName": autorest.Encode("path", accountName),
- "resourceGroupName": autorest.Encode("path", resourceGroupName),
- "subscriptionId": autorest.Encode("path", client.SubscriptionID),
- }
-
- const APIVersion = "2018-05-01"
- queryParameters := map[string]interface{}{
- "api-version": APIVersion,
- }
-
- preparer := autorest.CreatePreparer(
- autorest.AsPost(),
- autorest.WithBaseURL(client.BaseURI),
- autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Maps/accounts/{accountName}/listKeys", pathParameters),
- autorest.WithQueryParameters(queryParameters))
- return preparer.Prepare((&http.Request{}).WithContext(ctx))
-}
-
-// ListKeysSender sends the ListKeys request. The method will close the
-// http.Response Body if it receives an error.
-func (client AccountsClient) ListKeysSender(req *http.Request) (*http.Response, error) {
- return client.Send(req, azure.DoRetryWithRegistration(client.Client))
-}
-
-// ListKeysResponder handles the response to the ListKeys request. The method always
-// closes the http.Response Body.
-func (client AccountsClient) ListKeysResponder(resp *http.Response) (result AccountKeys, err error) {
- err = autorest.Respond(
- resp,
- azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNotFound),
- autorest.ByUnmarshallingJSON(&result),
- autorest.ByClosing())
- result.Response = autorest.Response{Response: resp}
- return
-}
-
-// ListOperations list operations available for the Maps Resource Provider
-func (client AccountsClient) ListOperations(ctx context.Context) (result Operations, err error) {
+// ListBySubscriptionComplete enumerates all values, automatically crossing page boundaries as required.
+func (client AccountsClient) ListBySubscriptionComplete(ctx context.Context) (result AccountsIterator, err error) {
if tracing.IsEnabled() {
- ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListOperations")
+ ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListBySubscription")
defer func() {
sc := -1
- if result.Response.Response != nil {
- sc = result.Response.Response.StatusCode
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
- req, err := client.ListOperationsPreparer(ctx)
- if err != nil {
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListOperations", nil, "Failure preparing request")
- return
- }
-
- resp, err := client.ListOperationsSender(req)
- if err != nil {
- result.Response = autorest.Response{Response: resp}
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListOperations", resp, "Failure sending request")
- return
- }
-
- result, err = client.ListOperationsResponder(resp)
- if err != nil {
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListOperations", resp, "Failure responding to request")
- return
- }
-
- return
-}
-
-// ListOperationsPreparer prepares the ListOperations request.
-func (client AccountsClient) ListOperationsPreparer(ctx context.Context) (*http.Request, error) {
- const APIVersion = "2018-05-01"
- queryParameters := map[string]interface{}{
- "api-version": APIVersion,
- }
-
- preparer := autorest.CreatePreparer(
- autorest.AsGet(),
- autorest.WithBaseURL(client.BaseURI),
- autorest.WithPath("/providers/Microsoft.Maps/operations"),
- autorest.WithQueryParameters(queryParameters))
- return preparer.Prepare((&http.Request{}).WithContext(ctx))
-}
-
-// ListOperationsSender sends the ListOperations request. The method will close the
-// http.Response Body if it receives an error.
-func (client AccountsClient) ListOperationsSender(req *http.Request) (*http.Response, error) {
- return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
-}
-
-// ListOperationsResponder handles the response to the ListOperations request. The method always
-// closes the http.Response Body.
-func (client AccountsClient) ListOperationsResponder(resp *http.Response) (result Operations, err error) {
- err = autorest.Respond(
- resp,
- azure.WithErrorUnlessStatusCode(http.StatusOK),
- autorest.ByUnmarshallingJSON(&result),
- autorest.ByClosing())
- result.Response = autorest.Response{Response: resp}
+ result.page, err = client.ListBySubscription(ctx)
return
}
-// Move moves Maps Accounts from one ResourceGroup (or Subscription) to another
+// ListKeys get the keys to use with the Maps APIs. A key is used to authenticate and authorize access to the Maps REST
+// APIs. Only one key is needed at a time; two are given to provide seamless key regeneration.
// Parameters:
-// resourceGroupName - the name of the resource group that contains Maps Account to move.
-// moveRequest - the details of the Maps Account move.
-func (client AccountsClient) Move(ctx context.Context, resourceGroupName string, moveRequest AccountsMoveRequest) (result autorest.Response, err error) {
+// resourceGroupName - the name of the resource group. The name is case insensitive.
+// accountName - the name of the Maps Account.
+func (client AccountsClient) ListKeys(ctx context.Context, resourceGroupName string, accountName string) (result AccountKeys, err error) {
if tracing.IsEnabled() {
- ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.Move")
+ ctx = tracing.StartSpan(ctx, fqdn+"/AccountsClient.ListKeys")
defer func() {
sc := -1
- if result.Response != nil {
- sc = result.Response.StatusCode
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
}
if err := validation.Validate([]validation.Validation{
- {TargetValue: moveRequest,
- Constraints: []validation.Constraint{{Target: "moveRequest.TargetResourceGroup", Name: validation.Null, Rule: true, Chain: nil},
- {Target: "moveRequest.ResourceIds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil {
- return result, validation.NewError("maps.AccountsClient", "Move", err.Error())
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.AccountsClient", "ListKeys", err.Error())
}
- req, err := client.MovePreparer(ctx, resourceGroupName, moveRequest)
+ req, err := client.ListKeysPreparer(ctx, resourceGroupName, accountName)
if err != nil {
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "Move", nil, "Failure preparing request")
+ err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListKeys", nil, "Failure preparing request")
return
}
- resp, err := client.MoveSender(req)
+ resp, err := client.ListKeysSender(req)
if err != nil {
- result.Response = resp
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "Move", resp, "Failure sending request")
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListKeys", resp, "Failure sending request")
return
}
- result, err = client.MoveResponder(resp)
+ result, err = client.ListKeysResponder(resp)
if err != nil {
- err = autorest.NewErrorWithError(err, "maps.AccountsClient", "Move", resp, "Failure responding to request")
+ err = autorest.NewErrorWithError(err, "maps.AccountsClient", "ListKeys", resp, "Failure responding to request")
return
}
return
}
-// MovePreparer prepares the Move request.
-func (client AccountsClient) MovePreparer(ctx context.Context, resourceGroupName string, moveRequest AccountsMoveRequest) (*http.Request, error) {
+// ListKeysPreparer prepares the ListKeys request.
+func (client AccountsClient) ListKeysPreparer(ctx context.Context, resourceGroupName string, accountName string) (*http.Request, error) {
pathParameters := map[string]interface{}{
+ "accountName": autorest.Encode("path", accountName),
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2018-05-01"
+ const APIVersion = "2021-02-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
preparer := autorest.CreatePreparer(
- autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPost(),
autorest.WithBaseURL(client.BaseURI),
- autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/moveResources", pathParameters),
- autorest.WithJSON(moveRequest),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Maps/accounts/{accountName}/listKeys", pathParameters),
autorest.WithQueryParameters(queryParameters))
return preparer.Prepare((&http.Request{}).WithContext(ctx))
}
-// MoveSender sends the Move request. The method will close the
+// ListKeysSender sends the ListKeys request. The method will close the
// http.Response Body if it receives an error.
-func (client AccountsClient) MoveSender(req *http.Request) (*http.Response, error) {
+func (client AccountsClient) ListKeysSender(req *http.Request) (*http.Response, error) {
return client.Send(req, azure.DoRetryWithRegistration(client.Client))
}
-// MoveResponder handles the response to the Move request. The method always
+// ListKeysResponder handles the response to the ListKeys request. The method always
// closes the http.Response Body.
-func (client AccountsClient) MoveResponder(resp *http.Response) (result autorest.Response, err error) {
+func (client AccountsClient) ListKeysResponder(resp *http.Response) (result AccountKeys, err error) {
err = autorest.Respond(
resp,
azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
- result.Response = resp
+ result.Response = autorest.Response{Response: resp}
return
}
// RegenerateKeys regenerate either the primary or secondary key for use with the Maps APIs. The old key will stop
// working immediately.
// Parameters:
-// resourceGroupName - the name of the Azure Resource Group.
+// resourceGroupName - the name of the resource group. The name is case insensitive.
// accountName - the name of the Maps Account.
// keySpecification - which key to regenerate: primary or secondary.
func (client AccountsClient) RegenerateKeys(ctx context.Context, resourceGroupName string, accountName string, keySpecification KeySpecification) (result AccountKeys, err error) {
@@ -659,6 +644,16 @@ func (client AccountsClient) RegenerateKeys(ctx context.Context, resourceGroupNa
tracing.EndSpan(ctx, sc, err)
}()
}
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.AccountsClient", "RegenerateKeys", err.Error())
+ }
+
req, err := client.RegenerateKeysPreparer(ctx, resourceGroupName, accountName, keySpecification)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "RegenerateKeys", nil, "Failure preparing request")
@@ -689,7 +684,7 @@ func (client AccountsClient) RegenerateKeysPreparer(ctx context.Context, resourc
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2018-05-01"
+ const APIVersion = "2021-02-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -715,16 +710,17 @@ func (client AccountsClient) RegenerateKeysSender(req *http.Request) (*http.Resp
func (client AccountsClient) RegenerateKeysResponder(resp *http.Response) (result AccountKeys, err error) {
err = autorest.Respond(
resp,
- azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNotFound),
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
return
}
-// Update updates a Maps Account. Only a subset of the parameters may be updated after creation, such as Sku and Tags.
+// Update updates a Maps Account. Only a subset of the parameters may be updated after creation, such as Sku, Tags,
+// Properties.
// Parameters:
-// resourceGroupName - the name of the Azure Resource Group.
+// resourceGroupName - the name of the resource group. The name is case insensitive.
// accountName - the name of the Maps Account.
// mapsAccountUpdateParameters - the updated parameters for the Maps Account.
func (client AccountsClient) Update(ctx context.Context, resourceGroupName string, accountName string, mapsAccountUpdateParameters AccountUpdateParameters) (result Account, err error) {
@@ -738,6 +734,16 @@ func (client AccountsClient) Update(ctx context.Context, resourceGroupName strin
tracing.EndSpan(ctx, sc, err)
}()
}
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.AccountsClient", "Update", err.Error())
+ }
+
req, err := client.UpdatePreparer(ctx, resourceGroupName, accountName, mapsAccountUpdateParameters)
if err != nil {
err = autorest.NewErrorWithError(err, "maps.AccountsClient", "Update", nil, "Failure preparing request")
@@ -768,7 +774,7 @@ func (client AccountsClient) UpdatePreparer(ctx context.Context, resourceGroupNa
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2018-05-01"
+ const APIVersion = "2021-02-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -794,7 +800,7 @@ func (client AccountsClient) UpdateSender(req *http.Request) (*http.Response, er
func (client AccountsClient) UpdateResponder(resp *http.Response) (result Account, err error) {
err = autorest.Respond(
resp,
- azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNotFound),
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
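A reviewer's note on the new `validation.Validate` blocks this upgrade adds to `ListKeys`, `RegenerateKeys`, and `Update`: the 2021-02-01 SDK now rejects malformed inputs client-side before any request is sent. The sketch below mirrors those constraints (length 1..90 and pattern `^[-\w\._\(\)]+$`, both taken from the diff) in plain Go; the helper name is illustrative, not an SDK function.

```go
package main

import (
	"fmt"
	"regexp"
)

// rgPattern is the resource-group-name pattern the generated clients
// now validate against before issuing a request.
var rgPattern = regexp.MustCompile(`^[-\w\._\(\)]+$`)

// validateResourceGroupName mirrors the client-side constraints added in
// the 2021-02-01 SDK: length between 1 and 90, and only letters, digits,
// underscores, hyphens, periods, and parentheses.
func validateResourceGroupName(name string) error {
	if len(name) < 1 || len(name) > 90 {
		return fmt.Errorf("resourceGroupName must be 1-90 characters, got %d", len(name))
	}
	if !rgPattern.MatchString(name) {
		return fmt.Errorf("resourceGroupName %q does not match %s", name, rgPattern)
	}
	return nil
}

func main() {
	fmt.Println(validateResourceGroupName("my-rg_1.test(prod)") == nil) // true
	fmt.Println(validateResourceGroupName("bad name!") != nil)          // true
}
```

In practice this means provider code calling these methods with an invalid resource group name will now get a `maps.AccountsClient#ListKeys`-style validation error locally instead of an HTTP error from ARM.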
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/client.go
similarity index 97%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/client.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/client.go
index 4ced11fd948b9..c452debd4eceb 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/client.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/client.go
@@ -1,6 +1,6 @@
-// Package maps implements the Azure ARM Maps service API version 2018-05-01.
+// Package maps implements the Azure ARM Maps service API version 2021-02-01.
//
-// Resource Provider
+// Azure Maps
package maps
// Copyright (c) Microsoft Corporation. All rights reserved.
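The new `creators.go` below follows autorest's standard page/iterator pattern: `ListByAccount` returns the first page, `listByAccountNextResults` fetches subsequent pages via the next link, and `ListByAccountComplete` wraps the page in an iterator that crosses page boundaries automatically. A toy sketch of that control flow in plain Go (the `page` and `complete` names are illustrative, not SDK types):

```go
package main

import "fmt"

// page is a stand-in for a generated *Page type: the current values plus
// a fetch function for the next page. ok == false plays the role of an
// empty nextLink, ending iteration.
type page struct {
	values []string
	next   func() (page, bool)
}

// complete mirrors the generated *Complete helpers: walk every page,
// crossing page boundaries as required, and collect all values.
func complete(p page) []string {
	var all []string
	for {
		all = append(all, p.values...)
		np, ok := p.next()
		if !ok {
			return all
		}
		p = np
	}
}

func main() {
	last := page{values: []string{"c"}, next: func() (page, bool) { return page{}, false }}
	first := page{values: []string{"a", "b"}, next: func() (page, bool) { return last, true }}
	fmt.Println(complete(first)) // [a b c]
}
```

This is why callers that previously consumed a flat `CreatorList` should prefer the `Complete` variant after this upgrade: it hides the paging that the 2021-02-01 API introduces.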
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/creators.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/creators.go
new file mode 100644
index 0000000000000..77a7d495dd263
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/creators.go
@@ -0,0 +1,526 @@
+package maps
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/autorest/validation"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// CreatorsClient is the Azure Maps service client for Creator resources.
+type CreatorsClient struct {
+ BaseClient
+}
+
+// NewCreatorsClient creates an instance of the CreatorsClient client.
+func NewCreatorsClient(subscriptionID string) CreatorsClient {
+ return NewCreatorsClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewCreatorsClientWithBaseURI creates an instance of the CreatorsClient client using a custom endpoint. Use this
+// when interacting with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewCreatorsClientWithBaseURI(baseURI string, subscriptionID string) CreatorsClient {
+ return CreatorsClient{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// CreateOrUpdate create or update a Maps Creator resource. Creator resource will manage Azure resources required to
+// populate a custom set of mapping data. It requires an account to exist before it can be created.
+// Parameters:
+// resourceGroupName - the name of the resource group. The name is case insensitive.
+// accountName - the name of the Maps Account.
+// creatorName - the name of the Maps Creator instance.
+// creatorResource - the new or updated parameters for the Creator resource.
+func (client CreatorsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, accountName string, creatorName string, creatorResource Creator) (result Creator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreatorsClient.CreateOrUpdate")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}},
+ {TargetValue: creatorResource,
+ Constraints: []validation.Constraint{{Target: "creatorResource.Properties", Name: validation.Null, Rule: true,
+ Chain: []validation.Constraint{{Target: "creatorResource.Properties.StorageUnits", Name: validation.Null, Rule: true,
+ Chain: []validation.Constraint{{Target: "creatorResource.Properties.StorageUnits", Name: validation.InclusiveMaximum, Rule: int64(100), Chain: nil},
+ {Target: "creatorResource.Properties.StorageUnits", Name: validation.InclusiveMinimum, Rule: int64(1), Chain: nil},
+ }},
+ }}}}}); err != nil {
+ return result, validation.NewError("maps.CreatorsClient", "CreateOrUpdate", err.Error())
+ }
+
+ req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, accountName, creatorName, creatorResource)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "CreateOrUpdate", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.CreateOrUpdateSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "CreateOrUpdate", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.CreateOrUpdateResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "CreateOrUpdate", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// CreateOrUpdatePreparer prepares the CreateOrUpdate request.
+func (client CreatorsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, accountName string, creatorName string, creatorResource Creator) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "accountName": autorest.Encode("path", accountName),
+ "creatorName": autorest.Encode("path", creatorName),
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2021-02-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsContentType("application/json; charset=utf-8"),
+ autorest.AsPut(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Maps/accounts/{accountName}/creators/{creatorName}", pathParameters),
+ autorest.WithJSON(creatorResource),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the
+// http.Response Body if it receives an error.
+func (client CreatorsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always
+// closes the http.Response Body.
+func (client CreatorsClient) CreateOrUpdateResponder(resp *http.Response) (result Creator, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// Delete delete a Maps Creator resource.
+// Parameters:
+// resourceGroupName - the name of the resource group. The name is case insensitive.
+// accountName - the name of the Maps Account.
+// creatorName - the name of the Maps Creator instance.
+func (client CreatorsClient) Delete(ctx context.Context, resourceGroupName string, accountName string, creatorName string) (result autorest.Response, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreatorsClient.Delete")
+ defer func() {
+ sc := -1
+ if result.Response != nil {
+ sc = result.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.CreatorsClient", "Delete", err.Error())
+ }
+
+ req, err := client.DeletePreparer(ctx, resourceGroupName, accountName, creatorName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Delete", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.DeleteSender(req)
+ if err != nil {
+ result.Response = resp
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Delete", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.DeleteResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Delete", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// DeletePreparer prepares the Delete request.
+func (client CreatorsClient) DeletePreparer(ctx context.Context, resourceGroupName string, accountName string, creatorName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "accountName": autorest.Encode("path", accountName),
+ "creatorName": autorest.Encode("path", creatorName),
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2021-02-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsDelete(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Maps/accounts/{accountName}/creators/{creatorName}", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// DeleteSender sends the Delete request. The method will close the
+// http.Response Body if it receives an error.
+func (client CreatorsClient) DeleteSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// DeleteResponder handles the response to the Delete request. The method always
+// closes the http.Response Body.
+func (client CreatorsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent),
+ autorest.ByClosing())
+ result.Response = resp
+ return
+}
+
+// Get get a Maps Creator resource.
+// Parameters:
+// resourceGroupName - the name of the resource group. The name is case insensitive.
+// accountName - the name of the Maps Account.
+// creatorName - the name of the Maps Creator instance.
+func (client CreatorsClient) Get(ctx context.Context, resourceGroupName string, accountName string, creatorName string) (result Creator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreatorsClient.Get")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.CreatorsClient", "Get", err.Error())
+ }
+
+ req, err := client.GetPreparer(ctx, resourceGroupName, accountName, creatorName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Get", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Get", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Get", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetPreparer prepares the Get request.
+func (client CreatorsClient) GetPreparer(ctx context.Context, resourceGroupName string, accountName string, creatorName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "accountName": autorest.Encode("path", accountName),
+ "creatorName": autorest.Encode("path", creatorName),
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2021-02-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Maps/accounts/{accountName}/creators/{creatorName}", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetSender sends the Get request. The method will close the
+// http.Response Body if it receives an error.
+func (client CreatorsClient) GetSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// GetResponder handles the response to the Get request. The method always
+// closes the http.Response Body.
+func (client CreatorsClient) GetResponder(resp *http.Response) (result Creator, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// ListByAccount get all Creator instances for an Azure Maps Account
+// Parameters:
+// resourceGroupName - the name of the resource group. The name is case insensitive.
+// accountName - the name of the Maps Account.
+func (client CreatorsClient) ListByAccount(ctx context.Context, resourceGroupName string, accountName string) (result CreatorListPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreatorsClient.ListByAccount")
+ defer func() {
+ sc := -1
+ if result.cl.Response.Response != nil {
+ sc = result.cl.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.CreatorsClient", "ListByAccount", err.Error())
+ }
+
+ result.fn = client.listByAccountNextResults
+ req, err := client.ListByAccountPreparer(ctx, resourceGroupName, accountName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "ListByAccount", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListByAccountSender(req)
+ if err != nil {
+ result.cl.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "ListByAccount", resp, "Failure sending request")
+ return
+ }
+
+ result.cl, err = client.ListByAccountResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "ListByAccount", resp, "Failure responding to request")
+ return
+ }
+ if result.cl.hasNextLink() && result.cl.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListByAccountPreparer prepares the ListByAccount request.
+func (client CreatorsClient) ListByAccountPreparer(ctx context.Context, resourceGroupName string, accountName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "accountName": autorest.Encode("path", accountName),
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2021-02-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Maps/accounts/{accountName}/creators", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListByAccountSender sends the ListByAccount request. The method will close the
+// http.Response Body if it receives an error.
+func (client CreatorsClient) ListByAccountSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// ListByAccountResponder handles the response to the ListByAccount request. The method always
+// closes the http.Response Body.
+func (client CreatorsClient) ListByAccountResponder(resp *http.Response) (result CreatorList, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listByAccountNextResults retrieves the next set of results, if any.
+func (client CreatorsClient) listByAccountNextResults(ctx context.Context, lastResults CreatorList) (result CreatorList, err error) {
+ req, err := lastResults.creatorListPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "maps.CreatorsClient", "listByAccountNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListByAccountSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "maps.CreatorsClient", "listByAccountNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListByAccountResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "listByAccountNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListByAccountComplete enumerates all values, automatically crossing page boundaries as required.
+func (client CreatorsClient) ListByAccountComplete(ctx context.Context, resourceGroupName string, accountName string) (result CreatorListIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreatorsClient.ListByAccount")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.ListByAccount(ctx, resourceGroupName, accountName)
+ return
+}
+
+// Update updates the Maps Creator resource. Only a subset of the parameters may be updated after creation, such as
+// Tags.
+// Parameters:
+// resourceGroupName - the name of the resource group. The name is case insensitive.
+// accountName - the name of the Maps Account.
+// creatorName - the name of the Maps Creator instance.
+// creatorUpdateParameters - the update parameters for Maps Creator.
+func (client CreatorsClient) Update(ctx context.Context, resourceGroupName string, accountName string, creatorName string, creatorUpdateParameters CreatorUpdateParameters) (result Creator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreatorsClient.Update")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ if err := validation.Validate([]validation.Validation{
+ {TargetValue: client.SubscriptionID,
+ Constraints: []validation.Constraint{{Target: "client.SubscriptionID", Name: validation.MinLength, Rule: 1, Chain: nil}}},
+ {TargetValue: resourceGroupName,
+ Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil},
+ {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil {
+ return result, validation.NewError("maps.CreatorsClient", "Update", err.Error())
+ }
+
+ req, err := client.UpdatePreparer(ctx, resourceGroupName, accountName, creatorName, creatorUpdateParameters)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Update", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.UpdateSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Update", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.UpdateResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.CreatorsClient", "Update", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// UpdatePreparer prepares the Update request.
+func (client CreatorsClient) UpdatePreparer(ctx context.Context, resourceGroupName string, accountName string, creatorName string, creatorUpdateParameters CreatorUpdateParameters) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "accountName": autorest.Encode("path", accountName),
+ "creatorName": autorest.Encode("path", creatorName),
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2021-02-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsContentType("application/json; charset=utf-8"),
+ autorest.AsPatch(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Maps/accounts/{accountName}/creators/{creatorName}", pathParameters),
+ autorest.WithJSON(creatorUpdateParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// UpdateSender sends the Update request. The method will close the
+// http.Response Body if it receives an error.
+func (client CreatorsClient) UpdateSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// UpdateResponder handles the response to the Update request. The method always
+// closes the http.Response Body.
+func (client CreatorsClient) UpdateResponder(resp *http.Response) (result Creator, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/enums.go
new file mode 100644
index 0000000000000..d1a4911414e4a
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/enums.go
@@ -0,0 +1,73 @@
+package maps
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+// CreatedByType enumerates the values for created by type.
+type CreatedByType string
+
+const (
+ // CreatedByTypeApplication ...
+ CreatedByTypeApplication CreatedByType = "Application"
+ // CreatedByTypeKey ...
+ CreatedByTypeKey CreatedByType = "Key"
+ // CreatedByTypeManagedIdentity ...
+ CreatedByTypeManagedIdentity CreatedByType = "ManagedIdentity"
+ // CreatedByTypeUser ...
+ CreatedByTypeUser CreatedByType = "User"
+)
+
+// PossibleCreatedByTypeValues returns an array of possible values for the CreatedByType const type.
+func PossibleCreatedByTypeValues() []CreatedByType {
+ return []CreatedByType{CreatedByTypeApplication, CreatedByTypeKey, CreatedByTypeManagedIdentity, CreatedByTypeUser}
+}
+
+// KeyType enumerates the values for key type.
+type KeyType string
+
+const (
+ // KeyTypePrimary ...
+ KeyTypePrimary KeyType = "primary"
+ // KeyTypeSecondary ...
+ KeyTypeSecondary KeyType = "secondary"
+)
+
+// PossibleKeyTypeValues returns an array of possible values for the KeyType const type.
+func PossibleKeyTypeValues() []KeyType {
+ return []KeyType{KeyTypePrimary, KeyTypeSecondary}
+}
+
+// Kind enumerates the values for kind.
+type Kind string
+
+const (
+ // KindGen1 ...
+ KindGen1 Kind = "Gen1"
+ // KindGen2 ...
+ KindGen2 Kind = "Gen2"
+)
+
+// PossibleKindValues returns an array of possible values for the Kind const type.
+func PossibleKindValues() []Kind {
+ return []Kind{KindGen1, KindGen2}
+}
+
+// Name enumerates the values for name.
+type Name string
+
+const (
+ // NameG2 ...
+ NameG2 Name = "G2"
+ // NameS0 ...
+ NameS0 Name = "S0"
+ // NameS1 ...
+ NameS1 Name = "S1"
+)
+
+// PossibleNameValues returns an array of possible values for the Name const type.
+func PossibleNameValues() []Name {
+ return []Name{NameG2, NameS0, NameS1}
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/maps.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/maps.go
new file mode 100644
index 0000000000000..b21469c5c4831
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/maps.go
@@ -0,0 +1,140 @@
+package maps
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/azure"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// Client is the Azure Maps service client.
+type Client struct {
+ BaseClient
+}
+
+// NewClient creates an instance of the Client client.
+func NewClient(subscriptionID string) Client {
+ return NewClientWithBaseURI(DefaultBaseURI, subscriptionID)
+}
+
+// NewClientWithBaseURI creates an instance of the Client client using a custom endpoint. Use this when interacting
+// with an Azure cloud that uses a non-standard base URI (sovereign clouds, Azure stack).
+func NewClientWithBaseURI(baseURI string, subscriptionID string) Client {
+ return Client{NewWithBaseURI(baseURI, subscriptionID)}
+}
+
+// ListOperations lists the operations available for the Maps Resource Provider.
+func (client Client) ListOperations(ctx context.Context) (result OperationsPage, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/Client.ListOperations")
+ defer func() {
+ sc := -1
+ if result.o.Response.Response != nil {
+ sc = result.o.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.fn = client.listOperationsNextResults
+ req, err := client.ListOperationsPreparer(ctx)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.Client", "ListOperations", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.ListOperationsSender(req)
+ if err != nil {
+ result.o.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "maps.Client", "ListOperations", resp, "Failure sending request")
+ return
+ }
+
+ result.o, err = client.ListOperationsResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.Client", "ListOperations", resp, "Failure responding to request")
+ return
+ }
+ if result.o.hasNextLink() && result.o.IsEmpty() {
+ err = result.NextWithContext(ctx)
+ return
+ }
+
+ return
+}
+
+// ListOperationsPreparer prepares the ListOperations request.
+func (client Client) ListOperationsPreparer(ctx context.Context) (*http.Request, error) {
+ const APIVersion = "2021-02-01"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsGet(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPath("/providers/Microsoft.Maps/operations"),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// ListOperationsSender sends the ListOperations request. The method will close the
+// http.Response Body if it receives an error.
+func (client Client) ListOperationsSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// ListOperationsResponder handles the response to the ListOperations request. The method always
+// closes the http.Response Body.
+func (client Client) ListOperationsResponder(resp *http.Response) (result Operations, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
+// listOperationsNextResults retrieves the next set of results, if any.
+func (client Client) listOperationsNextResults(ctx context.Context, lastResults Operations) (result Operations, err error) {
+ req, err := lastResults.operationsPreparer(ctx)
+ if err != nil {
+ return result, autorest.NewErrorWithError(err, "maps.Client", "listOperationsNextResults", nil, "Failure preparing next results request")
+ }
+ if req == nil {
+ return
+ }
+ resp, err := client.ListOperationsSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ return result, autorest.NewErrorWithError(err, "maps.Client", "listOperationsNextResults", resp, "Failure sending next results request")
+ }
+ result, err = client.ListOperationsResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "maps.Client", "listOperationsNextResults", resp, "Failure responding to next results request")
+ }
+ return
+}
+
+// ListOperationsComplete enumerates all values, automatically crossing page boundaries as required.
+func (client Client) ListOperationsComplete(ctx context.Context) (result OperationsIterator, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/Client.ListOperations")
+ defer func() {
+ sc := -1
+ if result.Response().Response.Response != nil {
+ sc = result.page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ result.page, err = client.ListOperations(ctx)
+ return
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/models.go
new file mode 100644
index 0000000000000..b3271a62d8b08
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/models.go
@@ -0,0 +1,1065 @@
+package maps
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+import (
+ "context"
+ "encoding/json"
+ "github.com/Azure/go-autorest/autorest"
+ "github.com/Azure/go-autorest/autorest/date"
+ "github.com/Azure/go-autorest/autorest/to"
+ "github.com/Azure/go-autorest/tracing"
+ "net/http"
+)
+
+// The package's fully qualified name.
+const fqdn = "github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps"
+
+// Account an Azure resource which represents access to a suite of Maps REST APIs.
+type Account struct {
+ autorest.Response `json:"-"`
+ // Sku - The SKU of this account.
+ Sku *Sku `json:"sku,omitempty"`
+ // Kind - Get or Set Kind property. Possible values include: 'KindGen1', 'KindGen2'
+ Kind Kind `json:"kind,omitempty"`
+ // SystemData - READ-ONLY; The system meta data relating to this resource.
+ SystemData *SystemData `json:"systemData,omitempty"`
+ // Properties - The map account properties.
+ Properties *AccountProperties `json:"properties,omitempty"`
+ // Tags - Resource tags.
+ Tags map[string]*string `json:"tags"`
+ // Location - The geo-location where the resource lives
+ Location *string `json:"location,omitempty"`
+ // ID - READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; The name of the resource
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string `json:"type,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for Account.
+func (a Account) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if a.Sku != nil {
+ objectMap["sku"] = a.Sku
+ }
+ if a.Kind != "" {
+ objectMap["kind"] = a.Kind
+ }
+ if a.Properties != nil {
+ objectMap["properties"] = a.Properties
+ }
+ if a.Tags != nil {
+ objectMap["tags"] = a.Tags
+ }
+ if a.Location != nil {
+ objectMap["location"] = a.Location
+ }
+ return json.Marshal(objectMap)
+}
+
+// AccountKeys the set of keys which can be used to access the Maps REST APIs. Two keys are provided for
+// key rotation without interruption.
+type AccountKeys struct {
+ autorest.Response `json:"-"`
+ // PrimaryKeyLastUpdated - READ-ONLY; The last updated date and time of the primary key.
+ PrimaryKeyLastUpdated *string `json:"primaryKeyLastUpdated,omitempty"`
+ // PrimaryKey - READ-ONLY; The primary key for accessing the Maps REST APIs.
+ PrimaryKey *string `json:"primaryKey,omitempty"`
+ // SecondaryKey - READ-ONLY; The secondary key for accessing the Maps REST APIs.
+ SecondaryKey *string `json:"secondaryKey,omitempty"`
+ // SecondaryKeyLastUpdated - READ-ONLY; The last updated date and time of the secondary key.
+ SecondaryKeyLastUpdated *string `json:"secondaryKeyLastUpdated,omitempty"`
+}
+
+// AccountProperties additional Map account properties
+type AccountProperties struct {
+ // UniqueID - READ-ONLY; A unique identifier for the maps account
+ UniqueID *string `json:"uniqueId,omitempty"`
+ // DisableLocalAuth - Allows toggle functionality on Azure Policy to disable Azure Maps local authentication support. This will disable Shared Keys authentication from any usage.
+ DisableLocalAuth *bool `json:"disableLocalAuth,omitempty"`
+ // ProvisioningState - READ-ONLY; the state of the provisioning.
+ ProvisioningState *string `json:"provisioningState,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for AccountProperties.
+func (ap AccountProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if ap.DisableLocalAuth != nil {
+ objectMap["disableLocalAuth"] = ap.DisableLocalAuth
+ }
+ return json.Marshal(objectMap)
+}
+
+// Accounts a list of Maps Accounts.
+type Accounts struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; a Maps Account.
+ Value *[]Account `json:"value,omitempty"`
+	// NextLink - The URL the client should use to fetch the next page (server-side paging).
+	// Currently null; reserved for future use.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for Accounts.
+func (a Accounts) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if a.NextLink != nil {
+ objectMap["nextLink"] = a.NextLink
+ }
+ return json.Marshal(objectMap)
+}
+
+// AccountsIterator provides access to a complete listing of Account values.
+type AccountsIterator struct {
+ i int
+ page AccountsPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *AccountsIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/AccountsIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *AccountsIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter AccountsIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter AccountsIterator) Response() Accounts {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter AccountsIterator) Value() Account {
+ if !iter.page.NotDone() {
+ return Account{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// NewAccountsIterator creates a new instance of the AccountsIterator type.
+func NewAccountsIterator(page AccountsPage) AccountsIterator {
+ return AccountsIterator{page: page}
+}
+
+// IsEmpty returns true if the Accounts list contains no values.
+func (a Accounts) IsEmpty() bool {
+ return a.Value == nil || len(*a.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (a Accounts) hasNextLink() bool {
+ return a.NextLink != nil && len(*a.NextLink) != 0
+}
+
+// accountsPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (a Accounts) accountsPreparer(ctx context.Context) (*http.Request, error) {
+ if !a.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(a.NextLink)))
+}
+
+// AccountsPage contains a page of Account values.
+type AccountsPage struct {
+ fn func(context.Context, Accounts) (Accounts, error)
+ a Accounts
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *AccountsPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/AccountsPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.a)
+ if err != nil {
+ return err
+ }
+ page.a = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *AccountsPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page AccountsPage) NotDone() bool {
+ return !page.a.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page AccountsPage) Response() Accounts {
+ return page.a
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page AccountsPage) Values() []Account {
+ if page.a.IsEmpty() {
+ return nil
+ }
+ return *page.a.Value
+}
+
+// NewAccountsPage creates a new instance of the AccountsPage type.
+func NewAccountsPage(cur Accounts, getNextPage func(context.Context, Accounts) (Accounts, error)) AccountsPage {
+ return AccountsPage{
+ fn: getNextPage,
+ a: cur,
+ }
+}
+
+// AccountUpdateParameters parameters used to update an existing Maps Account.
+type AccountUpdateParameters struct {
+ // Tags - Gets or sets a list of key value pairs that describe the resource. These tags can be used in viewing and grouping this resource (across resource groups). A maximum of 15 tags can be provided for a resource. Each tag must have a key no greater than 128 characters and value no greater than 256 characters.
+ Tags map[string]*string `json:"tags"`
+ // Kind - Get or Set Kind property. Possible values include: 'KindGen1', 'KindGen2'
+ Kind Kind `json:"kind,omitempty"`
+ // Sku - The SKU of this account.
+ Sku *Sku `json:"sku,omitempty"`
+ // AccountProperties - The map account properties.
+ *AccountProperties `json:"properties,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for AccountUpdateParameters.
+func (aup AccountUpdateParameters) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if aup.Tags != nil {
+ objectMap["tags"] = aup.Tags
+ }
+ if aup.Kind != "" {
+ objectMap["kind"] = aup.Kind
+ }
+ if aup.Sku != nil {
+ objectMap["sku"] = aup.Sku
+ }
+ if aup.AccountProperties != nil {
+ objectMap["properties"] = aup.AccountProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for AccountUpdateParameters struct.
+func (aup *AccountUpdateParameters) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ aup.Tags = tags
+ }
+ case "kind":
+ if v != nil {
+ var kind Kind
+ err = json.Unmarshal(*v, &kind)
+ if err != nil {
+ return err
+ }
+ aup.Kind = kind
+ }
+ case "sku":
+ if v != nil {
+ var sku Sku
+ err = json.Unmarshal(*v, &sku)
+ if err != nil {
+ return err
+ }
+ aup.Sku = &sku
+ }
+ case "properties":
+ if v != nil {
+ var accountProperties AccountProperties
+ err = json.Unmarshal(*v, &accountProperties)
+ if err != nil {
+ return err
+ }
+ aup.AccountProperties = &accountProperties
+ }
+ }
+ }
+
+ return nil
+}
+
+// AzureEntityResource the resource model definition for an Azure Resource Manager resource with an etag.
+type AzureEntityResource struct {
+ // Etag - READ-ONLY; Resource Etag.
+ Etag *string `json:"etag,omitempty"`
+ // ID - READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; The name of the resource
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string `json:"type,omitempty"`
+}
+
+// Creator an Azure resource which represents Maps Creator product and provides ability to manage private
+// location data.
+type Creator struct {
+ autorest.Response `json:"-"`
+ // Properties - The Creator resource properties.
+ Properties *CreatorProperties `json:"properties,omitempty"`
+ // Tags - Resource tags.
+ Tags map[string]*string `json:"tags"`
+ // Location - The geo-location where the resource lives
+ Location *string `json:"location,omitempty"`
+ // ID - READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; The name of the resource
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string `json:"type,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for Creator.
+func (c Creator) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if c.Properties != nil {
+ objectMap["properties"] = c.Properties
+ }
+ if c.Tags != nil {
+ objectMap["tags"] = c.Tags
+ }
+ if c.Location != nil {
+ objectMap["location"] = c.Location
+ }
+ return json.Marshal(objectMap)
+}
+
+// CreatorList a list of Creator resources.
+type CreatorList struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; a Creator account.
+ Value *[]Creator `json:"value,omitempty"`
+	// NextLink - The URL the client should use to fetch the next page (server-side paging).
+	// Currently null; reserved for future use.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for CreatorList.
+func (cl CreatorList) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if cl.NextLink != nil {
+ objectMap["nextLink"] = cl.NextLink
+ }
+ return json.Marshal(objectMap)
+}
+
+// CreatorListIterator provides access to a complete listing of Creator values.
+type CreatorListIterator struct {
+ i int
+ page CreatorListPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *CreatorListIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreatorListIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *CreatorListIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter CreatorListIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter CreatorListIterator) Response() CreatorList {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter CreatorListIterator) Value() Creator {
+ if !iter.page.NotDone() {
+ return Creator{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// NewCreatorListIterator creates a new instance of the CreatorListIterator type.
+func NewCreatorListIterator(page CreatorListPage) CreatorListIterator {
+ return CreatorListIterator{page: page}
+}
+
+// IsEmpty returns true if the CreatorList contains no values.
+func (cl CreatorList) IsEmpty() bool {
+ return cl.Value == nil || len(*cl.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (cl CreatorList) hasNextLink() bool {
+ return cl.NextLink != nil && len(*cl.NextLink) != 0
+}
+
+// creatorListPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (cl CreatorList) creatorListPreparer(ctx context.Context) (*http.Request, error) {
+ if !cl.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(cl.NextLink)))
+}
+
+// CreatorListPage contains a page of Creator values.
+type CreatorListPage struct {
+ fn func(context.Context, CreatorList) (CreatorList, error)
+ cl CreatorList
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *CreatorListPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/CreatorListPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.cl)
+ if err != nil {
+ return err
+ }
+ page.cl = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
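The loop above keeps fetching while a freshly returned page is empty yet still advertises a next link, so callers never observe empty intermediate pages. A self-contained sketch of that skip logic, with a hypothetical `pageData` type and `advance` helper standing in for `CreatorList` and the `fn` callback:

```go
package main

import "fmt"

// pageData is a hypothetical page: its values, and whether another page
// follows (mimicking hasNextLink).
type pageData struct {
	values  []string
	hasNext bool
}

// advance mirrors the NextWithContext loop: keep moving while the new
// page is empty but still has a next link, so empty middle pages are
// skipped transparently.
func advance(cur int, pages []pageData) int {
	for {
		cur++
		next := pages[cur]
		if !next.hasNext || len(next.values) != 0 {
			return cur
		}
	}
}

// demo walks past an empty middle page in one call.
func demo() int {
	pages := []pageData{
		{[]string{"a"}, true},
		{nil, true},            // empty page that still has a next link: skipped
		{[]string{"b"}, false}, // final page
	}
	return advance(0, pages) // lands on the page holding "b"
}

func main() { fmt.Println(demo()) }
```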
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *CreatorListPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page CreatorListPage) NotDone() bool {
+ return !page.cl.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page CreatorListPage) Response() CreatorList {
+ return page.cl
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page CreatorListPage) Values() []Creator {
+ if page.cl.IsEmpty() {
+ return nil
+ }
+ return *page.cl.Value
+}
+
+// NewCreatorListPage creates a new instance of the CreatorListPage type.
+func NewCreatorListPage(cur CreatorList, getNextPage func(context.Context, CreatorList) (CreatorList, error)) CreatorListPage {
+ return CreatorListPage{
+ fn: getNextPage,
+ cl: cur,
+ }
+}
+
+// CreatorProperties creator resource properties
+type CreatorProperties struct {
+ // ProvisioningState - READ-ONLY; The state of the resource provisioning, terminal states: Succeeded, Failed, Canceled
+ ProvisioningState *string `json:"provisioningState,omitempty"`
+ // StorageUnits - The storage units to be allocated. Integer values from 1 to 100, inclusive.
+ StorageUnits *int32 `json:"storageUnits,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for CreatorProperties.
+func (cp CreatorProperties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if cp.StorageUnits != nil {
+ objectMap["storageUnits"] = cp.StorageUnits
+ }
+ return json.Marshal(objectMap)
+}
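The custom marshaler above exists to drop READ-ONLY fields (here ProvisioningState) from request bodies, so server-populated values are never echoed back to the service. The same technique in a self-contained sketch, using a hypothetical `props` type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// props is a hypothetical stand-in for CreatorProperties: State is
// READ-ONLY (server-populated), Units is client-settable.
type props struct {
	State *string `json:"provisioningState,omitempty"`
	Units *int32  `json:"storageUnits,omitempty"`
}

// MarshalJSON copies only the writable field into a map, so the
// read-only field never appears in PUT/PATCH bodies.
func (p props) MarshalJSON() ([]byte, error) {
	m := make(map[string]interface{})
	if p.Units != nil {
		m["storageUnits"] = p.Units
	}
	return json.Marshal(m)
}

// marshalProps shows that a populated read-only field is dropped.
func marshalProps() string {
	state, units := "Succeeded", int32(5)
	b, _ := json.Marshal(props{State: &state, Units: &units})
	return string(b)
}

func main() { fmt.Println(marshalProps()) }
```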
+
+// CreatorUpdateParameters parameters used to update an existing Creator resource.
+type CreatorUpdateParameters struct {
+ // Tags - Gets or sets a list of key value pairs that describe the resource. These tags can be used in viewing and grouping this resource (across resource groups). A maximum of 15 tags can be provided for a resource. Each tag must have a key no greater than 128 characters and value no greater than 256 characters.
+ Tags map[string]*string `json:"tags"`
+ // CreatorProperties - Creator resource properties.
+ *CreatorProperties `json:"properties,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for CreatorUpdateParameters.
+func (cup CreatorUpdateParameters) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if cup.Tags != nil {
+ objectMap["tags"] = cup.Tags
+ }
+ if cup.CreatorProperties != nil {
+ objectMap["properties"] = cup.CreatorProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for CreatorUpdateParameters struct.
+func (cup *CreatorUpdateParameters) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ cup.Tags = tags
+ }
+ case "properties":
+ if v != nil {
+ var creatorProperties CreatorProperties
+ err = json.Unmarshal(*v, &creatorProperties)
+ if err != nil {
+ return err
+ }
+ cup.CreatorProperties = &creatorProperties
+ }
+ }
+ }
+
+ return nil
+}
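The custom unmarshaler above is needed because the embedded *CreatorProperties is flattened into the Go struct but nested under a "properties" key on the wire. A self-contained sketch of the same key routing, with hypothetical `inner`/`outer` types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inner mirrors CreatorProperties in this sketch.
type inner struct {
	Units *int32 `json:"storageUnits,omitempty"`
}

// outer mirrors CreatorUpdateParameters: the embedded struct is flattened
// in Go but nested under "properties" in the JSON payload.
type outer struct {
	Tags map[string]*string `json:"tags"`
	*inner
}

// UnmarshalJSON walks the raw keys and routes "properties" into the
// embedded struct, matching the generated pattern.
func (o *outer) UnmarshalJSON(body []byte) error {
	var m map[string]*json.RawMessage
	if err := json.Unmarshal(body, &m); err != nil {
		return err
	}
	for k, v := range m {
		if v == nil {
			continue
		}
		switch k {
		case "tags":
			if err := json.Unmarshal(*v, &o.Tags); err != nil {
				return err
			}
		case "properties":
			var in inner
			if err := json.Unmarshal(*v, &in); err != nil {
				return err
			}
			o.inner = &in
		}
	}
	return nil
}

// decode pulls a nested property back out through the flattened field.
func decode() int32 {
	var o outer
	_ = o.UnmarshalJSON([]byte(`{"properties":{"storageUnits":7}}`))
	return *o.Units
}

func main() { fmt.Println(decode()) }
```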
+
+// Dimension dimension of a map account, for example API Category, API Name, Result Type, and Response Code.
+type Dimension struct {
+ // Name - Display name of dimension.
+ Name *string `json:"name,omitempty"`
+ // DisplayName - Display name of dimension.
+ DisplayName *string `json:"displayName,omitempty"`
+}
+
+// ErrorAdditionalInfo the resource management error additional info.
+type ErrorAdditionalInfo struct {
+ // Type - READ-ONLY; The additional info type.
+ Type *string `json:"type,omitempty"`
+ // Info - READ-ONLY; The additional info.
+ Info interface{} `json:"info,omitempty"`
+}
+
+// ErrorDetail the error detail.
+type ErrorDetail struct {
+ // Code - READ-ONLY; The error code.
+ Code *string `json:"code,omitempty"`
+ // Message - READ-ONLY; The error message.
+ Message *string `json:"message,omitempty"`
+ // Target - READ-ONLY; The error target.
+ Target *string `json:"target,omitempty"`
+ // Details - READ-ONLY; The error details.
+ Details *[]ErrorDetail `json:"details,omitempty"`
+ // AdditionalInfo - READ-ONLY; The error additional info.
+ AdditionalInfo *[]ErrorAdditionalInfo `json:"additionalInfo,omitempty"`
+}
+
+// ErrorResponse common error response for all Azure Resource Manager APIs to return error details for
+// failed operations. (This also follows the OData error response format.)
+type ErrorResponse struct {
+ // Error - The error object.
+ Error *ErrorDetail `json:"error,omitempty"`
+}
+
+// KeySpecification whether the operation refers to the primary or secondary key.
+type KeySpecification struct {
+ // KeyType - Whether the operation refers to the primary or secondary key. Possible values include: 'KeyTypePrimary', 'KeyTypeSecondary'
+ KeyType KeyType `json:"keyType,omitempty"`
+}
+
+// MetricSpecification metric specification of operation.
+type MetricSpecification struct {
+ // Name - Name of metric specification.
+ Name *string `json:"name,omitempty"`
+ // DisplayName - Display name of metric specification.
+ DisplayName *string `json:"displayName,omitempty"`
+ // DisplayDescription - Display description of metric specification.
+ DisplayDescription *string `json:"displayDescription,omitempty"`
+ // Unit - The unit of the metric specification; for example, Count.
+ Unit *string `json:"unit,omitempty"`
+ // Dimensions - Dimensions of map account.
+ Dimensions *[]Dimension `json:"dimensions,omitempty"`
+ // AggregationType - The aggregation type; for example, Average.
+ AggregationType *string `json:"aggregationType,omitempty"`
+ // FillGapWithZero - Whether gaps in the metric are filled with zero.
+ FillGapWithZero *bool `json:"fillGapWithZero,omitempty"`
+ // Category - The category this metric specification belongs to; for example, Capacity.
+ Category *string `json:"category,omitempty"`
+ // ResourceIDDimensionNameOverride - Account resource ID.
+ ResourceIDDimensionNameOverride *string `json:"resourceIdDimensionNameOverride,omitempty"`
+}
+
+// OperationDetail operation detail payload
+type OperationDetail struct {
+ // Name - Name of the operation
+ Name *string `json:"name,omitempty"`
+ // IsDataAction - Indicates whether the operation is a data action
+ IsDataAction *bool `json:"isDataAction,omitempty"`
+ // Display - Display of the operation
+ Display *OperationDisplay `json:"display,omitempty"`
+ // Origin - Origin of the operation
+ Origin *string `json:"origin,omitempty"`
+ // OperationProperties - Properties of the operation
+ *OperationProperties `json:"properties,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for OperationDetail.
+func (od OperationDetail) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if od.Name != nil {
+ objectMap["name"] = od.Name
+ }
+ if od.IsDataAction != nil {
+ objectMap["isDataAction"] = od.IsDataAction
+ }
+ if od.Display != nil {
+ objectMap["display"] = od.Display
+ }
+ if od.Origin != nil {
+ objectMap["origin"] = od.Origin
+ }
+ if od.OperationProperties != nil {
+ objectMap["properties"] = od.OperationProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for OperationDetail struct.
+func (od *OperationDetail) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ od.Name = &name
+ }
+ case "isDataAction":
+ if v != nil {
+ var isDataAction bool
+ err = json.Unmarshal(*v, &isDataAction)
+ if err != nil {
+ return err
+ }
+ od.IsDataAction = &isDataAction
+ }
+ case "display":
+ if v != nil {
+ var display OperationDisplay
+ err = json.Unmarshal(*v, &display)
+ if err != nil {
+ return err
+ }
+ od.Display = &display
+ }
+ case "origin":
+ if v != nil {
+ var origin string
+ err = json.Unmarshal(*v, &origin)
+ if err != nil {
+ return err
+ }
+ od.Origin = &origin
+ }
+ case "properties":
+ if v != nil {
+ var operationProperties OperationProperties
+ err = json.Unmarshal(*v, &operationProperties)
+ if err != nil {
+ return err
+ }
+ od.OperationProperties = &operationProperties
+ }
+ }
+ }
+
+ return nil
+}
+
+// OperationDisplay operation display payload
+type OperationDisplay struct {
+ // Provider - Resource provider of the operation
+ Provider *string `json:"provider,omitempty"`
+ // Resource - Resource of the operation
+ Resource *string `json:"resource,omitempty"`
+ // Operation - Localized friendly name for the operation
+ Operation *string `json:"operation,omitempty"`
+ // Description - Localized friendly description for the operation
+ Description *string `json:"description,omitempty"`
+}
+
+// OperationProperties properties of the operation, including metric specifications.
+type OperationProperties struct {
+ // ServiceSpecification - The service specification of the operation, including metric specifications.
+ ServiceSpecification *ServiceSpecification `json:"serviceSpecification,omitempty"`
+}
+
+// Operations the set of operations available for Maps.
+type Operations struct {
+ autorest.Response `json:"-"`
+ // Value - READ-ONLY; The list of operations available for Maps.
+ Value *[]OperationDetail `json:"value,omitempty"`
+ // NextLink - The URL the client should use to fetch the next page (per server-side paging).
+ // It's null for now, added for future use.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for Operations.
+func (o Operations) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if o.NextLink != nil {
+ objectMap["nextLink"] = o.NextLink
+ }
+ return json.Marshal(objectMap)
+}
+
+// OperationsIterator provides access to a complete listing of OperationDetail values.
+type OperationsIterator struct {
+ i int
+ page OperationsPage
+}
+
+// NextWithContext advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+func (iter *OperationsIterator) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/OperationsIterator.NextWithContext")
+ defer func() {
+ sc := -1
+ if iter.Response().Response.Response != nil {
+ sc = iter.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ iter.i++
+ if iter.i < len(iter.page.Values()) {
+ return nil
+ }
+ err = iter.page.NextWithContext(ctx)
+ if err != nil {
+ iter.i--
+ return err
+ }
+ iter.i = 0
+ return nil
+}
+
+// Next advances to the next value. If there was an error making
+// the request the iterator does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (iter *OperationsIterator) Next() error {
+ return iter.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the enumeration should be started or is not yet complete.
+func (iter OperationsIterator) NotDone() bool {
+ return iter.page.NotDone() && iter.i < len(iter.page.Values())
+}
+
+// Response returns the raw server response from the last page request.
+func (iter OperationsIterator) Response() Operations {
+ return iter.page.Response()
+}
+
+// Value returns the current value or a zero-initialized value if the
+// iterator has advanced beyond the end of the collection.
+func (iter OperationsIterator) Value() OperationDetail {
+ if !iter.page.NotDone() {
+ return OperationDetail{}
+ }
+ return iter.page.Values()[iter.i]
+}
+
+// NewOperationsIterator creates a new instance of the OperationsIterator type.
+func NewOperationsIterator(page OperationsPage) OperationsIterator {
+ return OperationsIterator{page: page}
+}
+
+// IsEmpty returns true if the Operations list contains no values.
+func (o Operations) IsEmpty() bool {
+ return o.Value == nil || len(*o.Value) == 0
+}
+
+// hasNextLink returns true if the NextLink is not empty.
+func (o Operations) hasNextLink() bool {
+ return o.NextLink != nil && len(*o.NextLink) != 0
+}
+
+// operationsPreparer prepares a request to retrieve the next set of results.
+// It returns nil if no more results exist.
+func (o Operations) operationsPreparer(ctx context.Context) (*http.Request, error) {
+ if !o.hasNextLink() {
+ return nil, nil
+ }
+ return autorest.Prepare((&http.Request{}).WithContext(ctx),
+ autorest.AsJSON(),
+ autorest.AsGet(),
+ autorest.WithBaseURL(to.String(o.NextLink)))
+}
+
+// OperationsPage contains a page of OperationDetail values.
+type OperationsPage struct {
+ fn func(context.Context, Operations) (Operations, error)
+ o Operations
+}
+
+// NextWithContext advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+func (page *OperationsPage) NextWithContext(ctx context.Context) (err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/OperationsPage.NextWithContext")
+ defer func() {
+ sc := -1
+ if page.Response().Response.Response != nil {
+ sc = page.Response().Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ for {
+ next, err := page.fn(ctx, page.o)
+ if err != nil {
+ return err
+ }
+ page.o = next
+ if !next.hasNextLink() || !next.IsEmpty() {
+ break
+ }
+ }
+ return nil
+}
+
+// Next advances to the next page of values. If there was an error making
+// the request the page does not advance and the error is returned.
+// Deprecated: Use NextWithContext() instead.
+func (page *OperationsPage) Next() error {
+ return page.NextWithContext(context.Background())
+}
+
+// NotDone returns true if the page enumeration should be started or is not yet complete.
+func (page OperationsPage) NotDone() bool {
+ return !page.o.IsEmpty()
+}
+
+// Response returns the raw server response from the last page request.
+func (page OperationsPage) Response() Operations {
+ return page.o
+}
+
+// Values returns the slice of values for the current page or nil if there are no values.
+func (page OperationsPage) Values() []OperationDetail {
+ if page.o.IsEmpty() {
+ return nil
+ }
+ return *page.o.Value
+}
+
+// NewOperationsPage creates a new instance of the OperationsPage type.
+func NewOperationsPage(cur Operations, getNextPage func(context.Context, Operations) (Operations, error)) OperationsPage {
+ return OperationsPage{
+ fn: getNextPage,
+ o: cur,
+ }
+}
+
+// ProxyResource the resource model definition for an Azure Resource Manager proxy resource. It will
+// not have tags or a location.
+type ProxyResource struct {
+ // ID - READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; The name of the resource
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string `json:"type,omitempty"`
+}
+
+// Resource common fields that are returned in the response for all Azure Resource Manager resources
+type Resource struct {
+ // ID - READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; The name of the resource
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string `json:"type,omitempty"`
+}
+
+// ServiceSpecification the service specification of an operation, including metric specifications.
+type ServiceSpecification struct {
+ // MetricSpecifications - Metric specifications of operation.
+ MetricSpecifications *[]MetricSpecification `json:"metricSpecifications,omitempty"`
+}
+
+// Sku the SKU of the Maps Account.
+type Sku struct {
+ // Name - The name of the SKU, in standard format (such as S0). Possible values include: 'NameS0', 'NameS1', 'NameG2'
+ Name Name `json:"name,omitempty"`
+ // Tier - READ-ONLY; Gets the SKU tier. This is based on the SKU name.
+ Tier *string `json:"tier,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for Sku.
+func (s Sku) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if s.Name != "" {
+ objectMap["name"] = s.Name
+ }
+ return json.Marshal(objectMap)
+}
+
+// SystemData metadata pertaining to creation and last modification of the resource.
+type SystemData struct {
+ // CreatedBy - The identity that created the resource.
+ CreatedBy *string `json:"createdBy,omitempty"`
+ // CreatedByType - The type of identity that created the resource. Possible values include: 'CreatedByTypeUser', 'CreatedByTypeApplication', 'CreatedByTypeManagedIdentity', 'CreatedByTypeKey'
+ CreatedByType CreatedByType `json:"createdByType,omitempty"`
+ // CreatedAt - The timestamp of resource creation (UTC).
+ CreatedAt *date.Time `json:"createdAt,omitempty"`
+ // LastModifiedBy - The identity that last modified the resource.
+ LastModifiedBy *string `json:"lastModifiedBy,omitempty"`
+ // LastModifiedByType - The type of identity that last modified the resource. Possible values include: 'CreatedByTypeUser', 'CreatedByTypeApplication', 'CreatedByTypeManagedIdentity', 'CreatedByTypeKey'
+ LastModifiedByType CreatedByType `json:"lastModifiedByType,omitempty"`
+ // LastModifiedAt - The timestamp of resource last modification (UTC).
+ LastModifiedAt *date.Time `json:"lastModifiedAt,omitempty"`
+}
+
+// TrackedResource the resource model definition for an Azure Resource Manager tracked top-level
+// resource which has 'tags' and a 'location'.
+type TrackedResource struct {
+ // Tags - Resource tags.
+ Tags map[string]*string `json:"tags"`
+ // Location - The geo-location where the resource lives
+ Location *string `json:"location,omitempty"`
+ // ID - READ-ONLY; Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
+ ID *string `json:"id,omitempty"`
+ // Name - READ-ONLY; The name of the resource
+ Name *string `json:"name,omitempty"`
+ // Type - READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string `json:"type,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for TrackedResource.
+func (tr TrackedResource) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if tr.Tags != nil {
+ objectMap["tags"] = tr.Tags
+ }
+ if tr.Location != nil {
+ objectMap["location"] = tr.Location
+ }
+ return json.Marshal(objectMap)
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/version.go
similarity index 90%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/version.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/version.go
index ec7c6cc0f4330..50d028e3f1e32 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2018-05-01/maps/version.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/maps/mgmt/2021-02-01/maps/version.go
@@ -10,7 +10,7 @@ import "github.com/Azure/azure-sdk-for-go/version"
// UserAgent returns the UserAgent string to use when sending http.Requests.
func UserAgent() string {
- return "Azure-SDK-For-Go/" + Version() + " maps/2018-05-01"
+ return "Azure-SDK-For-Go/" + Version() + " maps/2021-02-01"
}
// Version returns the semantic version (see http://semver.org) of the client.
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/enums.go
deleted file mode 100644
index b5ef56891011a..0000000000000
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/enums.go
+++ /dev/null
@@ -1,1434 +0,0 @@
-package media
-
-// Copyright (c) Microsoft Corporation. All rights reserved.
-// Licensed under the MIT License. See License.txt in the project root for license information.
-//
-// Code generated by Microsoft (R) AutoRest Code Generator.
-// Changes may cause incorrect behavior and will be lost if the code is regenerated.
-
-// AacAudioProfile enumerates the values for aac audio profile.
-type AacAudioProfile string
-
-const (
- // AacLc Specifies that the output audio is to be encoded into AAC Low Complexity profile (AAC-LC).
- AacLc AacAudioProfile = "AacLc"
- // HeAacV1 Specifies that the output audio is to be encoded into HE-AAC v1 profile.
- HeAacV1 AacAudioProfile = "HeAacV1"
- // HeAacV2 Specifies that the output audio is to be encoded into HE-AAC v2 profile.
- HeAacV2 AacAudioProfile = "HeAacV2"
-)
-
-// PossibleAacAudioProfileValues returns an array of possible values for the AacAudioProfile const type.
-func PossibleAacAudioProfileValues() []AacAudioProfile {
- return []AacAudioProfile{AacLc, HeAacV1, HeAacV2}
-}
-
-// AccountEncryptionKeyType enumerates the values for account encryption key type.
-type AccountEncryptionKeyType string
-
-const (
- // CustomerKey The Account Key is encrypted with a Customer Key.
- CustomerKey AccountEncryptionKeyType = "CustomerKey"
- // SystemKey The Account Key is encrypted with a System Key.
- SystemKey AccountEncryptionKeyType = "SystemKey"
-)
-
-// PossibleAccountEncryptionKeyTypeValues returns an array of possible values for the AccountEncryptionKeyType const type.
-func PossibleAccountEncryptionKeyTypeValues() []AccountEncryptionKeyType {
- return []AccountEncryptionKeyType{CustomerKey, SystemKey}
-}
-
-// AnalysisResolution enumerates the values for analysis resolution.
-type AnalysisResolution string
-
-const (
- // SourceResolution ...
- SourceResolution AnalysisResolution = "SourceResolution"
- // StandardDefinition ...
- StandardDefinition AnalysisResolution = "StandardDefinition"
-)
-
-// PossibleAnalysisResolutionValues returns an array of possible values for the AnalysisResolution const type.
-func PossibleAnalysisResolutionValues() []AnalysisResolution {
- return []AnalysisResolution{SourceResolution, StandardDefinition}
-}
-
-// AssetContainerPermission enumerates the values for asset container permission.
-type AssetContainerPermission string
-
-const (
- // Read The SAS URL will allow read access to the container.
- Read AssetContainerPermission = "Read"
- // ReadWrite The SAS URL will allow read and write access to the container.
- ReadWrite AssetContainerPermission = "ReadWrite"
- // ReadWriteDelete The SAS URL will allow read, write and delete access to the container.
- ReadWriteDelete AssetContainerPermission = "ReadWriteDelete"
-)
-
-// PossibleAssetContainerPermissionValues returns an array of possible values for the AssetContainerPermission const type.
-func PossibleAssetContainerPermissionValues() []AssetContainerPermission {
- return []AssetContainerPermission{Read, ReadWrite, ReadWriteDelete}
-}
-
-// AssetStorageEncryptionFormat enumerates the values for asset storage encryption format.
-type AssetStorageEncryptionFormat string
-
-const (
- // MediaStorageClientEncryption The Asset is encrypted with Media Services client-side encryption.
- MediaStorageClientEncryption AssetStorageEncryptionFormat = "MediaStorageClientEncryption"
- // None The Asset does not use client-side storage encryption (this is the only allowed value for new
- // Assets).
- None AssetStorageEncryptionFormat = "None"
-)
-
-// PossibleAssetStorageEncryptionFormatValues returns an array of possible values for the AssetStorageEncryptionFormat const type.
-func PossibleAssetStorageEncryptionFormatValues() []AssetStorageEncryptionFormat {
- return []AssetStorageEncryptionFormat{MediaStorageClientEncryption, None}
-}
-
-// AttributeFilter enumerates the values for attribute filter.
-type AttributeFilter string
-
-const (
- // All All tracks will be included.
- All AttributeFilter = "All"
- // Bottom The first track will be included when the attribute is sorted in ascending order. Generally used
- // to select the smallest bitrate.
- Bottom AttributeFilter = "Bottom"
- // Top The first track will be included when the attribute is sorted in descending order. Generally used
- // to select the largest bitrate.
- Top AttributeFilter = "Top"
- // ValueEquals Any tracks that have an attribute equal to the value given will be included.
- ValueEquals AttributeFilter = "ValueEquals"
-)
-
-// PossibleAttributeFilterValues returns an array of possible values for the AttributeFilter const type.
-func PossibleAttributeFilterValues() []AttributeFilter {
- return []AttributeFilter{All, Bottom, Top, ValueEquals}
-}
-
-// AudioAnalysisMode enumerates the values for audio analysis mode.
-type AudioAnalysisMode string
-
-const (
- // Basic This mode performs speech-to-text transcription and generation of a VTT subtitle/caption file. The
- // output of this mode includes an Insights JSON file including only the keywords, transcription,and timing
- // information. Automatic language detection and speaker diarization are not included in this mode.
- Basic AudioAnalysisMode = "Basic"
- // Standard Performs all operations included in the Basic mode, additionally performing language detection
- // and speaker diarization.
- Standard AudioAnalysisMode = "Standard"
-)
-
-// PossibleAudioAnalysisModeValues returns an array of possible values for the AudioAnalysisMode const type.
-func PossibleAudioAnalysisModeValues() []AudioAnalysisMode {
- return []AudioAnalysisMode{Basic, Standard}
-}
-
-// BlurType enumerates the values for blur type.
-type BlurType string
-
-const (
- // Black Black: Black out filter
- Black BlurType = "Black"
- // Box Box: debug filter, bounding box only
- Box BlurType = "Box"
- // High High: Confuse blur filter
- High BlurType = "High"
- // Low Low: box-car blur filter
- Low BlurType = "Low"
- // Med Med: Gaussian blur filter
- Med BlurType = "Med"
-)
-
-// PossibleBlurTypeValues returns an array of possible values for the BlurType const type.
-func PossibleBlurTypeValues() []BlurType {
- return []BlurType{Black, Box, High, Low, Med}
-}
-
-// ChannelMapping enumerates the values for channel mapping.
-type ChannelMapping string
-
-const (
- // BackLeft The Back Left Channel. Sometimes referred to as the Left Surround Channel.
- BackLeft ChannelMapping = "BackLeft"
- // BackRight The Back Right Channel. Sometimes referred to as the Right Surround Channel.
- BackRight ChannelMapping = "BackRight"
- // Center The Center Channel.
- Center ChannelMapping = "Center"
- // FrontLeft The Front Left Channel.
- FrontLeft ChannelMapping = "FrontLeft"
- // FrontRight The Front Right Channel.
- FrontRight ChannelMapping = "FrontRight"
- // LowFrequencyEffects Low Frequency Effects Channel. Sometimes referred to as the Subwoofer.
- LowFrequencyEffects ChannelMapping = "LowFrequencyEffects"
- // StereoLeft The Left Stereo channel. Sometimes referred to as Down Mix Left.
- StereoLeft ChannelMapping = "StereoLeft"
- // StereoRight The Right Stereo channel. Sometimes referred to as Down Mix Right.
- StereoRight ChannelMapping = "StereoRight"
-)
-
-// PossibleChannelMappingValues returns an array of possible values for the ChannelMapping const type.
-func PossibleChannelMappingValues() []ChannelMapping {
- return []ChannelMapping{BackLeft, BackRight, Center, FrontLeft, FrontRight, LowFrequencyEffects, StereoLeft, StereoRight}
-}
-
-// ContentKeyPolicyFairPlayRentalAndLeaseKeyType enumerates the values for content key policy fair play rental
-// and lease key type.
-type ContentKeyPolicyFairPlayRentalAndLeaseKeyType string
-
-const (
- // DualExpiry Dual expiry for offline rental.
- DualExpiry ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "DualExpiry"
- // PersistentLimited Content key can be persisted and the valid duration is limited by the Rental Duration
- // value
- PersistentLimited ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "PersistentLimited"
- // PersistentUnlimited Content key can be persisted with an unlimited duration
- PersistentUnlimited ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "PersistentUnlimited"
- // Undefined Key duration is not specified.
- Undefined ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "Undefined"
- // Unknown Represents a ContentKeyPolicyFairPlayRentalAndLeaseKeyType that is unavailable in current API
- // version.
- Unknown ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "Unknown"
-)
-
-// PossibleContentKeyPolicyFairPlayRentalAndLeaseKeyTypeValues returns an array of possible values for the ContentKeyPolicyFairPlayRentalAndLeaseKeyType const type.
-func PossibleContentKeyPolicyFairPlayRentalAndLeaseKeyTypeValues() []ContentKeyPolicyFairPlayRentalAndLeaseKeyType {
- return []ContentKeyPolicyFairPlayRentalAndLeaseKeyType{DualExpiry, PersistentLimited, PersistentUnlimited, Undefined, Unknown}
-}
-
-// ContentKeyPolicyPlayReadyContentType enumerates the values for content key policy play ready content type.
-type ContentKeyPolicyPlayReadyContentType string
-
-const (
- // ContentKeyPolicyPlayReadyContentTypeUltraVioletDownload Ultraviolet download content type.
- ContentKeyPolicyPlayReadyContentTypeUltraVioletDownload ContentKeyPolicyPlayReadyContentType = "UltraVioletDownload"
- // ContentKeyPolicyPlayReadyContentTypeUltraVioletStreaming Ultraviolet streaming content type.
- ContentKeyPolicyPlayReadyContentTypeUltraVioletStreaming ContentKeyPolicyPlayReadyContentType = "UltraVioletStreaming"
- // ContentKeyPolicyPlayReadyContentTypeUnknown Represents a ContentKeyPolicyPlayReadyContentType that is
- // unavailable in current API version.
- ContentKeyPolicyPlayReadyContentTypeUnknown ContentKeyPolicyPlayReadyContentType = "Unknown"
- // ContentKeyPolicyPlayReadyContentTypeUnspecified Unspecified content type.
- ContentKeyPolicyPlayReadyContentTypeUnspecified ContentKeyPolicyPlayReadyContentType = "Unspecified"
-)
-
-// PossibleContentKeyPolicyPlayReadyContentTypeValues returns an array of possible values for the ContentKeyPolicyPlayReadyContentType const type.
-func PossibleContentKeyPolicyPlayReadyContentTypeValues() []ContentKeyPolicyPlayReadyContentType {
- return []ContentKeyPolicyPlayReadyContentType{ContentKeyPolicyPlayReadyContentTypeUltraVioletDownload, ContentKeyPolicyPlayReadyContentTypeUltraVioletStreaming, ContentKeyPolicyPlayReadyContentTypeUnknown, ContentKeyPolicyPlayReadyContentTypeUnspecified}
-}
-
-// ContentKeyPolicyPlayReadyLicenseType enumerates the values for content key policy play ready license type.
-type ContentKeyPolicyPlayReadyLicenseType string
-
-const (
- // ContentKeyPolicyPlayReadyLicenseTypeNonPersistent Non persistent license.
- ContentKeyPolicyPlayReadyLicenseTypeNonPersistent ContentKeyPolicyPlayReadyLicenseType = "NonPersistent"
- // ContentKeyPolicyPlayReadyLicenseTypePersistent Persistent license. Allows offline playback.
- ContentKeyPolicyPlayReadyLicenseTypePersistent ContentKeyPolicyPlayReadyLicenseType = "Persistent"
- // ContentKeyPolicyPlayReadyLicenseTypeUnknown Represents a ContentKeyPolicyPlayReadyLicenseType that is
- // unavailable in current API version.
- ContentKeyPolicyPlayReadyLicenseTypeUnknown ContentKeyPolicyPlayReadyLicenseType = "Unknown"
-)
-
-// PossibleContentKeyPolicyPlayReadyLicenseTypeValues returns an array of possible values for the ContentKeyPolicyPlayReadyLicenseType const type.
-func PossibleContentKeyPolicyPlayReadyLicenseTypeValues() []ContentKeyPolicyPlayReadyLicenseType {
- return []ContentKeyPolicyPlayReadyLicenseType{ContentKeyPolicyPlayReadyLicenseTypeNonPersistent, ContentKeyPolicyPlayReadyLicenseTypePersistent, ContentKeyPolicyPlayReadyLicenseTypeUnknown}
-}
-
-// ContentKeyPolicyPlayReadyUnknownOutputPassingOption enumerates the values for content key policy play ready
-// unknown output passing option.
-type ContentKeyPolicyPlayReadyUnknownOutputPassingOption string
-
-const (
- // ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowed Passing the video portion of protected
- // content to an Unknown Output is allowed.
- ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowed ContentKeyPolicyPlayReadyUnknownOutputPassingOption = "Allowed"
- // ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowedWithVideoConstriction Passing the video
- // portion of protected content to an Unknown Output is allowed but with constrained resolution.
- ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowedWithVideoConstriction ContentKeyPolicyPlayReadyUnknownOutputPassingOption = "AllowedWithVideoConstriction"
- // ContentKeyPolicyPlayReadyUnknownOutputPassingOptionNotAllowed Passing the video portion of protected
- // content to an Unknown Output is not allowed.
- ContentKeyPolicyPlayReadyUnknownOutputPassingOptionNotAllowed ContentKeyPolicyPlayReadyUnknownOutputPassingOption = "NotAllowed"
- // ContentKeyPolicyPlayReadyUnknownOutputPassingOptionUnknown Represents a
- // ContentKeyPolicyPlayReadyUnknownOutputPassingOption that is unavailable in current API version.
- ContentKeyPolicyPlayReadyUnknownOutputPassingOptionUnknown ContentKeyPolicyPlayReadyUnknownOutputPassingOption = "Unknown"
-)
-
-// PossibleContentKeyPolicyPlayReadyUnknownOutputPassingOptionValues returns an array of possible values for the ContentKeyPolicyPlayReadyUnknownOutputPassingOption const type.
-func PossibleContentKeyPolicyPlayReadyUnknownOutputPassingOptionValues() []ContentKeyPolicyPlayReadyUnknownOutputPassingOption {
- return []ContentKeyPolicyPlayReadyUnknownOutputPassingOption{ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowed, ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowedWithVideoConstriction, ContentKeyPolicyPlayReadyUnknownOutputPassingOptionNotAllowed, ContentKeyPolicyPlayReadyUnknownOutputPassingOptionUnknown}
-}
-
-// ContentKeyPolicyRestrictionTokenType enumerates the values for content key policy restriction token type.
-type ContentKeyPolicyRestrictionTokenType string
-
-const (
- // ContentKeyPolicyRestrictionTokenTypeJwt JSON Web Token.
- ContentKeyPolicyRestrictionTokenTypeJwt ContentKeyPolicyRestrictionTokenType = "Jwt"
- // ContentKeyPolicyRestrictionTokenTypeSwt Simple Web Token.
- ContentKeyPolicyRestrictionTokenTypeSwt ContentKeyPolicyRestrictionTokenType = "Swt"
- // ContentKeyPolicyRestrictionTokenTypeUnknown Represents a ContentKeyPolicyRestrictionTokenType that is
- // unavailable in current API version.
- ContentKeyPolicyRestrictionTokenTypeUnknown ContentKeyPolicyRestrictionTokenType = "Unknown"
-)
-
-// PossibleContentKeyPolicyRestrictionTokenTypeValues returns an array of possible values for the ContentKeyPolicyRestrictionTokenType const type.
-func PossibleContentKeyPolicyRestrictionTokenTypeValues() []ContentKeyPolicyRestrictionTokenType {
- return []ContentKeyPolicyRestrictionTokenType{ContentKeyPolicyRestrictionTokenTypeJwt, ContentKeyPolicyRestrictionTokenTypeSwt, ContentKeyPolicyRestrictionTokenTypeUnknown}
-}
-
-// CreatedByType enumerates the values for created by type.
-type CreatedByType string
-
-const (
- // Application ...
- Application CreatedByType = "Application"
- // Key ...
- Key CreatedByType = "Key"
- // ManagedIdentity ...
- ManagedIdentity CreatedByType = "ManagedIdentity"
- // User ...
- User CreatedByType = "User"
-)
-
-// PossibleCreatedByTypeValues returns an array of possible values for the CreatedByType const type.
-func PossibleCreatedByTypeValues() []CreatedByType {
- return []CreatedByType{Application, Key, ManagedIdentity, User}
-}
-
-// DeinterlaceMode enumerates the values for deinterlace mode.
-type DeinterlaceMode string
-
-const (
- // AutoPixelAdaptive Apply automatic pixel adaptive de-interlacing on each frame in the input video.
- AutoPixelAdaptive DeinterlaceMode = "AutoPixelAdaptive"
- // Off Disables de-interlacing of the source video.
- Off DeinterlaceMode = "Off"
-)
-
-// PossibleDeinterlaceModeValues returns an array of possible values for the DeinterlaceMode const type.
-func PossibleDeinterlaceModeValues() []DeinterlaceMode {
- return []DeinterlaceMode{AutoPixelAdaptive, Off}
-}
-
-// DeinterlaceParity enumerates the values for deinterlace parity.
-type DeinterlaceParity string
-
-const (
- // Auto Automatically detect the order of fields
- Auto DeinterlaceParity = "Auto"
- // BottomFieldFirst Apply bottom field first processing of input video.
- BottomFieldFirst DeinterlaceParity = "BottomFieldFirst"
- // TopFieldFirst Apply top field first processing of input video.
- TopFieldFirst DeinterlaceParity = "TopFieldFirst"
-)
-
-// PossibleDeinterlaceParityValues returns an array of possible values for the DeinterlaceParity const type.
-func PossibleDeinterlaceParityValues() []DeinterlaceParity {
- return []DeinterlaceParity{Auto, BottomFieldFirst, TopFieldFirst}
-}
-
-// EncoderNamedPreset enumerates the values for encoder named preset.
-type EncoderNamedPreset string
-
-const (
- // AACGoodQualityAudio Produces a single MP4 file containing only stereo audio encoded at 192 kbps.
- AACGoodQualityAudio EncoderNamedPreset = "AACGoodQualityAudio"
- // AdaptiveStreaming Produces a set of GOP aligned MP4 files with H.264 video and stereo AAC audio.
- // Auto-generates a bitrate ladder based on the input resolution, bitrate and frame rate. The
- // auto-generated preset will never exceed the input resolution. For example, if the input is 720p, output
- // will remain 720p at best.
- AdaptiveStreaming EncoderNamedPreset = "AdaptiveStreaming"
- // ContentAwareEncoding Produces a set of GOP-aligned MP4s by using content-aware encoding. Given any input
- // content, the service performs an initial lightweight analysis of the input content, and uses the results
- // to determine the optimal number of layers, appropriate bitrate and resolution settings for delivery by
- // adaptive streaming. This preset is particularly effective for low and medium complexity videos, where
- // the output files will be at lower bitrates but at a quality that still delivers a good experience to
- // viewers. The output will contain MP4 files with video and audio interleaved.
- ContentAwareEncoding EncoderNamedPreset = "ContentAwareEncoding"
- // ContentAwareEncodingExperimental Exposes an experimental preset for content-aware encoding. Given any
- // input content, the service attempts to automatically determine the optimal number of layers, appropriate
- // bitrate and resolution settings for delivery by adaptive streaming. The underlying algorithms will
- // continue to evolve over time. The output will contain MP4 files with video and audio interleaved.
- ContentAwareEncodingExperimental EncoderNamedPreset = "ContentAwareEncodingExperimental"
- // CopyAllBitrateNonInterleaved Copy all video and audio streams from the input asset as non-interleaved
- // video and audio output files. This preset can be used to clip an existing asset or convert a group of
- // key frame (GOP) aligned MP4 files as an asset that can be streamed.
- CopyAllBitrateNonInterleaved EncoderNamedPreset = "CopyAllBitrateNonInterleaved"
- // H264MultipleBitrate1080p Produces a set of 8 GOP-aligned MP4 files, ranging from 6000 kbps to 400 kbps,
- // and stereo AAC audio. Resolution starts at 1080p and goes down to 180p.
- H264MultipleBitrate1080p EncoderNamedPreset = "H264MultipleBitrate1080p"
- // H264MultipleBitrate720p Produces a set of 6 GOP-aligned MP4 files, ranging from 3400 kbps to 400 kbps,
- // and stereo AAC audio. Resolution starts at 720p and goes down to 180p.
- H264MultipleBitrate720p EncoderNamedPreset = "H264MultipleBitrate720p"
- // H264MultipleBitrateSD Produces a set of 5 GOP-aligned MP4 files, ranging from 1900kbps to 400 kbps, and
- // stereo AAC audio. Resolution starts at 480p and goes down to 240p.
- H264MultipleBitrateSD EncoderNamedPreset = "H264MultipleBitrateSD"
- // H264SingleBitrate1080p Produces an MP4 file where the video is encoded with H.264 codec at 6750 kbps and
- // a picture height of 1080 pixels, and the stereo audio is encoded with AAC-LC codec at 128 kbps.
- H264SingleBitrate1080p EncoderNamedPreset = "H264SingleBitrate1080p"
- // H264SingleBitrate720p Produces an MP4 file where the video is encoded with H.264 codec at 4500 kbps and
- // a picture height of 720 pixels, and the stereo audio is encoded with AAC-LC codec at 128 kbps.
- H264SingleBitrate720p EncoderNamedPreset = "H264SingleBitrate720p"
- // H264SingleBitrateSD Produces an MP4 file where the video is encoded with H.264 codec at 2200 kbps and a
- // picture height of 480 pixels, and the stereo audio is encoded with AAC-LC codec at 128 kbps.
- H264SingleBitrateSD EncoderNamedPreset = "H264SingleBitrateSD"
- // H265AdaptiveStreaming Produces a set of GOP aligned MP4 files with H.265 video and stereo AAC audio.
- // Auto-generates a bitrate ladder based on the input resolution, bitrate and frame rate. The
- // auto-generated preset will never exceed the input resolution. For example, if the input is 720p, output
- // will remain 720p at best.
- H265AdaptiveStreaming EncoderNamedPreset = "H265AdaptiveStreaming"
- // H265ContentAwareEncoding Produces a set of GOP-aligned MP4s by using content-aware encoding. Given any
- // input content, the service performs an initial lightweight analysis of the input content, and uses the
- // results to determine the optimal number of layers, appropriate bitrate and resolution settings for
- // delivery by adaptive streaming. This preset is particularly effective for low and medium complexity
- // videos, where the output files will be at lower bitrates but at a quality that still delivers a good
- // experience to viewers. The output will contain MP4 files with video and audio interleaved.
- H265ContentAwareEncoding EncoderNamedPreset = "H265ContentAwareEncoding"
- // H265SingleBitrate1080p Produces an MP4 file where the video is encoded with H.265 codec at 3500 kbps and
- // a picture height of 1080 pixels, and the stereo audio is encoded with AAC-LC codec at 128 kbps.
- H265SingleBitrate1080p EncoderNamedPreset = "H265SingleBitrate1080p"
- // H265SingleBitrate4K Produces an MP4 file where the video is encoded with H.265 codec at 9500 kbps and a
- // picture height of 2160 pixels, and the stereo audio is encoded with AAC-LC codec at 128 kbps.
- H265SingleBitrate4K EncoderNamedPreset = "H265SingleBitrate4K"
- // H265SingleBitrate720p Produces an MP4 file where the video is encoded with H.265 codec at 1800 kbps and
- // a picture height of 720 pixels, and the stereo audio is encoded with AAC-LC codec at 128 kbps.
- H265SingleBitrate720p EncoderNamedPreset = "H265SingleBitrate720p"
-)
-
-// PossibleEncoderNamedPresetValues returns an array of possible values for the EncoderNamedPreset const type.
-func PossibleEncoderNamedPresetValues() []EncoderNamedPreset {
- return []EncoderNamedPreset{AACGoodQualityAudio, AdaptiveStreaming, ContentAwareEncoding, ContentAwareEncodingExperimental, CopyAllBitrateNonInterleaved, H264MultipleBitrate1080p, H264MultipleBitrate720p, H264MultipleBitrateSD, H264SingleBitrate1080p, H264SingleBitrate720p, H264SingleBitrateSD, H265AdaptiveStreaming, H265ContentAwareEncoding, H265SingleBitrate1080p, H265SingleBitrate4K, H265SingleBitrate720p}
-}
-
-// EncryptionScheme enumerates the values for encryption scheme.
-type EncryptionScheme string
-
-const (
- // EncryptionSchemeCommonEncryptionCbcs CommonEncryptionCbcs scheme
- EncryptionSchemeCommonEncryptionCbcs EncryptionScheme = "CommonEncryptionCbcs"
- // EncryptionSchemeCommonEncryptionCenc CommonEncryptionCenc scheme
- EncryptionSchemeCommonEncryptionCenc EncryptionScheme = "CommonEncryptionCenc"
- // EncryptionSchemeEnvelopeEncryption EnvelopeEncryption scheme
- EncryptionSchemeEnvelopeEncryption EncryptionScheme = "EnvelopeEncryption"
- // EncryptionSchemeNoEncryption NoEncryption scheme
- EncryptionSchemeNoEncryption EncryptionScheme = "NoEncryption"
-)
-
-// PossibleEncryptionSchemeValues returns an array of possible values for the EncryptionScheme const type.
-func PossibleEncryptionSchemeValues() []EncryptionScheme {
- return []EncryptionScheme{EncryptionSchemeCommonEncryptionCbcs, EncryptionSchemeCommonEncryptionCenc, EncryptionSchemeEnvelopeEncryption, EncryptionSchemeNoEncryption}
-}
-
-// EntropyMode enumerates the values for entropy mode.
-type EntropyMode string
-
-const (
- // Cabac Context Adaptive Binary Arithmetic Coder (CABAC) entropy encoding.
- Cabac EntropyMode = "Cabac"
- // Cavlc Context Adaptive Variable Length Coder (CAVLC) entropy encoding.
- Cavlc EntropyMode = "Cavlc"
-)
-
-// PossibleEntropyModeValues returns an array of possible values for the EntropyMode const type.
-func PossibleEntropyModeValues() []EntropyMode {
- return []EntropyMode{Cabac, Cavlc}
-}
-
-// FaceRedactorMode enumerates the values for face redactor mode.
-type FaceRedactorMode string
-
-const (
- // Analyze Analyze mode detects faces and outputs a metadata file with the results. Allows editing of the
- // metadata file before faces are blurred with Redact mode.
- Analyze FaceRedactorMode = "Analyze"
- // Combined Combined mode does the Analyze and Redact steps in one pass when editing the analyzed faces is
- // not desired.
- Combined FaceRedactorMode = "Combined"
- // Redact Redact mode consumes the metadata file from Analyze mode and redacts the faces found.
- Redact FaceRedactorMode = "Redact"
-)
-
-// PossibleFaceRedactorModeValues returns an array of possible values for the FaceRedactorMode const type.
-func PossibleFaceRedactorModeValues() []FaceRedactorMode {
- return []FaceRedactorMode{Analyze, Combined, Redact}
-}
-
-// FilterTrackPropertyCompareOperation enumerates the values for filter track property compare operation.
-type FilterTrackPropertyCompareOperation string
-
-const (
- // Equal The equal operation.
- Equal FilterTrackPropertyCompareOperation = "Equal"
- // NotEqual The not equal operation.
- NotEqual FilterTrackPropertyCompareOperation = "NotEqual"
-)
-
-// PossibleFilterTrackPropertyCompareOperationValues returns an array of possible values for the FilterTrackPropertyCompareOperation const type.
-func PossibleFilterTrackPropertyCompareOperationValues() []FilterTrackPropertyCompareOperation {
- return []FilterTrackPropertyCompareOperation{Equal, NotEqual}
-}
-
-// FilterTrackPropertyType enumerates the values for filter track property type.
-type FilterTrackPropertyType string
-
-const (
- // FilterTrackPropertyTypeBitrate The bitrate.
- FilterTrackPropertyTypeBitrate FilterTrackPropertyType = "Bitrate"
- // FilterTrackPropertyTypeFourCC The fourCC.
- FilterTrackPropertyTypeFourCC FilterTrackPropertyType = "FourCC"
- // FilterTrackPropertyTypeLanguage The language.
- FilterTrackPropertyTypeLanguage FilterTrackPropertyType = "Language"
- // FilterTrackPropertyTypeName The name.
- FilterTrackPropertyTypeName FilterTrackPropertyType = "Name"
- // FilterTrackPropertyTypeType The type.
- FilterTrackPropertyTypeType FilterTrackPropertyType = "Type"
- // FilterTrackPropertyTypeUnknown The unknown track property type.
- FilterTrackPropertyTypeUnknown FilterTrackPropertyType = "Unknown"
-)
-
-// PossibleFilterTrackPropertyTypeValues returns an array of possible values for the FilterTrackPropertyType const type.
-func PossibleFilterTrackPropertyTypeValues() []FilterTrackPropertyType {
- return []FilterTrackPropertyType{FilterTrackPropertyTypeBitrate, FilterTrackPropertyTypeFourCC, FilterTrackPropertyTypeLanguage, FilterTrackPropertyTypeName, FilterTrackPropertyTypeType, FilterTrackPropertyTypeUnknown}
-}
-
-// H264Complexity enumerates the values for h264 complexity.
-type H264Complexity string
-
-const (
- // Balanced Tells the encoder to use settings that achieve a balance between speed and quality.
- Balanced H264Complexity = "Balanced"
- // Quality Tells the encoder to use settings that are optimized to produce higher quality output at the
- // expense of slower overall encode time.
- Quality H264Complexity = "Quality"
- // Speed Tells the encoder to use settings that are optimized for faster encoding. Quality is sacrificed to
- // decrease encoding time.
- Speed H264Complexity = "Speed"
-)
-
-// PossibleH264ComplexityValues returns an array of possible values for the H264Complexity const type.
-func PossibleH264ComplexityValues() []H264Complexity {
- return []H264Complexity{Balanced, Quality, Speed}
-}
-
-// H264VideoProfile enumerates the values for h264 video profile.
-type H264VideoProfile string
-
-const (
- // H264VideoProfileAuto Tells the encoder to automatically determine the appropriate H.264 profile.
- H264VideoProfileAuto H264VideoProfile = "Auto"
- // H264VideoProfileBaseline Baseline profile
- H264VideoProfileBaseline H264VideoProfile = "Baseline"
- // H264VideoProfileHigh High profile.
- H264VideoProfileHigh H264VideoProfile = "High"
- // H264VideoProfileHigh422 High 4:2:2 profile.
- H264VideoProfileHigh422 H264VideoProfile = "High422"
- // H264VideoProfileHigh444 High 4:4:4 predictive profile.
- H264VideoProfileHigh444 H264VideoProfile = "High444"
- // H264VideoProfileMain Main profile
- H264VideoProfileMain H264VideoProfile = "Main"
-)
-
-// PossibleH264VideoProfileValues returns an array of possible values for the H264VideoProfile const type.
-func PossibleH264VideoProfileValues() []H264VideoProfile {
- return []H264VideoProfile{H264VideoProfileAuto, H264VideoProfileBaseline, H264VideoProfileHigh, H264VideoProfileHigh422, H264VideoProfileHigh444, H264VideoProfileMain}
-}
-
-// H265Complexity enumerates the values for h265 complexity.
-type H265Complexity string
-
-const (
- // H265ComplexityBalanced Tells the encoder to use settings that achieve a balance between speed and
- // quality.
- H265ComplexityBalanced H265Complexity = "Balanced"
- // H265ComplexityQuality Tells the encoder to use settings that are optimized to produce higher quality
- // output at the expense of slower overall encode time.
- H265ComplexityQuality H265Complexity = "Quality"
- // H265ComplexitySpeed Tells the encoder to use settings that are optimized for faster encoding. Quality is
- // sacrificed to decrease encoding time.
- H265ComplexitySpeed H265Complexity = "Speed"
-)
-
-// PossibleH265ComplexityValues returns an array of possible values for the H265Complexity const type.
-func PossibleH265ComplexityValues() []H265Complexity {
- return []H265Complexity{H265ComplexityBalanced, H265ComplexityQuality, H265ComplexitySpeed}
-}
-
-// H265VideoProfile enumerates the values for h265 video profile.
-type H265VideoProfile string
-
-const (
- // H265VideoProfileAuto Tells the encoder to automatically determine the appropriate H.265 profile.
- H265VideoProfileAuto H265VideoProfile = "Auto"
- // H265VideoProfileMain Main profile
- // (https://x265.readthedocs.io/en/default/cli.html?highlight=profile#profile-level-tier)
- H265VideoProfileMain H265VideoProfile = "Main"
-)
-
-// PossibleH265VideoProfileValues returns an array of possible values for the H265VideoProfile const type.
-func PossibleH265VideoProfileValues() []H265VideoProfile {
- return []H265VideoProfile{H265VideoProfileAuto, H265VideoProfileMain}
-}
-
-// InsightsType enumerates the values for insights type.
-type InsightsType string
-
-const (
- // AllInsights Generate both audio and video insights. Fails if either audio or video Insights fail.
- AllInsights InsightsType = "AllInsights"
- // AudioInsightsOnly Generate audio only insights. Ignore video even if present. Fails if no audio is
- // present.
- AudioInsightsOnly InsightsType = "AudioInsightsOnly"
- // VideoInsightsOnly Generate video only insights. Ignore audio if present. Fails if no video is present.
- VideoInsightsOnly InsightsType = "VideoInsightsOnly"
-)
-
-// PossibleInsightsTypeValues returns an array of possible values for the InsightsType const type.
-func PossibleInsightsTypeValues() []InsightsType {
- return []InsightsType{AllInsights, AudioInsightsOnly, VideoInsightsOnly}
-}
-
-// JobErrorCategory enumerates the values for job error category.
-type JobErrorCategory string
-
-const (
- // JobErrorCategoryConfiguration The error is configuration related.
- JobErrorCategoryConfiguration JobErrorCategory = "Configuration"
- // JobErrorCategoryContent The error is related to data in the input files.
- JobErrorCategoryContent JobErrorCategory = "Content"
- // JobErrorCategoryDownload The error is download related.
- JobErrorCategoryDownload JobErrorCategory = "Download"
- // JobErrorCategoryService The error is service related.
- JobErrorCategoryService JobErrorCategory = "Service"
- // JobErrorCategoryUpload The error is upload related.
- JobErrorCategoryUpload JobErrorCategory = "Upload"
-)
-
-// PossibleJobErrorCategoryValues returns an array of possible values for the JobErrorCategory const type.
-func PossibleJobErrorCategoryValues() []JobErrorCategory {
- return []JobErrorCategory{JobErrorCategoryConfiguration, JobErrorCategoryContent, JobErrorCategoryDownload, JobErrorCategoryService, JobErrorCategoryUpload}
-}
-
-// JobErrorCode enumerates the values for job error code.
-type JobErrorCode string
-
-const (
- // ConfigurationUnsupported There was a problem with the combination of input files and the configuration
- // settings applied, fix the configuration settings and retry with the same input, or change input to match
- // the configuration.
- ConfigurationUnsupported JobErrorCode = "ConfigurationUnsupported"
- // ContentMalformed There was a problem with the input content (for example: zero byte files, or
- // corrupt/non-decodable files), check the input files.
- ContentMalformed JobErrorCode = "ContentMalformed"
- // ContentUnsupported There was a problem with the format of the input (not valid media file, or an
- // unsupported file/codec), check the validity of the input files.
- ContentUnsupported JobErrorCode = "ContentUnsupported"
- // DownloadNotAccessible While trying to download the input files, the files were not accessible, please
- // check the availability of the source.
- DownloadNotAccessible JobErrorCode = "DownloadNotAccessible"
- // DownloadTransientError While trying to download the input files, there was an issue during transfer
- // (storage service, network errors), see details and check your source.
- DownloadTransientError JobErrorCode = "DownloadTransientError"
- // ServiceError Fatal service error, please contact support.
- ServiceError JobErrorCode = "ServiceError"
- // ServiceTransientError Transient error, please retry, if retry is unsuccessful, please contact support.
- ServiceTransientError JobErrorCode = "ServiceTransientError"
- // UploadNotAccessible While trying to upload the output files, the destination was not reachable, please
- // check the availability of the destination.
- UploadNotAccessible JobErrorCode = "UploadNotAccessible"
- // UploadTransientError While trying to upload the output files, there was an issue during transfer
- // (storage service, network errors), see details and check your destination.
- UploadTransientError JobErrorCode = "UploadTransientError"
-)
-
-// PossibleJobErrorCodeValues returns an array of possible values for the JobErrorCode const type.
-func PossibleJobErrorCodeValues() []JobErrorCode {
- return []JobErrorCode{ConfigurationUnsupported, ContentMalformed, ContentUnsupported, DownloadNotAccessible, DownloadTransientError, ServiceError, ServiceTransientError, UploadNotAccessible, UploadTransientError}
-}
-
-// JobRetry enumerates the values for job retry.
-type JobRetry string
-
-const (
- // DoNotRetry Issue needs to be investigated and then the job resubmitted with corrections or retried once
- // the underlying issue has been corrected.
- DoNotRetry JobRetry = "DoNotRetry"
- // MayRetry Issue may be resolved after waiting for a period of time and resubmitting the same Job.
- MayRetry JobRetry = "MayRetry"
-)
-
-// PossibleJobRetryValues returns an array of possible values for the JobRetry const type.
-func PossibleJobRetryValues() []JobRetry {
- return []JobRetry{DoNotRetry, MayRetry}
-}
-
-// JobState enumerates the values for job state.
-type JobState string
-
-const (
- // Canceled The job was canceled. This is a final state for the job.
- Canceled JobState = "Canceled"
- // Canceling The job is in the process of being canceled. This is a transient state for the job.
- Canceling JobState = "Canceling"
- // Error The job has encountered an error. This is a final state for the job.
- Error JobState = "Error"
- // Finished The job is finished. This is a final state for the job.
- Finished JobState = "Finished"
- // Processing The job is processing. This is a transient state for the job.
- Processing JobState = "Processing"
- // Queued The job is in a queued state, waiting for resources to become available. This is a transient
- // state.
- Queued JobState = "Queued"
- // Scheduled The job is being scheduled to run on an available resource. This is a transient state, between
- // queued and processing states.
- Scheduled JobState = "Scheduled"
-)
-
-// PossibleJobStateValues returns an array of possible values for the JobState const type.
-func PossibleJobStateValues() []JobState {
- return []JobState{Canceled, Canceling, Error, Finished, Processing, Queued, Scheduled}
-}
-
-// LiveEventEncodingType enumerates the values for live event encoding type.
-type LiveEventEncodingType string
-
-const (
- // LiveEventEncodingTypeNone A contribution live encoder sends a multiple bitrate stream. The ingested
- // stream passes through the live event without any further processing. It is also called the pass-through
- // mode.
- LiveEventEncodingTypeNone LiveEventEncodingType = "None"
- // LiveEventEncodingTypePremium1080p A contribution live encoder sends a single bitrate stream to the live
- // event and Media Services creates multiple bitrate streams. The output cannot exceed 1080p in resolution.
- LiveEventEncodingTypePremium1080p LiveEventEncodingType = "Premium1080p"
- // LiveEventEncodingTypeStandard A contribution live encoder sends a single bitrate stream to the live
- // event and Media Services creates multiple bitrate streams. The output cannot exceed 720p in resolution.
- LiveEventEncodingTypeStandard LiveEventEncodingType = "Standard"
-)
-
-// PossibleLiveEventEncodingTypeValues returns an array of possible values for the LiveEventEncodingType const type.
-func PossibleLiveEventEncodingTypeValues() []LiveEventEncodingType {
- return []LiveEventEncodingType{LiveEventEncodingTypeNone, LiveEventEncodingTypePremium1080p, LiveEventEncodingTypeStandard}
-}
-
-// LiveEventInputProtocol enumerates the values for live event input protocol.
-type LiveEventInputProtocol string
-
-const (
- // FragmentedMP4 Smooth Streaming input will be sent by the contribution encoder to the live event.
- FragmentedMP4 LiveEventInputProtocol = "FragmentedMP4"
- // RTMP RTMP input will be sent by the contribution encoder to the live event.
- RTMP LiveEventInputProtocol = "RTMP"
-)
-
-// PossibleLiveEventInputProtocolValues returns an array of possible values for the LiveEventInputProtocol const type.
-func PossibleLiveEventInputProtocolValues() []LiveEventInputProtocol {
- return []LiveEventInputProtocol{FragmentedMP4, RTMP}
-}
-
-// LiveEventResourceState enumerates the values for live event resource state.
-type LiveEventResourceState string
-
-const (
- // Allocating Allocate action was called on the live event and resources are being provisioned for this
- // live event. Once allocation completes successfully, the live event will transition to StandBy state.
- Allocating LiveEventResourceState = "Allocating"
- // Deleting The live event is being deleted. No billing occurs in this transient state. Updates or
- // streaming are not allowed during this state.
- Deleting LiveEventResourceState = "Deleting"
- // Running The live event resources have been allocated, ingest and preview URLs have been generated, and
- // it is capable of receiving live streams. At this point, billing is active. You must explicitly call Stop
- // on the live event resource to halt further billing.
- Running LiveEventResourceState = "Running"
- // StandBy Live event resources have been provisioned and is ready to start. Billing occurs in this state.
- // Most properties can still be updated, however ingest or streaming is not allowed during this state.
- StandBy LiveEventResourceState = "StandBy"
- // Starting The live event is being started and resources are being allocated. No billing occurs in this
- // state. Updates or streaming are not allowed during this state. If an error occurs, the live event
- // returns to the Stopped state.
- Starting LiveEventResourceState = "Starting"
- // Stopped This is the initial state of the live event after creation (unless autostart was set to true.)
- // No billing occurs in this state. In this state, the live event properties can be updated but streaming
- // is not allowed.
- Stopped LiveEventResourceState = "Stopped"
- // Stopping The live event is being stopped and resources are being de-provisioned. No billing occurs in
- // this transient state. Updates or streaming are not allowed during this state.
- Stopping LiveEventResourceState = "Stopping"
-)
-
-// PossibleLiveEventResourceStateValues returns an array of possible values for the LiveEventResourceState const type.
-func PossibleLiveEventResourceStateValues() []LiveEventResourceState {
- return []LiveEventResourceState{Allocating, Deleting, Running, StandBy, Starting, Stopped, Stopping}
-}
-
-// LiveOutputResourceState enumerates the values for live output resource state.
-type LiveOutputResourceState string
-
-const (
- // LiveOutputResourceStateCreating Live output is being created. No content is archived in the asset until
- // the live output is in running state.
- LiveOutputResourceStateCreating LiveOutputResourceState = "Creating"
- // LiveOutputResourceStateDeleting Live output is being deleted. The live asset is being converted from
- // live to on-demand asset. Any streaming URLs created on the live output asset continue to work.
- LiveOutputResourceStateDeleting LiveOutputResourceState = "Deleting"
- // LiveOutputResourceStateRunning Live output is running and archiving live streaming content to the asset
- // if there is valid input from a contribution encoder.
- LiveOutputResourceStateRunning LiveOutputResourceState = "Running"
-)
-
-// PossibleLiveOutputResourceStateValues returns an array of possible values for the LiveOutputResourceState const type.
-func PossibleLiveOutputResourceStateValues() []LiveOutputResourceState {
- return []LiveOutputResourceState{LiveOutputResourceStateCreating, LiveOutputResourceStateDeleting, LiveOutputResourceStateRunning}
-}
-
-// ManagedIdentityType enumerates the values for managed identity type.
-type ManagedIdentityType string
-
-const (
- // ManagedIdentityTypeNone No managed identity.
- ManagedIdentityTypeNone ManagedIdentityType = "None"
- // ManagedIdentityTypeSystemAssigned A system-assigned managed identity.
- ManagedIdentityTypeSystemAssigned ManagedIdentityType = "SystemAssigned"
-)
-
-// PossibleManagedIdentityTypeValues returns an array of possible values for the ManagedIdentityType const type.
-func PossibleManagedIdentityTypeValues() []ManagedIdentityType {
- return []ManagedIdentityType{ManagedIdentityTypeNone, ManagedIdentityTypeSystemAssigned}
-}
-
-// MetricAggregationType enumerates the values for metric aggregation type.
-type MetricAggregationType string
-
-const (
- // Average The average.
- Average MetricAggregationType = "Average"
- // Count The count of a number of items, usually requests.
- Count MetricAggregationType = "Count"
- // Total The sum.
- Total MetricAggregationType = "Total"
-)
-
-// PossibleMetricAggregationTypeValues returns an array of possible values for the MetricAggregationType const type.
-func PossibleMetricAggregationTypeValues() []MetricAggregationType {
- return []MetricAggregationType{Average, Count, Total}
-}
-
-// MetricUnit enumerates the values for metric unit.
-type MetricUnit string
-
-const (
- // MetricUnitBytes The number of bytes.
- MetricUnitBytes MetricUnit = "Bytes"
- // MetricUnitCount The count.
- MetricUnitCount MetricUnit = "Count"
- // MetricUnitMilliseconds The number of milliseconds.
- MetricUnitMilliseconds MetricUnit = "Milliseconds"
-)
-
-// PossibleMetricUnitValues returns an array of possible values for the MetricUnit const type.
-func PossibleMetricUnitValues() []MetricUnit {
- return []MetricUnit{MetricUnitBytes, MetricUnitCount, MetricUnitMilliseconds}
-}
-
-// OdataType enumerates the values for odata type.
-type OdataType string
-
-const (
- // OdataTypeContentKeyPolicyPlayReadyContentKeyLocation ...
- OdataTypeContentKeyPolicyPlayReadyContentKeyLocation OdataType = "ContentKeyPolicyPlayReadyContentKeyLocation"
- // OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader ...
- OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader OdataType = "#Microsoft.Media.ContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader"
- // OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier ...
- OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier OdataType = "#Microsoft.Media.ContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier"
-)
-
-// PossibleOdataTypeValues returns an array of possible values for the OdataType const type.
-func PossibleOdataTypeValues() []OdataType {
- return []OdataType{OdataTypeContentKeyPolicyPlayReadyContentKeyLocation, OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader, OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier}
-}
-
-// OdataTypeBasicClipTime enumerates the values for odata type basic clip time.
-type OdataTypeBasicClipTime string
-
-const (
- // OdataTypeClipTime ...
- OdataTypeClipTime OdataTypeBasicClipTime = "ClipTime"
- // OdataTypeMicrosoftMediaAbsoluteClipTime ...
- OdataTypeMicrosoftMediaAbsoluteClipTime OdataTypeBasicClipTime = "#Microsoft.Media.AbsoluteClipTime"
- // OdataTypeMicrosoftMediaUtcClipTime ...
- OdataTypeMicrosoftMediaUtcClipTime OdataTypeBasicClipTime = "#Microsoft.Media.UtcClipTime"
-)
-
-// PossibleOdataTypeBasicClipTimeValues returns an array of possible values for the OdataTypeBasicClipTime const type.
-func PossibleOdataTypeBasicClipTimeValues() []OdataTypeBasicClipTime {
- return []OdataTypeBasicClipTime{OdataTypeClipTime, OdataTypeMicrosoftMediaAbsoluteClipTime, OdataTypeMicrosoftMediaUtcClipTime}
-}
-
-// OdataTypeBasicCodec enumerates the values for odata type basic codec.
-type OdataTypeBasicCodec string
-
-const (
- // OdataTypeCodec ...
- OdataTypeCodec OdataTypeBasicCodec = "Codec"
- // OdataTypeMicrosoftMediaAacAudio ...
- OdataTypeMicrosoftMediaAacAudio OdataTypeBasicCodec = "#Microsoft.Media.AacAudio"
- // OdataTypeMicrosoftMediaAudio ...
- OdataTypeMicrosoftMediaAudio OdataTypeBasicCodec = "#Microsoft.Media.Audio"
- // OdataTypeMicrosoftMediaCopyAudio ...
- OdataTypeMicrosoftMediaCopyAudio OdataTypeBasicCodec = "#Microsoft.Media.CopyAudio"
- // OdataTypeMicrosoftMediaCopyVideo ...
- OdataTypeMicrosoftMediaCopyVideo OdataTypeBasicCodec = "#Microsoft.Media.CopyVideo"
- // OdataTypeMicrosoftMediaH264Video ...
- OdataTypeMicrosoftMediaH264Video OdataTypeBasicCodec = "#Microsoft.Media.H264Video"
- // OdataTypeMicrosoftMediaH265Video ...
- OdataTypeMicrosoftMediaH265Video OdataTypeBasicCodec = "#Microsoft.Media.H265Video"
- // OdataTypeMicrosoftMediaImage ...
- OdataTypeMicrosoftMediaImage OdataTypeBasicCodec = "#Microsoft.Media.Image"
- // OdataTypeMicrosoftMediaJpgImage ...
- OdataTypeMicrosoftMediaJpgImage OdataTypeBasicCodec = "#Microsoft.Media.JpgImage"
- // OdataTypeMicrosoftMediaPngImage ...
- OdataTypeMicrosoftMediaPngImage OdataTypeBasicCodec = "#Microsoft.Media.PngImage"
- // OdataTypeMicrosoftMediaVideo ...
- OdataTypeMicrosoftMediaVideo OdataTypeBasicCodec = "#Microsoft.Media.Video"
-)
-
-// PossibleOdataTypeBasicCodecValues returns an array of possible values for the OdataTypeBasicCodec const type.
-func PossibleOdataTypeBasicCodecValues() []OdataTypeBasicCodec {
- return []OdataTypeBasicCodec{OdataTypeCodec, OdataTypeMicrosoftMediaAacAudio, OdataTypeMicrosoftMediaAudio, OdataTypeMicrosoftMediaCopyAudio, OdataTypeMicrosoftMediaCopyVideo, OdataTypeMicrosoftMediaH264Video, OdataTypeMicrosoftMediaH265Video, OdataTypeMicrosoftMediaImage, OdataTypeMicrosoftMediaJpgImage, OdataTypeMicrosoftMediaPngImage, OdataTypeMicrosoftMediaVideo}
-}
-
-// OdataTypeBasicContentKeyPolicyConfiguration enumerates the values for odata type basic content key policy
-// configuration.
-type OdataTypeBasicContentKeyPolicyConfiguration string
-
-const (
- // OdataTypeContentKeyPolicyConfiguration ...
- OdataTypeContentKeyPolicyConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "ContentKeyPolicyConfiguration"
- // OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration ...
- OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyClearKeyConfiguration"
- // OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration ...
- OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyFairPlayConfiguration"
- // OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration ...
- OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyPlayReadyConfiguration"
- // OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration ...
- OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyUnknownConfiguration"
- // OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration ...
- OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyWidevineConfiguration"
-)
-
-// PossibleOdataTypeBasicContentKeyPolicyConfigurationValues returns an array of possible values for the OdataTypeBasicContentKeyPolicyConfiguration const type.
-func PossibleOdataTypeBasicContentKeyPolicyConfigurationValues() []OdataTypeBasicContentKeyPolicyConfiguration {
- return []OdataTypeBasicContentKeyPolicyConfiguration{OdataTypeContentKeyPolicyConfiguration, OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration, OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration, OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration, OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration, OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration}
-}
-
-// OdataTypeBasicContentKeyPolicyRestriction enumerates the values for odata type basic content key policy
-// restriction.
-type OdataTypeBasicContentKeyPolicyRestriction string
-
-const (
- // OdataTypeContentKeyPolicyRestriction ...
- OdataTypeContentKeyPolicyRestriction OdataTypeBasicContentKeyPolicyRestriction = "ContentKeyPolicyRestriction"
- // OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction ...
- OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction OdataTypeBasicContentKeyPolicyRestriction = "#Microsoft.Media.ContentKeyPolicyOpenRestriction"
- // OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction ...
- OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction OdataTypeBasicContentKeyPolicyRestriction = "#Microsoft.Media.ContentKeyPolicyTokenRestriction"
- // OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction ...
- OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction OdataTypeBasicContentKeyPolicyRestriction = "#Microsoft.Media.ContentKeyPolicyUnknownRestriction"
-)
-
-// PossibleOdataTypeBasicContentKeyPolicyRestrictionValues returns an array of possible values for the OdataTypeBasicContentKeyPolicyRestriction const type.
-func PossibleOdataTypeBasicContentKeyPolicyRestrictionValues() []OdataTypeBasicContentKeyPolicyRestriction {
- return []OdataTypeBasicContentKeyPolicyRestriction{OdataTypeContentKeyPolicyRestriction, OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction, OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction, OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction}
-}
-
-// OdataTypeBasicContentKeyPolicyRestrictionTokenKey enumerates the values for odata type basic content key
-// policy restriction token key.
-type OdataTypeBasicContentKeyPolicyRestrictionTokenKey string
-
-const (
- // OdataTypeContentKeyPolicyRestrictionTokenKey ...
- OdataTypeContentKeyPolicyRestrictionTokenKey OdataTypeBasicContentKeyPolicyRestrictionTokenKey = "ContentKeyPolicyRestrictionTokenKey"
- // OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey ...
- OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey OdataTypeBasicContentKeyPolicyRestrictionTokenKey = "#Microsoft.Media.ContentKeyPolicyRsaTokenKey"
- // OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey ...
- OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey OdataTypeBasicContentKeyPolicyRestrictionTokenKey = "#Microsoft.Media.ContentKeyPolicySymmetricTokenKey"
- // OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey ...
- OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey OdataTypeBasicContentKeyPolicyRestrictionTokenKey = "#Microsoft.Media.ContentKeyPolicyX509CertificateTokenKey"
-)
-
-// PossibleOdataTypeBasicContentKeyPolicyRestrictionTokenKeyValues returns an array of possible values for the OdataTypeBasicContentKeyPolicyRestrictionTokenKey const type.
-func PossibleOdataTypeBasicContentKeyPolicyRestrictionTokenKeyValues() []OdataTypeBasicContentKeyPolicyRestrictionTokenKey {
- return []OdataTypeBasicContentKeyPolicyRestrictionTokenKey{OdataTypeContentKeyPolicyRestrictionTokenKey, OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey, OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey, OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey}
-}
-
-// OdataTypeBasicFormat enumerates the values for odata type basic format.
-type OdataTypeBasicFormat string
-
-const (
- // OdataTypeFormat ...
- OdataTypeFormat OdataTypeBasicFormat = "Format"
- // OdataTypeMicrosoftMediaImageFormat ...
- OdataTypeMicrosoftMediaImageFormat OdataTypeBasicFormat = "#Microsoft.Media.ImageFormat"
- // OdataTypeMicrosoftMediaJpgFormat ...
- OdataTypeMicrosoftMediaJpgFormat OdataTypeBasicFormat = "#Microsoft.Media.JpgFormat"
- // OdataTypeMicrosoftMediaMp4Format ...
- OdataTypeMicrosoftMediaMp4Format OdataTypeBasicFormat = "#Microsoft.Media.Mp4Format"
- // OdataTypeMicrosoftMediaMultiBitrateFormat ...
- OdataTypeMicrosoftMediaMultiBitrateFormat OdataTypeBasicFormat = "#Microsoft.Media.MultiBitrateFormat"
- // OdataTypeMicrosoftMediaPngFormat ...
- OdataTypeMicrosoftMediaPngFormat OdataTypeBasicFormat = "#Microsoft.Media.PngFormat"
- // OdataTypeMicrosoftMediaTransportStreamFormat ...
- OdataTypeMicrosoftMediaTransportStreamFormat OdataTypeBasicFormat = "#Microsoft.Media.TransportStreamFormat"
-)
-
-// PossibleOdataTypeBasicFormatValues returns an array of possible values for the OdataTypeBasicFormat const type.
-func PossibleOdataTypeBasicFormatValues() []OdataTypeBasicFormat {
- return []OdataTypeBasicFormat{OdataTypeFormat, OdataTypeMicrosoftMediaImageFormat, OdataTypeMicrosoftMediaJpgFormat, OdataTypeMicrosoftMediaMp4Format, OdataTypeMicrosoftMediaMultiBitrateFormat, OdataTypeMicrosoftMediaPngFormat, OdataTypeMicrosoftMediaTransportStreamFormat}
-}
-
-// OdataTypeBasicInputDefinition enumerates the values for odata type basic input definition.
-type OdataTypeBasicInputDefinition string
-
-const (
- // OdataTypeInputDefinition ...
- OdataTypeInputDefinition OdataTypeBasicInputDefinition = "InputDefinition"
- // OdataTypeMicrosoftMediaFromAllInputFile ...
- OdataTypeMicrosoftMediaFromAllInputFile OdataTypeBasicInputDefinition = "#Microsoft.Media.FromAllInputFile"
- // OdataTypeMicrosoftMediaFromEachInputFile ...
- OdataTypeMicrosoftMediaFromEachInputFile OdataTypeBasicInputDefinition = "#Microsoft.Media.FromEachInputFile"
- // OdataTypeMicrosoftMediaInputFile ...
- OdataTypeMicrosoftMediaInputFile OdataTypeBasicInputDefinition = "#Microsoft.Media.InputFile"
-)
-
-// PossibleOdataTypeBasicInputDefinitionValues returns an array of possible values for the OdataTypeBasicInputDefinition const type.
-func PossibleOdataTypeBasicInputDefinitionValues() []OdataTypeBasicInputDefinition {
- return []OdataTypeBasicInputDefinition{OdataTypeInputDefinition, OdataTypeMicrosoftMediaFromAllInputFile, OdataTypeMicrosoftMediaFromEachInputFile, OdataTypeMicrosoftMediaInputFile}
-}
-
-// OdataTypeBasicJobInput enumerates the values for odata type basic job input.
-type OdataTypeBasicJobInput string
-
-const (
- // OdataTypeJobInput ...
- OdataTypeJobInput OdataTypeBasicJobInput = "JobInput"
- // OdataTypeMicrosoftMediaJobInputAsset ...
- OdataTypeMicrosoftMediaJobInputAsset OdataTypeBasicJobInput = "#Microsoft.Media.JobInputAsset"
- // OdataTypeMicrosoftMediaJobInputClip ...
- OdataTypeMicrosoftMediaJobInputClip OdataTypeBasicJobInput = "#Microsoft.Media.JobInputClip"
- // OdataTypeMicrosoftMediaJobInputHTTP ...
- OdataTypeMicrosoftMediaJobInputHTTP OdataTypeBasicJobInput = "#Microsoft.Media.JobInputHttp"
- // OdataTypeMicrosoftMediaJobInputs ...
- OdataTypeMicrosoftMediaJobInputs OdataTypeBasicJobInput = "#Microsoft.Media.JobInputs"
- // OdataTypeMicrosoftMediaJobInputSequence ...
- OdataTypeMicrosoftMediaJobInputSequence OdataTypeBasicJobInput = "#Microsoft.Media.JobInputSequence"
-)
-
-// PossibleOdataTypeBasicJobInputValues returns an array of possible values for the OdataTypeBasicJobInput const type.
-func PossibleOdataTypeBasicJobInputValues() []OdataTypeBasicJobInput {
- return []OdataTypeBasicJobInput{OdataTypeJobInput, OdataTypeMicrosoftMediaJobInputAsset, OdataTypeMicrosoftMediaJobInputClip, OdataTypeMicrosoftMediaJobInputHTTP, OdataTypeMicrosoftMediaJobInputs, OdataTypeMicrosoftMediaJobInputSequence}
-}
-
-// OdataTypeBasicJobOutput enumerates the values for odata type basic job output.
-type OdataTypeBasicJobOutput string
-
-const (
- // OdataTypeJobOutput ...
- OdataTypeJobOutput OdataTypeBasicJobOutput = "JobOutput"
- // OdataTypeMicrosoftMediaJobOutputAsset ...
- OdataTypeMicrosoftMediaJobOutputAsset OdataTypeBasicJobOutput = "#Microsoft.Media.JobOutputAsset"
-)
-
-// PossibleOdataTypeBasicJobOutputValues returns an array of possible values for the OdataTypeBasicJobOutput const type.
-func PossibleOdataTypeBasicJobOutputValues() []OdataTypeBasicJobOutput {
- return []OdataTypeBasicJobOutput{OdataTypeJobOutput, OdataTypeMicrosoftMediaJobOutputAsset}
-}
-
-// OdataTypeBasicLayer enumerates the values for odata type basic layer.
-type OdataTypeBasicLayer string
-
-const (
- // OdataTypeLayer ...
- OdataTypeLayer OdataTypeBasicLayer = "Layer"
- // OdataTypeMicrosoftMediaH264Layer ...
- OdataTypeMicrosoftMediaH264Layer OdataTypeBasicLayer = "#Microsoft.Media.H264Layer"
- // OdataTypeMicrosoftMediaH265Layer ...
- OdataTypeMicrosoftMediaH265Layer OdataTypeBasicLayer = "#Microsoft.Media.H265Layer"
- // OdataTypeMicrosoftMediaH265VideoLayer ...
- OdataTypeMicrosoftMediaH265VideoLayer OdataTypeBasicLayer = "#Microsoft.Media.H265VideoLayer"
- // OdataTypeMicrosoftMediaJpgLayer ...
- OdataTypeMicrosoftMediaJpgLayer OdataTypeBasicLayer = "#Microsoft.Media.JpgLayer"
- // OdataTypeMicrosoftMediaPngLayer ...
- OdataTypeMicrosoftMediaPngLayer OdataTypeBasicLayer = "#Microsoft.Media.PngLayer"
- // OdataTypeMicrosoftMediaVideoLayer ...
- OdataTypeMicrosoftMediaVideoLayer OdataTypeBasicLayer = "#Microsoft.Media.VideoLayer"
-)
-
-// PossibleOdataTypeBasicLayerValues returns an array of possible values for the OdataTypeBasicLayer const type.
-func PossibleOdataTypeBasicLayerValues() []OdataTypeBasicLayer {
- return []OdataTypeBasicLayer{OdataTypeLayer, OdataTypeMicrosoftMediaH264Layer, OdataTypeMicrosoftMediaH265Layer, OdataTypeMicrosoftMediaH265VideoLayer, OdataTypeMicrosoftMediaJpgLayer, OdataTypeMicrosoftMediaPngLayer, OdataTypeMicrosoftMediaVideoLayer}
-}
-
-// OdataTypeBasicOverlay enumerates the values for odata type basic overlay.
-type OdataTypeBasicOverlay string
-
-const (
- // OdataTypeMicrosoftMediaAudioOverlay ...
- OdataTypeMicrosoftMediaAudioOverlay OdataTypeBasicOverlay = "#Microsoft.Media.AudioOverlay"
- // OdataTypeMicrosoftMediaVideoOverlay ...
- OdataTypeMicrosoftMediaVideoOverlay OdataTypeBasicOverlay = "#Microsoft.Media.VideoOverlay"
- // OdataTypeOverlay ...
- OdataTypeOverlay OdataTypeBasicOverlay = "Overlay"
-)
-
-// PossibleOdataTypeBasicOverlayValues returns an array of possible values for the OdataTypeBasicOverlay const type.
-func PossibleOdataTypeBasicOverlayValues() []OdataTypeBasicOverlay {
- return []OdataTypeBasicOverlay{OdataTypeMicrosoftMediaAudioOverlay, OdataTypeMicrosoftMediaVideoOverlay, OdataTypeOverlay}
-}
-
-// OdataTypeBasicPreset enumerates the values for odata type basic preset.
-type OdataTypeBasicPreset string
-
-const (
- // OdataTypeMicrosoftMediaAudioAnalyzerPreset ...
- OdataTypeMicrosoftMediaAudioAnalyzerPreset OdataTypeBasicPreset = "#Microsoft.Media.AudioAnalyzerPreset"
- // OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset ...
- OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset OdataTypeBasicPreset = "#Microsoft.Media.BuiltInStandardEncoderPreset"
- // OdataTypeMicrosoftMediaFaceDetectorPreset ...
- OdataTypeMicrosoftMediaFaceDetectorPreset OdataTypeBasicPreset = "#Microsoft.Media.FaceDetectorPreset"
- // OdataTypeMicrosoftMediaStandardEncoderPreset ...
- OdataTypeMicrosoftMediaStandardEncoderPreset OdataTypeBasicPreset = "#Microsoft.Media.StandardEncoderPreset"
- // OdataTypeMicrosoftMediaVideoAnalyzerPreset ...
- OdataTypeMicrosoftMediaVideoAnalyzerPreset OdataTypeBasicPreset = "#Microsoft.Media.VideoAnalyzerPreset"
- // OdataTypePreset ...
- OdataTypePreset OdataTypeBasicPreset = "Preset"
-)
-
-// PossibleOdataTypeBasicPresetValues returns an array of possible values for the OdataTypeBasicPreset const type.
-func PossibleOdataTypeBasicPresetValues() []OdataTypeBasicPreset {
- return []OdataTypeBasicPreset{OdataTypeMicrosoftMediaAudioAnalyzerPreset, OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset, OdataTypeMicrosoftMediaFaceDetectorPreset, OdataTypeMicrosoftMediaStandardEncoderPreset, OdataTypeMicrosoftMediaVideoAnalyzerPreset, OdataTypePreset}
-}
-
-// OdataTypeBasicTrackDescriptor enumerates the values for odata type basic track descriptor.
-type OdataTypeBasicTrackDescriptor string
-
-const (
- // OdataTypeMicrosoftMediaAudioTrackDescriptor ...
- OdataTypeMicrosoftMediaAudioTrackDescriptor OdataTypeBasicTrackDescriptor = "#Microsoft.Media.AudioTrackDescriptor"
- // OdataTypeMicrosoftMediaSelectAudioTrackByAttribute ...
- OdataTypeMicrosoftMediaSelectAudioTrackByAttribute OdataTypeBasicTrackDescriptor = "#Microsoft.Media.SelectAudioTrackByAttribute"
- // OdataTypeMicrosoftMediaSelectAudioTrackByID ...
- OdataTypeMicrosoftMediaSelectAudioTrackByID OdataTypeBasicTrackDescriptor = "#Microsoft.Media.SelectAudioTrackById"
- // OdataTypeMicrosoftMediaSelectVideoTrackByAttribute ...
- OdataTypeMicrosoftMediaSelectVideoTrackByAttribute OdataTypeBasicTrackDescriptor = "#Microsoft.Media.SelectVideoTrackByAttribute"
- // OdataTypeMicrosoftMediaSelectVideoTrackByID ...
- OdataTypeMicrosoftMediaSelectVideoTrackByID OdataTypeBasicTrackDescriptor = "#Microsoft.Media.SelectVideoTrackById"
- // OdataTypeMicrosoftMediaVideoTrackDescriptor ...
- OdataTypeMicrosoftMediaVideoTrackDescriptor OdataTypeBasicTrackDescriptor = "#Microsoft.Media.VideoTrackDescriptor"
- // OdataTypeTrackDescriptor ...
- OdataTypeTrackDescriptor OdataTypeBasicTrackDescriptor = "TrackDescriptor"
-)
-
-// PossibleOdataTypeBasicTrackDescriptorValues returns an array of possible values for the OdataTypeBasicTrackDescriptor const type.
-func PossibleOdataTypeBasicTrackDescriptorValues() []OdataTypeBasicTrackDescriptor {
- return []OdataTypeBasicTrackDescriptor{OdataTypeMicrosoftMediaAudioTrackDescriptor, OdataTypeMicrosoftMediaSelectAudioTrackByAttribute, OdataTypeMicrosoftMediaSelectAudioTrackByID, OdataTypeMicrosoftMediaSelectVideoTrackByAttribute, OdataTypeMicrosoftMediaSelectVideoTrackByID, OdataTypeMicrosoftMediaVideoTrackDescriptor, OdataTypeTrackDescriptor}
-}
-
-// OnErrorType enumerates the values for on error type.
-type OnErrorType string
-
-const (
- // ContinueJob Tells the service that if this TransformOutput fails, then allow any other TransformOutput
- // to continue.
- ContinueJob OnErrorType = "ContinueJob"
- // StopProcessingJob Tells the service that if this TransformOutput fails, then any other incomplete
- // TransformOutputs can be stopped.
- StopProcessingJob OnErrorType = "StopProcessingJob"
-)
-
-// PossibleOnErrorTypeValues returns an array of possible values for the OnErrorType const type.
-func PossibleOnErrorTypeValues() []OnErrorType {
- return []OnErrorType{ContinueJob, StopProcessingJob}
-}
-
-// Priority enumerates the values for priority.
-type Priority string
-
-const (
- // PriorityHigh Used for TransformOutputs that should take precedence over others.
- PriorityHigh Priority = "High"
- // PriorityLow Used for TransformOutputs that can be generated after Normal and High priority
- // TransformOutputs.
- PriorityLow Priority = "Low"
- // PriorityNormal Used for TransformOutputs that can be generated at Normal priority.
- PriorityNormal Priority = "Normal"
-)
-
-// PossiblePriorityValues returns an array of possible values for the Priority const type.
-func PossiblePriorityValues() []Priority {
- return []Priority{PriorityHigh, PriorityLow, PriorityNormal}
-}
-
-// PrivateEndpointConnectionProvisioningState enumerates the values for private endpoint connection
-// provisioning state.
-type PrivateEndpointConnectionProvisioningState string
-
-const (
- // PrivateEndpointConnectionProvisioningStateCreating ...
- PrivateEndpointConnectionProvisioningStateCreating PrivateEndpointConnectionProvisioningState = "Creating"
- // PrivateEndpointConnectionProvisioningStateDeleting ...
- PrivateEndpointConnectionProvisioningStateDeleting PrivateEndpointConnectionProvisioningState = "Deleting"
- // PrivateEndpointConnectionProvisioningStateFailed ...
- PrivateEndpointConnectionProvisioningStateFailed PrivateEndpointConnectionProvisioningState = "Failed"
- // PrivateEndpointConnectionProvisioningStateSucceeded ...
- PrivateEndpointConnectionProvisioningStateSucceeded PrivateEndpointConnectionProvisioningState = "Succeeded"
-)
-
-// PossiblePrivateEndpointConnectionProvisioningStateValues returns an array of possible values for the PrivateEndpointConnectionProvisioningState const type.
-func PossiblePrivateEndpointConnectionProvisioningStateValues() []PrivateEndpointConnectionProvisioningState {
- return []PrivateEndpointConnectionProvisioningState{PrivateEndpointConnectionProvisioningStateCreating, PrivateEndpointConnectionProvisioningStateDeleting, PrivateEndpointConnectionProvisioningStateFailed, PrivateEndpointConnectionProvisioningStateSucceeded}
-}
-
-// PrivateEndpointServiceConnectionStatus enumerates the values for private endpoint service connection status.
-type PrivateEndpointServiceConnectionStatus string
-
-const (
- // Approved ...
- Approved PrivateEndpointServiceConnectionStatus = "Approved"
- // Pending ...
- Pending PrivateEndpointServiceConnectionStatus = "Pending"
- // Rejected ...
- Rejected PrivateEndpointServiceConnectionStatus = "Rejected"
-)
-
-// PossiblePrivateEndpointServiceConnectionStatusValues returns an array of possible values for the PrivateEndpointServiceConnectionStatus const type.
-func PossiblePrivateEndpointServiceConnectionStatusValues() []PrivateEndpointServiceConnectionStatus {
- return []PrivateEndpointServiceConnectionStatus{Approved, Pending, Rejected}
-}
-
-// Rotation enumerates the values for rotation.
-type Rotation string
-
-const (
- // RotationAuto Automatically detect and rotate as needed.
- RotationAuto Rotation = "Auto"
- // RotationNone Do not rotate the video. If the output format supports it, any metadata about rotation is
- // kept intact.
- RotationNone Rotation = "None"
- // RotationRotate0 Do not rotate the video but remove any metadata about the rotation.
- RotationRotate0 Rotation = "Rotate0"
- // RotationRotate180 Rotate 180 degrees clockwise.
- RotationRotate180 Rotation = "Rotate180"
- // RotationRotate270 Rotate 270 degrees clockwise.
- RotationRotate270 Rotation = "Rotate270"
- // RotationRotate90 Rotate 90 degrees clockwise.
- RotationRotate90 Rotation = "Rotate90"
-)
-
-// PossibleRotationValues returns an array of possible values for the Rotation const type.
-func PossibleRotationValues() []Rotation {
- return []Rotation{RotationAuto, RotationNone, RotationRotate0, RotationRotate180, RotationRotate270, RotationRotate90}
-}
-
-// StorageAccountType enumerates the values for storage account type.
-type StorageAccountType string
-
-const (
- // Primary The primary storage account for the Media Services account.
- Primary StorageAccountType = "Primary"
- // Secondary A secondary storage account for the Media Services account.
- Secondary StorageAccountType = "Secondary"
-)
-
-// PossibleStorageAccountTypeValues returns an array of possible values for the StorageAccountType const type.
-func PossibleStorageAccountTypeValues() []StorageAccountType {
- return []StorageAccountType{Primary, Secondary}
-}
-
-// StorageAuthentication enumerates the values for storage authentication.
-type StorageAuthentication string
-
-const (
- // StorageAuthenticationManagedIdentity Managed Identity authentication.
- StorageAuthenticationManagedIdentity StorageAuthentication = "ManagedIdentity"
- // StorageAuthenticationSystem System authentication.
- StorageAuthenticationSystem StorageAuthentication = "System"
-)
-
-// PossibleStorageAuthenticationValues returns an array of possible values for the StorageAuthentication const type.
-func PossibleStorageAuthenticationValues() []StorageAuthentication {
- return []StorageAuthentication{StorageAuthenticationManagedIdentity, StorageAuthenticationSystem}
-}
-
-// StreamingEndpointResourceState enumerates the values for streaming endpoint resource state.
-type StreamingEndpointResourceState string
-
-const (
- // StreamingEndpointResourceStateDeleting The streaming endpoint is being deleted.
- StreamingEndpointResourceStateDeleting StreamingEndpointResourceState = "Deleting"
- // StreamingEndpointResourceStateRunning The streaming endpoint is running. It is able to stream content to
- // clients
- StreamingEndpointResourceStateRunning StreamingEndpointResourceState = "Running"
- // StreamingEndpointResourceStateScaling The streaming endpoint is increasing or decreasing scale units.
- StreamingEndpointResourceStateScaling StreamingEndpointResourceState = "Scaling"
- // StreamingEndpointResourceStateStarting The streaming endpoint is transitioning to the running state.
- StreamingEndpointResourceStateStarting StreamingEndpointResourceState = "Starting"
- // StreamingEndpointResourceStateStopped The initial state of a streaming endpoint after creation. Content
- // is not ready to be streamed from this endpoint.
- StreamingEndpointResourceStateStopped StreamingEndpointResourceState = "Stopped"
- // StreamingEndpointResourceStateStopping The streaming endpoint is transitioning to the stopped state.
- StreamingEndpointResourceStateStopping StreamingEndpointResourceState = "Stopping"
-)
-
-// PossibleStreamingEndpointResourceStateValues returns an array of possible values for the StreamingEndpointResourceState const type.
-func PossibleStreamingEndpointResourceStateValues() []StreamingEndpointResourceState {
- return []StreamingEndpointResourceState{StreamingEndpointResourceStateDeleting, StreamingEndpointResourceStateRunning, StreamingEndpointResourceStateScaling, StreamingEndpointResourceStateStarting, StreamingEndpointResourceStateStopped, StreamingEndpointResourceStateStopping}
-}
-
-// StreamingLocatorContentKeyType enumerates the values for streaming locator content key type.
-type StreamingLocatorContentKeyType string
-
-const (
- // StreamingLocatorContentKeyTypeCommonEncryptionCbcs Common Encryption using CBCS
- StreamingLocatorContentKeyTypeCommonEncryptionCbcs StreamingLocatorContentKeyType = "CommonEncryptionCbcs"
- // StreamingLocatorContentKeyTypeCommonEncryptionCenc Common Encryption using CENC
- StreamingLocatorContentKeyTypeCommonEncryptionCenc StreamingLocatorContentKeyType = "CommonEncryptionCenc"
- // StreamingLocatorContentKeyTypeEnvelopeEncryption Envelope Encryption
- StreamingLocatorContentKeyTypeEnvelopeEncryption StreamingLocatorContentKeyType = "EnvelopeEncryption"
-)
-
-// PossibleStreamingLocatorContentKeyTypeValues returns an array of possible values for the StreamingLocatorContentKeyType const type.
-func PossibleStreamingLocatorContentKeyTypeValues() []StreamingLocatorContentKeyType {
- return []StreamingLocatorContentKeyType{StreamingLocatorContentKeyTypeCommonEncryptionCbcs, StreamingLocatorContentKeyTypeCommonEncryptionCenc, StreamingLocatorContentKeyTypeEnvelopeEncryption}
-}
-
-// StreamingPolicyStreamingProtocol enumerates the values for streaming policy streaming protocol.
-type StreamingPolicyStreamingProtocol string
-
-const (
- // StreamingPolicyStreamingProtocolDash DASH protocol
- StreamingPolicyStreamingProtocolDash StreamingPolicyStreamingProtocol = "Dash"
- // StreamingPolicyStreamingProtocolDownload Download protocol
- StreamingPolicyStreamingProtocolDownload StreamingPolicyStreamingProtocol = "Download"
- // StreamingPolicyStreamingProtocolHls HLS protocol
- StreamingPolicyStreamingProtocolHls StreamingPolicyStreamingProtocol = "Hls"
- // StreamingPolicyStreamingProtocolSmoothStreaming SmoothStreaming protocol
- StreamingPolicyStreamingProtocolSmoothStreaming StreamingPolicyStreamingProtocol = "SmoothStreaming"
-)
-
-// PossibleStreamingPolicyStreamingProtocolValues returns an array of possible values for the StreamingPolicyStreamingProtocol const type.
-func PossibleStreamingPolicyStreamingProtocolValues() []StreamingPolicyStreamingProtocol {
- return []StreamingPolicyStreamingProtocol{StreamingPolicyStreamingProtocolDash, StreamingPolicyStreamingProtocolDownload, StreamingPolicyStreamingProtocolHls, StreamingPolicyStreamingProtocolSmoothStreaming}
-}
-
-// StreamOptionsFlag enumerates the values for stream options flag.
-type StreamOptionsFlag string
-
-const (
- // Default Live streaming with no special latency optimizations.
- Default StreamOptionsFlag = "Default"
- // LowLatency The live event provides lower end to end latency by reducing its internal buffers. This could
- // result in more client buffering during playback if network bandwidth is low.
- LowLatency StreamOptionsFlag = "LowLatency"
-)
-
-// PossibleStreamOptionsFlagValues returns an array of possible values for the StreamOptionsFlag const type.
-func PossibleStreamOptionsFlagValues() []StreamOptionsFlag {
- return []StreamOptionsFlag{Default, LowLatency}
-}
-
-// StretchMode enumerates the values for stretch mode.
-type StretchMode string
-
-const (
- // StretchModeAutoFit Pad the output (with either letterbox or pillar box) to honor the output resolution,
- // while ensuring that the active video region in the output has the same aspect ratio as the input. For
- // example, if the input is 1920x1080 and the encoding preset asks for 1280x1280, then the output will be
- // at 1280x1280, which contains an inner rectangle of 1280x720 at aspect ratio of 16:9, and pillar box
- // regions 280 pixels wide at the left and right.
- StretchModeAutoFit StretchMode = "AutoFit"
- // StretchModeAutoSize Override the output resolution, and change it to match the display aspect ratio of
- // the input, without padding. For example, if the input is 1920x1080 and the encoding preset asks for
- // 1280x1280, then the value in the preset is overridden, and the output will be at 1280x720, which
- // maintains the input aspect ratio of 16:9.
- StretchModeAutoSize StretchMode = "AutoSize"
- // StretchModeNone Strictly respect the output resolution without considering the pixel aspect ratio or
- // display aspect ratio of the input video.
- StretchModeNone StretchMode = "None"
-)
-
-// PossibleStretchModeValues returns an array of possible values for the StretchMode const type.
-func PossibleStretchModeValues() []StretchMode {
- return []StretchMode{StretchModeAutoFit, StretchModeAutoSize, StretchModeNone}
-}
-
-// TrackAttribute enumerates the values for track attribute.
-type TrackAttribute string
-
-const (
- // Bitrate The bitrate of the track.
- Bitrate TrackAttribute = "Bitrate"
- // Language The language of the track.
- Language TrackAttribute = "Language"
-)
-
-// PossibleTrackAttributeValues returns an array of possible values for the TrackAttribute const type.
-func PossibleTrackAttributeValues() []TrackAttribute {
- return []TrackAttribute{Bitrate, Language}
-}
-
-// TrackPropertyCompareOperation enumerates the values for track property compare operation.
-type TrackPropertyCompareOperation string
-
-const (
- // TrackPropertyCompareOperationEqual Equal operation
- TrackPropertyCompareOperationEqual TrackPropertyCompareOperation = "Equal"
- // TrackPropertyCompareOperationUnknown Unknown track property compare operation
- TrackPropertyCompareOperationUnknown TrackPropertyCompareOperation = "Unknown"
-)
-
-// PossibleTrackPropertyCompareOperationValues returns an array of possible values for the TrackPropertyCompareOperation const type.
-func PossibleTrackPropertyCompareOperationValues() []TrackPropertyCompareOperation {
- return []TrackPropertyCompareOperation{TrackPropertyCompareOperationEqual, TrackPropertyCompareOperationUnknown}
-}
-
-// TrackPropertyType enumerates the values for track property type.
-type TrackPropertyType string
-
-const (
- // TrackPropertyTypeFourCC Track FourCC
- TrackPropertyTypeFourCC TrackPropertyType = "FourCC"
- // TrackPropertyTypeUnknown Unknown track property
- TrackPropertyTypeUnknown TrackPropertyType = "Unknown"
-)
-
-// PossibleTrackPropertyTypeValues returns an array of possible values for the TrackPropertyType const type.
-func PossibleTrackPropertyTypeValues() []TrackPropertyType {
- return []TrackPropertyType{TrackPropertyTypeFourCC, TrackPropertyTypeUnknown}
-}
-
-// VideoSyncMode enumerates the values for video sync mode.
-type VideoSyncMode string
-
-const (
- // VideoSyncModeAuto This is the default method. Chooses between Cfr and Vfr depending on muxer
- // capabilities. For output format MP4, the default mode is Cfr.
- VideoSyncModeAuto VideoSyncMode = "Auto"
- // VideoSyncModeCfr Input frames will be repeated and/or dropped as needed to achieve exactly the requested
- // constant frame rate. Recommended when the output frame rate is explicitly set at a specified value
- VideoSyncModeCfr VideoSyncMode = "Cfr"
- // VideoSyncModePassthrough The presentation timestamps on frames are passed through from the input file to
- // the output file writer. Recommended when the input source has variable frame rate, and are attempting to
- // produce multiple layers for adaptive streaming in the output which have aligned GOP boundaries. Note: if
- // two or more frames in the input have duplicate timestamps, then the output will also have the same
- // behavior
- VideoSyncModePassthrough VideoSyncMode = "Passthrough"
- // VideoSyncModeVfr Similar to the Passthrough mode, but if the input has frames that have duplicate
- // timestamps, then only one frame is passed through to the output, and others are dropped. Recommended
- // when the number of output frames is expected to be equal to the number of input frames. For example, the
- // output is used to calculate a quality metric like PSNR against the input
- VideoSyncModeVfr VideoSyncMode = "Vfr"
-)
-
-// PossibleVideoSyncModeValues returns an array of possible values for the VideoSyncMode const type.
-func PossibleVideoSyncModeValues() []VideoSyncMode {
- return []VideoSyncMode{VideoSyncModeAuto, VideoSyncModeCfr, VideoSyncModePassthrough, VideoSyncModeVfr}
-}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/CHANGELOG.md b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/CHANGELOG.md
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/CHANGELOG.md
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/CHANGELOG.md
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/_meta.json b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/_meta.json
similarity index 50%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/_meta.json
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/_meta.json
index 40dde62fde77f..0c219d9139326 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/_meta.json
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/_meta.json
@@ -1,11 +1,11 @@
{
- "commit": "92ab22b49bd085116af0c61fada2c6c360702e9e",
+ "commit": "e6ee3d4f6a29f081eddada399bd1cb373133af02",
"readme": "/_/azure-rest-api-specs/specification/mediaservices/resource-manager/readme.md",
- "tag": "package-2020-05",
- "use": "@microsoft.azure/autorest.go@2.1.180",
+ "tag": "package-2021-05",
+ "use": "@microsoft.azure/autorest.go@2.1.183",
"repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
- "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2020-05 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/mediaservices/resource-manager/readme.md",
+ "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.183 --tag=package-2021-05 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION --enum-prefix /_/azure-rest-api-specs/specification/mediaservices/resource-manager/readme.md",
"additional_properties": {
- "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+ "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION --enum-prefix"
}
}
\ No newline at end of file
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/accountfilters.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/accountfilters.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/accountfilters.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/accountfilters.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/assetfilters.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/assetfilters.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/assetfilters.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/assetfilters.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/assets.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/assets.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/assets.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/assets.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/client.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/client.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/client.go
index 4fb86774490c1..9ff0d9c224277 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/client.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/client.go
@@ -1,4 +1,4 @@
-// Package media implements the Azure ARM Media service API version 2020-05-01.
+// Package media implements the Azure ARM Media service API version .
//
//
package media
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/contentkeypolicies.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/contentkeypolicies.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/contentkeypolicies.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/contentkeypolicies.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/enums.go
new file mode 100644
index 0000000000000..c8e82cb50cc64
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/enums.go
@@ -0,0 +1,1487 @@
+package media
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+//
+// Code generated by Microsoft (R) AutoRest Code Generator.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+// AacAudioProfile enumerates the values for aac audio profile.
+type AacAudioProfile string
+
+const (
+ // AacAudioProfileAacLc Specifies that the output audio is to be encoded into AAC Low Complexity profile
+ // (AAC-LC).
+ AacAudioProfileAacLc AacAudioProfile = "AacLc"
+ // AacAudioProfileHeAacV1 Specifies that the output audio is to be encoded into HE-AAC v1 profile.
+ AacAudioProfileHeAacV1 AacAudioProfile = "HeAacV1"
+ // AacAudioProfileHeAacV2 Specifies that the output audio is to be encoded into HE-AAC v2 profile.
+ AacAudioProfileHeAacV2 AacAudioProfile = "HeAacV2"
+)
+
+// PossibleAacAudioProfileValues returns an array of possible values for the AacAudioProfile const type.
+func PossibleAacAudioProfileValues() []AacAudioProfile {
+ return []AacAudioProfile{AacAudioProfileAacLc, AacAudioProfileHeAacV1, AacAudioProfileHeAacV2}
+}
+
+// AccountEncryptionKeyType enumerates the values for account encryption key type.
+type AccountEncryptionKeyType string
+
+const (
+ // AccountEncryptionKeyTypeCustomerKey The Account Key is encrypted with a Customer Key.
+ AccountEncryptionKeyTypeCustomerKey AccountEncryptionKeyType = "CustomerKey"
+ // AccountEncryptionKeyTypeSystemKey The Account Key is encrypted with a System Key.
+ AccountEncryptionKeyTypeSystemKey AccountEncryptionKeyType = "SystemKey"
+)
+
+// PossibleAccountEncryptionKeyTypeValues returns an array of possible values for the AccountEncryptionKeyType const type.
+func PossibleAccountEncryptionKeyTypeValues() []AccountEncryptionKeyType {
+ return []AccountEncryptionKeyType{AccountEncryptionKeyTypeCustomerKey, AccountEncryptionKeyTypeSystemKey}
+}
+
+// ActionType enumerates the values for action type.
+type ActionType string
+
+const (
+ // ActionTypeInternal An internal action.
+ ActionTypeInternal ActionType = "Internal"
+)
+
+// PossibleActionTypeValues returns an array of possible values for the ActionType const type.
+func PossibleActionTypeValues() []ActionType {
+ return []ActionType{ActionTypeInternal}
+}
+
+// AnalysisResolution enumerates the values for analysis resolution.
+type AnalysisResolution string
+
+const (
+ // AnalysisResolutionSourceResolution ...
+ AnalysisResolutionSourceResolution AnalysisResolution = "SourceResolution"
+ // AnalysisResolutionStandardDefinition ...
+ AnalysisResolutionStandardDefinition AnalysisResolution = "StandardDefinition"
+)
+
+// PossibleAnalysisResolutionValues returns an array of possible values for the AnalysisResolution const type.
+func PossibleAnalysisResolutionValues() []AnalysisResolution {
+ return []AnalysisResolution{AnalysisResolutionSourceResolution, AnalysisResolutionStandardDefinition}
+}
+
+// AssetContainerPermission enumerates the values for asset container permission.
+type AssetContainerPermission string
+
+const (
+ // AssetContainerPermissionRead The SAS URL will allow read access to the container.
+ AssetContainerPermissionRead AssetContainerPermission = "Read"
+ // AssetContainerPermissionReadWrite The SAS URL will allow read and write access to the container.
+ AssetContainerPermissionReadWrite AssetContainerPermission = "ReadWrite"
+ // AssetContainerPermissionReadWriteDelete The SAS URL will allow read, write and delete access to the
+ // container.
+ AssetContainerPermissionReadWriteDelete AssetContainerPermission = "ReadWriteDelete"
+)
+
+// PossibleAssetContainerPermissionValues returns an array of possible values for the AssetContainerPermission const type.
+func PossibleAssetContainerPermissionValues() []AssetContainerPermission {
+ return []AssetContainerPermission{AssetContainerPermissionRead, AssetContainerPermissionReadWrite, AssetContainerPermissionReadWriteDelete}
+}
+
+// AssetStorageEncryptionFormat enumerates the values for asset storage encryption format.
+type AssetStorageEncryptionFormat string
+
+const (
+ // AssetStorageEncryptionFormatMediaStorageClientEncryption The Asset is encrypted with Media Services
+ // client-side encryption.
+ AssetStorageEncryptionFormatMediaStorageClientEncryption AssetStorageEncryptionFormat = "MediaStorageClientEncryption"
+ // AssetStorageEncryptionFormatNone The Asset does not use client-side storage encryption (this is the only
+ // allowed value for new Assets).
+ AssetStorageEncryptionFormatNone AssetStorageEncryptionFormat = "None"
+)
+
+// PossibleAssetStorageEncryptionFormatValues returns an array of possible values for the AssetStorageEncryptionFormat const type.
+func PossibleAssetStorageEncryptionFormatValues() []AssetStorageEncryptionFormat {
+ return []AssetStorageEncryptionFormat{AssetStorageEncryptionFormatMediaStorageClientEncryption, AssetStorageEncryptionFormatNone}
+}
+
+// AttributeFilter enumerates the values for attribute filter.
+type AttributeFilter string
+
+const (
+ // AttributeFilterAll All tracks will be included.
+ AttributeFilterAll AttributeFilter = "All"
+ // AttributeFilterBottom The first track will be included when the attribute is sorted in ascending order.
+ // Generally used to select the smallest bitrate.
+ AttributeFilterBottom AttributeFilter = "Bottom"
+ // AttributeFilterTop The first track will be included when the attribute is sorted in descending order.
+ // Generally used to select the largest bitrate.
+ AttributeFilterTop AttributeFilter = "Top"
+ // AttributeFilterValueEquals Any tracks that have an attribute equal to the value given will be included.
+ AttributeFilterValueEquals AttributeFilter = "ValueEquals"
+)
+
+// PossibleAttributeFilterValues returns an array of possible values for the AttributeFilter const type.
+func PossibleAttributeFilterValues() []AttributeFilter {
+ return []AttributeFilter{AttributeFilterAll, AttributeFilterBottom, AttributeFilterTop, AttributeFilterValueEquals}
+}
+
+// AudioAnalysisMode enumerates the values for audio analysis mode.
+type AudioAnalysisMode string
+
+const (
+ // AudioAnalysisModeBasic This mode performs speech-to-text transcription and generation of a VTT
+ // subtitle/caption file. The output of this mode includes an Insights JSON file including only the
 // keywords, transcription, and timing information. Automatic language detection and speaker diarization are
+ // not included in this mode.
+ AudioAnalysisModeBasic AudioAnalysisMode = "Basic"
+ // AudioAnalysisModeStandard Performs all operations included in the Basic mode, additionally performing
+ // language detection and speaker diarization.
+ AudioAnalysisModeStandard AudioAnalysisMode = "Standard"
+)
+
+// PossibleAudioAnalysisModeValues returns an array of possible values for the AudioAnalysisMode const type.
+func PossibleAudioAnalysisModeValues() []AudioAnalysisMode {
+ return []AudioAnalysisMode{AudioAnalysisModeBasic, AudioAnalysisModeStandard}
+}
+
+// BlurType enumerates the values for blur type.
+type BlurType string
+
+const (
+ // BlurTypeBlack Black: Black out filter
+ BlurTypeBlack BlurType = "Black"
+ // BlurTypeBox Box: debug filter, bounding box only
+ BlurTypeBox BlurType = "Box"
+ // BlurTypeHigh High: Confuse blur filter
+ BlurTypeHigh BlurType = "High"
+ // BlurTypeLow Low: box-car blur filter
+ BlurTypeLow BlurType = "Low"
+ // BlurTypeMed Med: Gaussian blur filter
+ BlurTypeMed BlurType = "Med"
+)
+
+// PossibleBlurTypeValues returns an array of possible values for the BlurType const type.
+func PossibleBlurTypeValues() []BlurType {
+ return []BlurType{BlurTypeBlack, BlurTypeBox, BlurTypeHigh, BlurTypeLow, BlurTypeMed}
+}
+
+// ChannelMapping enumerates the values for channel mapping.
+type ChannelMapping string
+
+const (
+ // ChannelMappingBackLeft The Back Left Channel. Sometimes referred to as the Left Surround Channel.
+ ChannelMappingBackLeft ChannelMapping = "BackLeft"
+ // ChannelMappingBackRight The Back Right Channel. Sometimes referred to as the Right Surround Channel.
+ ChannelMappingBackRight ChannelMapping = "BackRight"
+ // ChannelMappingCenter The Center Channel.
+ ChannelMappingCenter ChannelMapping = "Center"
+ // ChannelMappingFrontLeft The Front Left Channel.
+ ChannelMappingFrontLeft ChannelMapping = "FrontLeft"
+ // ChannelMappingFrontRight The Front Right Channel.
+ ChannelMappingFrontRight ChannelMapping = "FrontRight"
+ // ChannelMappingLowFrequencyEffects Low Frequency Effects Channel. Sometimes referred to as the
+ // Subwoofer.
+ ChannelMappingLowFrequencyEffects ChannelMapping = "LowFrequencyEffects"
+ // ChannelMappingStereoLeft The Left Stereo channel. Sometimes referred to as Down Mix Left.
+ ChannelMappingStereoLeft ChannelMapping = "StereoLeft"
+ // ChannelMappingStereoRight The Right Stereo channel. Sometimes referred to as Down Mix Right.
+ ChannelMappingStereoRight ChannelMapping = "StereoRight"
+)
+
+// PossibleChannelMappingValues returns an array of possible values for the ChannelMapping const type.
+func PossibleChannelMappingValues() []ChannelMapping {
+ return []ChannelMapping{ChannelMappingBackLeft, ChannelMappingBackRight, ChannelMappingCenter, ChannelMappingFrontLeft, ChannelMappingFrontRight, ChannelMappingLowFrequencyEffects, ChannelMappingStereoLeft, ChannelMappingStereoRight}
+}
+
+// ContentKeyPolicyFairPlayRentalAndLeaseKeyType enumerates the values for content key policy fair play rental
+// and lease key type.
+type ContentKeyPolicyFairPlayRentalAndLeaseKeyType string
+
+const (
+ // ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeDualExpiry Dual expiry for offline rental.
+ ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeDualExpiry ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "DualExpiry"
+ // ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentLimited Content key can be persisted and the
+ // valid duration is limited by the Rental Duration value
+ ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentLimited ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "PersistentLimited"
+ // ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentUnlimited Content key can be persisted with an
+ // unlimited duration
+ ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentUnlimited ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "PersistentUnlimited"
+ // ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUndefined Key duration is not specified.
+ ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUndefined ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "Undefined"
+ // ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUnknown Represents a
+ // ContentKeyPolicyFairPlayRentalAndLeaseKeyType that is unavailable in current API version.
+ ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUnknown ContentKeyPolicyFairPlayRentalAndLeaseKeyType = "Unknown"
+)
+
+// PossibleContentKeyPolicyFairPlayRentalAndLeaseKeyTypeValues returns an array of possible values for the ContentKeyPolicyFairPlayRentalAndLeaseKeyType const type.
+func PossibleContentKeyPolicyFairPlayRentalAndLeaseKeyTypeValues() []ContentKeyPolicyFairPlayRentalAndLeaseKeyType {
+ return []ContentKeyPolicyFairPlayRentalAndLeaseKeyType{ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeDualExpiry, ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentLimited, ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentUnlimited, ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUndefined, ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUnknown}
+}
+
+// ContentKeyPolicyPlayReadyContentType enumerates the values for content key policy play ready content type.
+type ContentKeyPolicyPlayReadyContentType string
+
+const (
+ // ContentKeyPolicyPlayReadyContentTypeUltraVioletDownload Ultraviolet download content type.
+ ContentKeyPolicyPlayReadyContentTypeUltraVioletDownload ContentKeyPolicyPlayReadyContentType = "UltraVioletDownload"
+ // ContentKeyPolicyPlayReadyContentTypeUltraVioletStreaming Ultraviolet streaming content type.
+ ContentKeyPolicyPlayReadyContentTypeUltraVioletStreaming ContentKeyPolicyPlayReadyContentType = "UltraVioletStreaming"
+ // ContentKeyPolicyPlayReadyContentTypeUnknown Represents a ContentKeyPolicyPlayReadyContentType that is
+ // unavailable in current API version.
+ ContentKeyPolicyPlayReadyContentTypeUnknown ContentKeyPolicyPlayReadyContentType = "Unknown"
+ // ContentKeyPolicyPlayReadyContentTypeUnspecified Unspecified content type.
+ ContentKeyPolicyPlayReadyContentTypeUnspecified ContentKeyPolicyPlayReadyContentType = "Unspecified"
+)
+
+// PossibleContentKeyPolicyPlayReadyContentTypeValues returns an array of possible values for the ContentKeyPolicyPlayReadyContentType const type.
+func PossibleContentKeyPolicyPlayReadyContentTypeValues() []ContentKeyPolicyPlayReadyContentType {
+ return []ContentKeyPolicyPlayReadyContentType{ContentKeyPolicyPlayReadyContentTypeUltraVioletDownload, ContentKeyPolicyPlayReadyContentTypeUltraVioletStreaming, ContentKeyPolicyPlayReadyContentTypeUnknown, ContentKeyPolicyPlayReadyContentTypeUnspecified}
+}
+
+// ContentKeyPolicyPlayReadyLicenseType enumerates the values for content key policy play ready license type.
+type ContentKeyPolicyPlayReadyLicenseType string
+
+const (
+ // ContentKeyPolicyPlayReadyLicenseTypeNonPersistent Non persistent license.
+ ContentKeyPolicyPlayReadyLicenseTypeNonPersistent ContentKeyPolicyPlayReadyLicenseType = "NonPersistent"
+ // ContentKeyPolicyPlayReadyLicenseTypePersistent Persistent license. Allows offline playback.
+ ContentKeyPolicyPlayReadyLicenseTypePersistent ContentKeyPolicyPlayReadyLicenseType = "Persistent"
+ // ContentKeyPolicyPlayReadyLicenseTypeUnknown Represents a ContentKeyPolicyPlayReadyLicenseType that is
+ // unavailable in current API version.
+ ContentKeyPolicyPlayReadyLicenseTypeUnknown ContentKeyPolicyPlayReadyLicenseType = "Unknown"
+)
+
+// PossibleContentKeyPolicyPlayReadyLicenseTypeValues returns an array of possible values for the ContentKeyPolicyPlayReadyLicenseType const type.
+func PossibleContentKeyPolicyPlayReadyLicenseTypeValues() []ContentKeyPolicyPlayReadyLicenseType {
+ return []ContentKeyPolicyPlayReadyLicenseType{ContentKeyPolicyPlayReadyLicenseTypeNonPersistent, ContentKeyPolicyPlayReadyLicenseTypePersistent, ContentKeyPolicyPlayReadyLicenseTypeUnknown}
+}
+
+// ContentKeyPolicyPlayReadyUnknownOutputPassingOption enumerates the values for content key policy play ready
+// unknown output passing option.
+type ContentKeyPolicyPlayReadyUnknownOutputPassingOption string
+
+const (
+ // ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowed Passing the video portion of protected
+ // content to an Unknown Output is allowed.
+ ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowed ContentKeyPolicyPlayReadyUnknownOutputPassingOption = "Allowed"
+ // ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowedWithVideoConstriction Passing the video
+ // portion of protected content to an Unknown Output is allowed but with constrained resolution.
+ ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowedWithVideoConstriction ContentKeyPolicyPlayReadyUnknownOutputPassingOption = "AllowedWithVideoConstriction"
+ // ContentKeyPolicyPlayReadyUnknownOutputPassingOptionNotAllowed Passing the video portion of protected
+ // content to an Unknown Output is not allowed.
+ ContentKeyPolicyPlayReadyUnknownOutputPassingOptionNotAllowed ContentKeyPolicyPlayReadyUnknownOutputPassingOption = "NotAllowed"
+ // ContentKeyPolicyPlayReadyUnknownOutputPassingOptionUnknown Represents a
+ // ContentKeyPolicyPlayReadyUnknownOutputPassingOption that is unavailable in current API version.
+ ContentKeyPolicyPlayReadyUnknownOutputPassingOptionUnknown ContentKeyPolicyPlayReadyUnknownOutputPassingOption = "Unknown"
+)
+
+// PossibleContentKeyPolicyPlayReadyUnknownOutputPassingOptionValues returns an array of possible values for the ContentKeyPolicyPlayReadyUnknownOutputPassingOption const type.
+func PossibleContentKeyPolicyPlayReadyUnknownOutputPassingOptionValues() []ContentKeyPolicyPlayReadyUnknownOutputPassingOption {
+ return []ContentKeyPolicyPlayReadyUnknownOutputPassingOption{ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowed, ContentKeyPolicyPlayReadyUnknownOutputPassingOptionAllowedWithVideoConstriction, ContentKeyPolicyPlayReadyUnknownOutputPassingOptionNotAllowed, ContentKeyPolicyPlayReadyUnknownOutputPassingOptionUnknown}
+}
+
+// ContentKeyPolicyRestrictionTokenType enumerates the values for content key policy restriction token type.
+type ContentKeyPolicyRestrictionTokenType string
+
+const (
+ // ContentKeyPolicyRestrictionTokenTypeJwt JSON Web Token.
+ ContentKeyPolicyRestrictionTokenTypeJwt ContentKeyPolicyRestrictionTokenType = "Jwt"
+ // ContentKeyPolicyRestrictionTokenTypeSwt Simple Web Token.
+ ContentKeyPolicyRestrictionTokenTypeSwt ContentKeyPolicyRestrictionTokenType = "Swt"
+ // ContentKeyPolicyRestrictionTokenTypeUnknown Represents a ContentKeyPolicyRestrictionTokenType that is
+ // unavailable in current API version.
+ ContentKeyPolicyRestrictionTokenTypeUnknown ContentKeyPolicyRestrictionTokenType = "Unknown"
+)
+
+// PossibleContentKeyPolicyRestrictionTokenTypeValues returns an array of possible values for the ContentKeyPolicyRestrictionTokenType const type.
+func PossibleContentKeyPolicyRestrictionTokenTypeValues() []ContentKeyPolicyRestrictionTokenType {
+ return []ContentKeyPolicyRestrictionTokenType{ContentKeyPolicyRestrictionTokenTypeJwt, ContentKeyPolicyRestrictionTokenTypeSwt, ContentKeyPolicyRestrictionTokenTypeUnknown}
+}
+
+// CreatedByType enumerates the values for created by type.
+type CreatedByType string
+
+const (
+ // CreatedByTypeApplication ...
+ CreatedByTypeApplication CreatedByType = "Application"
+ // CreatedByTypeKey ...
+ CreatedByTypeKey CreatedByType = "Key"
+ // CreatedByTypeManagedIdentity ...
+ CreatedByTypeManagedIdentity CreatedByType = "ManagedIdentity"
+ // CreatedByTypeUser ...
+ CreatedByTypeUser CreatedByType = "User"
+)
+
+// PossibleCreatedByTypeValues returns an array of possible values for the CreatedByType const type.
+func PossibleCreatedByTypeValues() []CreatedByType {
+ return []CreatedByType{CreatedByTypeApplication, CreatedByTypeKey, CreatedByTypeManagedIdentity, CreatedByTypeUser}
+}
+
+// DefaultAction enumerates the values for default action.
+type DefaultAction string
+
+const (
+ // DefaultActionAllow All public IP addresses are allowed.
+ DefaultActionAllow DefaultAction = "Allow"
+ // DefaultActionDeny Public IP addresses are blocked.
+ DefaultActionDeny DefaultAction = "Deny"
+)
+
+// PossibleDefaultActionValues returns an array of possible values for the DefaultAction const type.
+func PossibleDefaultActionValues() []DefaultAction {
+ return []DefaultAction{DefaultActionAllow, DefaultActionDeny}
+}
+
+// DeinterlaceMode enumerates the values for deinterlace mode.
+type DeinterlaceMode string
+
+const (
+ // DeinterlaceModeAutoPixelAdaptive Apply automatic pixel adaptive de-interlacing on each frame in the
+ // input video.
+ DeinterlaceModeAutoPixelAdaptive DeinterlaceMode = "AutoPixelAdaptive"
+ // DeinterlaceModeOff Disables de-interlacing of the source video.
+ DeinterlaceModeOff DeinterlaceMode = "Off"
+)
+
+// PossibleDeinterlaceModeValues returns an array of possible values for the DeinterlaceMode const type.
+func PossibleDeinterlaceModeValues() []DeinterlaceMode {
+ return []DeinterlaceMode{DeinterlaceModeAutoPixelAdaptive, DeinterlaceModeOff}
+}
+
+// DeinterlaceParity enumerates the values for deinterlace parity.
+type DeinterlaceParity string
+
+const (
+ // DeinterlaceParityAuto Automatically detect the order of fields
+ DeinterlaceParityAuto DeinterlaceParity = "Auto"
+ // DeinterlaceParityBottomFieldFirst Apply bottom field first processing of input video.
+ DeinterlaceParityBottomFieldFirst DeinterlaceParity = "BottomFieldFirst"
+ // DeinterlaceParityTopFieldFirst Apply top field first processing of input video.
+ DeinterlaceParityTopFieldFirst DeinterlaceParity = "TopFieldFirst"
+)
+
+// PossibleDeinterlaceParityValues returns an array of possible values for the DeinterlaceParity const type.
+func PossibleDeinterlaceParityValues() []DeinterlaceParity {
+ return []DeinterlaceParity{DeinterlaceParityAuto, DeinterlaceParityBottomFieldFirst, DeinterlaceParityTopFieldFirst}
+}
+
+// EncoderNamedPreset enumerates the values for encoder named preset.
+type EncoderNamedPreset string
+
+const (
+ // EncoderNamedPresetAACGoodQualityAudio Produces a single MP4 file containing only stereo audio encoded at
+ // 192 kbps.
+ EncoderNamedPresetAACGoodQualityAudio EncoderNamedPreset = "AACGoodQualityAudio"
+ // EncoderNamedPresetAdaptiveStreaming Produces a set of GOP aligned MP4 files with H.264 video and stereo
+ // AAC audio. Auto-generates a bitrate ladder based on the input resolution, bitrate and frame rate. The
+ // auto-generated preset will never exceed the input resolution. For example, if the input is 720p, output
+ // will remain 720p at best.
+ EncoderNamedPresetAdaptiveStreaming EncoderNamedPreset = "AdaptiveStreaming"
+ // EncoderNamedPresetContentAwareEncoding Produces a set of GOP-aligned MP4s by using content-aware
+ // encoding. Given any input content, the service performs an initial lightweight analysis of the input
+ // content, and uses the results to determine the optimal number of layers, appropriate bitrate and
+ // resolution settings for delivery by adaptive streaming. This preset is particularly effective for low
+ // and medium complexity videos, where the output files will be at lower bitrates but at a quality that
+ // still delivers a good experience to viewers. The output will contain MP4 files with video and audio
+ // interleaved.
+ EncoderNamedPresetContentAwareEncoding EncoderNamedPreset = "ContentAwareEncoding"
+ // EncoderNamedPresetContentAwareEncodingExperimental Exposes an experimental preset for content-aware
+ // encoding. Given any input content, the service attempts to automatically determine the optimal number of
+ // layers, appropriate bitrate and resolution settings for delivery by adaptive streaming. The underlying
+ // algorithms will continue to evolve over time. The output will contain MP4 files with video and audio
+ // interleaved.
+ EncoderNamedPresetContentAwareEncodingExperimental EncoderNamedPreset = "ContentAwareEncodingExperimental"
+	// EncoderNamedPresetCopyAllBitrateNonInterleaved Copy all video and audio streams from the input asset as
+	// non-interleaved video and audio output files. This preset can be used to clip an existing asset or
+	// convert a group of key frame (GOP) aligned MP4 files into an asset that can be streamed.
+ EncoderNamedPresetCopyAllBitrateNonInterleaved EncoderNamedPreset = "CopyAllBitrateNonInterleaved"
+ // EncoderNamedPresetH264MultipleBitrate1080p Produces a set of 8 GOP-aligned MP4 files, ranging from 6000
+ // kbps to 400 kbps, and stereo AAC audio. Resolution starts at 1080p and goes down to 180p.
+ EncoderNamedPresetH264MultipleBitrate1080p EncoderNamedPreset = "H264MultipleBitrate1080p"
+ // EncoderNamedPresetH264MultipleBitrate720p Produces a set of 6 GOP-aligned MP4 files, ranging from 3400
+ // kbps to 400 kbps, and stereo AAC audio. Resolution starts at 720p and goes down to 180p.
+ EncoderNamedPresetH264MultipleBitrate720p EncoderNamedPreset = "H264MultipleBitrate720p"
+	// EncoderNamedPresetH264MultipleBitrateSD Produces a set of 5 GOP-aligned MP4 files, ranging from 1900
+	// kbps to 400 kbps, and stereo AAC audio. Resolution starts at 480p and goes down to 240p.
+ EncoderNamedPresetH264MultipleBitrateSD EncoderNamedPreset = "H264MultipleBitrateSD"
+ // EncoderNamedPresetH264SingleBitrate1080p Produces an MP4 file where the video is encoded with H.264
+ // codec at 6750 kbps and a picture height of 1080 pixels, and the stereo audio is encoded with AAC-LC
+ // codec at 128 kbps.
+ EncoderNamedPresetH264SingleBitrate1080p EncoderNamedPreset = "H264SingleBitrate1080p"
+ // EncoderNamedPresetH264SingleBitrate720p Produces an MP4 file where the video is encoded with H.264 codec
+ // at 4500 kbps and a picture height of 720 pixels, and the stereo audio is encoded with AAC-LC codec at
+ // 128 kbps.
+ EncoderNamedPresetH264SingleBitrate720p EncoderNamedPreset = "H264SingleBitrate720p"
+ // EncoderNamedPresetH264SingleBitrateSD Produces an MP4 file where the video is encoded with H.264 codec
+ // at 2200 kbps and a picture height of 480 pixels, and the stereo audio is encoded with AAC-LC codec at
+ // 128 kbps.
+ EncoderNamedPresetH264SingleBitrateSD EncoderNamedPreset = "H264SingleBitrateSD"
+ // EncoderNamedPresetH265AdaptiveStreaming Produces a set of GOP aligned MP4 files with H.265 video and
+ // stereo AAC audio. Auto-generates a bitrate ladder based on the input resolution, bitrate and frame rate.
+ // The auto-generated preset will never exceed the input resolution. For example, if the input is 720p,
+ // output will remain 720p at best.
+ EncoderNamedPresetH265AdaptiveStreaming EncoderNamedPreset = "H265AdaptiveStreaming"
+ // EncoderNamedPresetH265ContentAwareEncoding Produces a set of GOP-aligned MP4s by using content-aware
+ // encoding. Given any input content, the service performs an initial lightweight analysis of the input
+ // content, and uses the results to determine the optimal number of layers, appropriate bitrate and
+ // resolution settings for delivery by adaptive streaming. This preset is particularly effective for low
+ // and medium complexity videos, where the output files will be at lower bitrates but at a quality that
+ // still delivers a good experience to viewers. The output will contain MP4 files with video and audio
+ // interleaved.
+ EncoderNamedPresetH265ContentAwareEncoding EncoderNamedPreset = "H265ContentAwareEncoding"
+ // EncoderNamedPresetH265SingleBitrate1080p Produces an MP4 file where the video is encoded with H.265
+ // codec at 3500 kbps and a picture height of 1080 pixels, and the stereo audio is encoded with AAC-LC
+ // codec at 128 kbps.
+ EncoderNamedPresetH265SingleBitrate1080p EncoderNamedPreset = "H265SingleBitrate1080p"
+ // EncoderNamedPresetH265SingleBitrate4K Produces an MP4 file where the video is encoded with H.265 codec
+ // at 9500 kbps and a picture height of 2160 pixels, and the stereo audio is encoded with AAC-LC codec at
+ // 128 kbps.
+ EncoderNamedPresetH265SingleBitrate4K EncoderNamedPreset = "H265SingleBitrate4K"
+ // EncoderNamedPresetH265SingleBitrate720p Produces an MP4 file where the video is encoded with H.265 codec
+ // at 1800 kbps and a picture height of 720 pixels, and the stereo audio is encoded with AAC-LC codec at
+ // 128 kbps.
+ EncoderNamedPresetH265SingleBitrate720p EncoderNamedPreset = "H265SingleBitrate720p"
+)
+
+// PossibleEncoderNamedPresetValues returns an array of possible values for the EncoderNamedPreset const type.
+func PossibleEncoderNamedPresetValues() []EncoderNamedPreset {
+ return []EncoderNamedPreset{EncoderNamedPresetAACGoodQualityAudio, EncoderNamedPresetAdaptiveStreaming, EncoderNamedPresetContentAwareEncoding, EncoderNamedPresetContentAwareEncodingExperimental, EncoderNamedPresetCopyAllBitrateNonInterleaved, EncoderNamedPresetH264MultipleBitrate1080p, EncoderNamedPresetH264MultipleBitrate720p, EncoderNamedPresetH264MultipleBitrateSD, EncoderNamedPresetH264SingleBitrate1080p, EncoderNamedPresetH264SingleBitrate720p, EncoderNamedPresetH264SingleBitrateSD, EncoderNamedPresetH265AdaptiveStreaming, EncoderNamedPresetH265ContentAwareEncoding, EncoderNamedPresetH265SingleBitrate1080p, EncoderNamedPresetH265SingleBitrate4K, EncoderNamedPresetH265SingleBitrate720p}
+}
+
+// EncryptionScheme enumerates the values for encryption scheme.
+type EncryptionScheme string
+
+const (
+ // EncryptionSchemeCommonEncryptionCbcs CommonEncryptionCbcs scheme
+ EncryptionSchemeCommonEncryptionCbcs EncryptionScheme = "CommonEncryptionCbcs"
+ // EncryptionSchemeCommonEncryptionCenc CommonEncryptionCenc scheme
+ EncryptionSchemeCommonEncryptionCenc EncryptionScheme = "CommonEncryptionCenc"
+ // EncryptionSchemeEnvelopeEncryption EnvelopeEncryption scheme
+ EncryptionSchemeEnvelopeEncryption EncryptionScheme = "EnvelopeEncryption"
+ // EncryptionSchemeNoEncryption NoEncryption scheme
+ EncryptionSchemeNoEncryption EncryptionScheme = "NoEncryption"
+)
+
+// PossibleEncryptionSchemeValues returns an array of possible values for the EncryptionScheme const type.
+func PossibleEncryptionSchemeValues() []EncryptionScheme {
+ return []EncryptionScheme{EncryptionSchemeCommonEncryptionCbcs, EncryptionSchemeCommonEncryptionCenc, EncryptionSchemeEnvelopeEncryption, EncryptionSchemeNoEncryption}
+}
+
+// EntropyMode enumerates the values for entropy mode.
+type EntropyMode string
+
+const (
+ // EntropyModeCabac Context Adaptive Binary Arithmetic Coder (CABAC) entropy encoding.
+ EntropyModeCabac EntropyMode = "Cabac"
+ // EntropyModeCavlc Context Adaptive Variable Length Coder (CAVLC) entropy encoding.
+ EntropyModeCavlc EntropyMode = "Cavlc"
+)
+
+// PossibleEntropyModeValues returns an array of possible values for the EntropyMode const type.
+func PossibleEntropyModeValues() []EntropyMode {
+ return []EntropyMode{EntropyModeCabac, EntropyModeCavlc}
+}
+
+// FaceRedactorMode enumerates the values for face redactor mode.
+type FaceRedactorMode string
+
+const (
+ // FaceRedactorModeAnalyze Analyze mode detects faces and outputs a metadata file with the results. Allows
+ // editing of the metadata file before faces are blurred with Redact mode.
+ FaceRedactorModeAnalyze FaceRedactorMode = "Analyze"
+ // FaceRedactorModeCombined Combined mode does the Analyze and Redact steps in one pass when editing the
+ // analyzed faces is not desired.
+ FaceRedactorModeCombined FaceRedactorMode = "Combined"
+ // FaceRedactorModeRedact Redact mode consumes the metadata file from Analyze mode and redacts the faces
+ // found.
+ FaceRedactorModeRedact FaceRedactorMode = "Redact"
+)
+
+// PossibleFaceRedactorModeValues returns an array of possible values for the FaceRedactorMode const type.
+func PossibleFaceRedactorModeValues() []FaceRedactorMode {
+ return []FaceRedactorMode{FaceRedactorModeAnalyze, FaceRedactorModeCombined, FaceRedactorModeRedact}
+}
+
+// FilterTrackPropertyCompareOperation enumerates the values for filter track property compare operation.
+type FilterTrackPropertyCompareOperation string
+
+const (
+ // FilterTrackPropertyCompareOperationEqual The equal operation.
+ FilterTrackPropertyCompareOperationEqual FilterTrackPropertyCompareOperation = "Equal"
+ // FilterTrackPropertyCompareOperationNotEqual The not equal operation.
+ FilterTrackPropertyCompareOperationNotEqual FilterTrackPropertyCompareOperation = "NotEqual"
+)
+
+// PossibleFilterTrackPropertyCompareOperationValues returns an array of possible values for the FilterTrackPropertyCompareOperation const type.
+func PossibleFilterTrackPropertyCompareOperationValues() []FilterTrackPropertyCompareOperation {
+ return []FilterTrackPropertyCompareOperation{FilterTrackPropertyCompareOperationEqual, FilterTrackPropertyCompareOperationNotEqual}
+}
+
+// FilterTrackPropertyType enumerates the values for filter track property type.
+type FilterTrackPropertyType string
+
+const (
+ // FilterTrackPropertyTypeBitrate The bitrate.
+ FilterTrackPropertyTypeBitrate FilterTrackPropertyType = "Bitrate"
+ // FilterTrackPropertyTypeFourCC The fourCC.
+ FilterTrackPropertyTypeFourCC FilterTrackPropertyType = "FourCC"
+ // FilterTrackPropertyTypeLanguage The language.
+ FilterTrackPropertyTypeLanguage FilterTrackPropertyType = "Language"
+ // FilterTrackPropertyTypeName The name.
+ FilterTrackPropertyTypeName FilterTrackPropertyType = "Name"
+ // FilterTrackPropertyTypeType The type.
+ FilterTrackPropertyTypeType FilterTrackPropertyType = "Type"
+ // FilterTrackPropertyTypeUnknown The unknown track property type.
+ FilterTrackPropertyTypeUnknown FilterTrackPropertyType = "Unknown"
+)
+
+// PossibleFilterTrackPropertyTypeValues returns an array of possible values for the FilterTrackPropertyType const type.
+func PossibleFilterTrackPropertyTypeValues() []FilterTrackPropertyType {
+ return []FilterTrackPropertyType{FilterTrackPropertyTypeBitrate, FilterTrackPropertyTypeFourCC, FilterTrackPropertyTypeLanguage, FilterTrackPropertyTypeName, FilterTrackPropertyTypeType, FilterTrackPropertyTypeUnknown}
+}
+
+// H264Complexity enumerates the values for h264 complexity.
+type H264Complexity string
+
+const (
+ // H264ComplexityBalanced Tells the encoder to use settings that achieve a balance between speed and
+ // quality.
+ H264ComplexityBalanced H264Complexity = "Balanced"
+ // H264ComplexityQuality Tells the encoder to use settings that are optimized to produce higher quality
+ // output at the expense of slower overall encode time.
+ H264ComplexityQuality H264Complexity = "Quality"
+ // H264ComplexitySpeed Tells the encoder to use settings that are optimized for faster encoding. Quality is
+ // sacrificed to decrease encoding time.
+ H264ComplexitySpeed H264Complexity = "Speed"
+)
+
+// PossibleH264ComplexityValues returns an array of possible values for the H264Complexity const type.
+func PossibleH264ComplexityValues() []H264Complexity {
+ return []H264Complexity{H264ComplexityBalanced, H264ComplexityQuality, H264ComplexitySpeed}
+}
+
+// H264VideoProfile enumerates the values for h264 video profile.
+type H264VideoProfile string
+
+const (
+ // H264VideoProfileAuto Tells the encoder to automatically determine the appropriate H.264 profile.
+ H264VideoProfileAuto H264VideoProfile = "Auto"
+	// H264VideoProfileBaseline Baseline profile.
+ H264VideoProfileBaseline H264VideoProfile = "Baseline"
+ // H264VideoProfileHigh High profile.
+ H264VideoProfileHigh H264VideoProfile = "High"
+ // H264VideoProfileHigh422 High 4:2:2 profile.
+ H264VideoProfileHigh422 H264VideoProfile = "High422"
+ // H264VideoProfileHigh444 High 4:4:4 predictive profile.
+ H264VideoProfileHigh444 H264VideoProfile = "High444"
+	// H264VideoProfileMain Main profile.
+ H264VideoProfileMain H264VideoProfile = "Main"
+)
+
+// PossibleH264VideoProfileValues returns an array of possible values for the H264VideoProfile const type.
+func PossibleH264VideoProfileValues() []H264VideoProfile {
+ return []H264VideoProfile{H264VideoProfileAuto, H264VideoProfileBaseline, H264VideoProfileHigh, H264VideoProfileHigh422, H264VideoProfileHigh444, H264VideoProfileMain}
+}
+
+// H265Complexity enumerates the values for h265 complexity.
+type H265Complexity string
+
+const (
+ // H265ComplexityBalanced Tells the encoder to use settings that achieve a balance between speed and
+ // quality.
+ H265ComplexityBalanced H265Complexity = "Balanced"
+ // H265ComplexityQuality Tells the encoder to use settings that are optimized to produce higher quality
+ // output at the expense of slower overall encode time.
+ H265ComplexityQuality H265Complexity = "Quality"
+ // H265ComplexitySpeed Tells the encoder to use settings that are optimized for faster encoding. Quality is
+ // sacrificed to decrease encoding time.
+ H265ComplexitySpeed H265Complexity = "Speed"
+)
+
+// PossibleH265ComplexityValues returns an array of possible values for the H265Complexity const type.
+func PossibleH265ComplexityValues() []H265Complexity {
+ return []H265Complexity{H265ComplexityBalanced, H265ComplexityQuality, H265ComplexitySpeed}
+}
+
+// H265VideoProfile enumerates the values for h265 video profile.
+type H265VideoProfile string
+
+const (
+ // H265VideoProfileAuto Tells the encoder to automatically determine the appropriate H.265 profile.
+ H265VideoProfileAuto H265VideoProfile = "Auto"
+	// H265VideoProfileMain Main profile.
+ // (https://x265.readthedocs.io/en/default/cli.html?highlight=profile#profile-level-tier)
+ H265VideoProfileMain H265VideoProfile = "Main"
+)
+
+// PossibleH265VideoProfileValues returns an array of possible values for the H265VideoProfile const type.
+func PossibleH265VideoProfileValues() []H265VideoProfile {
+ return []H265VideoProfile{H265VideoProfileAuto, H265VideoProfileMain}
+}
+
+// InsightsType enumerates the values for insights type.
+type InsightsType string
+
+const (
+ // InsightsTypeAllInsights Generate both audio and video insights. Fails if either audio or video Insights
+ // fail.
+ InsightsTypeAllInsights InsightsType = "AllInsights"
+ // InsightsTypeAudioInsightsOnly Generate audio only insights. Ignore video even if present. Fails if no
+ // audio is present.
+ InsightsTypeAudioInsightsOnly InsightsType = "AudioInsightsOnly"
+ // InsightsTypeVideoInsightsOnly Generate video only insights. Ignore audio if present. Fails if no video
+ // is present.
+ InsightsTypeVideoInsightsOnly InsightsType = "VideoInsightsOnly"
+)
+
+// PossibleInsightsTypeValues returns an array of possible values for the InsightsType const type.
+func PossibleInsightsTypeValues() []InsightsType {
+ return []InsightsType{InsightsTypeAllInsights, InsightsTypeAudioInsightsOnly, InsightsTypeVideoInsightsOnly}
+}
+
+// JobErrorCategory enumerates the values for job error category.
+type JobErrorCategory string
+
+const (
+ // JobErrorCategoryConfiguration The error is configuration related.
+ JobErrorCategoryConfiguration JobErrorCategory = "Configuration"
+ // JobErrorCategoryContent The error is related to data in the input files.
+ JobErrorCategoryContent JobErrorCategory = "Content"
+ // JobErrorCategoryDownload The error is download related.
+ JobErrorCategoryDownload JobErrorCategory = "Download"
+ // JobErrorCategoryService The error is service related.
+ JobErrorCategoryService JobErrorCategory = "Service"
+ // JobErrorCategoryUpload The error is upload related.
+ JobErrorCategoryUpload JobErrorCategory = "Upload"
+)
+
+// PossibleJobErrorCategoryValues returns an array of possible values for the JobErrorCategory const type.
+func PossibleJobErrorCategoryValues() []JobErrorCategory {
+ return []JobErrorCategory{JobErrorCategoryConfiguration, JobErrorCategoryContent, JobErrorCategoryDownload, JobErrorCategoryService, JobErrorCategoryUpload}
+}
+
+// JobErrorCode enumerates the values for job error code.
+type JobErrorCode string
+
+const (
+	// JobErrorCodeConfigurationUnsupported There was a problem with the combination of input files and the
+	// configuration settings applied; fix the configuration settings and retry with the same input, or change
+	// the input to match the configuration.
+ JobErrorCodeConfigurationUnsupported JobErrorCode = "ConfigurationUnsupported"
+	// JobErrorCodeContentMalformed There was a problem with the input content (for example: zero byte files,
+	// or corrupt/non-decodable files); check the input files.
+ JobErrorCodeContentMalformed JobErrorCode = "ContentMalformed"
+	// JobErrorCodeContentUnsupported There was a problem with the format of the input (not a valid media
+	// file, or an unsupported file/codec); check the validity of the input files.
+ JobErrorCodeContentUnsupported JobErrorCode = "ContentUnsupported"
+	// JobErrorCodeDownloadNotAccessible While trying to download the input files, the files were not
+	// accessible; please check the availability of the source.
+ JobErrorCodeDownloadNotAccessible JobErrorCode = "DownloadNotAccessible"
+	// JobErrorCodeDownloadTransientError While trying to download the input files, there was an issue during
+	// transfer (storage service, network errors); see the details and check your source.
+ JobErrorCodeDownloadTransientError JobErrorCode = "DownloadTransientError"
+	// JobErrorCodeServiceError Fatal service error; please contact support.
+ JobErrorCodeServiceError JobErrorCode = "ServiceError"
+	// JobErrorCodeServiceTransientError Transient error; please retry. If the retry is unsuccessful, please
+	// contact support.
+ JobErrorCodeServiceTransientError JobErrorCode = "ServiceTransientError"
+	// JobErrorCodeUploadNotAccessible While trying to upload the output files, the destination was not
+	// reachable; please check the availability of the destination.
+ JobErrorCodeUploadNotAccessible JobErrorCode = "UploadNotAccessible"
+	// JobErrorCodeUploadTransientError While trying to upload the output files, there was an issue during
+	// transfer (storage service, network errors); see the details and check your destination.
+ JobErrorCodeUploadTransientError JobErrorCode = "UploadTransientError"
+)
+
+// PossibleJobErrorCodeValues returns an array of possible values for the JobErrorCode const type.
+func PossibleJobErrorCodeValues() []JobErrorCode {
+ return []JobErrorCode{JobErrorCodeConfigurationUnsupported, JobErrorCodeContentMalformed, JobErrorCodeContentUnsupported, JobErrorCodeDownloadNotAccessible, JobErrorCodeDownloadTransientError, JobErrorCodeServiceError, JobErrorCodeServiceTransientError, JobErrorCodeUploadNotAccessible, JobErrorCodeUploadTransientError}
+}
+
+// JobRetry enumerates the values for job retry.
+type JobRetry string
+
+const (
+	// JobRetryDoNotRetry The issue needs to be investigated and then the job resubmitted with corrections, or
+	// retried once the underlying issue has been corrected.
+ JobRetryDoNotRetry JobRetry = "DoNotRetry"
+	// JobRetryMayRetry The issue may be resolved after waiting for a period of time and resubmitting the same
+	// Job.
+ JobRetryMayRetry JobRetry = "MayRetry"
+)
+
+// PossibleJobRetryValues returns an array of possible values for the JobRetry const type.
+func PossibleJobRetryValues() []JobRetry {
+ return []JobRetry{JobRetryDoNotRetry, JobRetryMayRetry}
+}
+
+// JobState enumerates the values for job state.
+type JobState string
+
+const (
+ // JobStateCanceled The job was canceled. This is a final state for the job.
+ JobStateCanceled JobState = "Canceled"
+ // JobStateCanceling The job is in the process of being canceled. This is a transient state for the job.
+ JobStateCanceling JobState = "Canceling"
+ // JobStateError The job has encountered an error. This is a final state for the job.
+ JobStateError JobState = "Error"
+ // JobStateFinished The job is finished. This is a final state for the job.
+ JobStateFinished JobState = "Finished"
+ // JobStateProcessing The job is processing. This is a transient state for the job.
+ JobStateProcessing JobState = "Processing"
+ // JobStateQueued The job is in a queued state, waiting for resources to become available. This is a
+ // transient state.
+ JobStateQueued JobState = "Queued"
+ // JobStateScheduled The job is being scheduled to run on an available resource. This is a transient state,
+ // between queued and processing states.
+ JobStateScheduled JobState = "Scheduled"
+)
+
+// PossibleJobStateValues returns an array of possible values for the JobState const type.
+func PossibleJobStateValues() []JobState {
+ return []JobState{JobStateCanceled, JobStateCanceling, JobStateError, JobStateFinished, JobStateProcessing, JobStateQueued, JobStateScheduled}
+}
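+
+// jobStateIsFinal reports whether the given JobState is terminal. This is an
+// illustrative helper, not part of the generated API surface: the set of final
+// states (Canceled, Error, Finished) is taken from the doc comments on the
+// JobState constants above.
+func jobStateIsFinal(state JobState) bool {
+	switch state {
+	case JobStateCanceled, JobStateError, JobStateFinished:
+		return true
+	}
+	return false
+}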
+
+// LiveEventEncodingType enumerates the values for live event encoding type.
+type LiveEventEncodingType string
+
+const (
+ // LiveEventEncodingTypeNone A contribution live encoder sends a multiple bitrate stream. The ingested
+ // stream passes through the live event without any further processing. It is also called the pass-through
+ // mode.
+ LiveEventEncodingTypeNone LiveEventEncodingType = "None"
+ // LiveEventEncodingTypePremium1080p A contribution live encoder sends a single bitrate stream to the live
+ // event and Media Services creates multiple bitrate streams. The output cannot exceed 1080p in resolution.
+ LiveEventEncodingTypePremium1080p LiveEventEncodingType = "Premium1080p"
+ // LiveEventEncodingTypeStandard A contribution live encoder sends a single bitrate stream to the live
+ // event and Media Services creates multiple bitrate streams. The output cannot exceed 720p in resolution.
+ LiveEventEncodingTypeStandard LiveEventEncodingType = "Standard"
+)
+
+// PossibleLiveEventEncodingTypeValues returns an array of possible values for the LiveEventEncodingType const type.
+func PossibleLiveEventEncodingTypeValues() []LiveEventEncodingType {
+ return []LiveEventEncodingType{LiveEventEncodingTypeNone, LiveEventEncodingTypePremium1080p, LiveEventEncodingTypeStandard}
+}
+
+// LiveEventInputProtocol enumerates the values for live event input protocol.
+type LiveEventInputProtocol string
+
+const (
+ // LiveEventInputProtocolFragmentedMP4 Smooth Streaming input will be sent by the contribution encoder to
+ // the live event.
+ LiveEventInputProtocolFragmentedMP4 LiveEventInputProtocol = "FragmentedMP4"
+ // LiveEventInputProtocolRTMP RTMP input will be sent by the contribution encoder to the live event.
+ LiveEventInputProtocolRTMP LiveEventInputProtocol = "RTMP"
+)
+
+// PossibleLiveEventInputProtocolValues returns an array of possible values for the LiveEventInputProtocol const type.
+func PossibleLiveEventInputProtocolValues() []LiveEventInputProtocol {
+ return []LiveEventInputProtocol{LiveEventInputProtocolFragmentedMP4, LiveEventInputProtocolRTMP}
+}
+
+// LiveEventResourceState enumerates the values for live event resource state.
+type LiveEventResourceState string
+
+const (
+ // LiveEventResourceStateAllocating Allocate action was called on the live event and resources are being
+ // provisioned for this live event. Once allocation completes successfully, the live event will transition
+ // to StandBy state.
+ LiveEventResourceStateAllocating LiveEventResourceState = "Allocating"
+ // LiveEventResourceStateDeleting The live event is being deleted. No billing occurs in this transient
+ // state. Updates or streaming are not allowed during this state.
+ LiveEventResourceStateDeleting LiveEventResourceState = "Deleting"
+ // LiveEventResourceStateRunning The live event resources have been allocated, ingest and preview URLs have
+ // been generated, and it is capable of receiving live streams. At this point, billing is active. You must
+ // explicitly call Stop on the live event resource to halt further billing.
+ LiveEventResourceStateRunning LiveEventResourceState = "Running"
+	// LiveEventResourceStateStandBy Live event resources have been provisioned and the live event is ready to
+	// start. Billing occurs in this state. Most properties can still be updated, however ingest or streaming
+	// is not allowed during this state.
+ LiveEventResourceStateStandBy LiveEventResourceState = "StandBy"
+ // LiveEventResourceStateStarting The live event is being started and resources are being allocated. No
+ // billing occurs in this state. Updates or streaming are not allowed during this state. If an error
+ // occurs, the live event returns to the Stopped state.
+ LiveEventResourceStateStarting LiveEventResourceState = "Starting"
+	// LiveEventResourceStateStopped This is the initial state of the live event after creation (unless
+	// autostart was set to true). No billing occurs in this state. In this state, the live event properties
+	// can be updated but streaming is not allowed.
+ LiveEventResourceStateStopped LiveEventResourceState = "Stopped"
+ // LiveEventResourceStateStopping The live event is being stopped and resources are being de-provisioned.
+ // No billing occurs in this transient state. Updates or streaming are not allowed during this state.
+ LiveEventResourceStateStopping LiveEventResourceState = "Stopping"
+)
+
+// PossibleLiveEventResourceStateValues returns an array of possible values for the LiveEventResourceState const type.
+func PossibleLiveEventResourceStateValues() []LiveEventResourceState {
+ return []LiveEventResourceState{LiveEventResourceStateAllocating, LiveEventResourceStateDeleting, LiveEventResourceStateRunning, LiveEventResourceStateStandBy, LiveEventResourceStateStarting, LiveEventResourceStateStopped, LiveEventResourceStateStopping}
+}
+
+// LiveOutputResourceState enumerates the values for live output resource state.
+type LiveOutputResourceState string
+
+const (
+ // LiveOutputResourceStateCreating Live output is being created. No content is archived in the asset until
+ // the live output is in running state.
+ LiveOutputResourceStateCreating LiveOutputResourceState = "Creating"
+	// LiveOutputResourceStateDeleting Live output is being deleted. The live asset is being converted from a
+	// live to an on-demand asset. Any streaming URLs created on the live output asset continue to work.
+ LiveOutputResourceStateDeleting LiveOutputResourceState = "Deleting"
+ // LiveOutputResourceStateRunning Live output is running and archiving live streaming content to the asset
+ // if there is valid input from a contribution encoder.
+ LiveOutputResourceStateRunning LiveOutputResourceState = "Running"
+)
+
+// PossibleLiveOutputResourceStateValues returns an array of possible values for the LiveOutputResourceState const type.
+func PossibleLiveOutputResourceStateValues() []LiveOutputResourceState {
+ return []LiveOutputResourceState{LiveOutputResourceStateCreating, LiveOutputResourceStateDeleting, LiveOutputResourceStateRunning}
+}
+
+// ManagedIdentityType enumerates the values for managed identity type.
+type ManagedIdentityType string
+
+const (
+ // ManagedIdentityTypeNone No managed identity.
+ ManagedIdentityTypeNone ManagedIdentityType = "None"
+ // ManagedIdentityTypeSystemAssigned A system-assigned managed identity.
+ ManagedIdentityTypeSystemAssigned ManagedIdentityType = "SystemAssigned"
+)
+
+// PossibleManagedIdentityTypeValues returns an array of possible values for the ManagedIdentityType const type.
+func PossibleManagedIdentityTypeValues() []ManagedIdentityType {
+ return []ManagedIdentityType{ManagedIdentityTypeNone, ManagedIdentityTypeSystemAssigned}
+}
+
+// MetricAggregationType enumerates the values for metric aggregation type.
+type MetricAggregationType string
+
+const (
+ // MetricAggregationTypeAverage The average.
+ MetricAggregationTypeAverage MetricAggregationType = "Average"
+ // MetricAggregationTypeCount The count of a number of items, usually requests.
+ MetricAggregationTypeCount MetricAggregationType = "Count"
+ // MetricAggregationTypeTotal The sum.
+ MetricAggregationTypeTotal MetricAggregationType = "Total"
+)
+
+// PossibleMetricAggregationTypeValues returns an array of possible values for the MetricAggregationType const type.
+func PossibleMetricAggregationTypeValues() []MetricAggregationType {
+ return []MetricAggregationType{MetricAggregationTypeAverage, MetricAggregationTypeCount, MetricAggregationTypeTotal}
+}
+
+// MetricUnit enumerates the values for metric unit.
+type MetricUnit string
+
+const (
+ // MetricUnitBytes The number of bytes.
+ MetricUnitBytes MetricUnit = "Bytes"
+ // MetricUnitCount The count.
+ MetricUnitCount MetricUnit = "Count"
+ // MetricUnitMilliseconds The number of milliseconds.
+ MetricUnitMilliseconds MetricUnit = "Milliseconds"
+)
+
+// PossibleMetricUnitValues returns an array of possible values for the MetricUnit const type.
+func PossibleMetricUnitValues() []MetricUnit {
+ return []MetricUnit{MetricUnitBytes, MetricUnitCount, MetricUnitMilliseconds}
+}
+
+// OdataType enumerates the values for odata type.
+type OdataType string
+
+const (
+ // OdataTypeContentKeyPolicyPlayReadyContentKeyLocation ...
+ OdataTypeContentKeyPolicyPlayReadyContentKeyLocation OdataType = "ContentKeyPolicyPlayReadyContentKeyLocation"
+ // OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader ...
+ OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader OdataType = "#Microsoft.Media.ContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader"
+ // OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier ...
+ OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier OdataType = "#Microsoft.Media.ContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier"
+)
+
+// PossibleOdataTypeValues returns an array of possible values for the OdataType const type.
+func PossibleOdataTypeValues() []OdataType {
+ return []OdataType{OdataTypeContentKeyPolicyPlayReadyContentKeyLocation, OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader, OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier}
+}
+
+// OdataTypeBasicClipTime enumerates the values for odata type basic clip time.
+type OdataTypeBasicClipTime string
+
+const (
+ // OdataTypeBasicClipTimeOdataTypeClipTime ...
+ OdataTypeBasicClipTimeOdataTypeClipTime OdataTypeBasicClipTime = "ClipTime"
+ // OdataTypeBasicClipTimeOdataTypeMicrosoftMediaAbsoluteClipTime ...
+ OdataTypeBasicClipTimeOdataTypeMicrosoftMediaAbsoluteClipTime OdataTypeBasicClipTime = "#Microsoft.Media.AbsoluteClipTime"
+ // OdataTypeBasicClipTimeOdataTypeMicrosoftMediaUtcClipTime ...
+ OdataTypeBasicClipTimeOdataTypeMicrosoftMediaUtcClipTime OdataTypeBasicClipTime = "#Microsoft.Media.UtcClipTime"
+)
+
+// PossibleOdataTypeBasicClipTimeValues returns an array of possible values for the OdataTypeBasicClipTime const type.
+func PossibleOdataTypeBasicClipTimeValues() []OdataTypeBasicClipTime {
+ return []OdataTypeBasicClipTime{OdataTypeBasicClipTimeOdataTypeClipTime, OdataTypeBasicClipTimeOdataTypeMicrosoftMediaAbsoluteClipTime, OdataTypeBasicClipTimeOdataTypeMicrosoftMediaUtcClipTime}
+}
+
+// OdataTypeBasicCodec enumerates the values for odata type basic codec.
+type OdataTypeBasicCodec string
+
+const (
+ // OdataTypeBasicCodecOdataTypeCodec ...
+ OdataTypeBasicCodecOdataTypeCodec OdataTypeBasicCodec = "Codec"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio OdataTypeBasicCodec = "#Microsoft.Media.AacAudio"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio OdataTypeBasicCodec = "#Microsoft.Media.Audio"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio OdataTypeBasicCodec = "#Microsoft.Media.CopyAudio"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo OdataTypeBasicCodec = "#Microsoft.Media.CopyVideo"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video OdataTypeBasicCodec = "#Microsoft.Media.H264Video"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video OdataTypeBasicCodec = "#Microsoft.Media.H265Video"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaImage ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaImage OdataTypeBasicCodec = "#Microsoft.Media.Image"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage OdataTypeBasicCodec = "#Microsoft.Media.JpgImage"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage OdataTypeBasicCodec = "#Microsoft.Media.PngImage"
+ // OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo ...
+ OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo OdataTypeBasicCodec = "#Microsoft.Media.Video"
+)
+
+// PossibleOdataTypeBasicCodecValues returns an array of possible values for the OdataTypeBasicCodec const type.
+func PossibleOdataTypeBasicCodecValues() []OdataTypeBasicCodec {
+ return []OdataTypeBasicCodec{OdataTypeBasicCodecOdataTypeCodec, OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio, OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio, OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio, OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo, OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video, OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video, OdataTypeBasicCodecOdataTypeMicrosoftMediaImage, OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage, OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage, OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo}
+}
+
+// OdataTypeBasicContentKeyPolicyConfiguration enumerates the values for odata type basic content key policy
+// configuration.
+type OdataTypeBasicContentKeyPolicyConfiguration string
+
+const (
+ // OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration ...
+ OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "ContentKeyPolicyConfiguration"
+ // OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration ...
+ OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyClearKeyConfiguration"
+ // OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration ...
+ OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyFairPlayConfiguration"
+ // OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration ...
+ OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyPlayReadyConfiguration"
+ // OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration ...
+ OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyUnknownConfiguration"
+ // OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration ...
+ OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration OdataTypeBasicContentKeyPolicyConfiguration = "#Microsoft.Media.ContentKeyPolicyWidevineConfiguration"
+)
+
+// PossibleOdataTypeBasicContentKeyPolicyConfigurationValues returns an array of possible values for the OdataTypeBasicContentKeyPolicyConfiguration const type.
+func PossibleOdataTypeBasicContentKeyPolicyConfigurationValues() []OdataTypeBasicContentKeyPolicyConfiguration {
+ return []OdataTypeBasicContentKeyPolicyConfiguration{OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration, OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration, OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration, OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration, OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration, OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration}
+}
+
+// OdataTypeBasicContentKeyPolicyRestriction enumerates the values for odata type basic content key policy
+// restriction.
+type OdataTypeBasicContentKeyPolicyRestriction string
+
+const (
+ // OdataTypeBasicContentKeyPolicyRestrictionOdataTypeContentKeyPolicyRestriction ...
+ OdataTypeBasicContentKeyPolicyRestrictionOdataTypeContentKeyPolicyRestriction OdataTypeBasicContentKeyPolicyRestriction = "ContentKeyPolicyRestriction"
+ // OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction ...
+ OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction OdataTypeBasicContentKeyPolicyRestriction = "#Microsoft.Media.ContentKeyPolicyOpenRestriction"
+ // OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction ...
+ OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction OdataTypeBasicContentKeyPolicyRestriction = "#Microsoft.Media.ContentKeyPolicyTokenRestriction"
+ // OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction ...
+ OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction OdataTypeBasicContentKeyPolicyRestriction = "#Microsoft.Media.ContentKeyPolicyUnknownRestriction"
+)
+
+// PossibleOdataTypeBasicContentKeyPolicyRestrictionValues returns an array of possible values for the OdataTypeBasicContentKeyPolicyRestriction const type.
+func PossibleOdataTypeBasicContentKeyPolicyRestrictionValues() []OdataTypeBasicContentKeyPolicyRestriction {
+ return []OdataTypeBasicContentKeyPolicyRestriction{OdataTypeBasicContentKeyPolicyRestrictionOdataTypeContentKeyPolicyRestriction, OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction, OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction, OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction}
+}
+
+// OdataTypeBasicContentKeyPolicyRestrictionTokenKey enumerates the values for odata type basic content key
+// policy restriction token key.
+type OdataTypeBasicContentKeyPolicyRestrictionTokenKey string
+
+const (
+ // OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeContentKeyPolicyRestrictionTokenKey ...
+ OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeContentKeyPolicyRestrictionTokenKey OdataTypeBasicContentKeyPolicyRestrictionTokenKey = "ContentKeyPolicyRestrictionTokenKey"
+ // OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey ...
+ OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey OdataTypeBasicContentKeyPolicyRestrictionTokenKey = "#Microsoft.Media.ContentKeyPolicyRsaTokenKey"
+ // OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey ...
+ OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey OdataTypeBasicContentKeyPolicyRestrictionTokenKey = "#Microsoft.Media.ContentKeyPolicySymmetricTokenKey"
+ // OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey ...
+ OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey OdataTypeBasicContentKeyPolicyRestrictionTokenKey = "#Microsoft.Media.ContentKeyPolicyX509CertificateTokenKey"
+)
+
+// PossibleOdataTypeBasicContentKeyPolicyRestrictionTokenKeyValues returns an array of possible values for the OdataTypeBasicContentKeyPolicyRestrictionTokenKey const type.
+func PossibleOdataTypeBasicContentKeyPolicyRestrictionTokenKeyValues() []OdataTypeBasicContentKeyPolicyRestrictionTokenKey {
+ return []OdataTypeBasicContentKeyPolicyRestrictionTokenKey{OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeContentKeyPolicyRestrictionTokenKey, OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey, OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey, OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey}
+}
+
+// OdataTypeBasicFormat enumerates the values for odata type basic format.
+type OdataTypeBasicFormat string
+
+const (
+ // OdataTypeBasicFormatOdataTypeFormat ...
+ OdataTypeBasicFormatOdataTypeFormat OdataTypeBasicFormat = "Format"
+ // OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat ...
+ OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat OdataTypeBasicFormat = "#Microsoft.Media.ImageFormat"
+ // OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat ...
+ OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat OdataTypeBasicFormat = "#Microsoft.Media.JpgFormat"
+ // OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format ...
+ OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format OdataTypeBasicFormat = "#Microsoft.Media.Mp4Format"
+ // OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat ...
+ OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat OdataTypeBasicFormat = "#Microsoft.Media.MultiBitrateFormat"
+ // OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat ...
+ OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat OdataTypeBasicFormat = "#Microsoft.Media.PngFormat"
+ // OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat ...
+ OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat OdataTypeBasicFormat = "#Microsoft.Media.TransportStreamFormat"
+)
+
+// PossibleOdataTypeBasicFormatValues returns an array of possible values for the OdataTypeBasicFormat const type.
+func PossibleOdataTypeBasicFormatValues() []OdataTypeBasicFormat {
+ return []OdataTypeBasicFormat{OdataTypeBasicFormatOdataTypeFormat, OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat, OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat, OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format, OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat, OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat, OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat}
+}
+
+// OdataTypeBasicInputDefinition enumerates the values for odata type basic input definition.
+type OdataTypeBasicInputDefinition string
+
+const (
+ // OdataTypeBasicInputDefinitionOdataTypeInputDefinition ...
+ OdataTypeBasicInputDefinitionOdataTypeInputDefinition OdataTypeBasicInputDefinition = "InputDefinition"
+ // OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile ...
+ OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile OdataTypeBasicInputDefinition = "#Microsoft.Media.FromAllInputFile"
+ // OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile ...
+ OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile OdataTypeBasicInputDefinition = "#Microsoft.Media.FromEachInputFile"
+ // OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile ...
+ OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile OdataTypeBasicInputDefinition = "#Microsoft.Media.InputFile"
+)
+
+// PossibleOdataTypeBasicInputDefinitionValues returns an array of possible values for the OdataTypeBasicInputDefinition const type.
+func PossibleOdataTypeBasicInputDefinitionValues() []OdataTypeBasicInputDefinition {
+ return []OdataTypeBasicInputDefinition{OdataTypeBasicInputDefinitionOdataTypeInputDefinition, OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile, OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile, OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile}
+}
+
+// OdataTypeBasicJobInput enumerates the values for odata type basic job input.
+type OdataTypeBasicJobInput string
+
+const (
+ // OdataTypeBasicJobInputOdataTypeJobInput ...
+ OdataTypeBasicJobInputOdataTypeJobInput OdataTypeBasicJobInput = "JobInput"
+ // OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset ...
+ OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset OdataTypeBasicJobInput = "#Microsoft.Media.JobInputAsset"
+ // OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip ...
+ OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip OdataTypeBasicJobInput = "#Microsoft.Media.JobInputClip"
+ // OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP ...
+ OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP OdataTypeBasicJobInput = "#Microsoft.Media.JobInputHttp"
+ // OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs ...
+ OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs OdataTypeBasicJobInput = "#Microsoft.Media.JobInputs"
+ // OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence ...
+ OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence OdataTypeBasicJobInput = "#Microsoft.Media.JobInputSequence"
+)
+
+// PossibleOdataTypeBasicJobInputValues returns an array of possible values for the OdataTypeBasicJobInput const type.
+func PossibleOdataTypeBasicJobInputValues() []OdataTypeBasicJobInput {
+ return []OdataTypeBasicJobInput{OdataTypeBasicJobInputOdataTypeJobInput, OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset, OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip, OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP, OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs, OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence}
+}
+
+// OdataTypeBasicJobOutput enumerates the values for odata type basic job output.
+type OdataTypeBasicJobOutput string
+
+const (
+ // OdataTypeBasicJobOutputOdataTypeJobOutput ...
+ OdataTypeBasicJobOutputOdataTypeJobOutput OdataTypeBasicJobOutput = "JobOutput"
+ // OdataTypeBasicJobOutputOdataTypeMicrosoftMediaJobOutputAsset ...
+ OdataTypeBasicJobOutputOdataTypeMicrosoftMediaJobOutputAsset OdataTypeBasicJobOutput = "#Microsoft.Media.JobOutputAsset"
+)
+
+// PossibleOdataTypeBasicJobOutputValues returns an array of possible values for the OdataTypeBasicJobOutput const type.
+func PossibleOdataTypeBasicJobOutputValues() []OdataTypeBasicJobOutput {
+ return []OdataTypeBasicJobOutput{OdataTypeBasicJobOutputOdataTypeJobOutput, OdataTypeBasicJobOutputOdataTypeMicrosoftMediaJobOutputAsset}
+}
+
+// OdataTypeBasicLayer enumerates the values for odata type basic layer.
+type OdataTypeBasicLayer string
+
+const (
+ // OdataTypeBasicLayerOdataTypeLayer ...
+ OdataTypeBasicLayerOdataTypeLayer OdataTypeBasicLayer = "Layer"
+ // OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer ...
+ OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer OdataTypeBasicLayer = "#Microsoft.Media.H264Layer"
+ // OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer ...
+ OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer OdataTypeBasicLayer = "#Microsoft.Media.H265Layer"
+ // OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer ...
+ OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer OdataTypeBasicLayer = "#Microsoft.Media.H265VideoLayer"
+ // OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer ...
+ OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer OdataTypeBasicLayer = "#Microsoft.Media.JpgLayer"
+ // OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer ...
+ OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer OdataTypeBasicLayer = "#Microsoft.Media.PngLayer"
+ // OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer ...
+ OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer OdataTypeBasicLayer = "#Microsoft.Media.VideoLayer"
+)
+
+// PossibleOdataTypeBasicLayerValues returns an array of possible values for the OdataTypeBasicLayer const type.
+func PossibleOdataTypeBasicLayerValues() []OdataTypeBasicLayer {
+ return []OdataTypeBasicLayer{OdataTypeBasicLayerOdataTypeLayer, OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer, OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer, OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer, OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer, OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer, OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer}
+}
+
+// OdataTypeBasicOverlay enumerates the values for odata type basic overlay.
+type OdataTypeBasicOverlay string
+
+const (
+ // OdataTypeBasicOverlayOdataTypeMicrosoftMediaAudioOverlay ...
+ OdataTypeBasicOverlayOdataTypeMicrosoftMediaAudioOverlay OdataTypeBasicOverlay = "#Microsoft.Media.AudioOverlay"
+ // OdataTypeBasicOverlayOdataTypeMicrosoftMediaVideoOverlay ...
+ OdataTypeBasicOverlayOdataTypeMicrosoftMediaVideoOverlay OdataTypeBasicOverlay = "#Microsoft.Media.VideoOverlay"
+ // OdataTypeBasicOverlayOdataTypeOverlay ...
+ OdataTypeBasicOverlayOdataTypeOverlay OdataTypeBasicOverlay = "Overlay"
+)
+
+// PossibleOdataTypeBasicOverlayValues returns an array of possible values for the OdataTypeBasicOverlay const type.
+func PossibleOdataTypeBasicOverlayValues() []OdataTypeBasicOverlay {
+ return []OdataTypeBasicOverlay{OdataTypeBasicOverlayOdataTypeMicrosoftMediaAudioOverlay, OdataTypeBasicOverlayOdataTypeMicrosoftMediaVideoOverlay, OdataTypeBasicOverlayOdataTypeOverlay}
+}
+
+// OdataTypeBasicPreset enumerates the values for odata type basic preset.
+type OdataTypeBasicPreset string
+
+const (
+ // OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset ...
+ OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset OdataTypeBasicPreset = "#Microsoft.Media.AudioAnalyzerPreset"
+ // OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset ...
+ OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset OdataTypeBasicPreset = "#Microsoft.Media.BuiltInStandardEncoderPreset"
+ // OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset ...
+ OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset OdataTypeBasicPreset = "#Microsoft.Media.FaceDetectorPreset"
+ // OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset ...
+ OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset OdataTypeBasicPreset = "#Microsoft.Media.StandardEncoderPreset"
+ // OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset ...
+ OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset OdataTypeBasicPreset = "#Microsoft.Media.VideoAnalyzerPreset"
+ // OdataTypeBasicPresetOdataTypePreset ...
+ OdataTypeBasicPresetOdataTypePreset OdataTypeBasicPreset = "Preset"
+)
+
+// PossibleOdataTypeBasicPresetValues returns an array of possible values for the OdataTypeBasicPreset const type.
+func PossibleOdataTypeBasicPresetValues() []OdataTypeBasicPreset {
+ return []OdataTypeBasicPreset{OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset, OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset, OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset, OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset, OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset, OdataTypeBasicPresetOdataTypePreset}
+}
+
+// OdataTypeBasicTrackDescriptor enumerates the values for odata type basic track descriptor.
+type OdataTypeBasicTrackDescriptor string
+
+const (
+ // OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor ...
+ OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor OdataTypeBasicTrackDescriptor = "#Microsoft.Media.AudioTrackDescriptor"
+ // OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute ...
+ OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute OdataTypeBasicTrackDescriptor = "#Microsoft.Media.SelectAudioTrackByAttribute"
+ // OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID ...
+ OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID OdataTypeBasicTrackDescriptor = "#Microsoft.Media.SelectAudioTrackById"
+ // OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute ...
+ OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute OdataTypeBasicTrackDescriptor = "#Microsoft.Media.SelectVideoTrackByAttribute"
+ // OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID ...
+ OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID OdataTypeBasicTrackDescriptor = "#Microsoft.Media.SelectVideoTrackById"
+ // OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor ...
+ OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor OdataTypeBasicTrackDescriptor = "#Microsoft.Media.VideoTrackDescriptor"
+ // OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor ...
+ OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor OdataTypeBasicTrackDescriptor = "TrackDescriptor"
+)
+
+// PossibleOdataTypeBasicTrackDescriptorValues returns an array of possible values for the OdataTypeBasicTrackDescriptor const type.
+func PossibleOdataTypeBasicTrackDescriptorValues() []OdataTypeBasicTrackDescriptor {
+ return []OdataTypeBasicTrackDescriptor{OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor, OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute, OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID, OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute, OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID, OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor, OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor}
+}
+
+// OnErrorType enumerates the values for on error type.
+type OnErrorType string
+
+const (
+ // OnErrorTypeContinueJob Tells the service that if this TransformOutput fails, then allow any other
+ // TransformOutput to continue.
+ OnErrorTypeContinueJob OnErrorType = "ContinueJob"
+ // OnErrorTypeStopProcessingJob Tells the service that if this TransformOutput fails, then any other
+ // incomplete TransformOutputs can be stopped.
+ OnErrorTypeStopProcessingJob OnErrorType = "StopProcessingJob"
+)
+
+// PossibleOnErrorTypeValues returns an array of possible values for the OnErrorType const type.
+func PossibleOnErrorTypeValues() []OnErrorType {
+ return []OnErrorType{OnErrorTypeContinueJob, OnErrorTypeStopProcessingJob}
+}
+
+// Priority enumerates the values for priority.
+type Priority string
+
+const (
+ // PriorityHigh Used for TransformOutputs that should take precedence over others.
+ PriorityHigh Priority = "High"
+ // PriorityLow Used for TransformOutputs that can be generated after Normal and High priority
+ // TransformOutputs.
+ PriorityLow Priority = "Low"
+ // PriorityNormal Used for TransformOutputs that can be generated at Normal priority.
+ PriorityNormal Priority = "Normal"
+)
+
+// PossiblePriorityValues returns an array of possible values for the Priority const type.
+func PossiblePriorityValues() []Priority {
+ return []Priority{PriorityHigh, PriorityLow, PriorityNormal}
+}
+
+// PrivateEndpointConnectionProvisioningState enumerates the values for private endpoint connection
+// provisioning state.
+type PrivateEndpointConnectionProvisioningState string
+
+const (
+ // PrivateEndpointConnectionProvisioningStateCreating ...
+ PrivateEndpointConnectionProvisioningStateCreating PrivateEndpointConnectionProvisioningState = "Creating"
+ // PrivateEndpointConnectionProvisioningStateDeleting ...
+ PrivateEndpointConnectionProvisioningStateDeleting PrivateEndpointConnectionProvisioningState = "Deleting"
+ // PrivateEndpointConnectionProvisioningStateFailed ...
+ PrivateEndpointConnectionProvisioningStateFailed PrivateEndpointConnectionProvisioningState = "Failed"
+ // PrivateEndpointConnectionProvisioningStateSucceeded ...
+ PrivateEndpointConnectionProvisioningStateSucceeded PrivateEndpointConnectionProvisioningState = "Succeeded"
+)
+
+// PossiblePrivateEndpointConnectionProvisioningStateValues returns an array of possible values for the PrivateEndpointConnectionProvisioningState const type.
+func PossiblePrivateEndpointConnectionProvisioningStateValues() []PrivateEndpointConnectionProvisioningState {
+ return []PrivateEndpointConnectionProvisioningState{PrivateEndpointConnectionProvisioningStateCreating, PrivateEndpointConnectionProvisioningStateDeleting, PrivateEndpointConnectionProvisioningStateFailed, PrivateEndpointConnectionProvisioningStateSucceeded}
+}
+
+// PrivateEndpointServiceConnectionStatus enumerates the values for private endpoint service connection status.
+type PrivateEndpointServiceConnectionStatus string
+
+const (
+ // PrivateEndpointServiceConnectionStatusApproved ...
+ PrivateEndpointServiceConnectionStatusApproved PrivateEndpointServiceConnectionStatus = "Approved"
+ // PrivateEndpointServiceConnectionStatusPending ...
+ PrivateEndpointServiceConnectionStatusPending PrivateEndpointServiceConnectionStatus = "Pending"
+ // PrivateEndpointServiceConnectionStatusRejected ...
+ PrivateEndpointServiceConnectionStatusRejected PrivateEndpointServiceConnectionStatus = "Rejected"
+)
+
+// PossiblePrivateEndpointServiceConnectionStatusValues returns an array of possible values for the PrivateEndpointServiceConnectionStatus const type.
+func PossiblePrivateEndpointServiceConnectionStatusValues() []PrivateEndpointServiceConnectionStatus {
+ return []PrivateEndpointServiceConnectionStatus{PrivateEndpointServiceConnectionStatusApproved, PrivateEndpointServiceConnectionStatusPending, PrivateEndpointServiceConnectionStatusRejected}
+}
+
+// Rotation enumerates the values for rotation.
+type Rotation string
+
+const (
+ // RotationAuto Automatically detect and rotate as needed.
+ RotationAuto Rotation = "Auto"
+ // RotationNone Do not rotate the video. If the output format supports it, any metadata about rotation is
+ // kept intact.
+ RotationNone Rotation = "None"
+ // RotationRotate0 Do not rotate the video but remove any metadata about the rotation.
+ RotationRotate0 Rotation = "Rotate0"
+ // RotationRotate180 Rotate 180 degrees clockwise.
+ RotationRotate180 Rotation = "Rotate180"
+ // RotationRotate270 Rotate 270 degrees clockwise.
+ RotationRotate270 Rotation = "Rotate270"
+ // RotationRotate90 Rotate 90 degrees clockwise.
+ RotationRotate90 Rotation = "Rotate90"
+)
+
+// PossibleRotationValues returns an array of possible values for the Rotation const type.
+func PossibleRotationValues() []Rotation {
+ return []Rotation{RotationAuto, RotationNone, RotationRotate0, RotationRotate180, RotationRotate270, RotationRotate90}
+}
+
+// StorageAccountType enumerates the values for storage account type.
+type StorageAccountType string
+
+const (
+ // StorageAccountTypePrimary The primary storage account for the Media Services account.
+ StorageAccountTypePrimary StorageAccountType = "Primary"
+ // StorageAccountTypeSecondary A secondary storage account for the Media Services account.
+ StorageAccountTypeSecondary StorageAccountType = "Secondary"
+)
+
+// PossibleStorageAccountTypeValues returns an array of possible values for the StorageAccountType const type.
+func PossibleStorageAccountTypeValues() []StorageAccountType {
+ return []StorageAccountType{StorageAccountTypePrimary, StorageAccountTypeSecondary}
+}
+
+// StorageAuthentication enumerates the values for storage authentication.
+type StorageAuthentication string
+
+const (
+ // StorageAuthenticationManagedIdentity Managed Identity authentication.
+ StorageAuthenticationManagedIdentity StorageAuthentication = "ManagedIdentity"
+ // StorageAuthenticationSystem System authentication.
+ StorageAuthenticationSystem StorageAuthentication = "System"
+)
+
+// PossibleStorageAuthenticationValues returns an array of possible values for the StorageAuthentication const type.
+func PossibleStorageAuthenticationValues() []StorageAuthentication {
+ return []StorageAuthentication{StorageAuthenticationManagedIdentity, StorageAuthenticationSystem}
+}
+
+// StreamingEndpointResourceState enumerates the values for streaming endpoint resource state.
+type StreamingEndpointResourceState string
+
+const (
+ // StreamingEndpointResourceStateDeleting The streaming endpoint is being deleted.
+ StreamingEndpointResourceStateDeleting StreamingEndpointResourceState = "Deleting"
+ // StreamingEndpointResourceStateRunning The streaming endpoint is running. It is able to stream content to
+ // clients.
+ StreamingEndpointResourceStateRunning StreamingEndpointResourceState = "Running"
+ // StreamingEndpointResourceStateScaling The streaming endpoint is increasing or decreasing scale units.
+ StreamingEndpointResourceStateScaling StreamingEndpointResourceState = "Scaling"
+ // StreamingEndpointResourceStateStarting The streaming endpoint is transitioning to the running state.
+ StreamingEndpointResourceStateStarting StreamingEndpointResourceState = "Starting"
+ // StreamingEndpointResourceStateStopped The initial state of a streaming endpoint after creation. Content
+ // is not ready to be streamed from this endpoint.
+ StreamingEndpointResourceStateStopped StreamingEndpointResourceState = "Stopped"
+ // StreamingEndpointResourceStateStopping The streaming endpoint is transitioning to the stopped state.
+ StreamingEndpointResourceStateStopping StreamingEndpointResourceState = "Stopping"
+)
+
+// PossibleStreamingEndpointResourceStateValues returns an array of possible values for the StreamingEndpointResourceState const type.
+func PossibleStreamingEndpointResourceStateValues() []StreamingEndpointResourceState {
+ return []StreamingEndpointResourceState{StreamingEndpointResourceStateDeleting, StreamingEndpointResourceStateRunning, StreamingEndpointResourceStateScaling, StreamingEndpointResourceStateStarting, StreamingEndpointResourceStateStopped, StreamingEndpointResourceStateStopping}
+}
+
+// StreamingLocatorContentKeyType enumerates the values for streaming locator content key type.
+type StreamingLocatorContentKeyType string
+
+const (
+ // StreamingLocatorContentKeyTypeCommonEncryptionCbcs Common Encryption using CBCS
+ StreamingLocatorContentKeyTypeCommonEncryptionCbcs StreamingLocatorContentKeyType = "CommonEncryptionCbcs"
+ // StreamingLocatorContentKeyTypeCommonEncryptionCenc Common Encryption using CENC
+ StreamingLocatorContentKeyTypeCommonEncryptionCenc StreamingLocatorContentKeyType = "CommonEncryptionCenc"
+ // StreamingLocatorContentKeyTypeEnvelopeEncryption Envelope Encryption
+ StreamingLocatorContentKeyTypeEnvelopeEncryption StreamingLocatorContentKeyType = "EnvelopeEncryption"
+)
+
+// PossibleStreamingLocatorContentKeyTypeValues returns an array of possible values for the StreamingLocatorContentKeyType const type.
+func PossibleStreamingLocatorContentKeyTypeValues() []StreamingLocatorContentKeyType {
+ return []StreamingLocatorContentKeyType{StreamingLocatorContentKeyTypeCommonEncryptionCbcs, StreamingLocatorContentKeyTypeCommonEncryptionCenc, StreamingLocatorContentKeyTypeEnvelopeEncryption}
+}
+
+// StreamingPolicyStreamingProtocol enumerates the values for streaming policy streaming protocol.
+type StreamingPolicyStreamingProtocol string
+
+const (
+ // StreamingPolicyStreamingProtocolDash DASH protocol
+ StreamingPolicyStreamingProtocolDash StreamingPolicyStreamingProtocol = "Dash"
+ // StreamingPolicyStreamingProtocolDownload Download protocol
+ StreamingPolicyStreamingProtocolDownload StreamingPolicyStreamingProtocol = "Download"
+ // StreamingPolicyStreamingProtocolHls HLS protocol
+ StreamingPolicyStreamingProtocolHls StreamingPolicyStreamingProtocol = "Hls"
+ // StreamingPolicyStreamingProtocolSmoothStreaming SmoothStreaming protocol
+ StreamingPolicyStreamingProtocolSmoothStreaming StreamingPolicyStreamingProtocol = "SmoothStreaming"
+)
+
+// PossibleStreamingPolicyStreamingProtocolValues returns an array of possible values for the StreamingPolicyStreamingProtocol const type.
+func PossibleStreamingPolicyStreamingProtocolValues() []StreamingPolicyStreamingProtocol {
+ return []StreamingPolicyStreamingProtocol{StreamingPolicyStreamingProtocolDash, StreamingPolicyStreamingProtocolDownload, StreamingPolicyStreamingProtocolHls, StreamingPolicyStreamingProtocolSmoothStreaming}
+}
+
+// StreamOptionsFlag enumerates the values for stream options flag.
+type StreamOptionsFlag string
+
+const (
+ // StreamOptionsFlagDefault Live streaming with no special latency optimizations.
+ StreamOptionsFlagDefault StreamOptionsFlag = "Default"
+ // StreamOptionsFlagLowLatency The live event provides lower end to end latency by reducing its internal
+ // buffers. This could result in more client buffering during playback if network bandwidth is low.
+ StreamOptionsFlagLowLatency StreamOptionsFlag = "LowLatency"
+)
+
+// PossibleStreamOptionsFlagValues returns an array of possible values for the StreamOptionsFlag const type.
+func PossibleStreamOptionsFlagValues() []StreamOptionsFlag {
+ return []StreamOptionsFlag{StreamOptionsFlagDefault, StreamOptionsFlagLowLatency}
+}
+
+// StretchMode enumerates the values for stretch mode.
+type StretchMode string
+
+const (
+ // StretchModeAutoFit Pad the output (with either letterbox or pillar box) to honor the output resolution,
+ // while ensuring that the active video region in the output has the same aspect ratio as the input. For
+ // example, if the input is 1920x1080 and the encoding preset asks for 1280x1280, then the output will be
+ // at 1280x1280, which contains an inner rectangle of 1280x720 at aspect ratio of 16:9, and pillar box
+ // regions 280 pixels wide at the left and right.
+ StretchModeAutoFit StretchMode = "AutoFit"
+ // StretchModeAutoSize Override the output resolution, and change it to match the display aspect ratio of
+ // the input, without padding. For example, if the input is 1920x1080 and the encoding preset asks for
+ // 1280x1280, then the value in the preset is overridden, and the output will be at 1280x720, which
+ // maintains the input aspect ratio of 16:9.
+ StretchModeAutoSize StretchMode = "AutoSize"
+ // StretchModeNone Strictly respect the output resolution without considering the pixel aspect ratio or
+ // display aspect ratio of the input video.
+ StretchModeNone StretchMode = "None"
+)
+
+// PossibleStretchModeValues returns an array of possible values for the StretchMode const type.
+func PossibleStretchModeValues() []StretchMode {
+ return []StretchMode{StretchModeAutoFit, StretchModeAutoSize, StretchModeNone}
+}
+
+// TrackAttribute enumerates the values for track attribute.
+type TrackAttribute string
+
+const (
+ // TrackAttributeBitrate The bitrate of the track.
+ TrackAttributeBitrate TrackAttribute = "Bitrate"
+ // TrackAttributeLanguage The language of the track.
+ TrackAttributeLanguage TrackAttribute = "Language"
+)
+
+// PossibleTrackAttributeValues returns an array of possible values for the TrackAttribute const type.
+func PossibleTrackAttributeValues() []TrackAttribute {
+ return []TrackAttribute{TrackAttributeBitrate, TrackAttributeLanguage}
+}
+
+// TrackPropertyCompareOperation enumerates the values for track property compare operation.
+type TrackPropertyCompareOperation string
+
+const (
+ // TrackPropertyCompareOperationEqual Equal operation
+ TrackPropertyCompareOperationEqual TrackPropertyCompareOperation = "Equal"
+ // TrackPropertyCompareOperationUnknown Unknown track property compare operation
+ TrackPropertyCompareOperationUnknown TrackPropertyCompareOperation = "Unknown"
+)
+
+// PossibleTrackPropertyCompareOperationValues returns an array of possible values for the TrackPropertyCompareOperation const type.
+func PossibleTrackPropertyCompareOperationValues() []TrackPropertyCompareOperation {
+ return []TrackPropertyCompareOperation{TrackPropertyCompareOperationEqual, TrackPropertyCompareOperationUnknown}
+}
+
+// TrackPropertyType enumerates the values for track property type.
+type TrackPropertyType string
+
+const (
+ // TrackPropertyTypeFourCC Track FourCC
+ TrackPropertyTypeFourCC TrackPropertyType = "FourCC"
+ // TrackPropertyTypeUnknown Unknown track property
+ TrackPropertyTypeUnknown TrackPropertyType = "Unknown"
+)
+
+// PossibleTrackPropertyTypeValues returns an array of possible values for the TrackPropertyType const type.
+func PossibleTrackPropertyTypeValues() []TrackPropertyType {
+ return []TrackPropertyType{TrackPropertyTypeFourCC, TrackPropertyTypeUnknown}
+}
+
+// VideoSyncMode enumerates the values for video sync mode.
+type VideoSyncMode string
+
+const (
+ // VideoSyncModeAuto This is the default method. Chooses between Cfr and Vfr depending on muxer
+ // capabilities. For output format MP4, the default mode is Cfr.
+ VideoSyncModeAuto VideoSyncMode = "Auto"
+ // VideoSyncModeCfr Input frames will be repeated and/or dropped as needed to achieve exactly the requested
+ // constant frame rate. Recommended when the output frame rate is explicitly set at a specified value
+ VideoSyncModeCfr VideoSyncMode = "Cfr"
+ // VideoSyncModePassthrough The presentation timestamps on frames are passed through from the input file to
+ // the output file writer. Recommended when the input source has variable frame rate, and are attempting to
+ // produce multiple layers for adaptive streaming in the output which have aligned GOP boundaries. Note: if
+ // two or more frames in the input have duplicate timestamps, then the output will also have the same
+ // behavior
+ VideoSyncModePassthrough VideoSyncMode = "Passthrough"
+ // VideoSyncModeVfr Similar to the Passthrough mode, but if the input has frames that have duplicate
+ // timestamps, then only one frame is passed through to the output, and others are dropped. Recommended
+ // when the number of output frames is expected to be equal to the number of input frames. For example, the
+ // output is used to calculate a quality metric like PSNR against the input
+ VideoSyncModeVfr VideoSyncMode = "Vfr"
+)
+
+// PossibleVideoSyncModeValues returns an array of possible values for the VideoSyncMode const type.
+func PossibleVideoSyncModeValues() []VideoSyncMode {
+ return []VideoSyncMode{VideoSyncModeAuto, VideoSyncModeCfr, VideoSyncModePassthrough, VideoSyncModeVfr}
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/jobs.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/jobs.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/jobs.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/jobs.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/liveevents.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/liveevents.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/liveevents.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/liveevents.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/liveoutputs.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/liveoutputs.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/liveoutputs.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/liveoutputs.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/locations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/locations.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/locations.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/locations.go
index 3d9bc4efdea9c..bb6a143fa4ef1 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/locations.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/locations.go
@@ -74,7 +74,7 @@ func (client LocationsClient) CheckNameAvailabilityPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/mediaservices.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/mediaservices.go
similarity index 89%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/mediaservices.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/mediaservices.go
index 57c0a8e70e2b2..233c075f37c46 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/mediaservices.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/mediaservices.go
@@ -76,7 +76,7 @@ func (client MediaservicesClient) CreateOrUpdatePreparer(ctx context.Context, re
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -155,7 +155,7 @@ func (client MediaservicesClient) DeletePreparer(ctx context.Context, resourceGr
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -230,7 +230,7 @@ func (client MediaservicesClient) GetPreparer(ctx context.Context, resourceGroup
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -261,80 +261,6 @@ func (client MediaservicesClient) GetResponder(resp *http.Response) (result Serv
return
}
-// GetBySubscription get the details of a Media Services account
-// Parameters:
-// accountName - the Media Services account name.
-func (client MediaservicesClient) GetBySubscription(ctx context.Context, accountName string) (result Service, err error) {
- if tracing.IsEnabled() {
- ctx = tracing.StartSpan(ctx, fqdn+"/MediaservicesClient.GetBySubscription")
- defer func() {
- sc := -1
- if result.Response.Response != nil {
- sc = result.Response.Response.StatusCode
- }
- tracing.EndSpan(ctx, sc, err)
- }()
- }
- req, err := client.GetBySubscriptionPreparer(ctx, accountName)
- if err != nil {
- err = autorest.NewErrorWithError(err, "media.MediaservicesClient", "GetBySubscription", nil, "Failure preparing request")
- return
- }
-
- resp, err := client.GetBySubscriptionSender(req)
- if err != nil {
- result.Response = autorest.Response{Response: resp}
- err = autorest.NewErrorWithError(err, "media.MediaservicesClient", "GetBySubscription", resp, "Failure sending request")
- return
- }
-
- result, err = client.GetBySubscriptionResponder(resp)
- if err != nil {
- err = autorest.NewErrorWithError(err, "media.MediaservicesClient", "GetBySubscription", resp, "Failure responding to request")
- return
- }
-
- return
-}
-
-// GetBySubscriptionPreparer prepares the GetBySubscription request.
-func (client MediaservicesClient) GetBySubscriptionPreparer(ctx context.Context, accountName string) (*http.Request, error) {
- pathParameters := map[string]interface{}{
- "accountName": autorest.Encode("path", accountName),
- "subscriptionId": autorest.Encode("path", client.SubscriptionID),
- }
-
- const APIVersion = "2020-05-01"
- queryParameters := map[string]interface{}{
- "api-version": APIVersion,
- }
-
- preparer := autorest.CreatePreparer(
- autorest.AsGet(),
- autorest.WithBaseURL(client.BaseURI),
- autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Media/mediaservices/{accountName}", pathParameters),
- autorest.WithQueryParameters(queryParameters))
- return preparer.Prepare((&http.Request{}).WithContext(ctx))
-}
-
-// GetBySubscriptionSender sends the GetBySubscription request. The method will close the
-// http.Response Body if it receives an error.
-func (client MediaservicesClient) GetBySubscriptionSender(req *http.Request) (*http.Response, error) {
- return client.Send(req, azure.DoRetryWithRegistration(client.Client))
-}
-
-// GetBySubscriptionResponder handles the response to the GetBySubscription request. The method always
-// closes the http.Response Body.
-func (client MediaservicesClient) GetBySubscriptionResponder(resp *http.Response) (result Service, err error) {
- err = autorest.Respond(
- resp,
- azure.WithErrorUnlessStatusCode(http.StatusOK),
- autorest.ByUnmarshallingJSON(&result),
- autorest.ByClosing())
- result.Response = autorest.Response{Response: resp}
- return
-}
-
// List list Media Services accounts in the resource group
// Parameters:
// resourceGroupName - the name of the resource group within the Azure subscription.
@@ -383,7 +309,7 @@ func (client MediaservicesClient) ListPreparer(ctx context.Context, resourceGrou
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -496,7 +422,7 @@ func (client MediaservicesClient) ListBySubscriptionPreparer(ctx context.Context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -610,7 +536,7 @@ func (client MediaservicesClient) ListEdgePoliciesPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -689,7 +615,7 @@ func (client MediaservicesClient) SyncStorageKeysPreparer(ctx context.Context, r
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -726,7 +652,7 @@ func (client MediaservicesClient) SyncStorageKeysResponder(resp *http.Response)
// resourceGroupName - the name of the resource group within the Azure subscription.
// accountName - the Media Services account name.
// parameters - the request parameters
-func (client MediaservicesClient) Update(ctx context.Context, resourceGroupName string, accountName string, parameters Service) (result Service, err error) {
+func (client MediaservicesClient) Update(ctx context.Context, resourceGroupName string, accountName string, parameters ServiceUpdate) (result Service, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/MediaservicesClient.Update")
defer func() {
@@ -760,19 +686,18 @@ func (client MediaservicesClient) Update(ctx context.Context, resourceGroupName
}
// UpdatePreparer prepares the Update request.
-func (client MediaservicesClient) UpdatePreparer(ctx context.Context, resourceGroupName string, accountName string, parameters Service) (*http.Request, error) {
+func (client MediaservicesClient) UpdatePreparer(ctx context.Context, resourceGroupName string, accountName string, parameters ServiceUpdate) (*http.Request, error) {
pathParameters := map[string]interface{}{
"accountName": autorest.Encode("path", accountName),
"resourceGroupName": autorest.Encode("path", resourceGroupName),
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
- parameters.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPatch(),
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/models.go
similarity index 89%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/models.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/models.go
index 261778b9f4888..85a6214dcd996 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/models.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/models.go
@@ -19,11 +19,11 @@ import (
)
// The package's fully qualified name.
-const fqdn = "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media"
+const fqdn = "github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media"
// AacAudio describes Advanced Audio Codec (AAC) audio encoding settings.
type AacAudio struct {
- // Profile - The encoding profile to be used when encoding audio with AAC. Possible values include: 'AacLc', 'HeAacV1', 'HeAacV2'
+ // Profile - The encoding profile to be used when encoding audio with AAC. Possible values include: 'AacAudioProfileAacLc', 'AacAudioProfileHeAacV1', 'AacAudioProfileHeAacV2'
Profile AacAudioProfile `json:"profile,omitempty"`
// Channels - The number of channels in the audio.
Channels *int32 `json:"channels,omitempty"`
@@ -33,13 +33,13 @@ type AacAudio struct {
Bitrate *int32 `json:"bitrate,omitempty"`
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for AacAudio.
func (aa AacAudio) MarshalJSON() ([]byte, error) {
- aa.OdataType = OdataTypeMicrosoftMediaAacAudio
+ aa.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio
objectMap := make(map[string]interface{})
if aa.Profile != "" {
objectMap["profile"] = aa.Profile
@@ -143,13 +143,13 @@ func (aa AacAudio) AsBasicCodec() (BasicCodec, bool) {
type AbsoluteClipTime struct {
// Time - The time position on the timeline of the input media. It is usually specified as an ISO8601 period. e.g PT30S for 30 seconds.
Time *string `json:"time,omitempty"`
- // OdataType - Possible values include: 'OdataTypeClipTime', 'OdataTypeMicrosoftMediaAbsoluteClipTime', 'OdataTypeMicrosoftMediaUtcClipTime'
+ // OdataType - Possible values include: 'OdataTypeBasicClipTimeOdataTypeClipTime', 'OdataTypeBasicClipTimeOdataTypeMicrosoftMediaAbsoluteClipTime', 'OdataTypeBasicClipTimeOdataTypeMicrosoftMediaUtcClipTime'
OdataType OdataTypeBasicClipTime `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for AbsoluteClipTime.
func (act AbsoluteClipTime) MarshalJSON() ([]byte, error) {
- act.OdataType = OdataTypeMicrosoftMediaAbsoluteClipTime
+ act.OdataType = OdataTypeBasicClipTimeOdataTypeMicrosoftMediaAbsoluteClipTime
objectMap := make(map[string]interface{})
if act.Time != nil {
objectMap["time"] = act.Time
@@ -180,9 +180,17 @@ func (act AbsoluteClipTime) AsBasicClipTime() (BasicClipTime, bool) {
return &act, true
}
+// AccessControl ...
+type AccessControl struct {
+ // DefaultAction - The behavior for IP access control in Key Delivery. Possible values include: 'DefaultActionAllow', 'DefaultActionDeny'
+ DefaultAction DefaultAction `json:"defaultAction,omitempty"`
+ // IPAllowList - The IP allow list for access control in Key Delivery. If the default action is set to 'Allow', the IP allow list must be empty.
+ IPAllowList *[]string `json:"ipAllowList,omitempty"`
+}
+
// AccountEncryption ...
type AccountEncryption struct {
- // Type - The type of key used to encrypt the Account Key. Possible values include: 'SystemKey', 'CustomerKey'
+ // Type - The type of key used to encrypt the Account Key. Possible values include: 'AccountEncryptionKeyTypeSystemKey', 'AccountEncryptionKeyTypeCustomerKey'
Type AccountEncryptionKeyType `json:"type,omitempty"`
// KeyVaultProperties - The properties of the key used to encrypt the account.
KeyVaultProperties *KeyVaultProperties `json:"keyVaultProperties,omitempty"`
@@ -970,7 +978,7 @@ type AssetProperties struct {
Container *string `json:"container,omitempty"`
// StorageAccountName - The name of the storage account.
StorageAccountName *string `json:"storageAccountName,omitempty"`
- // StorageEncryptionFormat - READ-ONLY; The Asset encryption format. One of None or MediaStorageEncryption. Possible values include: 'None', 'MediaStorageClientEncryption'
+ // StorageEncryptionFormat - READ-ONLY; The Asset encryption format. One of None or MediaStorageEncryption. Possible values include: 'AssetStorageEncryptionFormatNone', 'AssetStorageEncryptionFormatMediaStorageClientEncryption'
StorageEncryptionFormat AssetStorageEncryptionFormat `json:"storageEncryptionFormat,omitempty"`
}
@@ -1012,6 +1020,12 @@ type AssetStreamingLocator struct {
DefaultContentKeyPolicyName *string `json:"defaultContentKeyPolicyName,omitempty"`
}
+// MarshalJSON is the custom marshaler for AssetStreamingLocator.
+func (asl AssetStreamingLocator) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// BasicAudio defines the common properties for all audio codecs.
type BasicAudio interface {
AsAacAudio() (*AacAudio, bool)
@@ -1028,7 +1042,7 @@ type Audio struct {
Bitrate *int32 `json:"bitrate,omitempty"`
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
@@ -1040,7 +1054,7 @@ func unmarshalBasicAudio(body []byte) (BasicAudio, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaAacAudio):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio):
var aa AacAudio
err := json.Unmarshal(body, &aa)
return aa, err
@@ -1071,7 +1085,7 @@ func unmarshalBasicAudioArray(body []byte) ([]BasicAudio, error) {
// MarshalJSON is the custom marshaler for Audio.
func (a Audio) MarshalJSON() ([]byte, error) {
- a.OdataType = OdataTypeMicrosoftMediaAudio
+ a.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio
objectMap := make(map[string]interface{})
if a.Channels != nil {
objectMap["channels"] = a.Channels
@@ -1179,11 +1193,11 @@ type BasicAudioAnalyzerPreset interface {
type AudioAnalyzerPreset struct {
// AudioLanguage - The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g: 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernable speech. If automatic detection fails to find the language, transcription would fallback to 'en-US'." The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
AudioLanguage *string `json:"audioLanguage,omitempty"`
- // Mode - Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen. Possible values include: 'Standard', 'Basic'
+ // Mode - Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen. Possible values include: 'AudioAnalysisModeStandard', 'AudioAnalysisModeBasic'
Mode AudioAnalysisMode `json:"mode,omitempty"`
// ExperimentalOptions - Dictionary containing key value pairs for parameters not exposed in the preset itself
ExperimentalOptions map[string]*string `json:"experimentalOptions"`
- // OdataType - Possible values include: 'OdataTypePreset', 'OdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeMicrosoftMediaVideoAnalyzerPreset'
+ // OdataType - Possible values include: 'OdataTypeBasicPresetOdataTypePreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset'
OdataType OdataTypeBasicPreset `json:"@odata.type,omitempty"`
}
@@ -1195,7 +1209,7 @@ func unmarshalBasicAudioAnalyzerPreset(body []byte) (BasicAudioAnalyzerPreset, e
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaVideoAnalyzerPreset):
+ case string(OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset):
var vap VideoAnalyzerPreset
err := json.Unmarshal(body, &vap)
return vap, err
@@ -1226,7 +1240,7 @@ func unmarshalBasicAudioAnalyzerPresetArray(body []byte) ([]BasicAudioAnalyzerPr
// MarshalJSON is the custom marshaler for AudioAnalyzerPreset.
func (aap AudioAnalyzerPreset) MarshalJSON() ([]byte, error) {
- aap.OdataType = OdataTypeMicrosoftMediaAudioAnalyzerPreset
+ aap.OdataType = OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset
objectMap := make(map[string]interface{})
if aap.AudioLanguage != nil {
objectMap["audioLanguage"] = aap.AudioLanguage
@@ -1297,13 +1311,13 @@ type AudioOverlay struct {
FadeOutDuration *string `json:"fadeOutDuration,omitempty"`
// AudioGainLevel - The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
AudioGainLevel *float64 `json:"audioGainLevel,omitempty"`
- // OdataType - Possible values include: 'OdataTypeOverlay', 'OdataTypeMicrosoftMediaAudioOverlay', 'OdataTypeMicrosoftMediaVideoOverlay'
+ // OdataType - Possible values include: 'OdataTypeBasicOverlayOdataTypeOverlay', 'OdataTypeBasicOverlayOdataTypeMicrosoftMediaAudioOverlay', 'OdataTypeBasicOverlayOdataTypeMicrosoftMediaVideoOverlay'
OdataType OdataTypeBasicOverlay `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for AudioOverlay.
func (ao AudioOverlay) MarshalJSON() ([]byte, error) {
- ao.OdataType = OdataTypeMicrosoftMediaAudioOverlay
+ ao.OdataType = OdataTypeBasicOverlayOdataTypeMicrosoftMediaAudioOverlay
objectMap := make(map[string]interface{})
if ao.InputLabel != nil {
objectMap["inputLabel"] = ao.InputLabel
@@ -1358,9 +1372,9 @@ type BasicAudioTrackDescriptor interface {
// AudioTrackDescriptor a TrackSelection to select audio tracks.
type AudioTrackDescriptor struct {
- // ChannelMapping - Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks. Possible values include: 'FrontLeft', 'FrontRight', 'Center', 'LowFrequencyEffects', 'BackLeft', 'BackRight', 'StereoLeft', 'StereoRight'
+ // ChannelMapping - Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks. Possible values include: 'ChannelMappingFrontLeft', 'ChannelMappingFrontRight', 'ChannelMappingCenter', 'ChannelMappingLowFrequencyEffects', 'ChannelMappingBackLeft', 'ChannelMappingBackRight', 'ChannelMappingStereoLeft', 'ChannelMappingStereoRight'
ChannelMapping ChannelMapping `json:"channelMapping,omitempty"`
- // OdataType - Possible values include: 'OdataTypeTrackDescriptor', 'OdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeMicrosoftMediaSelectVideoTrackByID'
+ // OdataType - Possible values include: 'OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID'
OdataType OdataTypeBasicTrackDescriptor `json:"@odata.type,omitempty"`
}
@@ -1372,11 +1386,11 @@ func unmarshalBasicAudioTrackDescriptor(body []byte) (BasicAudioTrackDescriptor,
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaSelectAudioTrackByAttribute):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute):
var satba SelectAudioTrackByAttribute
err := json.Unmarshal(body, &satba)
return satba, err
- case string(OdataTypeMicrosoftMediaSelectAudioTrackByID):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID):
var satbi SelectAudioTrackByID
err := json.Unmarshal(body, &satbi)
return satbi, err
@@ -1407,7 +1421,7 @@ func unmarshalBasicAudioTrackDescriptorArray(body []byte) ([]BasicAudioTrackDesc
// MarshalJSON is the custom marshaler for AudioTrackDescriptor.
func (atd AudioTrackDescriptor) MarshalJSON() ([]byte, error) {
- atd.OdataType = OdataTypeMicrosoftMediaAudioTrackDescriptor
+ atd.OdataType = OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor
objectMap := make(map[string]interface{})
if atd.ChannelMapping != "" {
objectMap["channelMapping"] = atd.ChannelMapping
@@ -1480,18 +1494,24 @@ type AzureEntityResource struct {
Type *string `json:"type,omitempty"`
}
+// MarshalJSON is the custom marshaler for AzureEntityResource.
+func (aer AzureEntityResource) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// BuiltInStandardEncoderPreset describes a built-in preset for encoding the input video with the Standard
// Encoder.
type BuiltInStandardEncoderPreset struct {
- // PresetName - The built-in preset to be used for encoding videos. Possible values include: 'H264SingleBitrateSD', 'H264SingleBitrate720p', 'H264SingleBitrate1080p', 'AdaptiveStreaming', 'AACGoodQualityAudio', 'ContentAwareEncodingExperimental', 'ContentAwareEncoding', 'CopyAllBitrateNonInterleaved', 'H264MultipleBitrate1080p', 'H264MultipleBitrate720p', 'H264MultipleBitrateSD', 'H265ContentAwareEncoding', 'H265AdaptiveStreaming', 'H265SingleBitrate720p', 'H265SingleBitrate1080p', 'H265SingleBitrate4K'
+ // PresetName - The built-in preset to be used for encoding videos. Possible values include: 'EncoderNamedPresetH264SingleBitrateSD', 'EncoderNamedPresetH264SingleBitrate720p', 'EncoderNamedPresetH264SingleBitrate1080p', 'EncoderNamedPresetAdaptiveStreaming', 'EncoderNamedPresetAACGoodQualityAudio', 'EncoderNamedPresetContentAwareEncodingExperimental', 'EncoderNamedPresetContentAwareEncoding', 'EncoderNamedPresetCopyAllBitrateNonInterleaved', 'EncoderNamedPresetH264MultipleBitrate1080p', 'EncoderNamedPresetH264MultipleBitrate720p', 'EncoderNamedPresetH264MultipleBitrateSD', 'EncoderNamedPresetH265ContentAwareEncoding', 'EncoderNamedPresetH265AdaptiveStreaming', 'EncoderNamedPresetH265SingleBitrate720p', 'EncoderNamedPresetH265SingleBitrate1080p', 'EncoderNamedPresetH265SingleBitrate4K'
PresetName EncoderNamedPreset `json:"presetName,omitempty"`
- // OdataType - Possible values include: 'OdataTypePreset', 'OdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeMicrosoftMediaVideoAnalyzerPreset'
+ // OdataType - Possible values include: 'OdataTypeBasicPresetOdataTypePreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset'
OdataType OdataTypeBasicPreset `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for BuiltInStandardEncoderPreset.
func (bisep BuiltInStandardEncoderPreset) MarshalJSON() ([]byte, error) {
- bisep.OdataType = OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset
+ bisep.OdataType = OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset
objectMap := make(map[string]interface{})
if bisep.PresetName != "" {
objectMap["presetName"] = bisep.PresetName
@@ -1581,7 +1601,7 @@ type BasicClipTime interface {
// ClipTime base class for specifying a clip time. Use sub classes of this class to specify the time position
// in the media.
type ClipTime struct {
- // OdataType - Possible values include: 'OdataTypeClipTime', 'OdataTypeMicrosoftMediaAbsoluteClipTime', 'OdataTypeMicrosoftMediaUtcClipTime'
+ // OdataType - Possible values include: 'OdataTypeBasicClipTimeOdataTypeClipTime', 'OdataTypeBasicClipTimeOdataTypeMicrosoftMediaAbsoluteClipTime', 'OdataTypeBasicClipTimeOdataTypeMicrosoftMediaUtcClipTime'
OdataType OdataTypeBasicClipTime `json:"@odata.type,omitempty"`
}
@@ -1593,11 +1613,11 @@ func unmarshalBasicClipTime(body []byte) (BasicClipTime, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaAbsoluteClipTime):
+ case string(OdataTypeBasicClipTimeOdataTypeMicrosoftMediaAbsoluteClipTime):
var act AbsoluteClipTime
err := json.Unmarshal(body, &act)
return act, err
- case string(OdataTypeMicrosoftMediaUtcClipTime):
+ case string(OdataTypeBasicClipTimeOdataTypeMicrosoftMediaUtcClipTime):
var uct UtcClipTime
err := json.Unmarshal(body, &uct)
return uct, err
@@ -1628,7 +1648,7 @@ func unmarshalBasicClipTimeArray(body []byte) ([]BasicClipTime, error) {
// MarshalJSON is the custom marshaler for ClipTime.
func (ct ClipTime) MarshalJSON() ([]byte, error) {
- ct.OdataType = OdataTypeClipTime
+ ct.OdataType = OdataTypeBasicClipTimeOdataTypeClipTime
objectMap := make(map[string]interface{})
if ct.OdataType != "" {
objectMap["@odata.type"] = ct.OdataType
@@ -1678,7 +1698,7 @@ type BasicCodec interface {
type Codec struct {
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
@@ -1690,43 +1710,43 @@ func unmarshalBasicCodec(body []byte) (BasicCodec, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaAudio):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio):
var a Audio
err := json.Unmarshal(body, &a)
return a, err
- case string(OdataTypeMicrosoftMediaAacAudio):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio):
var aa AacAudio
err := json.Unmarshal(body, &aa)
return aa, err
- case string(OdataTypeMicrosoftMediaVideo):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo):
var vVar Video
err := json.Unmarshal(body, &vVar)
return vVar, err
- case string(OdataTypeMicrosoftMediaH265Video):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video):
var hv H265Video
err := json.Unmarshal(body, &hv)
return hv, err
- case string(OdataTypeMicrosoftMediaCopyVideo):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo):
var cv CopyVideo
err := json.Unmarshal(body, &cv)
return cv, err
- case string(OdataTypeMicrosoftMediaImage):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaImage):
var i Image
err := json.Unmarshal(body, &i)
return i, err
- case string(OdataTypeMicrosoftMediaCopyAudio):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio):
var ca CopyAudio
err := json.Unmarshal(body, &ca)
return ca, err
- case string(OdataTypeMicrosoftMediaH264Video):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video):
var hv H264Video
err := json.Unmarshal(body, &hv)
return hv, err
- case string(OdataTypeMicrosoftMediaJpgImage):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage):
var ji JpgImage
err := json.Unmarshal(body, &ji)
return ji, err
- case string(OdataTypeMicrosoftMediaPngImage):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage):
var pi PngImage
err := json.Unmarshal(body, &pi)
return pi, err
@@ -1757,7 +1777,7 @@ func unmarshalBasicCodecArray(body []byte) ([]BasicCodec, error) {
// MarshalJSON is the custom marshaler for Codec.
func (c Codec) MarshalJSON() ([]byte, error) {
- c.OdataType = OdataTypeCodec
+ c.OdataType = OdataTypeBasicCodecOdataTypeCodec
objectMap := make(map[string]interface{})
if c.Label != nil {
objectMap["label"] = c.Label
@@ -1952,13 +1972,13 @@ func (ckp *ContentKeyPolicy) UnmarshalJSON(body []byte) error {
// ContentKeyPolicyClearKeyConfiguration represents a configuration for non-DRM keys.
type ContentKeyPolicyClearKeyConfiguration struct {
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
OdataType OdataTypeBasicContentKeyPolicyConfiguration `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyClearKeyConfiguration.
func (ckpckc ContentKeyPolicyClearKeyConfiguration) MarshalJSON() ([]byte, error) {
- ckpckc.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration
+ ckpckc.OdataType = OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration
objectMap := make(map[string]interface{})
if ckpckc.OdataType != "" {
objectMap["@odata.type"] = ckpckc.OdataType
@@ -2174,7 +2194,7 @@ type BasicContentKeyPolicyConfiguration interface {
// ContentKeyPolicyConfiguration base class for Content Key Policy configuration. A derived class must be used
// to create a configuration.
type ContentKeyPolicyConfiguration struct {
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
OdataType OdataTypeBasicContentKeyPolicyConfiguration `json:"@odata.type,omitempty"`
}
@@ -2186,23 +2206,23 @@ func unmarshalBasicContentKeyPolicyConfiguration(body []byte) (BasicContentKeyPo
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration):
+ case string(OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration):
var ckpckc ContentKeyPolicyClearKeyConfiguration
err := json.Unmarshal(body, &ckpckc)
return ckpckc, err
- case string(OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration):
+ case string(OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration):
var ckpuc ContentKeyPolicyUnknownConfiguration
err := json.Unmarshal(body, &ckpuc)
return ckpuc, err
- case string(OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration):
+ case string(OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration):
var ckpwc ContentKeyPolicyWidevineConfiguration
err := json.Unmarshal(body, &ckpwc)
return ckpwc, err
- case string(OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration):
+ case string(OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration):
var ckpprc ContentKeyPolicyPlayReadyConfiguration
err := json.Unmarshal(body, &ckpprc)
return ckpprc, err
- case string(OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration):
+ case string(OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration):
var ckpfpc ContentKeyPolicyFairPlayConfiguration
err := json.Unmarshal(body, &ckpfpc)
return ckpfpc, err
@@ -2233,7 +2253,7 @@ func unmarshalBasicContentKeyPolicyConfigurationArray(body []byte) ([]BasicConte
// MarshalJSON is the custom marshaler for ContentKeyPolicyConfiguration.
func (ckpc ContentKeyPolicyConfiguration) MarshalJSON() ([]byte, error) {
- ckpc.OdataType = OdataTypeContentKeyPolicyConfiguration
+ ckpc.OdataType = OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration
objectMap := make(map[string]interface{})
if ckpc.OdataType != "" {
objectMap["@odata.type"] = ckpc.OdataType
@@ -2284,19 +2304,19 @@ type ContentKeyPolicyFairPlayConfiguration struct {
FairPlayPfxPassword *string `json:"fairPlayPfxPassword,omitempty"`
// FairPlayPfx - The Base64 representation of FairPlay certificate in PKCS 12 (pfx) format (including private key).
FairPlayPfx *string `json:"fairPlayPfx,omitempty"`
- // RentalAndLeaseKeyType - The rental and lease key type. Possible values include: 'Unknown', 'Undefined', 'DualExpiry', 'PersistentUnlimited', 'PersistentLimited'
+ // RentalAndLeaseKeyType - The rental and lease key type. Possible values include: 'ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUnknown', 'ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeUndefined', 'ContentKeyPolicyFairPlayRentalAndLeaseKeyTypeDualExpiry', 'ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentUnlimited', 'ContentKeyPolicyFairPlayRentalAndLeaseKeyTypePersistentLimited'
RentalAndLeaseKeyType ContentKeyPolicyFairPlayRentalAndLeaseKeyType `json:"rentalAndLeaseKeyType,omitempty"`
// RentalDuration - The rental duration. Must be greater than or equal to 0.
RentalDuration *int64 `json:"rentalDuration,omitempty"`
// OfflineRentalConfiguration - Offline rental policy
OfflineRentalConfiguration *ContentKeyPolicyFairPlayOfflineRentalConfiguration `json:"offlineRentalConfiguration,omitempty"`
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
OdataType OdataTypeBasicContentKeyPolicyConfiguration `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyFairPlayConfiguration.
func (ckpfpc ContentKeyPolicyFairPlayConfiguration) MarshalJSON() ([]byte, error) {
- ckpfpc.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration
+ ckpfpc.OdataType = OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration
objectMap := make(map[string]interface{})
if ckpfpc.Ask != nil {
objectMap["ask"] = ckpfpc.Ask
@@ -2368,13 +2388,13 @@ type ContentKeyPolicyFairPlayOfflineRentalConfiguration struct {
// ContentKeyPolicyOpenRestriction represents an open restriction. License or key will be delivered on
// every request.
type ContentKeyPolicyOpenRestriction struct {
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeContentKeyPolicyRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction'
OdataType OdataTypeBasicContentKeyPolicyRestriction `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyOpenRestriction.
func (ckpor ContentKeyPolicyOpenRestriction) MarshalJSON() ([]byte, error) {
- ckpor.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction
+ ckpor.OdataType = OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction
objectMap := make(map[string]interface{})
if ckpor.OdataType != "" {
objectMap["@odata.type"] = ckpor.OdataType
@@ -2485,13 +2505,13 @@ type ContentKeyPolicyPlayReadyConfiguration struct {
Licenses *[]ContentKeyPolicyPlayReadyLicense `json:"licenses,omitempty"`
// ResponseCustomData - The custom response data.
ResponseCustomData *string `json:"responseCustomData,omitempty"`
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
OdataType OdataTypeBasicContentKeyPolicyConfiguration `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyPlayReadyConfiguration.
func (ckpprc ContentKeyPolicyPlayReadyConfiguration) MarshalJSON() ([]byte, error) {
- ckpprc.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration
+ ckpprc.OdataType = OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration
objectMap := make(map[string]interface{})
if ckpprc.Licenses != nil {
objectMap["licenses"] = ckpprc.Licenses
@@ -2911,7 +2931,7 @@ type BasicContentKeyPolicyRestriction interface {
// ContentKeyPolicyRestriction base class for Content Key Policy restrictions. A derived class must be used to
// create a restriction.
type ContentKeyPolicyRestriction struct {
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeContentKeyPolicyRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction'
OdataType OdataTypeBasicContentKeyPolicyRestriction `json:"@odata.type,omitempty"`
}
@@ -2923,15 +2943,15 @@ func unmarshalBasicContentKeyPolicyRestriction(body []byte) (BasicContentKeyPoli
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction):
+ case string(OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction):
var ckpor ContentKeyPolicyOpenRestriction
err := json.Unmarshal(body, &ckpor)
return ckpor, err
- case string(OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction):
+ case string(OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction):
var ckpur ContentKeyPolicyUnknownRestriction
err := json.Unmarshal(body, &ckpur)
return ckpur, err
- case string(OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction):
+ case string(OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction):
var ckptr ContentKeyPolicyTokenRestriction
err := json.Unmarshal(body, &ckptr)
return ckptr, err
@@ -2962,7 +2982,7 @@ func unmarshalBasicContentKeyPolicyRestrictionArray(body []byte) ([]BasicContent
// MarshalJSON is the custom marshaler for ContentKeyPolicyRestriction.
func (ckpr ContentKeyPolicyRestriction) MarshalJSON() ([]byte, error) {
- ckpr.OdataType = OdataTypeContentKeyPolicyRestriction
+ ckpr.OdataType = OdataTypeBasicContentKeyPolicyRestrictionOdataTypeContentKeyPolicyRestriction
objectMap := make(map[string]interface{})
if ckpr.OdataType != "" {
objectMap["@odata.type"] = ckpr.OdataType
@@ -3007,7 +3027,7 @@ type BasicContentKeyPolicyRestrictionTokenKey interface {
// ContentKeyPolicyRestrictionTokenKey base class for Content Key Policy key for token validation. A derived
// class must be used to create a token key.
type ContentKeyPolicyRestrictionTokenKey struct {
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyRestrictionTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeContentKeyPolicyRestrictionTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey'
OdataType OdataTypeBasicContentKeyPolicyRestrictionTokenKey `json:"@odata.type,omitempty"`
}
@@ -3019,15 +3039,15 @@ func unmarshalBasicContentKeyPolicyRestrictionTokenKey(body []byte) (BasicConten
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey):
+ case string(OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey):
var ckpstk ContentKeyPolicySymmetricTokenKey
err := json.Unmarshal(body, &ckpstk)
return ckpstk, err
- case string(OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey):
+ case string(OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey):
var ckprtk ContentKeyPolicyRsaTokenKey
err := json.Unmarshal(body, &ckprtk)
return ckprtk, err
- case string(OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey):
+ case string(OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey):
var ckpxctk ContentKeyPolicyX509CertificateTokenKey
err := json.Unmarshal(body, &ckpxctk)
return ckpxctk, err
@@ -3058,7 +3078,7 @@ func unmarshalBasicContentKeyPolicyRestrictionTokenKeyArray(body []byte) ([]Basi
// MarshalJSON is the custom marshaler for ContentKeyPolicyRestrictionTokenKey.
func (ckprtk ContentKeyPolicyRestrictionTokenKey) MarshalJSON() ([]byte, error) {
- ckprtk.OdataType = OdataTypeContentKeyPolicyRestrictionTokenKey
+ ckprtk.OdataType = OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeContentKeyPolicyRestrictionTokenKey
objectMap := make(map[string]interface{})
if ckprtk.OdataType != "" {
objectMap["@odata.type"] = ckprtk.OdataType
@@ -3097,13 +3117,13 @@ type ContentKeyPolicyRsaTokenKey struct {
Exponent *[]byte `json:"exponent,omitempty"`
// Modulus - The RSA Parameter modulus
Modulus *[]byte `json:"modulus,omitempty"`
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyRestrictionTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeContentKeyPolicyRestrictionTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey'
OdataType OdataTypeBasicContentKeyPolicyRestrictionTokenKey `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyRsaTokenKey.
func (ckprtk ContentKeyPolicyRsaTokenKey) MarshalJSON() ([]byte, error) {
- ckprtk.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey
+ ckprtk.OdataType = OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey
objectMap := make(map[string]interface{})
if ckprtk.Exponent != nil {
objectMap["exponent"] = ckprtk.Exponent
@@ -3146,13 +3166,13 @@ func (ckprtk ContentKeyPolicyRsaTokenKey) AsBasicContentKeyPolicyRestrictionToke
type ContentKeyPolicySymmetricTokenKey struct {
// KeyValue - The key value of the key
KeyValue *[]byte `json:"keyValue,omitempty"`
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyRestrictionTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeContentKeyPolicyRestrictionTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey'
OdataType OdataTypeBasicContentKeyPolicyRestrictionTokenKey `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicySymmetricTokenKey.
func (ckpstk ContentKeyPolicySymmetricTokenKey) MarshalJSON() ([]byte, error) {
- ckpstk.OdataType = OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey
+ ckpstk.OdataType = OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey
objectMap := make(map[string]interface{})
if ckpstk.KeyValue != nil {
objectMap["keyValue"] = ckpstk.KeyValue
@@ -3213,13 +3233,13 @@ type ContentKeyPolicyTokenRestriction struct {
RestrictionTokenType ContentKeyPolicyRestrictionTokenType `json:"restrictionTokenType,omitempty"`
// OpenIDConnectDiscoveryDocument - The OpenID connect discovery document.
OpenIDConnectDiscoveryDocument *string `json:"openIdConnectDiscoveryDocument,omitempty"`
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeContentKeyPolicyRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction'
OdataType OdataTypeBasicContentKeyPolicyRestriction `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyTokenRestriction.
func (ckptr ContentKeyPolicyTokenRestriction) MarshalJSON() ([]byte, error) {
- ckptr.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction
+ ckptr.OdataType = OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction
objectMap := make(map[string]interface{})
if ckptr.Issuer != nil {
objectMap["issuer"] = ckptr.Issuer
@@ -3359,13 +3379,13 @@ func (ckptr *ContentKeyPolicyTokenRestriction) UnmarshalJSON(body []byte) error
// ContentKeyPolicyUnknownConfiguration represents a ContentKeyPolicyConfiguration that is unavailable in
// the current API version.
type ContentKeyPolicyUnknownConfiguration struct {
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
OdataType OdataTypeBasicContentKeyPolicyConfiguration `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyUnknownConfiguration.
func (ckpuc ContentKeyPolicyUnknownConfiguration) MarshalJSON() ([]byte, error) {
- ckpuc.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration
+ ckpuc.OdataType = OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration
objectMap := make(map[string]interface{})
if ckpuc.OdataType != "" {
objectMap["@odata.type"] = ckpuc.OdataType
@@ -3411,13 +3431,13 @@ func (ckpuc ContentKeyPolicyUnknownConfiguration) AsBasicContentKeyPolicyConfigu
// ContentKeyPolicyUnknownRestriction represents a ContentKeyPolicyRestriction that is unavailable in the
// current API version.
type ContentKeyPolicyUnknownRestriction struct {
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction', 'OdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeContentKeyPolicyRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyOpenRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction', 'OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyTokenRestriction'
OdataType OdataTypeBasicContentKeyPolicyRestriction `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyUnknownRestriction.
func (ckpur ContentKeyPolicyUnknownRestriction) MarshalJSON() ([]byte, error) {
- ckpur.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction
+ ckpur.OdataType = OdataTypeBasicContentKeyPolicyRestrictionOdataTypeMicrosoftMediaContentKeyPolicyUnknownRestriction
objectMap := make(map[string]interface{})
if ckpur.OdataType != "" {
objectMap["@odata.type"] = ckpur.OdataType
@@ -3454,13 +3474,13 @@ func (ckpur ContentKeyPolicyUnknownRestriction) AsBasicContentKeyPolicyRestricti
type ContentKeyPolicyWidevineConfiguration struct {
// WidevineTemplate - The Widevine template.
WidevineTemplate *string `json:"widevineTemplate,omitempty"`
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeContentKeyPolicyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyClearKeyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyUnknownConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyPlayReadyConfiguration', 'OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyFairPlayConfiguration'
OdataType OdataTypeBasicContentKeyPolicyConfiguration `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyWidevineConfiguration.
func (ckpwc ContentKeyPolicyWidevineConfiguration) MarshalJSON() ([]byte, error) {
- ckpwc.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration
+ ckpwc.OdataType = OdataTypeBasicContentKeyPolicyConfigurationOdataTypeMicrosoftMediaContentKeyPolicyWidevineConfiguration
objectMap := make(map[string]interface{})
if ckpwc.WidevineTemplate != nil {
objectMap["widevineTemplate"] = ckpwc.WidevineTemplate
@@ -3510,13 +3530,13 @@ func (ckpwc ContentKeyPolicyWidevineConfiguration) AsBasicContentKeyPolicyConfig
type ContentKeyPolicyX509CertificateTokenKey struct {
// RawBody - The raw data field of a certificate in PKCS 12 format (X509Certificate2 in .NET)
RawBody *[]byte `json:"rawBody,omitempty"`
- // OdataType - Possible values include: 'OdataTypeContentKeyPolicyRestrictionTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey', 'OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey'
+ // OdataType - Possible values include: 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeContentKeyPolicyRestrictionTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicySymmetricTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyRsaTokenKey', 'OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey'
OdataType OdataTypeBasicContentKeyPolicyRestrictionTokenKey `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for ContentKeyPolicyX509CertificateTokenKey.
func (ckpxctk ContentKeyPolicyX509CertificateTokenKey) MarshalJSON() ([]byte, error) {
- ckpxctk.OdataType = OdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey
+ ckpxctk.OdataType = OdataTypeBasicContentKeyPolicyRestrictionTokenKeyOdataTypeMicrosoftMediaContentKeyPolicyX509CertificateTokenKey
objectMap := make(map[string]interface{})
if ckpxctk.RawBody != nil {
objectMap["rawBody"] = ckpxctk.RawBody
@@ -3556,13 +3576,13 @@ func (ckpxctk ContentKeyPolicyX509CertificateTokenKey) AsBasicContentKeyPolicyRe
type CopyAudio struct {
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for CopyAudio.
func (ca CopyAudio) MarshalJSON() ([]byte, error) {
- ca.OdataType = OdataTypeMicrosoftMediaCopyAudio
+ ca.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio
objectMap := make(map[string]interface{})
if ca.Label != nil {
objectMap["label"] = ca.Label
@@ -3652,13 +3672,13 @@ func (ca CopyAudio) AsBasicCodec() (BasicCodec, bool) {
type CopyVideo struct {
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for CopyVideo.
func (cv CopyVideo) MarshalJSON() ([]byte, error) {
- cv.OdataType = OdataTypeMicrosoftMediaCopyVideo
+ cv.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo
objectMap := make(map[string]interface{})
if cv.Label != nil {
objectMap["label"] = cv.Label
@@ -3762,9 +3782,9 @@ type DefaultKey struct {
// Deinterlace describes the de-interlacing settings.
type Deinterlace struct {
- // Parity - The field parity for de-interlacing, defaults to Auto. Possible values include: 'Auto', 'TopFieldFirst', 'BottomFieldFirst'
+ // Parity - The field parity for de-interlacing, defaults to Auto. Possible values include: 'DeinterlaceParityAuto', 'DeinterlaceParityTopFieldFirst', 'DeinterlaceParityBottomFieldFirst'
Parity DeinterlaceParity `json:"parity,omitempty"`
- // Mode - The deinterlacing mode. Defaults to AutoPixelAdaptive. Possible values include: 'Off', 'AutoPixelAdaptive'
+ // Mode - The deinterlacing mode. Defaults to AutoPixelAdaptive. Possible values include: 'DeinterlaceModeOff', 'DeinterlaceModeAutoPixelAdaptive'
Mode DeinterlaceMode `json:"mode,omitempty"`
}
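The renames above (e.g. `'Auto'` becoming `'DeinterlaceParityAuto'`) follow from the generator prefixing every enum constant with its type name. The reason is scoping: Go constants live at package level, so two enums that share a value name cannot both export a bare constant. A minimal sketch, using a hypothetical second enum (`CaptionMode` is not part of this SDK) to show the collision the prefix avoids:

```go
package main

import "fmt"

// DeinterlaceMode mirrors the SDK enum; the constant carries the type-name
// prefix so it cannot collide with same-named values from other enums.
type DeinterlaceMode string

const DeinterlaceModeOff DeinterlaceMode = "Off"

// CaptionMode is a hypothetical enum for illustration. Without prefixes,
// both enums would need a package-level constant named Off, which Go forbids.
type CaptionMode string

const CaptionModeOff CaptionMode = "Off"

func main() {
	// Both constants coexist while keeping the same wire value "Off".
	fmt.Println(DeinterlaceModeOff, CaptionModeOff)
}
```

The JSON wire values are unchanged by the rename; only the Go identifiers differ, which is why the diff touches doc comments and assignments but not struct tags.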
@@ -3834,21 +3854,21 @@ type EnvelopeEncryption struct {
// FaceDetectorPreset describes all the settings to be used when analyzing a video in order to detect (and
// optionally redact) all the faces present.
type FaceDetectorPreset struct {
- // Resolution - Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected. Possible values include: 'SourceResolution', 'StandardDefinition'
+ // Resolution - Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected. Possible values include: 'AnalysisResolutionSourceResolution', 'AnalysisResolutionStandardDefinition'
Resolution AnalysisResolution `json:"resolution,omitempty"`
- // Mode - This mode provides the ability to choose between the following settings: 1) Analyze - For detection only.This mode generates a metadata JSON file marking appearances of faces throughout the video.Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts(blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces.It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction. Possible values include: 'Analyze', 'Redact', 'Combined'
+ // Mode - This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction. Possible values include: 'FaceRedactorModeAnalyze', 'FaceRedactorModeRedact', 'FaceRedactorModeCombined'
Mode FaceRedactorMode `json:"mode,omitempty"`
- // BlurType - Blur type. Possible values include: 'Box', 'Low', 'Med', 'High', 'Black'
+ // BlurType - Blur type. Possible values include: 'BlurTypeBox', 'BlurTypeLow', 'BlurTypeMed', 'BlurTypeHigh', 'BlurTypeBlack'
BlurType BlurType `json:"blurType,omitempty"`
// ExperimentalOptions - Dictionary containing key value pairs for parameters not exposed in the preset itself
ExperimentalOptions map[string]*string `json:"experimentalOptions"`
- // OdataType - Possible values include: 'OdataTypePreset', 'OdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeMicrosoftMediaVideoAnalyzerPreset'
+ // OdataType - Possible values include: 'OdataTypeBasicPresetOdataTypePreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset'
OdataType OdataTypeBasicPreset `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for FaceDetectorPreset.
func (fdp FaceDetectorPreset) MarshalJSON() ([]byte, error) {
- fdp.OdataType = OdataTypeMicrosoftMediaFaceDetectorPreset
+ fdp.OdataType = OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset
objectMap := make(map[string]interface{})
if fdp.Resolution != "" {
objectMap["resolution"] = fdp.Resolution
@@ -3987,7 +4007,7 @@ type FilterTrackPropertyCondition struct {
Property FilterTrackPropertyType `json:"property,omitempty"`
// Value - The track property value.
Value *string `json:"value,omitempty"`
- // Operation - The track property condition operation. Possible values include: 'Equal', 'NotEqual'
+ // Operation - The track property condition operation. Possible values include: 'FilterTrackPropertyCompareOperationEqual', 'FilterTrackPropertyCompareOperationNotEqual'
Operation FilterTrackPropertyCompareOperation `json:"operation,omitempty"`
}
@@ -4021,7 +4041,7 @@ type BasicFormat interface {
type Format struct {
// FilenamePattern - The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name(the file suffix is not included) of the input video file is less than 32 characters long, the base name of input video files will be used. If the length of base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
FilenamePattern *string `json:"filenamePattern,omitempty"`
- // OdataType - Possible values include: 'OdataTypeFormat', 'OdataTypeMicrosoftMediaImageFormat', 'OdataTypeMicrosoftMediaJpgFormat', 'OdataTypeMicrosoftMediaPngFormat', 'OdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeMicrosoftMediaMp4Format', 'OdataTypeMicrosoftMediaTransportStreamFormat'
+ // OdataType - Possible values include: 'OdataTypeBasicFormatOdataTypeFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat'
OdataType OdataTypeBasicFormat `json:"@odata.type,omitempty"`
}
@@ -4033,27 +4053,27 @@ func unmarshalBasicFormat(body []byte) (BasicFormat, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaImageFormat):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat):
var ifVar ImageFormat
err := json.Unmarshal(body, &ifVar)
return ifVar, err
- case string(OdataTypeMicrosoftMediaJpgFormat):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat):
var jf JpgFormat
err := json.Unmarshal(body, &jf)
return jf, err
- case string(OdataTypeMicrosoftMediaPngFormat):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat):
var pf PngFormat
err := json.Unmarshal(body, &pf)
return pf, err
- case string(OdataTypeMicrosoftMediaMultiBitrateFormat):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat):
var mbf MultiBitrateFormat
err := json.Unmarshal(body, &mbf)
return mbf, err
- case string(OdataTypeMicrosoftMediaMp4Format):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format):
var m4f Mp4Format
err := json.Unmarshal(body, &m4f)
return m4f, err
- case string(OdataTypeMicrosoftMediaTransportStreamFormat):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat):
var tsf TransportStreamFormat
err := json.Unmarshal(body, &tsf)
return tsf, err
@@ -4084,7 +4104,7 @@ func unmarshalBasicFormatArray(body []byte) ([]BasicFormat, error) {
// MarshalJSON is the custom marshaler for Format.
func (f Format) MarshalJSON() ([]byte, error) {
- f.OdataType = OdataTypeFormat
+ f.OdataType = OdataTypeBasicFormatOdataTypeFormat
objectMap := make(map[string]interface{})
if f.FilenamePattern != nil {
objectMap["filenamePattern"] = f.FilenamePattern
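The `unmarshalBasicFormat` / `MarshalJSON` pair above implements the generator's discriminated-union pattern: marshaling stamps the `@odata.type` discriminator before encoding, and unmarshaling reads the raw object once, switches on that field, then decodes into the concrete type. A minimal self-contained sketch of the same pattern, with stand-in types (`jpgFormat`/`pngFormat` here are simplified stand-ins, not the generated structs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Wire values for the "@odata.type" discriminator, as used by the Media
// Services REST API.
const (
	odataTypeJpg = "#Microsoft.Media.JpgFormat"
	odataTypePng = "#Microsoft.Media.PngFormat"
)

type jpgFormat struct {
	FilenamePattern string `json:"filenamePattern,omitempty"`
	OdataType       string `json:"@odata.type,omitempty"`
}

type pngFormat struct {
	FilenamePattern string `json:"filenamePattern,omitempty"`
	OdataType       string `json:"@odata.type,omitempty"`
}

// unmarshalFormat mirrors unmarshalBasicFormat: decode into a generic map,
// switch on the discriminator, then decode the same body into the concrete
// type that matches.
func unmarshalFormat(body []byte) (interface{}, error) {
	var m map[string]interface{}
	if err := json.Unmarshal(body, &m); err != nil {
		return nil, err
	}
	switch m["@odata.type"] {
	case odataTypeJpg:
		var jf jpgFormat
		err := json.Unmarshal(body, &jf)
		return jf, err
	case odataTypePng:
		var pf pngFormat
		err := json.Unmarshal(body, &pf)
		return pf, err
	}
	return nil, fmt.Errorf("unrecognized @odata.type %v", m["@odata.type"])
}

func main() {
	v, err := unmarshalFormat([]byte(`{"@odata.type":"#Microsoft.Media.JpgFormat","filenamePattern":"{Basename}{Extension}"}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%T\n", v) // prints the concrete type, main.jpgFormat
}
```

This also shows why the custom marshalers set `OdataType` unconditionally: the receiver is a value copy, so stamping the discriminator inside `MarshalJSON` guarantees the wire value is correct even if the caller left the field zero.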
@@ -4151,13 +4171,13 @@ func (f Format) AsBasicFormat() (BasicFormat, bool) {
type FromAllInputFile struct {
// IncludedTracks - The list of TrackDescriptors which define the metadata and selection of tracks in the input.
IncludedTracks *[]BasicTrackDescriptor `json:"includedTracks,omitempty"`
- // OdataType - Possible values include: 'OdataTypeInputDefinition', 'OdataTypeMicrosoftMediaFromAllInputFile', 'OdataTypeMicrosoftMediaFromEachInputFile', 'OdataTypeMicrosoftMediaInputFile'
+ // OdataType - Possible values include: 'OdataTypeBasicInputDefinitionOdataTypeInputDefinition', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile'
OdataType OdataTypeBasicInputDefinition `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for FromAllInputFile.
func (faif FromAllInputFile) MarshalJSON() ([]byte, error) {
- faif.OdataType = OdataTypeMicrosoftMediaFromAllInputFile
+ faif.OdataType = OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile
objectMap := make(map[string]interface{})
if faif.IncludedTracks != nil {
objectMap["includedTracks"] = faif.IncludedTracks
@@ -4231,13 +4251,13 @@ func (faif *FromAllInputFile) UnmarshalJSON(body []byte) error {
type FromEachInputFile struct {
// IncludedTracks - The list of TrackDescriptors which define the metadata and selection of tracks in the input.
IncludedTracks *[]BasicTrackDescriptor `json:"includedTracks,omitempty"`
- // OdataType - Possible values include: 'OdataTypeInputDefinition', 'OdataTypeMicrosoftMediaFromAllInputFile', 'OdataTypeMicrosoftMediaFromEachInputFile', 'OdataTypeMicrosoftMediaInputFile'
+ // OdataType - Possible values include: 'OdataTypeBasicInputDefinitionOdataTypeInputDefinition', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile'
OdataType OdataTypeBasicInputDefinition `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for FromEachInputFile.
func (feif FromEachInputFile) MarshalJSON() ([]byte, error) {
- feif.OdataType = OdataTypeMicrosoftMediaFromEachInputFile
+ feif.OdataType = OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile
objectMap := make(map[string]interface{})
if feif.IncludedTracks != nil {
objectMap["includedTracks"] = feif.IncludedTracks
@@ -4316,7 +4336,7 @@ type H264Layer struct {
BufferWindow *string `json:"bufferWindow,omitempty"`
// ReferenceFrames - The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
ReferenceFrames *int32 `json:"referenceFrames,omitempty"`
- // EntropyMode - The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level. Possible values include: 'Cabac', 'Cavlc'
+ // EntropyMode - The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level. Possible values include: 'EntropyModeCabac', 'EntropyModeCavlc'
EntropyMode EntropyMode `json:"entropyMode,omitempty"`
// Bitrate - The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
Bitrate *int32 `json:"bitrate,omitempty"`
@@ -4336,13 +4356,13 @@ type H264Layer struct {
Height *string `json:"height,omitempty"`
// Label - The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeLayer', 'OdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeMicrosoftMediaH265Layer', 'OdataTypeMicrosoftMediaVideoLayer', 'OdataTypeMicrosoftMediaH264Layer', 'OdataTypeMicrosoftMediaJpgLayer', 'OdataTypeMicrosoftMediaPngLayer'
+ // OdataType - Possible values include: 'OdataTypeBasicLayerOdataTypeLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer'
OdataType OdataTypeBasicLayer `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for H264Layer.
func (hl H264Layer) MarshalJSON() ([]byte, error) {
- hl.OdataType = OdataTypeMicrosoftMediaH264Layer
+ hl.OdataType = OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer
objectMap := make(map[string]interface{})
if hl.Profile != "" {
objectMap["profile"] = hl.Profile
@@ -4446,7 +4466,7 @@ func (hl H264Layer) AsBasicLayer() (BasicLayer, bool) {
type H264Video struct {
// SceneChangeDetection - Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
SceneChangeDetection *bool `json:"sceneChangeDetection,omitempty"`
- // Complexity - Tells the encoder how to choose its encoding settings. The default value is Balanced. Possible values include: 'Speed', 'Balanced', 'Quality'
+ // Complexity - Tells the encoder how to choose its encoding settings. The default value is Balanced. Possible values include: 'H264ComplexitySpeed', 'H264ComplexityBalanced', 'H264ComplexityQuality'
Complexity H264Complexity `json:"complexity,omitempty"`
// Layers - The collection of output H.264 layers to be produced by the encoder.
Layers *[]H264Layer `json:"layers,omitempty"`
@@ -4458,13 +4478,13 @@ type H264Video struct {
SyncMode VideoSyncMode `json:"syncMode,omitempty"`
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for H264Video.
func (hv H264Video) MarshalJSON() ([]byte, error) {
- hv.OdataType = OdataTypeMicrosoftMediaH264Video
+ hv.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video
objectMap := make(map[string]interface{})
if hv.SceneChangeDetection != nil {
objectMap["sceneChangeDetection"] = hv.SceneChangeDetection
@@ -4597,13 +4617,13 @@ type H265Layer struct {
Height *string `json:"height,omitempty"`
// Label - The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeLayer', 'OdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeMicrosoftMediaH265Layer', 'OdataTypeMicrosoftMediaVideoLayer', 'OdataTypeMicrosoftMediaH264Layer', 'OdataTypeMicrosoftMediaJpgLayer', 'OdataTypeMicrosoftMediaPngLayer'
+ // OdataType - Possible values include: 'OdataTypeBasicLayerOdataTypeLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer'
OdataType OdataTypeBasicLayer `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for H265Layer.
func (hl H265Layer) MarshalJSON() ([]byte, error) {
- hl.OdataType = OdataTypeMicrosoftMediaH265Layer
+ hl.OdataType = OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer
objectMap := make(map[string]interface{})
if hl.Profile != "" {
objectMap["profile"] = hl.Profile
@@ -4716,13 +4736,13 @@ type H265Video struct {
SyncMode VideoSyncMode `json:"syncMode,omitempty"`
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for H265Video.
func (hv H265Video) MarshalJSON() ([]byte, error) {
- hv.OdataType = OdataTypeMicrosoftMediaH265Video
+ hv.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video
objectMap := make(map[string]interface{})
if hv.SceneChangeDetection != nil {
objectMap["sceneChangeDetection"] = hv.SceneChangeDetection
@@ -4854,7 +4874,7 @@ type H265VideoLayer struct {
Height *string `json:"height,omitempty"`
// Label - The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeLayer', 'OdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeMicrosoftMediaH265Layer', 'OdataTypeMicrosoftMediaVideoLayer', 'OdataTypeMicrosoftMediaH264Layer', 'OdataTypeMicrosoftMediaJpgLayer', 'OdataTypeMicrosoftMediaPngLayer'
+ // OdataType - Possible values include: 'OdataTypeBasicLayerOdataTypeLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer'
OdataType OdataTypeBasicLayer `json:"@odata.type,omitempty"`
}
@@ -4866,7 +4886,7 @@ func unmarshalBasicH265VideoLayer(body []byte) (BasicH265VideoLayer, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaH265Layer):
+ case string(OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer):
var hl H265Layer
err := json.Unmarshal(body, &hl)
return hl, err
@@ -4897,7 +4917,7 @@ func unmarshalBasicH265VideoLayerArray(body []byte) ([]BasicH265VideoLayer, erro
// MarshalJSON is the custom marshaler for H265VideoLayer.
func (hvl H265VideoLayer) MarshalJSON() ([]byte, error) {
- hvl.OdataType = OdataTypeMicrosoftMediaH265VideoLayer
+ hvl.OdataType = OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer
objectMap := make(map[string]interface{})
if hvl.Bitrate != nil {
objectMap["bitrate"] = hvl.Bitrate
@@ -5011,7 +5031,7 @@ type Image struct {
SyncMode VideoSyncMode `json:"syncMode,omitempty"`
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
@@ -5023,11 +5043,11 @@ func unmarshalBasicImage(body []byte) (BasicImage, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaJpgImage):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage):
var ji JpgImage
err := json.Unmarshal(body, &ji)
return ji, err
- case string(OdataTypeMicrosoftMediaPngImage):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage):
var pi PngImage
err := json.Unmarshal(body, &pi)
return pi, err
@@ -5058,7 +5078,7 @@ func unmarshalBasicImageArray(body []byte) ([]BasicImage, error) {
// MarshalJSON is the custom marshaler for Image.
func (i Image) MarshalJSON() ([]byte, error) {
- i.OdataType = OdataTypeMicrosoftMediaImage
+ i.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaImage
objectMap := make(map[string]interface{})
if i.Start != nil {
objectMap["start"] = i.Start
@@ -5173,7 +5193,7 @@ type BasicImageFormat interface {
type ImageFormat struct {
// FilenamePattern - The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name(the file suffix is not included) of the input video file is less than 32 characters long, the base name of input video files will be used. If the length of base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
FilenamePattern *string `json:"filenamePattern,omitempty"`
- // OdataType - Possible values include: 'OdataTypeFormat', 'OdataTypeMicrosoftMediaImageFormat', 'OdataTypeMicrosoftMediaJpgFormat', 'OdataTypeMicrosoftMediaPngFormat', 'OdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeMicrosoftMediaMp4Format', 'OdataTypeMicrosoftMediaTransportStreamFormat'
+ // OdataType - Possible values include: 'OdataTypeBasicFormatOdataTypeFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat'
OdataType OdataTypeBasicFormat `json:"@odata.type,omitempty"`
}
@@ -5185,11 +5205,11 @@ func unmarshalBasicImageFormat(body []byte) (BasicImageFormat, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaJpgFormat):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat):
var jf JpgFormat
err := json.Unmarshal(body, &jf)
return jf, err
- case string(OdataTypeMicrosoftMediaPngFormat):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat):
var pf PngFormat
err := json.Unmarshal(body, &pf)
return pf, err
@@ -5220,7 +5240,7 @@ func unmarshalBasicImageFormatArray(body []byte) ([]BasicImageFormat, error) {
// MarshalJSON is the custom marshaler for ImageFormat.
func (ifVar ImageFormat) MarshalJSON() ([]byte, error) {
- ifVar.OdataType = OdataTypeMicrosoftMediaImageFormat
+ ifVar.OdataType = OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat
objectMap := make(map[string]interface{})
if ifVar.FilenamePattern != nil {
objectMap["filenamePattern"] = ifVar.FilenamePattern
@@ -5295,7 +5315,7 @@ type BasicInputDefinition interface {
type InputDefinition struct {
// IncludedTracks - The list of TrackDescriptors which define the metadata and selection of tracks in the input.
IncludedTracks *[]BasicTrackDescriptor `json:"includedTracks,omitempty"`
- // OdataType - Possible values include: 'OdataTypeInputDefinition', 'OdataTypeMicrosoftMediaFromAllInputFile', 'OdataTypeMicrosoftMediaFromEachInputFile', 'OdataTypeMicrosoftMediaInputFile'
+ // OdataType - Possible values include: 'OdataTypeBasicInputDefinitionOdataTypeInputDefinition', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile'
OdataType OdataTypeBasicInputDefinition `json:"@odata.type,omitempty"`
}
@@ -5307,15 +5327,15 @@ func unmarshalBasicInputDefinition(body []byte) (BasicInputDefinition, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaFromAllInputFile):
+ case string(OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile):
var faif FromAllInputFile
err := json.Unmarshal(body, &faif)
return faif, err
- case string(OdataTypeMicrosoftMediaFromEachInputFile):
+ case string(OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile):
var feif FromEachInputFile
err := json.Unmarshal(body, &feif)
return feif, err
- case string(OdataTypeMicrosoftMediaInputFile):
+ case string(OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile):
var ifVar InputFile
err := json.Unmarshal(body, &ifVar)
return ifVar, err
@@ -5346,7 +5366,7 @@ func unmarshalBasicInputDefinitionArray(body []byte) ([]BasicInputDefinition, er
// MarshalJSON is the custom marshaler for InputDefinition.
func (ID InputDefinition) MarshalJSON() ([]byte, error) {
- ID.OdataType = OdataTypeInputDefinition
+ ID.OdataType = OdataTypeBasicInputDefinitionOdataTypeInputDefinition
objectMap := make(map[string]interface{})
if ID.IncludedTracks != nil {
objectMap["includedTracks"] = ID.IncludedTracks
@@ -5420,13 +5440,13 @@ type InputFile struct {
Filename *string `json:"filename,omitempty"`
// IncludedTracks - The list of TrackDescriptors which define the metadata and selection of tracks in the input.
IncludedTracks *[]BasicTrackDescriptor `json:"includedTracks,omitempty"`
- // OdataType - Possible values include: 'OdataTypeInputDefinition', 'OdataTypeMicrosoftMediaFromAllInputFile', 'OdataTypeMicrosoftMediaFromEachInputFile', 'OdataTypeMicrosoftMediaInputFile'
+ // OdataType - Possible values include: 'OdataTypeBasicInputDefinitionOdataTypeInputDefinition', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromAllInputFile', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaFromEachInputFile', 'OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile'
OdataType OdataTypeBasicInputDefinition `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for InputFile.
func (ifVar InputFile) MarshalJSON() ([]byte, error) {
- ifVar.OdataType = OdataTypeMicrosoftMediaInputFile
+ ifVar.OdataType = OdataTypeBasicInputDefinitionOdataTypeMicrosoftMediaInputFile
objectMap := make(map[string]interface{})
if ifVar.Filename != nil {
objectMap["filename"] = ifVar.Filename
@@ -5768,18 +5788,24 @@ func NewJobCollectionPage(cur JobCollection, getNextPage func(context.Context, J
// JobError details of JobOutput errors.
type JobError struct {
- // Code - READ-ONLY; Error code describing the error. Possible values include: 'ServiceError', 'ServiceTransientError', 'DownloadNotAccessible', 'DownloadTransientError', 'UploadNotAccessible', 'UploadTransientError', 'ConfigurationUnsupported', 'ContentMalformed', 'ContentUnsupported'
+ // Code - READ-ONLY; Error code describing the error. Possible values include: 'JobErrorCodeServiceError', 'JobErrorCodeServiceTransientError', 'JobErrorCodeDownloadNotAccessible', 'JobErrorCodeDownloadTransientError', 'JobErrorCodeUploadNotAccessible', 'JobErrorCodeUploadTransientError', 'JobErrorCodeConfigurationUnsupported', 'JobErrorCodeContentMalformed', 'JobErrorCodeContentUnsupported'
Code JobErrorCode `json:"code,omitempty"`
// Message - READ-ONLY; A human-readable language-dependent representation of the error.
Message *string `json:"message,omitempty"`
// Category - READ-ONLY; Helps with categorization of errors. Possible values include: 'JobErrorCategoryService', 'JobErrorCategoryDownload', 'JobErrorCategoryUpload', 'JobErrorCategoryConfiguration', 'JobErrorCategoryContent'
Category JobErrorCategory `json:"category,omitempty"`
- // Retry - READ-ONLY; Indicates that it may be possible to retry the Job. If retry is unsuccessful, please contact Azure support via Azure Portal. Possible values include: 'DoNotRetry', 'MayRetry'
+ // Retry - READ-ONLY; Indicates that it may be possible to retry the Job. If retry is unsuccessful, please contact Azure support via Azure Portal. Possible values include: 'JobRetryDoNotRetry', 'JobRetryMayRetry'
Retry JobRetry `json:"retry,omitempty"`
// Details - READ-ONLY; An array of details about specific errors that led to this reported error.
Details *[]JobErrorDetail `json:"details,omitempty"`
}
+// MarshalJSON is the custom marshaler for JobError.
+func (je JobError) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// JobErrorDetail details of JobOutput errors.
type JobErrorDetail struct {
// Code - READ-ONLY; Code describing the error detail.
@@ -5788,6 +5814,12 @@ type JobErrorDetail struct {
Message *string `json:"message,omitempty"`
}
+// MarshalJSON is the custom marshaler for JobErrorDetail.
+func (jed JobErrorDetail) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// BasicJobInput base class for inputs to a Job.
type BasicJobInput interface {
AsJobInputClip() (*JobInputClip, bool)
@@ -5801,7 +5833,7 @@ type BasicJobInput interface {
// JobInput base class for inputs to a Job.
type JobInput struct {
- // OdataType - Possible values include: 'OdataTypeJobInput', 'OdataTypeMicrosoftMediaJobInputClip', 'OdataTypeMicrosoftMediaJobInputs', 'OdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeMicrosoftMediaJobInputSequence'
+ // OdataType - Possible values include: 'OdataTypeBasicJobInputOdataTypeJobInput', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence'
OdataType OdataTypeBasicJobInput `json:"@odata.type,omitempty"`
}
@@ -5813,23 +5845,23 @@ func unmarshalBasicJobInput(body []byte) (BasicJobInput, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaJobInputClip):
+ case string(OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip):
var jic JobInputClip
err := json.Unmarshal(body, &jic)
return jic, err
- case string(OdataTypeMicrosoftMediaJobInputs):
+ case string(OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs):
var ji JobInputs
err := json.Unmarshal(body, &ji)
return ji, err
- case string(OdataTypeMicrosoftMediaJobInputAsset):
+ case string(OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset):
var jia JobInputAsset
err := json.Unmarshal(body, &jia)
return jia, err
- case string(OdataTypeMicrosoftMediaJobInputHTTP):
+ case string(OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP):
var jih JobInputHTTP
err := json.Unmarshal(body, &jih)
return jih, err
- case string(OdataTypeMicrosoftMediaJobInputSequence):
+ case string(OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence):
var jis JobInputSequence
err := json.Unmarshal(body, &jis)
return jis, err
@@ -5860,7 +5892,7 @@ func unmarshalBasicJobInputArray(body []byte) ([]BasicJobInput, error) {
// MarshalJSON is the custom marshaler for JobInput.
func (ji JobInput) MarshalJSON() ([]byte, error) {
- ji.OdataType = OdataTypeJobInput
+ ji.OdataType = OdataTypeBasicJobInputOdataTypeJobInput
objectMap := make(map[string]interface{})
if ji.OdataType != "" {
objectMap["@odata.type"] = ji.OdataType
@@ -5922,13 +5954,13 @@ type JobInputAsset struct {
Label *string `json:"label,omitempty"`
// InputDefinitions - Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
InputDefinitions *[]BasicInputDefinition `json:"inputDefinitions,omitempty"`
- // OdataType - Possible values include: 'OdataTypeJobInput', 'OdataTypeMicrosoftMediaJobInputClip', 'OdataTypeMicrosoftMediaJobInputs', 'OdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeMicrosoftMediaJobInputSequence'
+ // OdataType - Possible values include: 'OdataTypeBasicJobInputOdataTypeJobInput', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence'
OdataType OdataTypeBasicJobInput `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for JobInputAsset.
func (jia JobInputAsset) MarshalJSON() ([]byte, error) {
- jia.OdataType = OdataTypeMicrosoftMediaJobInputAsset
+ jia.OdataType = OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset
objectMap := make(map[string]interface{})
if jia.AssetName != nil {
objectMap["assetName"] = jia.AssetName
@@ -6084,7 +6116,7 @@ type JobInputClip struct {
Label *string `json:"label,omitempty"`
// InputDefinitions - Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
InputDefinitions *[]BasicInputDefinition `json:"inputDefinitions,omitempty"`
- // OdataType - Possible values include: 'OdataTypeJobInput', 'OdataTypeMicrosoftMediaJobInputClip', 'OdataTypeMicrosoftMediaJobInputs', 'OdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeMicrosoftMediaJobInputSequence'
+ // OdataType - Possible values include: 'OdataTypeBasicJobInputOdataTypeJobInput', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence'
OdataType OdataTypeBasicJobInput `json:"@odata.type,omitempty"`
}
@@ -6096,11 +6128,11 @@ func unmarshalBasicJobInputClip(body []byte) (BasicJobInputClip, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaJobInputAsset):
+ case string(OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset):
var jia JobInputAsset
err := json.Unmarshal(body, &jia)
return jia, err
- case string(OdataTypeMicrosoftMediaJobInputHTTP):
+ case string(OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP):
var jih JobInputHTTP
err := json.Unmarshal(body, &jih)
return jih, err
@@ -6131,7 +6163,7 @@ func unmarshalBasicJobInputClipArray(body []byte) ([]BasicJobInputClip, error) {
// MarshalJSON is the custom marshaler for JobInputClip.
func (jic JobInputClip) MarshalJSON() ([]byte, error) {
- jic.OdataType = OdataTypeMicrosoftMediaJobInputClip
+ jic.OdataType = OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip
objectMap := make(map[string]interface{})
if jic.Files != nil {
objectMap["files"] = jic.Files
@@ -6270,13 +6302,13 @@ type JobInputHTTP struct {
Label *string `json:"label,omitempty"`
// InputDefinitions - Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
InputDefinitions *[]BasicInputDefinition `json:"inputDefinitions,omitempty"`
- // OdataType - Possible values include: 'OdataTypeJobInput', 'OdataTypeMicrosoftMediaJobInputClip', 'OdataTypeMicrosoftMediaJobInputs', 'OdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeMicrosoftMediaJobInputSequence'
+ // OdataType - Possible values include: 'OdataTypeBasicJobInputOdataTypeJobInput', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence'
OdataType OdataTypeBasicJobInput `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for JobInputHTTP.
func (jih JobInputHTTP) MarshalJSON() ([]byte, error) {
- jih.OdataType = OdataTypeMicrosoftMediaJobInputHTTP
+ jih.OdataType = OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP
objectMap := make(map[string]interface{})
if jih.BaseURI != nil {
objectMap["baseUri"] = jih.BaseURI
@@ -6417,13 +6449,13 @@ func (jih *JobInputHTTP) UnmarshalJSON(body []byte) error {
type JobInputs struct {
// Inputs - List of inputs to a Job.
Inputs *[]BasicJobInput `json:"inputs,omitempty"`
- // OdataType - Possible values include: 'OdataTypeJobInput', 'OdataTypeMicrosoftMediaJobInputClip', 'OdataTypeMicrosoftMediaJobInputs', 'OdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeMicrosoftMediaJobInputSequence'
+ // OdataType - Possible values include: 'OdataTypeBasicJobInputOdataTypeJobInput', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence'
OdataType OdataTypeBasicJobInput `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for JobInputs.
func (ji JobInputs) MarshalJSON() ([]byte, error) {
- ji.OdataType = OdataTypeMicrosoftMediaJobInputs
+ ji.OdataType = OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs
objectMap := make(map[string]interface{})
if ji.Inputs != nil {
objectMap["inputs"] = ji.Inputs
@@ -6511,13 +6543,13 @@ func (ji *JobInputs) UnmarshalJSON(body []byte) error {
type JobInputSequence struct {
// Inputs - JobInputs that make up the timeline.
Inputs *[]BasicJobInputClip `json:"inputs,omitempty"`
- // OdataType - Possible values include: 'OdataTypeJobInput', 'OdataTypeMicrosoftMediaJobInputClip', 'OdataTypeMicrosoftMediaJobInputs', 'OdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeMicrosoftMediaJobInputSequence'
+ // OdataType - Possible values include: 'OdataTypeBasicJobInputOdataTypeJobInput', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputClip', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputs', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputAsset', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputHTTP', 'OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence'
OdataType OdataTypeBasicJobInput `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for JobInputSequence.
func (jis JobInputSequence) MarshalJSON() ([]byte, error) {
- jis.OdataType = OdataTypeMicrosoftMediaJobInputSequence
+ jis.OdataType = OdataTypeBasicJobInputOdataTypeMicrosoftMediaJobInputSequence
objectMap := make(map[string]interface{})
if jis.Inputs != nil {
objectMap["inputs"] = jis.Inputs
@@ -6610,7 +6642,7 @@ type BasicJobOutput interface {
type JobOutput struct {
// Error - READ-ONLY; If the JobOutput is in the Error state, it contains the details of the error.
Error *JobError `json:"error,omitempty"`
- // State - READ-ONLY; Describes the state of the JobOutput. Possible values include: 'Canceled', 'Canceling', 'Error', 'Finished', 'Processing', 'Queued', 'Scheduled'
+ // State - READ-ONLY; Describes the state of the JobOutput. Possible values include: 'JobStateCanceled', 'JobStateCanceling', 'JobStateError', 'JobStateFinished', 'JobStateProcessing', 'JobStateQueued', 'JobStateScheduled'
State JobState `json:"state,omitempty"`
// Progress - READ-ONLY; If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
Progress *int32 `json:"progress,omitempty"`
@@ -6620,7 +6652,7 @@ type JobOutput struct {
StartTime *date.Time `json:"startTime,omitempty"`
// EndTime - READ-ONLY; The UTC date and time at which this Job Output finished processing.
EndTime *date.Time `json:"endTime,omitempty"`
- // OdataType - Possible values include: 'OdataTypeJobOutput', 'OdataTypeMicrosoftMediaJobOutputAsset'
+ // OdataType - Possible values include: 'OdataTypeBasicJobOutputOdataTypeJobOutput', 'OdataTypeBasicJobOutputOdataTypeMicrosoftMediaJobOutputAsset'
OdataType OdataTypeBasicJobOutput `json:"@odata.type,omitempty"`
}
@@ -6632,7 +6664,7 @@ func unmarshalBasicJobOutput(body []byte) (BasicJobOutput, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaJobOutputAsset):
+ case string(OdataTypeBasicJobOutputOdataTypeMicrosoftMediaJobOutputAsset):
var joa JobOutputAsset
err := json.Unmarshal(body, &joa)
return joa, err
@@ -6663,7 +6695,7 @@ func unmarshalBasicJobOutputArray(body []byte) ([]BasicJobOutput, error) {
// MarshalJSON is the custom marshaler for JobOutput.
func (jo JobOutput) MarshalJSON() ([]byte, error) {
- jo.OdataType = OdataTypeJobOutput
+ jo.OdataType = OdataTypeBasicJobOutputOdataTypeJobOutput
objectMap := make(map[string]interface{})
if jo.Label != nil {
objectMap["label"] = jo.Label
@@ -6695,7 +6727,7 @@ type JobOutputAsset struct {
AssetName *string `json:"assetName,omitempty"`
// Error - READ-ONLY; If the JobOutput is in the Error state, it contains the details of the error.
Error *JobError `json:"error,omitempty"`
- // State - READ-ONLY; Describes the state of the JobOutput. Possible values include: 'Canceled', 'Canceling', 'Error', 'Finished', 'Processing', 'Queued', 'Scheduled'
+ // State - READ-ONLY; Describes the state of the JobOutput. Possible values include: 'JobStateCanceled', 'JobStateCanceling', 'JobStateError', 'JobStateFinished', 'JobStateProcessing', 'JobStateQueued', 'JobStateScheduled'
State JobState `json:"state,omitempty"`
// Progress - READ-ONLY; If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
Progress *int32 `json:"progress,omitempty"`
@@ -6705,13 +6737,13 @@ type JobOutputAsset struct {
StartTime *date.Time `json:"startTime,omitempty"`
// EndTime - READ-ONLY; The UTC date and time at which this Job Output finished processing.
EndTime *date.Time `json:"endTime,omitempty"`
- // OdataType - Possible values include: 'OdataTypeJobOutput', 'OdataTypeMicrosoftMediaJobOutputAsset'
+ // OdataType - Possible values include: 'OdataTypeBasicJobOutputOdataTypeJobOutput', 'OdataTypeBasicJobOutputOdataTypeMicrosoftMediaJobOutputAsset'
OdataType OdataTypeBasicJobOutput `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for JobOutputAsset.
func (joa JobOutputAsset) MarshalJSON() ([]byte, error) {
- joa.OdataType = OdataTypeMicrosoftMediaJobOutputAsset
+ joa.OdataType = OdataTypeBasicJobOutputOdataTypeMicrosoftMediaJobOutputAsset
objectMap := make(map[string]interface{})
if joa.AssetName != nil {
objectMap["assetName"] = joa.AssetName
@@ -6744,7 +6776,7 @@ func (joa JobOutputAsset) AsBasicJobOutput() (BasicJobOutput, bool) {
type JobProperties struct {
// Created - READ-ONLY; The UTC date and time when the customer has created the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
Created *date.Time `json:"created,omitempty"`
- // State - READ-ONLY; The current state of the job. Possible values include: 'Canceled', 'Canceling', 'Error', 'Finished', 'Processing', 'Queued', 'Scheduled'
+ // State - READ-ONLY; The current state of the job. Possible values include: 'JobStateCanceled', 'JobStateCanceling', 'JobStateError', 'JobStateFinished', 'JobStateProcessing', 'JobStateQueued', 'JobStateScheduled'
State JobState `json:"state,omitempty"`
// Description - Optional customer supplied description of the Job.
Description *string `json:"description,omitempty"`
@@ -6890,13 +6922,13 @@ func (jp *JobProperties) UnmarshalJSON(body []byte) error {
type JpgFormat struct {
// FilenamePattern - The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name(the file suffix is not included) of the input video file is less than 32 characters long, the base name of input video files will be used. If the length of base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
FilenamePattern *string `json:"filenamePattern,omitempty"`
- // OdataType - Possible values include: 'OdataTypeFormat', 'OdataTypeMicrosoftMediaImageFormat', 'OdataTypeMicrosoftMediaJpgFormat', 'OdataTypeMicrosoftMediaPngFormat', 'OdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeMicrosoftMediaMp4Format', 'OdataTypeMicrosoftMediaTransportStreamFormat'
+ // OdataType - Possible values include: 'OdataTypeBasicFormatOdataTypeFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat'
OdataType OdataTypeBasicFormat `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for JpgFormat.
func (jf JpgFormat) MarshalJSON() ([]byte, error) {
- jf.OdataType = OdataTypeMicrosoftMediaJpgFormat
+ jf.OdataType = OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat
objectMap := make(map[string]interface{})
if jf.FilenamePattern != nil {
objectMap["filenamePattern"] = jf.FilenamePattern
@@ -6977,13 +7009,13 @@ type JpgImage struct {
SyncMode VideoSyncMode `json:"syncMode,omitempty"`
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for JpgImage.
func (ji JpgImage) MarshalJSON() ([]byte, error) {
- ji.OdataType = OdataTypeMicrosoftMediaJpgImage
+ ji.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage
objectMap := make(map[string]interface{})
if ji.Layers != nil {
objectMap["layers"] = ji.Layers
@@ -7103,13 +7135,13 @@ type JpgLayer struct {
Height *string `json:"height,omitempty"`
// Label - The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeLayer', 'OdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeMicrosoftMediaH265Layer', 'OdataTypeMicrosoftMediaVideoLayer', 'OdataTypeMicrosoftMediaH264Layer', 'OdataTypeMicrosoftMediaJpgLayer', 'OdataTypeMicrosoftMediaPngLayer'
+ // OdataType - Possible values include: 'OdataTypeBasicLayerOdataTypeLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer'
OdataType OdataTypeBasicLayer `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for JpgLayer.
func (jl JpgLayer) MarshalJSON() ([]byte, error) {
- jl.OdataType = OdataTypeMicrosoftMediaJpgLayer
+ jl.OdataType = OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer
objectMap := make(map[string]interface{})
if jl.Quality != nil {
objectMap["quality"] = jl.Quality
@@ -7179,6 +7211,12 @@ func (jl JpgLayer) AsBasicLayer() (BasicLayer, bool) {
return &jl, true
}
+// KeyDelivery ...
+type KeyDelivery struct {
+ // AccessControl - The access control properties for Key Delivery.
+ AccessControl *AccessControl `json:"accessControl,omitempty"`
+}
+
// KeyVaultProperties ...
type KeyVaultProperties struct {
// KeyIdentifier - The URL of the Key Vault key used to encrypt the account. The key may either be versioned (for example https://vault/keys/mykey/version1) or reference a key without a version (for example https://vault/keys/mykey).
@@ -7221,7 +7259,7 @@ type Layer struct {
Height *string `json:"height,omitempty"`
// Label - The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeLayer', 'OdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeMicrosoftMediaH265Layer', 'OdataTypeMicrosoftMediaVideoLayer', 'OdataTypeMicrosoftMediaH264Layer', 'OdataTypeMicrosoftMediaJpgLayer', 'OdataTypeMicrosoftMediaPngLayer'
+ // OdataType - Possible values include: 'OdataTypeBasicLayerOdataTypeLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer'
OdataType OdataTypeBasicLayer `json:"@odata.type,omitempty"`
}
@@ -7233,27 +7271,27 @@ func unmarshalBasicLayer(body []byte) (BasicLayer, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaH265VideoLayer):
+ case string(OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer):
var hvl H265VideoLayer
err := json.Unmarshal(body, &hvl)
return hvl, err
- case string(OdataTypeMicrosoftMediaH265Layer):
+ case string(OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer):
var hl H265Layer
err := json.Unmarshal(body, &hl)
return hl, err
- case string(OdataTypeMicrosoftMediaVideoLayer):
+ case string(OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer):
var vl VideoLayer
err := json.Unmarshal(body, &vl)
return vl, err
- case string(OdataTypeMicrosoftMediaH264Layer):
+ case string(OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer):
var hl H264Layer
err := json.Unmarshal(body, &hl)
return hl, err
- case string(OdataTypeMicrosoftMediaJpgLayer):
+ case string(OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer):
var jl JpgLayer
err := json.Unmarshal(body, &jl)
return jl, err
- case string(OdataTypeMicrosoftMediaPngLayer):
+ case string(OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer):
var pl PngLayer
err := json.Unmarshal(body, &pl)
return pl, err
@@ -7284,7 +7322,7 @@ func unmarshalBasicLayerArray(body []byte) ([]BasicLayer, error) {
// MarshalJSON is the custom marshaler for Layer.
func (l Layer) MarshalJSON() ([]byte, error) {
- l.OdataType = OdataTypeLayer
+ l.OdataType = OdataTypeBasicLayerOdataTypeLayer
objectMap := make(map[string]interface{})
if l.Width != nil {
objectMap["width"] = l.Width
@@ -7353,7 +7391,7 @@ func (l Layer) AsBasicLayer() (BasicLayer, bool) {
// ListContainerSasInput the parameters to the list SAS request.
type ListContainerSasInput struct {
- // Permissions - The permissions to set on the SAS URL. Possible values include: 'Read', 'ReadWrite', 'ReadWriteDelete'
+ // Permissions - The permissions to set on the SAS URL. Possible values include: 'AssetContainerPermissionRead', 'AssetContainerPermissionReadWrite', 'AssetContainerPermissionReadWriteDelete'
Permissions AssetContainerPermission `json:"permissions,omitempty"`
// ExpiryTime - The SAS URL expiration time. This must be less than 24 hours from the current time.
ExpiryTime *date.Time `json:"expiryTime,omitempty"`
@@ -7388,6 +7426,12 @@ type ListStreamingLocatorsResponse struct {
StreamingLocators *[]AssetStreamingLocator `json:"streamingLocators,omitempty"`
}
+// MarshalJSON is the custom marshaler for ListStreamingLocatorsResponse.
+func (lslr ListStreamingLocatorsResponse) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// LiveEvent the live event.
type LiveEvent struct {
autorest.Response `json:"-"`
@@ -7528,7 +7572,7 @@ type LiveEventEndpoint struct {
// LiveEventInput the live event input.
type LiveEventInput struct {
- // StreamingProtocol - The input protocol for the live event. This is specified at creation time and cannot be updated. Possible values include: 'FragmentedMP4', 'RTMP'
+ // StreamingProtocol - The input protocol for the live event. This is specified at creation time and cannot be updated. Possible values include: 'LiveEventInputProtocolFragmentedMP4', 'LiveEventInputProtocolRTMP'
StreamingProtocol LiveEventInputProtocol `json:"streamingProtocol,omitempty"`
// AccessControl - Access control for live event input.
AccessControl *LiveEventInputAccessControl `json:"accessControl,omitempty"`
@@ -7760,7 +7804,7 @@ type LiveEventProperties struct {
Transcriptions *[]LiveEventTranscription `json:"transcriptions,omitempty"`
// ProvisioningState - READ-ONLY; The provisioning state of the live event.
ProvisioningState *string `json:"provisioningState,omitempty"`
- // ResourceState - READ-ONLY; The resource state of the live event. See https://go.microsoft.com/fwlink/?linkid=2139012 for more information. Possible values include: 'Stopped', 'Allocating', 'StandBy', 'Starting', 'Running', 'Stopping', 'Deleting'
+ // ResourceState - READ-ONLY; The resource state of the live event. See https://go.microsoft.com/fwlink/?linkid=2139012 for more information. Possible values include: 'LiveEventResourceStateStopped', 'LiveEventResourceStateAllocating', 'LiveEventResourceStateStandBy', 'LiveEventResourceStateStarting', 'LiveEventResourceStateRunning', 'LiveEventResourceStateStopping', 'LiveEventResourceStateDeleting'
ResourceState LiveEventResourceState `json:"resourceState,omitempty"`
// CrossSiteAccessPolicies - Live event cross site access policies.
CrossSiteAccessPolicies *CrossSiteAccessPolicies `json:"crossSiteAccessPolicies,omitempty"`
@@ -8469,6 +8513,12 @@ type LogSpecification struct {
BlobDuration *string `json:"blobDuration,omitempty"`
}
+// MarshalJSON is the custom marshaler for LogSpecification.
+func (ls LogSpecification) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// MetricDimension a metric dimension.
type MetricDimension struct {
// Name - READ-ONLY; The metric dimension name.
@@ -8479,6 +8529,12 @@ type MetricDimension struct {
ToBeExportedForShoebox *bool `json:"toBeExportedForShoebox,omitempty"`
}
+// MarshalJSON is the custom marshaler for MetricDimension.
+func (md MetricDimension) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// MetricSpecification a metric emitted by service.
type MetricSpecification struct {
// Name - READ-ONLY; The metric name.
@@ -8489,14 +8545,22 @@ type MetricSpecification struct {
DisplayDescription *string `json:"displayDescription,omitempty"`
// Unit - READ-ONLY; The metric unit. Possible values include: 'MetricUnitBytes', 'MetricUnitCount', 'MetricUnitMilliseconds'
Unit MetricUnit `json:"unit,omitempty"`
- // AggregationType - READ-ONLY; The metric aggregation type. Possible values include: 'Average', 'Count', 'Total'
+ // AggregationType - READ-ONLY; The metric aggregation type. Possible values include: 'MetricAggregationTypeAverage', 'MetricAggregationTypeCount', 'MetricAggregationTypeTotal'
AggregationType MetricAggregationType `json:"aggregationType,omitempty"`
- // LockAggregationType - READ-ONLY; The metric lock aggregation type. Possible values include: 'Average', 'Count', 'Total'
+ // LockAggregationType - READ-ONLY; The metric lock aggregation type. Possible values include: 'MetricAggregationTypeAverage', 'MetricAggregationTypeCount', 'MetricAggregationTypeTotal'
LockAggregationType MetricAggregationType `json:"lockAggregationType,omitempty"`
// SupportedAggregationTypes - Supported aggregation types.
SupportedAggregationTypes *[]string `json:"supportedAggregationTypes,omitempty"`
// Dimensions - READ-ONLY; The metric dimensions.
Dimensions *[]MetricDimension `json:"dimensions,omitempty"`
+ // EnableRegionalMdmAccount - READ-ONLY; Indicates whether regional MDM account is enabled.
+ EnableRegionalMdmAccount *bool `json:"enableRegionalMdmAccount,omitempty"`
+ // SourceMdmAccount - READ-ONLY; The source MDM account.
+ SourceMdmAccount *string `json:"sourceMdmAccount,omitempty"`
+ // SourceMdmNamespace - READ-ONLY; The source MDM namespace.
+ SourceMdmNamespace *string `json:"sourceMdmNamespace,omitempty"`
+ // SupportedTimeGrainTypes - READ-ONLY; The supported time grain types.
+ SupportedTimeGrainTypes *[]string `json:"supportedTimeGrainTypes,omitempty"`
}
// MarshalJSON is the custom marshaler for MetricSpecification.
@@ -8514,13 +8578,13 @@ type Mp4Format struct {
OutputFiles *[]OutputFile `json:"outputFiles,omitempty"`
// FilenamePattern - The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name(the file suffix is not included) of the input video file is less than 32 characters long, the base name of input video files will be used. If the length of base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
FilenamePattern *string `json:"filenamePattern,omitempty"`
- // OdataType - Possible values include: 'OdataTypeFormat', 'OdataTypeMicrosoftMediaImageFormat', 'OdataTypeMicrosoftMediaJpgFormat', 'OdataTypeMicrosoftMediaPngFormat', 'OdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeMicrosoftMediaMp4Format', 'OdataTypeMicrosoftMediaTransportStreamFormat'
+ // OdataType - Possible values include: 'OdataTypeBasicFormatOdataTypeFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat'
OdataType OdataTypeBasicFormat `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for Mp4Format.
func (m4f Mp4Format) MarshalJSON() ([]byte, error) {
- m4f.OdataType = OdataTypeMicrosoftMediaMp4Format
+ m4f.OdataType = OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format
objectMap := make(map[string]interface{})
if m4f.OutputFiles != nil {
objectMap["outputFiles"] = m4f.OutputFiles
@@ -8601,7 +8665,7 @@ type MultiBitrateFormat struct {
OutputFiles *[]OutputFile `json:"outputFiles,omitempty"`
// FilenamePattern - The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name(the file suffix is not included) of the input video file is less than 32 characters long, the base name of input video files will be used. If the length of base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
FilenamePattern *string `json:"filenamePattern,omitempty"`
- // OdataType - Possible values include: 'OdataTypeFormat', 'OdataTypeMicrosoftMediaImageFormat', 'OdataTypeMicrosoftMediaJpgFormat', 'OdataTypeMicrosoftMediaPngFormat', 'OdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeMicrosoftMediaMp4Format', 'OdataTypeMicrosoftMediaTransportStreamFormat'
+ // OdataType - Possible values include: 'OdataTypeBasicFormatOdataTypeFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat'
OdataType OdataTypeBasicFormat `json:"@odata.type,omitempty"`
}
@@ -8613,11 +8677,11 @@ func unmarshalBasicMultiBitrateFormat(body []byte) (BasicMultiBitrateFormat, err
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaMp4Format):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format):
var m4f Mp4Format
err := json.Unmarshal(body, &m4f)
return m4f, err
- case string(OdataTypeMicrosoftMediaTransportStreamFormat):
+ case string(OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat):
var tsf TransportStreamFormat
err := json.Unmarshal(body, &tsf)
return tsf, err
@@ -8648,7 +8712,7 @@ func unmarshalBasicMultiBitrateFormatArray(body []byte) ([]BasicMultiBitrateForm
// MarshalJSON is the custom marshaler for MultiBitrateFormat.
func (mbf MultiBitrateFormat) MarshalJSON() ([]byte, error) {
- mbf.OdataType = OdataTypeMicrosoftMediaMultiBitrateFormat
+ mbf.OdataType = OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat
objectMap := make(map[string]interface{})
if mbf.OutputFiles != nil {
objectMap["outputFiles"] = mbf.OutputFiles
@@ -8740,6 +8804,10 @@ type Operation struct {
Origin *string `json:"origin,omitempty"`
// Properties - Operation properties format.
Properties *Properties `json:"properties,omitempty"`
+ // IsDataAction - Whether the operation applies to data-plane.
+ IsDataAction *bool `json:"isDataAction,omitempty"`
+ // ActionType - Indicates the action type. Possible values include: 'ActionTypeInternal'
+ ActionType ActionType `json:"actionType,omitempty"`
}
// OperationCollection a collection of Operation items.
@@ -8940,7 +9008,7 @@ type Overlay struct {
FadeOutDuration *string `json:"fadeOutDuration,omitempty"`
// AudioGainLevel - The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
AudioGainLevel *float64 `json:"audioGainLevel,omitempty"`
- // OdataType - Possible values include: 'OdataTypeOverlay', 'OdataTypeMicrosoftMediaAudioOverlay', 'OdataTypeMicrosoftMediaVideoOverlay'
+ // OdataType - Possible values include: 'OdataTypeBasicOverlayOdataTypeOverlay', 'OdataTypeBasicOverlayOdataTypeMicrosoftMediaAudioOverlay', 'OdataTypeBasicOverlayOdataTypeMicrosoftMediaVideoOverlay'
OdataType OdataTypeBasicOverlay `json:"@odata.type,omitempty"`
}
@@ -8952,11 +9020,11 @@ func unmarshalBasicOverlay(body []byte) (BasicOverlay, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaAudioOverlay):
+ case string(OdataTypeBasicOverlayOdataTypeMicrosoftMediaAudioOverlay):
var ao AudioOverlay
err := json.Unmarshal(body, &ao)
return ao, err
- case string(OdataTypeMicrosoftMediaVideoOverlay):
+ case string(OdataTypeBasicOverlayOdataTypeMicrosoftMediaVideoOverlay):
var vo VideoOverlay
err := json.Unmarshal(body, &vo)
return vo, err
@@ -8987,7 +9055,7 @@ func unmarshalBasicOverlayArray(body []byte) ([]BasicOverlay, error) {
// MarshalJSON is the custom marshaler for Overlay.
func (o Overlay) MarshalJSON() ([]byte, error) {
- o.OdataType = OdataTypeOverlay
+ o.OdataType = OdataTypeBasicOverlayOdataTypeOverlay
objectMap := make(map[string]interface{})
if o.InputLabel != nil {
objectMap["inputLabel"] = o.InputLabel
@@ -9037,13 +9105,13 @@ func (o Overlay) AsBasicOverlay() (BasicOverlay, bool) {
type PngFormat struct {
// FilenamePattern - The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name(the file suffix is not included) of the input video file is less than 32 characters long, the base name of input video files will be used. If the length of base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
FilenamePattern *string `json:"filenamePattern,omitempty"`
- // OdataType - Possible values include: 'OdataTypeFormat', 'OdataTypeMicrosoftMediaImageFormat', 'OdataTypeMicrosoftMediaJpgFormat', 'OdataTypeMicrosoftMediaPngFormat', 'OdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeMicrosoftMediaMp4Format', 'OdataTypeMicrosoftMediaTransportStreamFormat'
+ // OdataType - Possible values include: 'OdataTypeBasicFormatOdataTypeFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat'
OdataType OdataTypeBasicFormat `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for PngFormat.
func (pf PngFormat) MarshalJSON() ([]byte, error) {
- pf.OdataType = OdataTypeMicrosoftMediaPngFormat
+ pf.OdataType = OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat
objectMap := make(map[string]interface{})
if pf.FilenamePattern != nil {
objectMap["filenamePattern"] = pf.FilenamePattern
@@ -9122,13 +9190,13 @@ type PngImage struct {
SyncMode VideoSyncMode `json:"syncMode,omitempty"`
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for PngImage.
func (pi PngImage) MarshalJSON() ([]byte, error) {
- pi.OdataType = OdataTypeMicrosoftMediaPngImage
+ pi.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage
objectMap := make(map[string]interface{})
if pi.Layers != nil {
objectMap["layers"] = pi.Layers
@@ -9243,13 +9311,13 @@ type PngLayer struct {
Height *string `json:"height,omitempty"`
// Label - The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeLayer', 'OdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeMicrosoftMediaH265Layer', 'OdataTypeMicrosoftMediaVideoLayer', 'OdataTypeMicrosoftMediaH264Layer', 'OdataTypeMicrosoftMediaJpgLayer', 'OdataTypeMicrosoftMediaPngLayer'
+ // OdataType - Possible values include: 'OdataTypeBasicLayerOdataTypeLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer'
OdataType OdataTypeBasicLayer `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for PngLayer.
func (pl PngLayer) MarshalJSON() ([]byte, error) {
- pl.OdataType = OdataTypeMicrosoftMediaPngLayer
+ pl.OdataType = OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer
objectMap := make(map[string]interface{})
if pl.Width != nil {
objectMap["width"] = pl.Width
@@ -9348,7 +9416,7 @@ type BasicPreset interface {
// Preset base type for all Presets, which define the recipe or instructions on how the input media files
// should be processed.
type Preset struct {
- // OdataType - Possible values include: 'OdataTypePreset', 'OdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeMicrosoftMediaVideoAnalyzerPreset'
+ // OdataType - Possible values include: 'OdataTypeBasicPresetOdataTypePreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset'
OdataType OdataTypeBasicPreset `json:"@odata.type,omitempty"`
}
@@ -9360,23 +9428,23 @@ func unmarshalBasicPreset(body []byte) (BasicPreset, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaFaceDetectorPreset):
+ case string(OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset):
var fdp FaceDetectorPreset
err := json.Unmarshal(body, &fdp)
return fdp, err
- case string(OdataTypeMicrosoftMediaAudioAnalyzerPreset):
+ case string(OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset):
var aap AudioAnalyzerPreset
err := json.Unmarshal(body, &aap)
return aap, err
- case string(OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset):
+ case string(OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset):
var bisep BuiltInStandardEncoderPreset
err := json.Unmarshal(body, &bisep)
return bisep, err
- case string(OdataTypeMicrosoftMediaStandardEncoderPreset):
+ case string(OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset):
var sep StandardEncoderPreset
err := json.Unmarshal(body, &sep)
return sep, err
- case string(OdataTypeMicrosoftMediaVideoAnalyzerPreset):
+ case string(OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset):
var vap VideoAnalyzerPreset
err := json.Unmarshal(body, &vap)
return vap, err
@@ -9407,7 +9475,7 @@ func unmarshalBasicPresetArray(body []byte) ([]BasicPreset, error) {
// MarshalJSON is the custom marshaler for Preset.
func (p Preset) MarshalJSON() ([]byte, error) {
- p.OdataType = OdataTypePreset
+ p.OdataType = OdataTypeBasicPresetOdataTypePreset
objectMap := make(map[string]interface{})
if p.OdataType != "" {
objectMap["@odata.type"] = p.OdataType
@@ -9461,6 +9529,12 @@ type PrivateEndpoint struct {
ID *string `json:"id,omitempty"`
}
+// MarshalJSON is the custom marshaler for PrivateEndpoint.
+func (peVar PrivateEndpoint) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// PrivateEndpointConnection the Private Endpoint Connection resource.
type PrivateEndpointConnection struct {
autorest.Response `json:"-"`
@@ -9654,7 +9728,7 @@ func (plrp PrivateLinkResourceProperties) MarshalJSON() ([]byte, error) {
// PrivateLinkServiceConnectionState a collection of information about the state of the connection between
// service consumer and provider.
type PrivateLinkServiceConnectionState struct {
- // Status - Indicates whether the connection has been Approved/Rejected/Removed by the owner of the service. Possible values include: 'Pending', 'Approved', 'Rejected'
+ // Status - Indicates whether the connection has been Approved/Rejected/Removed by the owner of the service. Possible values include: 'PrivateEndpointServiceConnectionStatusPending', 'PrivateEndpointServiceConnectionStatusApproved', 'PrivateEndpointServiceConnectionStatusRejected'
Status PrivateEndpointServiceConnectionStatus `json:"status,omitempty"`
// Description - The reason for approval/rejection of the connection.
Description *string `json:"description,omitempty"`
@@ -9668,6 +9742,12 @@ type Properties struct {
ServiceSpecification *ServiceSpecification `json:"serviceSpecification,omitempty"`
}
+// MarshalJSON is the custom marshaler for Properties.
+func (p Properties) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// Provider a resource provider.
type Provider struct {
// ProviderName - The provider name.
@@ -9685,6 +9765,12 @@ type ProxyResource struct {
Type *string `json:"type,omitempty"`
}
+// MarshalJSON is the custom marshaler for ProxyResource.
+func (pr ProxyResource) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// Rectangle describes the properties of a rectangular window applied to the input media before processing
// it.
type Rectangle struct {
@@ -9708,24 +9794,30 @@ type Resource struct {
Type *string `json:"type,omitempty"`
}
+// MarshalJSON is the custom marshaler for Resource.
+func (r Resource) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
// SelectAudioTrackByAttribute select audio tracks from the input by specifying an attribute and an
// attribute filter.
type SelectAudioTrackByAttribute struct {
- // Attribute - The TrackAttribute to filter the tracks by. Possible values include: 'Bitrate', 'Language'
+ // Attribute - The TrackAttribute to filter the tracks by. Possible values include: 'TrackAttributeBitrate', 'TrackAttributeLanguage'
Attribute TrackAttribute `json:"attribute,omitempty"`
- // Filter - The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks. Possible values include: 'All', 'Top', 'Bottom', 'ValueEquals'
+ // Filter - The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks. Possible values include: 'AttributeFilterAll', 'AttributeFilterTop', 'AttributeFilterBottom', 'AttributeFilterValueEquals'
Filter AttributeFilter `json:"filter,omitempty"`
// FilterValue - The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property.
FilterValue *string `json:"filterValue,omitempty"`
- // ChannelMapping - Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks. Possible values include: 'FrontLeft', 'FrontRight', 'Center', 'LowFrequencyEffects', 'BackLeft', 'BackRight', 'StereoLeft', 'StereoRight'
+ // ChannelMapping - Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks. Possible values include: 'ChannelMappingFrontLeft', 'ChannelMappingFrontRight', 'ChannelMappingCenter', 'ChannelMappingLowFrequencyEffects', 'ChannelMappingBackLeft', 'ChannelMappingBackRight', 'ChannelMappingStereoLeft', 'ChannelMappingStereoRight'
ChannelMapping ChannelMapping `json:"channelMapping,omitempty"`
- // OdataType - Possible values include: 'OdataTypeTrackDescriptor', 'OdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeMicrosoftMediaSelectVideoTrackByID'
+ // OdataType - Possible values include: 'OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID'
OdataType OdataTypeBasicTrackDescriptor `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for SelectAudioTrackByAttribute.
func (satba SelectAudioTrackByAttribute) MarshalJSON() ([]byte, error) {
- satba.OdataType = OdataTypeMicrosoftMediaSelectAudioTrackByAttribute
+ satba.OdataType = OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute
objectMap := make(map[string]interface{})
if satba.Attribute != "" {
objectMap["attribute"] = satba.Attribute
@@ -9799,15 +9891,15 @@ func (satba SelectAudioTrackByAttribute) AsBasicTrackDescriptor() (BasicTrackDes
type SelectAudioTrackByID struct {
// TrackID - Track identifier to select
TrackID *int64 `json:"trackId,omitempty"`
- // ChannelMapping - Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks. Possible values include: 'FrontLeft', 'FrontRight', 'Center', 'LowFrequencyEffects', 'BackLeft', 'BackRight', 'StereoLeft', 'StereoRight'
+ // ChannelMapping - Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks. Possible values include: 'ChannelMappingFrontLeft', 'ChannelMappingFrontRight', 'ChannelMappingCenter', 'ChannelMappingLowFrequencyEffects', 'ChannelMappingBackLeft', 'ChannelMappingBackRight', 'ChannelMappingStereoLeft', 'ChannelMappingStereoRight'
ChannelMapping ChannelMapping `json:"channelMapping,omitempty"`
- // OdataType - Possible values include: 'OdataTypeTrackDescriptor', 'OdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeMicrosoftMediaSelectVideoTrackByID'
+ // OdataType - Possible values include: 'OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID'
OdataType OdataTypeBasicTrackDescriptor `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for SelectAudioTrackByID.
func (satbi SelectAudioTrackByID) MarshalJSON() ([]byte, error) {
- satbi.OdataType = OdataTypeMicrosoftMediaSelectAudioTrackByID
+ satbi.OdataType = OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID
objectMap := make(map[string]interface{})
if satbi.TrackID != nil {
objectMap["trackId"] = satbi.TrackID
@@ -9874,19 +9966,19 @@ func (satbi SelectAudioTrackByID) AsBasicTrackDescriptor() (BasicTrackDescriptor
// SelectVideoTrackByAttribute select video tracks from the input by specifying an attribute and an
// attribute filter.
type SelectVideoTrackByAttribute struct {
- // Attribute - The TrackAttribute to filter the tracks by. Possible values include: 'Bitrate', 'Language'
+ // Attribute - The TrackAttribute to filter the tracks by. Possible values include: 'TrackAttributeBitrate', 'TrackAttributeLanguage'
Attribute TrackAttribute `json:"attribute,omitempty"`
- // Filter - The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks. Possible values include: 'All', 'Top', 'Bottom', 'ValueEquals'
+ // Filter - The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks. Possible values include: 'AttributeFilterAll', 'AttributeFilterTop', 'AttributeFilterBottom', 'AttributeFilterValueEquals'
Filter AttributeFilter `json:"filter,omitempty"`
// FilterValue - The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property. For TrackAttribute.Bitrate, this should be an integer value in bits per second (e.g: '1500000'). The TrackAttribute.Language is not supported for video tracks.
FilterValue *string `json:"filterValue,omitempty"`
- // OdataType - Possible values include: 'OdataTypeTrackDescriptor', 'OdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeMicrosoftMediaSelectVideoTrackByID'
+ // OdataType - Possible values include: 'OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID'
OdataType OdataTypeBasicTrackDescriptor `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for SelectVideoTrackByAttribute.
func (svtba SelectVideoTrackByAttribute) MarshalJSON() ([]byte, error) {
- svtba.OdataType = OdataTypeMicrosoftMediaSelectVideoTrackByAttribute
+ svtba.OdataType = OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute
objectMap := make(map[string]interface{})
if svtba.Attribute != "" {
objectMap["attribute"] = svtba.Attribute
@@ -9957,13 +10049,13 @@ func (svtba SelectVideoTrackByAttribute) AsBasicTrackDescriptor() (BasicTrackDes
type SelectVideoTrackByID struct {
// TrackID - Track identifier to select
TrackID *int64 `json:"trackId,omitempty"`
- // OdataType - Possible values include: 'OdataTypeTrackDescriptor', 'OdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeMicrosoftMediaSelectVideoTrackByID'
+ // OdataType - Possible values include: 'OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID'
OdataType OdataTypeBasicTrackDescriptor `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for SelectVideoTrackByID.
func (svtbi SelectVideoTrackByID) MarshalJSON() ([]byte, error) {
- svtbi.OdataType = OdataTypeMicrosoftMediaSelectVideoTrackByID
+ svtbi.OdataType = OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID
objectMap := make(map[string]interface{})
if svtbi.TrackID != nil {
objectMap["trackId"] = svtbi.TrackID
@@ -10338,6 +10430,8 @@ type ServiceProperties struct {
StorageAuthentication StorageAuthentication `json:"storageAuthentication,omitempty"`
// Encryption - The account encryption properties.
Encryption *AccountEncryption `json:"encryption,omitempty"`
+ // KeyDelivery - The Key Delivery properties for Media Services account.
+ KeyDelivery *KeyDelivery `json:"keyDelivery,omitempty"`
}
// MarshalJSON is the custom marshaler for ServiceProperties.
@@ -10352,6 +10446,9 @@ func (sp ServiceProperties) MarshalJSON() ([]byte, error) {
if sp.Encryption != nil {
objectMap["encryption"] = sp.Encryption
}
+ if sp.KeyDelivery != nil {
+ objectMap["keyDelivery"] = sp.KeyDelivery
+ }
return json.Marshal(objectMap)
}
@@ -10363,6 +10460,79 @@ type ServiceSpecification struct {
MetricSpecifications *[]MetricSpecification `json:"metricSpecifications,omitempty"`
}
+// MarshalJSON is the custom marshaler for ServiceSpecification.
+func (ss ServiceSpecification) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ return json.Marshal(objectMap)
+}
+
+// ServiceUpdate a Media Services account update.
+type ServiceUpdate struct {
+ // Tags - Resource tags.
+ Tags map[string]*string `json:"tags"`
+ // ServiceProperties - The resource properties.
+ *ServiceProperties `json:"properties,omitempty"`
+ // Identity - The Managed Identity for the Media Services account.
+ Identity *ServiceIdentity `json:"identity,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ServiceUpdate.
+func (su ServiceUpdate) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if su.Tags != nil {
+ objectMap["tags"] = su.Tags
+ }
+ if su.ServiceProperties != nil {
+ objectMap["properties"] = su.ServiceProperties
+ }
+ if su.Identity != nil {
+ objectMap["identity"] = su.Identity
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ServiceUpdate struct.
+func (su *ServiceUpdate) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "tags":
+ if v != nil {
+ var tags map[string]*string
+ err = json.Unmarshal(*v, &tags)
+ if err != nil {
+ return err
+ }
+ su.Tags = tags
+ }
+ case "properties":
+ if v != nil {
+ var serviceProperties ServiceProperties
+ err = json.Unmarshal(*v, &serviceProperties)
+ if err != nil {
+ return err
+ }
+ su.ServiceProperties = &serviceProperties
+ }
+ case "identity":
+ if v != nil {
+ var identity ServiceIdentity
+ err = json.Unmarshal(*v, &identity)
+ if err != nil {
+ return err
+ }
+ su.Identity = &identity
+ }
+ }
+ }
+
+ return nil
+}
+
// StandardEncoderPreset describes all the settings to be used when encoding the input video with the
// Standard Encoder.
type StandardEncoderPreset struct {
@@ -10372,13 +10542,13 @@ type StandardEncoderPreset struct {
Codecs *[]BasicCodec `json:"codecs,omitempty"`
// Formats - The list of outputs to be produced by the encoder.
Formats *[]BasicFormat `json:"formats,omitempty"`
- // OdataType - Possible values include: 'OdataTypePreset', 'OdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeMicrosoftMediaVideoAnalyzerPreset'
+ // OdataType - Possible values include: 'OdataTypeBasicPresetOdataTypePreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset'
OdataType OdataTypeBasicPreset `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for StandardEncoderPreset.
func (sep StandardEncoderPreset) MarshalJSON() ([]byte, error) {
- sep.OdataType = OdataTypeMicrosoftMediaStandardEncoderPreset
+ sep.OdataType = OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset
objectMap := make(map[string]interface{})
if sep.Filters != nil {
objectMap["filters"] = sep.Filters
@@ -10488,7 +10658,7 @@ func (sep *StandardEncoderPreset) UnmarshalJSON(body []byte) error {
type StorageAccount struct {
// ID - The ID of the storage account resource. Media Services relies on tables and queues as well as blobs, so the primary storage account must be a Standard Storage account (either Microsoft.ClassicStorage or Microsoft.Storage). Blob only storage accounts can be added as secondary storage accounts.
ID *string `json:"id,omitempty"`
- // Type - The type of the storage account. Possible values include: 'Primary', 'Secondary'
+ // Type - The type of the storage account. Possible values include: 'StorageAccountTypePrimary', 'StorageAccountTypeSecondary'
Type StorageAccountType `json:"type,omitempty"`
}
@@ -11764,13 +11934,13 @@ type SyncStorageKeysInput struct {
type SystemData struct {
// CreatedBy - The identity that created the resource.
CreatedBy *string `json:"createdBy,omitempty"`
- // CreatedByType - The type of identity that created the resource. Possible values include: 'User', 'Application', 'ManagedIdentity', 'Key'
+ // CreatedByType - The type of identity that created the resource. Possible values include: 'CreatedByTypeUser', 'CreatedByTypeApplication', 'CreatedByTypeManagedIdentity', 'CreatedByTypeKey'
CreatedByType CreatedByType `json:"createdByType,omitempty"`
// CreatedAt - The timestamp of resource creation (UTC).
CreatedAt *date.Time `json:"createdAt,omitempty"`
// LastModifiedBy - The identity that last modified the resource.
LastModifiedBy *string `json:"lastModifiedBy,omitempty"`
- // LastModifiedByType - The type of identity that last modified the resource. Possible values include: 'User', 'Application', 'ManagedIdentity', 'Key'
+ // LastModifiedByType - The type of identity that last modified the resource. Possible values include: 'CreatedByTypeUser', 'CreatedByTypeApplication', 'CreatedByTypeManagedIdentity', 'CreatedByTypeKey'
LastModifiedByType CreatedByType `json:"lastModifiedByType,omitempty"`
// LastModifiedAt - The timestamp of resource last modification (UTC)
LastModifiedAt *date.Time `json:"lastModifiedAt,omitempty"`
@@ -11793,7 +11963,7 @@ type BasicTrackDescriptor interface {
// TrackDescriptor base type for all TrackDescriptor types, which define the metadata and selection for tracks
// that should be processed by a Job
type TrackDescriptor struct {
- // OdataType - Possible values include: 'OdataTypeTrackDescriptor', 'OdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeMicrosoftMediaSelectVideoTrackByID'
+ // OdataType - Possible values include: 'OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID'
OdataType OdataTypeBasicTrackDescriptor `json:"@odata.type,omitempty"`
}
@@ -11805,27 +11975,27 @@ func unmarshalBasicTrackDescriptor(body []byte) (BasicTrackDescriptor, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaAudioTrackDescriptor):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor):
var atd AudioTrackDescriptor
err := json.Unmarshal(body, &atd)
return atd, err
- case string(OdataTypeMicrosoftMediaSelectAudioTrackByAttribute):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute):
var satba SelectAudioTrackByAttribute
err := json.Unmarshal(body, &satba)
return satba, err
- case string(OdataTypeMicrosoftMediaSelectAudioTrackByID):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID):
var satbi SelectAudioTrackByID
err := json.Unmarshal(body, &satbi)
return satbi, err
- case string(OdataTypeMicrosoftMediaVideoTrackDescriptor):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor):
var vtd VideoTrackDescriptor
err := json.Unmarshal(body, &vtd)
return vtd, err
- case string(OdataTypeMicrosoftMediaSelectVideoTrackByAttribute):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute):
var svtba SelectVideoTrackByAttribute
err := json.Unmarshal(body, &svtba)
return svtba, err
- case string(OdataTypeMicrosoftMediaSelectVideoTrackByID):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID):
var svtbi SelectVideoTrackByID
err := json.Unmarshal(body, &svtbi)
return svtbi, err
@@ -11856,7 +12026,7 @@ func unmarshalBasicTrackDescriptorArray(body []byte) ([]BasicTrackDescriptor, er
// MarshalJSON is the custom marshaler for TrackDescriptor.
func (td TrackDescriptor) MarshalJSON() ([]byte, error) {
- td.OdataType = OdataTypeTrackDescriptor
+ td.OdataType = OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor
objectMap := make(map[string]interface{})
if td.OdataType != "" {
objectMap["@odata.type"] = td.OdataType
@@ -12205,7 +12375,7 @@ func NewTransformCollectionPage(cur TransformCollection, getNextPage func(contex
// TransformOutput describes the properties of a TransformOutput, which are the rules to be applied while
// generating the desired output.
type TransformOutput struct {
- // OnError - A Transform can define more than one outputs. This property defines what the service should do when one output fails - either continue to produce other outputs, or, stop the other outputs. The overall Job state will not reflect failures of outputs that are specified with 'ContinueJob'. The default is 'StopProcessingJob'. Possible values include: 'StopProcessingJob', 'ContinueJob'
+ // OnError - A Transform can define more than one outputs. This property defines what the service should do when one output fails - either continue to produce other outputs, or, stop the other outputs. The overall Job state will not reflect failures of outputs that are specified with 'ContinueJob'. The default is 'StopProcessingJob'. Possible values include: 'OnErrorTypeStopProcessingJob', 'OnErrorTypeContinueJob'
OnError OnErrorType `json:"onError,omitempty"`
// RelativePriority - Sets the relative priority of the TransformOutputs within a Transform. This sets the priority that the service uses for processing TransformOutputs. The default priority is Normal. Possible values include: 'PriorityLow', 'PriorityNormal', 'PriorityHigh'
RelativePriority Priority `json:"relativePriority,omitempty"`
@@ -12285,13 +12455,13 @@ type TransportStreamFormat struct {
OutputFiles *[]OutputFile `json:"outputFiles,omitempty"`
// FilenamePattern - The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name(the file suffix is not included) of the input video file is less than 32 characters long, the base name of input video files will be used. If the length of base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
FilenamePattern *string `json:"filenamePattern,omitempty"`
- // OdataType - Possible values include: 'OdataTypeFormat', 'OdataTypeMicrosoftMediaImageFormat', 'OdataTypeMicrosoftMediaJpgFormat', 'OdataTypeMicrosoftMediaPngFormat', 'OdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeMicrosoftMediaMp4Format', 'OdataTypeMicrosoftMediaTransportStreamFormat'
+ // OdataType - Possible values include: 'OdataTypeBasicFormatOdataTypeFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaImageFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaJpgFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaPngFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMultiBitrateFormat', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaMp4Format', 'OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat'
OdataType OdataTypeBasicFormat `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for TransportStreamFormat.
func (tsf TransportStreamFormat) MarshalJSON() ([]byte, error) {
- tsf.OdataType = OdataTypeMicrosoftMediaTransportStreamFormat
+ tsf.OdataType = OdataTypeBasicFormatOdataTypeMicrosoftMediaTransportStreamFormat
objectMap := make(map[string]interface{})
if tsf.OutputFiles != nil {
objectMap["outputFiles"] = tsf.OutputFiles
@@ -12360,13 +12530,13 @@ func (tsf TransportStreamFormat) AsBasicFormat() (BasicFormat, bool) {
type UtcClipTime struct {
// Time - The time position on the timeline of the input media based on Utc time.
Time *date.Time `json:"time,omitempty"`
- // OdataType - Possible values include: 'OdataTypeClipTime', 'OdataTypeMicrosoftMediaAbsoluteClipTime', 'OdataTypeMicrosoftMediaUtcClipTime'
+ // OdataType - Possible values include: 'OdataTypeBasicClipTimeOdataTypeClipTime', 'OdataTypeBasicClipTimeOdataTypeMicrosoftMediaAbsoluteClipTime', 'OdataTypeBasicClipTimeOdataTypeMicrosoftMediaUtcClipTime'
OdataType OdataTypeBasicClipTime `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for UtcClipTime.
func (uct UtcClipTime) MarshalJSON() ([]byte, error) {
- uct.OdataType = OdataTypeMicrosoftMediaUtcClipTime
+ uct.OdataType = OdataTypeBasicClipTimeOdataTypeMicrosoftMediaUtcClipTime
objectMap := make(map[string]interface{})
if uct.Time != nil {
objectMap["time"] = uct.Time
@@ -12418,7 +12588,7 @@ type Video struct {
SyncMode VideoSyncMode `json:"syncMode,omitempty"`
// Label - An optional label for the codec. The label can be used to control muxing behavior.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeCodec', 'OdataTypeMicrosoftMediaAudio', 'OdataTypeMicrosoftMediaAacAudio', 'OdataTypeMicrosoftMediaVideo', 'OdataTypeMicrosoftMediaH265Video', 'OdataTypeMicrosoftMediaCopyVideo', 'OdataTypeMicrosoftMediaImage', 'OdataTypeMicrosoftMediaCopyAudio', 'OdataTypeMicrosoftMediaH264Video', 'OdataTypeMicrosoftMediaJpgImage', 'OdataTypeMicrosoftMediaPngImage'
+ // OdataType - Possible values include: 'OdataTypeBasicCodecOdataTypeCodec', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaAacAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyVideo', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaCopyAudio', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage', 'OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage'
OdataType OdataTypeBasicCodec `json:"@odata.type,omitempty"`
}
@@ -12430,23 +12600,23 @@ func unmarshalBasicVideo(body []byte) (BasicVideo, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaH265Video):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaH265Video):
var hv H265Video
err := json.Unmarshal(body, &hv)
return hv, err
- case string(OdataTypeMicrosoftMediaImage):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaImage):
var i Image
err := json.Unmarshal(body, &i)
return i, err
- case string(OdataTypeMicrosoftMediaH264Video):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaH264Video):
var hv H264Video
err := json.Unmarshal(body, &hv)
return hv, err
- case string(OdataTypeMicrosoftMediaJpgImage):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaJpgImage):
var ji JpgImage
err := json.Unmarshal(body, &ji)
return ji, err
- case string(OdataTypeMicrosoftMediaPngImage):
+ case string(OdataTypeBasicCodecOdataTypeMicrosoftMediaPngImage):
var pi PngImage
err := json.Unmarshal(body, &pi)
return pi, err
@@ -12477,7 +12647,7 @@ func unmarshalBasicVideoArray(body []byte) ([]BasicVideo, error) {
// MarshalJSON is the custom marshaler for Video.
func (vVar Video) MarshalJSON() ([]byte, error) {
- vVar.OdataType = OdataTypeMicrosoftMediaVideo
+ vVar.OdataType = OdataTypeBasicCodecOdataTypeMicrosoftMediaVideo
objectMap := make(map[string]interface{})
if vVar.KeyFrameInterval != nil {
objectMap["keyFrameInterval"] = vVar.KeyFrameInterval
@@ -12575,21 +12745,21 @@ func (vVar Video) AsBasicCodec() (BasicCodec, bool) {
// VideoAnalyzerPreset a video analyzer preset that extracts insights (rich metadata) from both audio and
// video, and outputs a JSON format file.
type VideoAnalyzerPreset struct {
- // InsightsToExtract - Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated. Similarly if the input is video only, then only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only; or use VideoInsightsOnly if you expect some of your inputs to be audio only. Your Jobs in such conditions would error out. Possible values include: 'AudioInsightsOnly', 'VideoInsightsOnly', 'AllInsights'
+ // InsightsToExtract - Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated. Similarly if the input is video only, then only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only; or use VideoInsightsOnly if you expect some of your inputs to be audio only. Your Jobs in such conditions would error out. Possible values include: 'InsightsTypeAudioInsightsOnly', 'InsightsTypeVideoInsightsOnly', 'InsightsTypeAllInsights'
InsightsToExtract InsightsType `json:"insightsToExtract,omitempty"`
// AudioLanguage - The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g: 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernable speech. If automatic detection fails to find the language, transcription would fallback to 'en-US'." The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
AudioLanguage *string `json:"audioLanguage,omitempty"`
- // Mode - Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen. Possible values include: 'Standard', 'Basic'
+ // Mode - Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen. Possible values include: 'AudioAnalysisModeStandard', 'AudioAnalysisModeBasic'
Mode AudioAnalysisMode `json:"mode,omitempty"`
// ExperimentalOptions - Dictionary containing key value pairs for parameters not exposed in the preset itself
ExperimentalOptions map[string]*string `json:"experimentalOptions"`
- // OdataType - Possible values include: 'OdataTypePreset', 'OdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeMicrosoftMediaVideoAnalyzerPreset'
+ // OdataType - Possible values include: 'OdataTypeBasicPresetOdataTypePreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaFaceDetectorPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaAudioAnalyzerPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaBuiltInStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaStandardEncoderPreset', 'OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset'
OdataType OdataTypeBasicPreset `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for VideoAnalyzerPreset.
func (vap VideoAnalyzerPreset) MarshalJSON() ([]byte, error) {
- vap.OdataType = OdataTypeMicrosoftMediaVideoAnalyzerPreset
+ vap.OdataType = OdataTypeBasicPresetOdataTypeMicrosoftMediaVideoAnalyzerPreset
objectMap := make(map[string]interface{})
if vap.InsightsToExtract != "" {
objectMap["insightsToExtract"] = vap.InsightsToExtract
@@ -12676,7 +12846,7 @@ type VideoLayer struct {
Height *string `json:"height,omitempty"`
// Label - The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
Label *string `json:"label,omitempty"`
- // OdataType - Possible values include: 'OdataTypeLayer', 'OdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeMicrosoftMediaH265Layer', 'OdataTypeMicrosoftMediaVideoLayer', 'OdataTypeMicrosoftMediaH264Layer', 'OdataTypeMicrosoftMediaJpgLayer', 'OdataTypeMicrosoftMediaPngLayer'
+ // OdataType - Possible values include: 'OdataTypeBasicLayerOdataTypeLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265VideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH265Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaJpgLayer', 'OdataTypeBasicLayerOdataTypeMicrosoftMediaPngLayer'
OdataType OdataTypeBasicLayer `json:"@odata.type,omitempty"`
}
@@ -12688,7 +12858,7 @@ func unmarshalBasicVideoLayer(body []byte) (BasicVideoLayer, error) {
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaH264Layer):
+ case string(OdataTypeBasicLayerOdataTypeMicrosoftMediaH264Layer):
var hl H264Layer
err := json.Unmarshal(body, &hl)
return hl, err
@@ -12719,7 +12889,7 @@ func unmarshalBasicVideoLayerArray(body []byte) ([]BasicVideoLayer, error) {
// MarshalJSON is the custom marshaler for VideoLayer.
func (vl VideoLayer) MarshalJSON() ([]byte, error) {
- vl.OdataType = OdataTypeMicrosoftMediaVideoLayer
+ vl.OdataType = OdataTypeBasicLayerOdataTypeMicrosoftMediaVideoLayer
objectMap := make(map[string]interface{})
if vl.Bitrate != nil {
objectMap["bitrate"] = vl.Bitrate
@@ -12824,13 +12994,13 @@ type VideoOverlay struct {
FadeOutDuration *string `json:"fadeOutDuration,omitempty"`
// AudioGainLevel - The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
AudioGainLevel *float64 `json:"audioGainLevel,omitempty"`
- // OdataType - Possible values include: 'OdataTypeOverlay', 'OdataTypeMicrosoftMediaAudioOverlay', 'OdataTypeMicrosoftMediaVideoOverlay'
+ // OdataType - Possible values include: 'OdataTypeBasicOverlayOdataTypeOverlay', 'OdataTypeBasicOverlayOdataTypeMicrosoftMediaAudioOverlay', 'OdataTypeBasicOverlayOdataTypeMicrosoftMediaVideoOverlay'
OdataType OdataTypeBasicOverlay `json:"@odata.type,omitempty"`
}
// MarshalJSON is the custom marshaler for VideoOverlay.
func (vo VideoOverlay) MarshalJSON() ([]byte, error) {
- vo.OdataType = OdataTypeMicrosoftMediaVideoOverlay
+ vo.OdataType = OdataTypeBasicOverlayOdataTypeMicrosoftMediaVideoOverlay
objectMap := make(map[string]interface{})
if vo.Position != nil {
objectMap["position"] = vo.Position
@@ -12894,7 +13064,7 @@ type BasicVideoTrackDescriptor interface {
// VideoTrackDescriptor a TrackSelection to select video tracks.
type VideoTrackDescriptor struct {
- // OdataType - Possible values include: 'OdataTypeTrackDescriptor', 'OdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeMicrosoftMediaSelectVideoTrackByID'
+ // OdataType - Possible values include: 'OdataTypeBasicTrackDescriptorOdataTypeTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaAudioTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectAudioTrackByID', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute', 'OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID'
OdataType OdataTypeBasicTrackDescriptor `json:"@odata.type,omitempty"`
}
@@ -12906,11 +13076,11 @@ func unmarshalBasicVideoTrackDescriptor(body []byte) (BasicVideoTrackDescriptor,
}
switch m["@odata.type"] {
- case string(OdataTypeMicrosoftMediaSelectVideoTrackByAttribute):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByAttribute):
var svtba SelectVideoTrackByAttribute
err := json.Unmarshal(body, &svtba)
return svtba, err
- case string(OdataTypeMicrosoftMediaSelectVideoTrackByID):
+ case string(OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaSelectVideoTrackByID):
var svtbi SelectVideoTrackByID
err := json.Unmarshal(body, &svtbi)
return svtbi, err
@@ -12941,7 +13111,7 @@ func unmarshalBasicVideoTrackDescriptorArray(body []byte) ([]BasicVideoTrackDesc
// MarshalJSON is the custom marshaler for VideoTrackDescriptor.
func (vtd VideoTrackDescriptor) MarshalJSON() ([]byte, error) {
- vtd.OdataType = OdataTypeMicrosoftMediaVideoTrackDescriptor
+ vtd.OdataType = OdataTypeBasicTrackDescriptorOdataTypeMicrosoftMediaVideoTrackDescriptor
objectMap := make(map[string]interface{})
if vtd.OdataType != "" {
objectMap["@odata.type"] = vtd.OdataType
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/operations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/operations.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/operations.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/operations.go
index 8113b06cdf17d..a4eaa54b555d4 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/operations.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/operations.go
@@ -71,7 +71,7 @@ func (client OperationsClient) List(ctx context.Context) (result OperationCollec
// ListPreparer prepares the List request.
func (client OperationsClient) ListPreparer(ctx context.Context) (*http.Request, error) {
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/privateendpointconnections.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/privateendpointconnections.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/privateendpointconnections.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/privateendpointconnections.go
index a5fc53ce51f0d..bd24cbb4b5e15 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/privateendpointconnections.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/privateendpointconnections.go
@@ -86,7 +86,7 @@ func (client PrivateEndpointConnectionsClient) CreateOrUpdatePreparer(ctx contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -165,7 +165,7 @@ func (client PrivateEndpointConnectionsClient) DeletePreparer(ctx context.Contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -241,7 +241,7 @@ func (client PrivateEndpointConnectionsClient) GetPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -317,7 +317,7 @@ func (client PrivateEndpointConnectionsClient) ListPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/privatelinkresources.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/privatelinkresources.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/privatelinkresources.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/privatelinkresources.go
index 67d42683f8e91..f3eadf8154a85 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/privatelinkresources.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/privatelinkresources.go
@@ -77,7 +77,7 @@ func (client PrivateLinkResourcesClient) GetPreparer(ctx context.Context, resour
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -153,7 +153,7 @@ func (client PrivateLinkResourcesClient) ListPreparer(ctx context.Context, resou
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-05-01"
+ const APIVersion = "2021-05-01"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/streamingendpoints.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/streamingendpoints.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/streamingendpoints.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/streamingendpoints.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/streaminglocators.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/streaminglocators.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/streaminglocators.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/streaminglocators.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/streamingpolicies.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/streamingpolicies.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/streamingpolicies.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/streamingpolicies.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/transforms.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/transforms.go
similarity index 100%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/transforms.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/transforms.go
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/version.go
similarity index 90%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/version.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/version.go
index 8f8718563033e..20e40a08c9c4a 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2020-05-01/media/version.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/mediaservices/mgmt/2021-05-01/media/version.go
@@ -10,7 +10,7 @@ import "github.com/Azure/azure-sdk-for-go/version"
// UserAgent returns the UserAgent string to use when sending http.Requests.
func UserAgent() string {
- return "Azure-SDK-For-Go/" + Version() + " media/2020-05-01"
+ return "Azure-SDK-For-Go/" + Version() + " media/2021-05-01"
}
// Version returns the semantic version (see http://semver.org) of the client.
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/CHANGELOG.md b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/CHANGELOG.md
new file mode 100644
index 0000000000000..52911e4cc5e4c
--- /dev/null
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/CHANGELOG.md
@@ -0,0 +1,2 @@
+# Change History
+
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/_meta.json b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/_meta.json
similarity index 79%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/_meta.json
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/_meta.json
index f7d29d3dab1c0..ce929e3416808 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/_meta.json
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/_meta.json
@@ -1,10 +1,10 @@
{
- "commit": "3c764635e7d442b3e74caf593029fcd440b3ef82",
+ "commit": "8240593bde5350e6762015523ccd57cb61e32da5",
"readme": "/_/azure-rest-api-specs/specification/eventgrid/resource-manager/readme.md",
- "tag": "package-2020-04-preview",
+ "tag": "package-2020-10-preview",
"use": "@microsoft.azure/autorest.go@2.1.180",
"repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
- "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2020-04-preview --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/eventgrid/resource-manager/readme.md",
+ "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.180 --tag=package-2020-10-preview --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/eventgrid/resource-manager/readme.md",
"additional_properties": {
"additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/client.go
similarity index 97%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/client.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/client.go
index 0ec013ec31a4b..76595b74cd409 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/client.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/client.go
@@ -1,4 +1,4 @@
-// Package eventgrid implements the Azure ARM Eventgrid service API version 2020-04-01-preview.
+// Package eventgrid implements the Azure ARM Eventgrid service API version 2020-10-15-preview.
//
// Azure EventGrid Management Client
package eventgrid
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/domains.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/domains.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/domains.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/domains.go
index dbc243e602af9..bea954bd5757c 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/domains.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/domains.go
@@ -70,7 +70,7 @@ func (client DomainsClient) CreateOrUpdatePreparer(ctx context.Context, resource
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -150,7 +150,7 @@ func (client DomainsClient) DeletePreparer(ctx context.Context, resourceGroupNam
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -234,7 +234,7 @@ func (client DomainsClient) GetPreparer(ctx context.Context, resourceGroupName s
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -321,7 +321,7 @@ func (client DomainsClient) ListByResourceGroupPreparer(ctx context.Context, res
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -449,7 +449,7 @@ func (client DomainsClient) ListBySubscriptionPreparer(ctx context.Context, filt
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -568,7 +568,7 @@ func (client DomainsClient) ListSharedAccessKeysPreparer(ctx context.Context, re
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -651,7 +651,7 @@ func (client DomainsClient) RegenerateKeyPreparer(ctx context.Context, resourceG
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -723,7 +723,7 @@ func (client DomainsClient) UpdatePreparer(ctx context.Context, resourceGroupNam
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/domaintopics.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/domaintopics.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/domaintopics.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/domaintopics.go
index eac68e508f172..1185329bfbba7 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/domaintopics.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/domaintopics.go
@@ -70,7 +70,7 @@ func (client DomainTopicsClient) CreateOrUpdatePreparer(ctx context.Context, res
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -150,7 +150,7 @@ func (client DomainTopicsClient) DeletePreparer(ctx context.Context, resourceGro
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -236,7 +236,7 @@ func (client DomainTopicsClient) GetPreparer(ctx context.Context, resourceGroupN
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -325,7 +325,7 @@ func (client DomainTopicsClient) ListByDomainPreparer(ctx context.Context, resou
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/enums.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/enums.go
similarity index 90%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/enums.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/enums.go
index f447a1aa69677..1e4f765d43372 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/enums.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/enums.go
@@ -6,6 +6,25 @@ package eventgrid
// Code generated by Microsoft (R) AutoRest Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+// CreatedByType enumerates the values for created by type.
+type CreatedByType string
+
+const (
+ // Application ...
+ Application CreatedByType = "Application"
+ // Key ...
+ Key CreatedByType = "Key"
+ // ManagedIdentity ...
+ ManagedIdentity CreatedByType = "ManagedIdentity"
+ // User ...
+ User CreatedByType = "User"
+)
+
+// PossibleCreatedByTypeValues returns an array of possible values for the CreatedByType const type.
+func PossibleCreatedByTypeValues() []CreatedByType {
+ return []CreatedByType{Application, Key, ManagedIdentity, User}
+}
+
// DomainProvisioningState enumerates the values for domain provisioning state.
type DomainProvisioningState string
@@ -247,18 +266,26 @@ const (
OperatorTypeAdvancedFilter OperatorType = "AdvancedFilter"
// OperatorTypeBoolEquals ...
OperatorTypeBoolEquals OperatorType = "BoolEquals"
+ // OperatorTypeIsNotNull ...
+ OperatorTypeIsNotNull OperatorType = "IsNotNull"
+ // OperatorTypeIsNullOrUndefined ...
+ OperatorTypeIsNullOrUndefined OperatorType = "IsNullOrUndefined"
// OperatorTypeNumberGreaterThan ...
OperatorTypeNumberGreaterThan OperatorType = "NumberGreaterThan"
// OperatorTypeNumberGreaterThanOrEquals ...
OperatorTypeNumberGreaterThanOrEquals OperatorType = "NumberGreaterThanOrEquals"
// OperatorTypeNumberIn ...
OperatorTypeNumberIn OperatorType = "NumberIn"
+ // OperatorTypeNumberInRange ...
+ OperatorTypeNumberInRange OperatorType = "NumberInRange"
// OperatorTypeNumberLessThan ...
OperatorTypeNumberLessThan OperatorType = "NumberLessThan"
// OperatorTypeNumberLessThanOrEquals ...
OperatorTypeNumberLessThanOrEquals OperatorType = "NumberLessThanOrEquals"
// OperatorTypeNumberNotIn ...
OperatorTypeNumberNotIn OperatorType = "NumberNotIn"
+ // OperatorTypeNumberNotInRange ...
+ OperatorTypeNumberNotInRange OperatorType = "NumberNotInRange"
// OperatorTypeStringBeginsWith ...
OperatorTypeStringBeginsWith OperatorType = "StringBeginsWith"
// OperatorTypeStringContains ...
@@ -267,13 +294,19 @@ const (
OperatorTypeStringEndsWith OperatorType = "StringEndsWith"
// OperatorTypeStringIn ...
OperatorTypeStringIn OperatorType = "StringIn"
+ // OperatorTypeStringNotBeginsWith ...
+ OperatorTypeStringNotBeginsWith OperatorType = "StringNotBeginsWith"
+ // OperatorTypeStringNotContains ...
+ OperatorTypeStringNotContains OperatorType = "StringNotContains"
+ // OperatorTypeStringNotEndsWith ...
+ OperatorTypeStringNotEndsWith OperatorType = "StringNotEndsWith"
// OperatorTypeStringNotIn ...
OperatorTypeStringNotIn OperatorType = "StringNotIn"
)
// PossibleOperatorTypeValues returns an array of possible values for the OperatorType const type.
func PossibleOperatorTypeValues() []OperatorType {
- return []OperatorType{OperatorTypeAdvancedFilter, OperatorTypeBoolEquals, OperatorTypeNumberGreaterThan, OperatorTypeNumberGreaterThanOrEquals, OperatorTypeNumberIn, OperatorTypeNumberLessThan, OperatorTypeNumberLessThanOrEquals, OperatorTypeNumberNotIn, OperatorTypeStringBeginsWith, OperatorTypeStringContains, OperatorTypeStringEndsWith, OperatorTypeStringIn, OperatorTypeStringNotIn}
+ return []OperatorType{OperatorTypeAdvancedFilter, OperatorTypeBoolEquals, OperatorTypeIsNotNull, OperatorTypeIsNullOrUndefined, OperatorTypeNumberGreaterThan, OperatorTypeNumberGreaterThanOrEquals, OperatorTypeNumberIn, OperatorTypeNumberInRange, OperatorTypeNumberLessThan, OperatorTypeNumberLessThanOrEquals, OperatorTypeNumberNotIn, OperatorTypeNumberNotInRange, OperatorTypeStringBeginsWith, OperatorTypeStringContains, OperatorTypeStringEndsWith, OperatorTypeStringIn, OperatorTypeStringNotBeginsWith, OperatorTypeStringNotContains, OperatorTypeStringNotEndsWith, OperatorTypeStringNotIn}
}
// PartnerNamespaceProvisioningState enumerates the values for partner namespace provisioning state.
@@ -449,6 +482,21 @@ func PossiblePublicNetworkAccessValues() []PublicNetworkAccess {
return []PublicNetworkAccess{Disabled, Enabled}
}
+// ResourceKind enumerates the values for resource kind.
+type ResourceKind string
+
+const (
+ // Azure ...
+ Azure ResourceKind = "Azure"
+ // AzureArc ...
+ AzureArc ResourceKind = "AzureArc"
+)
+
+// PossibleResourceKindValues returns an array of possible values for the ResourceKind const type.
+func PossibleResourceKindValues() []ResourceKind {
+ return []ResourceKind{Azure, AzureArc}
+}
+
// ResourceProvisioningState enumerates the values for resource provisioning state.
type ResourceProvisioningState string
@@ -547,3 +595,20 @@ const (
func PossibleTopicTypeProvisioningStateValues() []TopicTypeProvisioningState {
return []TopicTypeProvisioningState{TopicTypeProvisioningStateCanceled, TopicTypeProvisioningStateCreating, TopicTypeProvisioningStateDeleting, TopicTypeProvisioningStateFailed, TopicTypeProvisioningStateSucceeded, TopicTypeProvisioningStateUpdating}
}
+
+// Type enumerates the values for type.
+type Type string
+
+const (
+ // TypeDeliveryAttributeMapping ...
+ TypeDeliveryAttributeMapping Type = "DeliveryAttributeMapping"
+ // TypeDynamic ...
+ TypeDynamic Type = "Dynamic"
+ // TypeStatic ...
+ TypeStatic Type = "Static"
+)
+
+// PossibleTypeValues returns an array of possible values for the Type const type.
+func PossibleTypeValues() []Type {
+ return []Type{TypeDeliveryAttributeMapping, TypeDynamic, TypeStatic}
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/eventchannels.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/eventchannels.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/eventchannels.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/eventchannels.go
index 3f93024a69b5b..176cefaf47930 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/eventchannels.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/eventchannels.go
@@ -78,11 +78,12 @@ func (client EventChannelsClient) CreateOrUpdatePreparer(ctx context.Context, re
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
+ eventChannelInfo.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
@@ -151,7 +152,7 @@ func (client EventChannelsClient) DeletePreparer(ctx context.Context, resourceGr
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -237,7 +238,7 @@ func (client EventChannelsClient) GetPreparer(ctx context.Context, resourceGroup
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -326,7 +327,7 @@ func (client EventChannelsClient) ListByPartnerNamespacePreparer(ctx context.Con
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/eventsubscriptions.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/eventsubscriptions.go
similarity index 94%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/eventsubscriptions.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/eventsubscriptions.go
index 8f2604ef1ac54..6652f31d3370b 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/eventsubscriptions.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/eventsubscriptions.go
@@ -78,11 +78,12 @@ func (client EventSubscriptionsClient) CreateOrUpdatePreparer(ctx context.Contex
"scope": scope,
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
+ eventSubscriptionInfo.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
@@ -164,7 +165,7 @@ func (client EventSubscriptionsClient) DeletePreparer(ctx context.Context, scope
"scope": scope,
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -254,7 +255,7 @@ func (client EventSubscriptionsClient) GetPreparer(ctx context.Context, scope st
"scope": scope,
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -285,6 +286,88 @@ func (client EventSubscriptionsClient) GetResponder(resp *http.Response) (result
return
}
+// GetDeliveryAttributes get all delivery attributes for an event subscription.
+// Parameters:
+// scope - the scope of the event subscription. The scope can be a subscription, or a resource group, or a top
+// level resource belonging to a resource provider namespace, or an EventGrid topic. For example, use
+// '/subscriptions/{subscriptionId}/' for a subscription,
+// '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for a resource group, and
+// '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}'
+// for a resource, and
+// '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}'
+// for an EventGrid topic.
+// eventSubscriptionName - name of the event subscription.
+func (client EventSubscriptionsClient) GetDeliveryAttributes(ctx context.Context, scope string, eventSubscriptionName string) (result DeliveryAttributeListResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/EventSubscriptionsClient.GetDeliveryAttributes")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetDeliveryAttributesPreparer(ctx, scope, eventSubscriptionName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "eventgrid.EventSubscriptionsClient", "GetDeliveryAttributes", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetDeliveryAttributesSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "eventgrid.EventSubscriptionsClient", "GetDeliveryAttributes", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetDeliveryAttributesResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "eventgrid.EventSubscriptionsClient", "GetDeliveryAttributes", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetDeliveryAttributesPreparer prepares the GetDeliveryAttributes request.
+func (client EventSubscriptionsClient) GetDeliveryAttributesPreparer(ctx context.Context, scope string, eventSubscriptionName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "eventSubscriptionName": autorest.Encode("path", eventSubscriptionName),
+ "scope": scope,
+ }
+
+ const APIVersion = "2020-10-15-preview"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsPost(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/{scope}/providers/Microsoft.EventGrid/eventSubscriptions/{eventSubscriptionName}/getDeliveryAttributes", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetDeliveryAttributesSender sends the GetDeliveryAttributes request. The method will close the
+// http.Response Body if it receives an error.
+func (client EventSubscriptionsClient) GetDeliveryAttributesSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+}
+
+// GetDeliveryAttributesResponder handles the response to the GetDeliveryAttributes request. The method always
+// closes the http.Response Body.
+func (client EventSubscriptionsClient) GetDeliveryAttributesResponder(resp *http.Response) (result DeliveryAttributeListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
// GetFullURL get the full endpoint URL for an event subscription.
// Parameters:
// scope - the scope of the event subscription. The scope can be a subscription, or a resource group, or a top
@@ -336,7 +419,7 @@ func (client EventSubscriptionsClient) GetFullURLPreparer(ctx context.Context, s
"scope": scope,
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -427,7 +510,7 @@ func (client EventSubscriptionsClient) ListByDomainTopicPreparer(ctx context.Con
"topicName": autorest.Encode("path", topicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -563,7 +646,7 @@ func (client EventSubscriptionsClient) ListByResourcePreparer(ctx context.Contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -694,7 +777,7 @@ func (client EventSubscriptionsClient) ListGlobalByResourceGroupPreparer(ctx con
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -827,7 +910,7 @@ func (client EventSubscriptionsClient) ListGlobalByResourceGroupForTopicTypePrep
"topicTypeName": autorest.Encode("path", topicTypeName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -955,7 +1038,7 @@ func (client EventSubscriptionsClient) ListGlobalBySubscriptionPreparer(ctx cont
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1086,7 +1169,7 @@ func (client EventSubscriptionsClient) ListGlobalBySubscriptionForTopicTypePrepa
"topicTypeName": autorest.Encode("path", topicTypeName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1219,7 +1302,7 @@ func (client EventSubscriptionsClient) ListRegionalByResourceGroupPreparer(ctx c
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1354,7 +1437,7 @@ func (client EventSubscriptionsClient) ListRegionalByResourceGroupForTopicTypePr
"topicTypeName": autorest.Encode("path", topicTypeName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1484,7 +1567,7 @@ func (client EventSubscriptionsClient) ListRegionalBySubscriptionPreparer(ctx co
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1617,7 +1700,7 @@ func (client EventSubscriptionsClient) ListRegionalBySubscriptionForTopicTypePre
"topicTypeName": autorest.Encode("path", topicTypeName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -1736,7 +1819,7 @@ func (client EventSubscriptionsClient) UpdatePreparer(ctx context.Context, scope
"scope": scope,
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/extensiontopics.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/extensiontopics.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/extensiontopics.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/extensiontopics.go
index 49835c1b3467c..437637c7044b2 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/extensiontopics.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/extensiontopics.go
@@ -77,7 +77,7 @@ func (client ExtensionTopicsClient) GetPreparer(ctx context.Context, scope strin
"scope": autorest.Encode("path", scope),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/models.go
similarity index 75%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/models.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/models.go
index 23eeb34c5a9d3..8f0edf116f0bf 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/models.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/models.go
@@ -18,7 +18,7 @@ import (
)
// The package's fully qualified name.
-const fqdn = "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid"
+const fqdn = "github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid"
// BasicAdvancedFilter this is the base type that represents an advanced filter. To configure an advanced filter, do
// not directly instantiate an object of this class. Instead, instantiate an object of a derived class such as
@@ -37,6 +37,13 @@ type BasicAdvancedFilter interface {
AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool)
AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool)
AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool)
+ AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool)
+ AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool)
+ AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool)
+ AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool)
+ AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool)
+ AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool)
+ AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool)
AsAdvancedFilter() (*AdvancedFilter, bool)
}
@@ -47,7 +54,7 @@ type BasicAdvancedFilter interface {
type AdvancedFilter struct {
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -107,6 +114,34 @@ func unmarshalBasicAdvancedFilter(body []byte) (BasicAdvancedFilter, error) {
var scaf StringContainsAdvancedFilter
err := json.Unmarshal(body, &scaf)
return scaf, err
+ case string(OperatorTypeNumberInRange):
+ var niraf NumberInRangeAdvancedFilter
+ err := json.Unmarshal(body, &niraf)
+ return niraf, err
+ case string(OperatorTypeNumberNotInRange):
+ var nniraf NumberNotInRangeAdvancedFilter
+ err := json.Unmarshal(body, &nniraf)
+ return nniraf, err
+ case string(OperatorTypeStringNotBeginsWith):
+ var snbwaf StringNotBeginsWithAdvancedFilter
+ err := json.Unmarshal(body, &snbwaf)
+ return snbwaf, err
+ case string(OperatorTypeStringNotEndsWith):
+ var snewaf StringNotEndsWithAdvancedFilter
+ err := json.Unmarshal(body, &snewaf)
+ return snewaf, err
+ case string(OperatorTypeStringNotContains):
+ var sncaf StringNotContainsAdvancedFilter
+ err := json.Unmarshal(body, &sncaf)
+ return sncaf, err
+ case string(OperatorTypeIsNullOrUndefined):
+ var inouaf IsNullOrUndefinedAdvancedFilter
+ err := json.Unmarshal(body, &inouaf)
+ return inouaf, err
+ case string(OperatorTypeIsNotNull):
+ var innaf IsNotNullAdvancedFilter
+ err := json.Unmarshal(body, &innaf)
+ return innaf, err
default:
var af AdvancedFilter
err := json.Unmarshal(body, &af)
@@ -205,6 +240,41 @@ func (af AdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvanc
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for AdvancedFilter.
+func (af AdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for AdvancedFilter.
+func (af AdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for AdvancedFilter.
+func (af AdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for AdvancedFilter.
+func (af AdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for AdvancedFilter.
+func (af AdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for AdvancedFilter.
+func (af AdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for AdvancedFilter.
+func (af AdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for AdvancedFilter.
func (af AdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return &af, true
@@ -324,6 +394,58 @@ type AzureFunctionEventSubscriptionDestinationProperties struct {
MaxEventsPerBatch *int32 `json:"maxEventsPerBatch,omitempty"`
// PreferredBatchSizeInKilobytes - Preferred batch size in Kilobytes.
PreferredBatchSizeInKilobytes *int32 `json:"preferredBatchSizeInKilobytes,omitempty"`
+ // DeliveryAttributeMappings - Delivery attribute details.
+ DeliveryAttributeMappings *[]BasicDeliveryAttributeMapping `json:"deliveryAttributeMappings,omitempty"`
+}
+
+// UnmarshalJSON is the custom unmarshaler for AzureFunctionEventSubscriptionDestinationProperties struct.
+func (afesdp *AzureFunctionEventSubscriptionDestinationProperties) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "resourceId":
+ if v != nil {
+ var resourceID string
+ err = json.Unmarshal(*v, &resourceID)
+ if err != nil {
+ return err
+ }
+ afesdp.ResourceID = &resourceID
+ }
+ case "maxEventsPerBatch":
+ if v != nil {
+ var maxEventsPerBatch int32
+ err = json.Unmarshal(*v, &maxEventsPerBatch)
+ if err != nil {
+ return err
+ }
+ afesdp.MaxEventsPerBatch = &maxEventsPerBatch
+ }
+ case "preferredBatchSizeInKilobytes":
+ if v != nil {
+ var preferredBatchSizeInKilobytes int32
+ err = json.Unmarshal(*v, &preferredBatchSizeInKilobytes)
+ if err != nil {
+ return err
+ }
+ afesdp.PreferredBatchSizeInKilobytes = &preferredBatchSizeInKilobytes
+ }
+ case "deliveryAttributeMappings":
+ if v != nil {
+ deliveryAttributeMappings, err := unmarshalBasicDeliveryAttributeMappingArray(*v)
+ if err != nil {
+ return err
+ }
+ afesdp.DeliveryAttributeMappings = &deliveryAttributeMappings
+ }
+ }
+ }
+
+ return nil
}
// BoolEqualsAdvancedFilter boolEquals Advanced Filter.
@@ -332,7 +454,7 @@ type BoolEqualsAdvancedFilter struct {
Value *bool `json:"value,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -412,6 +534,41 @@ func (beaf BoolEqualsAdvancedFilter) AsStringContainsAdvancedFilter() (*StringCo
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for BoolEqualsAdvancedFilter.
+func (beaf BoolEqualsAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for BoolEqualsAdvancedFilter.
+func (beaf BoolEqualsAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for BoolEqualsAdvancedFilter.
+func (beaf BoolEqualsAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for BoolEqualsAdvancedFilter.
+func (beaf BoolEqualsAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for BoolEqualsAdvancedFilter.
+func (beaf BoolEqualsAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for BoolEqualsAdvancedFilter.
+func (beaf BoolEqualsAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for BoolEqualsAdvancedFilter.
+func (beaf BoolEqualsAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for BoolEqualsAdvancedFilter.
func (beaf BoolEqualsAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -552,6 +709,125 @@ func (dlwri *DeadLetterWithResourceIdentity) UnmarshalJSON(body []byte) error {
return nil
}
+// DeliveryAttributeListResult result of the Get delivery attributes operation.
+type DeliveryAttributeListResult struct {
+ autorest.Response `json:"-"`
+ // Value - A collection of DeliveryAttributeMapping
+ Value *[]BasicDeliveryAttributeMapping `json:"value,omitempty"`
+}
+
+// UnmarshalJSON is the custom unmarshaler for DeliveryAttributeListResult struct.
+func (dalr *DeliveryAttributeListResult) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "value":
+ if v != nil {
+ value, err := unmarshalBasicDeliveryAttributeMappingArray(*v)
+ if err != nil {
+ return err
+ }
+ dalr.Value = &value
+ }
+ }
+ }
+
+ return nil
+}
+
+// BasicDeliveryAttributeMapping delivery attribute mapping details.
+type BasicDeliveryAttributeMapping interface {
+ AsStaticDeliveryAttributeMapping() (*StaticDeliveryAttributeMapping, bool)
+ AsDynamicDeliveryAttributeMapping() (*DynamicDeliveryAttributeMapping, bool)
+ AsDeliveryAttributeMapping() (*DeliveryAttributeMapping, bool)
+}
+
+// DeliveryAttributeMapping delivery attribute mapping details.
+type DeliveryAttributeMapping struct {
+ // Name - Name of the delivery attribute or header.
+ Name *string `json:"name,omitempty"`
+ // Type - Possible values include: 'TypeDeliveryAttributeMapping', 'TypeStatic', 'TypeDynamic'
+ Type Type `json:"type,omitempty"`
+}
+
+func unmarshalBasicDeliveryAttributeMapping(body []byte) (BasicDeliveryAttributeMapping, error) {
+ var m map[string]interface{}
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return nil, err
+ }
+
+ switch m["type"] {
+ case string(TypeStatic):
+ var sdam StaticDeliveryAttributeMapping
+ err := json.Unmarshal(body, &sdam)
+ return sdam, err
+ case string(TypeDynamic):
+ var ddam DynamicDeliveryAttributeMapping
+ err := json.Unmarshal(body, &ddam)
+ return ddam, err
+ default:
+ var dam DeliveryAttributeMapping
+ err := json.Unmarshal(body, &dam)
+ return dam, err
+ }
+}
+func unmarshalBasicDeliveryAttributeMappingArray(body []byte) ([]BasicDeliveryAttributeMapping, error) {
+ var rawMessages []*json.RawMessage
+ err := json.Unmarshal(body, &rawMessages)
+ if err != nil {
+ return nil, err
+ }
+
+ damArray := make([]BasicDeliveryAttributeMapping, len(rawMessages))
+
+ for index, rawMessage := range rawMessages {
+ dam, err := unmarshalBasicDeliveryAttributeMapping(*rawMessage)
+ if err != nil {
+ return nil, err
+ }
+ damArray[index] = dam
+ }
+ return damArray, nil
+}
+
+// MarshalJSON is the custom marshaler for DeliveryAttributeMapping.
+func (dam DeliveryAttributeMapping) MarshalJSON() ([]byte, error) {
+ dam.Type = TypeDeliveryAttributeMapping
+ objectMap := make(map[string]interface{})
+ if dam.Name != nil {
+ objectMap["name"] = dam.Name
+ }
+ if dam.Type != "" {
+ objectMap["type"] = dam.Type
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsStaticDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for DeliveryAttributeMapping.
+func (dam DeliveryAttributeMapping) AsStaticDeliveryAttributeMapping() (*StaticDeliveryAttributeMapping, bool) {
+ return nil, false
+}
+
+// AsDynamicDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for DeliveryAttributeMapping.
+func (dam DeliveryAttributeMapping) AsDynamicDeliveryAttributeMapping() (*DynamicDeliveryAttributeMapping, bool) {
+ return nil, false
+}
+
+// AsDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for DeliveryAttributeMapping.
+func (dam DeliveryAttributeMapping) AsDeliveryAttributeMapping() (*DeliveryAttributeMapping, bool) {
+ return &dam, true
+}
+
+// AsBasicDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for DeliveryAttributeMapping.
+func (dam DeliveryAttributeMapping) AsBasicDeliveryAttributeMapping() (BasicDeliveryAttributeMapping, bool) {
+ return &dam, true
+}
+
// DeliveryWithResourceIdentity information about the delivery for an event subscription with resource
// identity.
type DeliveryWithResourceIdentity struct {
@@ -609,9 +885,9 @@ type Domain struct {
Tags map[string]*string `json:"tags"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -1153,9 +1429,9 @@ type DomainTopic struct {
*DomainTopicProperties `json:"properties,omitempty"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -1554,16 +1830,112 @@ func (dup *DomainUpdateParameters) UnmarshalJSON(body []byte) error {
return nil
}
+// DynamicDeliveryAttributeMapping dynamic delivery attribute mapping details.
+type DynamicDeliveryAttributeMapping struct {
+ // DynamicDeliveryAttributeMappingProperties - Properties of dynamic delivery attribute mapping.
+ *DynamicDeliveryAttributeMappingProperties `json:"properties,omitempty"`
+ // Name - Name of the delivery attribute or header.
+ Name *string `json:"name,omitempty"`
+ // Type - Possible values include: 'TypeDeliveryAttributeMapping', 'TypeStatic', 'TypeDynamic'
+ Type Type `json:"type,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for DynamicDeliveryAttributeMapping.
+func (ddam DynamicDeliveryAttributeMapping) MarshalJSON() ([]byte, error) {
+ ddam.Type = TypeDynamic
+ objectMap := make(map[string]interface{})
+ if ddam.DynamicDeliveryAttributeMappingProperties != nil {
+ objectMap["properties"] = ddam.DynamicDeliveryAttributeMappingProperties
+ }
+ if ddam.Name != nil {
+ objectMap["name"] = ddam.Name
+ }
+ if ddam.Type != "" {
+ objectMap["type"] = ddam.Type
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsStaticDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for DynamicDeliveryAttributeMapping.
+func (ddam DynamicDeliveryAttributeMapping) AsStaticDeliveryAttributeMapping() (*StaticDeliveryAttributeMapping, bool) {
+ return nil, false
+}
+
+// AsDynamicDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for DynamicDeliveryAttributeMapping.
+func (ddam DynamicDeliveryAttributeMapping) AsDynamicDeliveryAttributeMapping() (*DynamicDeliveryAttributeMapping, bool) {
+ return &ddam, true
+}
+
+// AsDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for DynamicDeliveryAttributeMapping.
+func (ddam DynamicDeliveryAttributeMapping) AsDeliveryAttributeMapping() (*DeliveryAttributeMapping, bool) {
+ return nil, false
+}
+
+// AsBasicDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for DynamicDeliveryAttributeMapping.
+func (ddam DynamicDeliveryAttributeMapping) AsBasicDeliveryAttributeMapping() (BasicDeliveryAttributeMapping, bool) {
+ return &ddam, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for DynamicDeliveryAttributeMapping struct.
+func (ddam *DynamicDeliveryAttributeMapping) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var dynamicDeliveryAttributeMappingProperties DynamicDeliveryAttributeMappingProperties
+ err = json.Unmarshal(*v, &dynamicDeliveryAttributeMappingProperties)
+ if err != nil {
+ return err
+ }
+ ddam.DynamicDeliveryAttributeMappingProperties = &dynamicDeliveryAttributeMappingProperties
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ ddam.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar Type
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ ddam.Type = typeVar
+ }
+ }
+ }
+
+ return nil
+}
+
+// DynamicDeliveryAttributeMappingProperties properties of dynamic delivery attribute mapping.
+type DynamicDeliveryAttributeMappingProperties struct {
+ // SourceField - JSON path in the event which contains attribute value.
+ SourceField *string `json:"sourceField,omitempty"`
+}
+
// EventChannel event Channel.
type EventChannel struct {
autorest.Response `json:"-"`
// EventChannelProperties - Properties of the EventChannel.
*EventChannelProperties `json:"properties,omitempty"`
+ // SystemData - READ-ONLY; The system metadata relating to this resource.
+ SystemData *SystemData `json:"systemData,omitempty"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -1594,6 +1966,15 @@ func (ec *EventChannel) UnmarshalJSON(body []byte) error {
}
ec.EventChannelProperties = &eventChannelProperties
}
+ case "systemData":
+ if v != nil {
+ var systemData SystemData
+ err = json.Unmarshal(*v, &systemData)
+ if err != nil {
+ return err
+ }
+ ec.SystemData = &systemData
+ }
case "id":
if v != nil {
var ID string
@@ -1641,6 +2022,8 @@ type EventChannelDestination struct {
// EventChannelFilter filter for the Event Channel.
type EventChannelFilter struct {
+ // EnableAdvancedFilteringOnArrays - Allows advanced filters to be evaluated against an array of values instead of expecting a singular value.
+ EnableAdvancedFilteringOnArrays *bool `json:"enableAdvancedFilteringOnArrays,omitempty"`
// AdvancedFilters - An array of advanced filters that are used for filtering event channels.
AdvancedFilters *[]BasicAdvancedFilter `json:"advancedFilters,omitempty"`
}
@@ -1654,6 +2037,15 @@ func (ecf *EventChannelFilter) UnmarshalJSON(body []byte) error {
}
for k, v := range m {
switch k {
+ case "enableAdvancedFilteringOnArrays":
+ if v != nil {
+ var enableAdvancedFilteringOnArrays bool
+ err = json.Unmarshal(*v, &enableAdvancedFilteringOnArrays)
+ if err != nil {
+ return err
+ }
+ ecf.EnableAdvancedFilteringOnArrays = &enableAdvancedFilteringOnArrays
+ }
case "advancedFilters":
if v != nil {
advancedFilters, err := unmarshalBasicAdvancedFilterArray(*v)
@@ -2016,6 +2408,40 @@ func (ehesd *EventHubEventSubscriptionDestination) UnmarshalJSON(body []byte) er
type EventHubEventSubscriptionDestinationProperties struct {
// ResourceID - The Azure Resource Id that represents the endpoint of an Event Hub destination of an event subscription.
ResourceID *string `json:"resourceId,omitempty"`
+ // DeliveryAttributeMappings - Delivery attribute details.
+ DeliveryAttributeMappings *[]BasicDeliveryAttributeMapping `json:"deliveryAttributeMappings,omitempty"`
+}
+
+// UnmarshalJSON is the custom unmarshaler for EventHubEventSubscriptionDestinationProperties struct.
+func (ehesdp *EventHubEventSubscriptionDestinationProperties) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "resourceId":
+ if v != nil {
+ var resourceID string
+ err = json.Unmarshal(*v, &resourceID)
+ if err != nil {
+ return err
+ }
+ ehesdp.ResourceID = &resourceID
+ }
+ case "deliveryAttributeMappings":
+ if v != nil {
+ deliveryAttributeMappings, err := unmarshalBasicDeliveryAttributeMappingArray(*v)
+ if err != nil {
+ return err
+ }
+ ehesdp.DeliveryAttributeMappings = &deliveryAttributeMappings
+ }
+ }
+ }
+
+ return nil
}
// EventSubscription event Subscription
@@ -2023,11 +2449,13 @@ type EventSubscription struct {
autorest.Response `json:"-"`
// EventSubscriptionProperties - Properties of the event subscription.
*EventSubscriptionProperties `json:"properties,omitempty"`
+ // SystemData - READ-ONLY; The system metadata relating to this resource.
+ SystemData *SystemData `json:"systemData,omitempty"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -2058,6 +2486,15 @@ func (es *EventSubscription) UnmarshalJSON(body []byte) error {
}
es.EventSubscriptionProperties = &eventSubscriptionProperties
}
+ case "systemData":
+ if v != nil {
+ var systemData SystemData
+ err = json.Unmarshal(*v, &systemData)
+ if err != nil {
+ return err
+ }
+ es.SystemData = &systemData
+ }
case "id":
if v != nil {
var ID string
@@ -2239,6 +2676,8 @@ type EventSubscriptionFilter struct {
// IsSubjectCaseSensitive - Specifies if the SubjectBeginsWith and SubjectEndsWith properties of the filter
// should be compared in a case sensitive manner.
IsSubjectCaseSensitive *bool `json:"isSubjectCaseSensitive,omitempty"`
+ // EnableAdvancedFilteringOnArrays - Allows advanced filters to be evaluated against an array of values instead of expecting a singular value.
+ EnableAdvancedFilteringOnArrays *bool `json:"enableAdvancedFilteringOnArrays,omitempty"`
// AdvancedFilters - An array of advanced filters that are used for filtering event subscriptions.
AdvancedFilters *[]BasicAdvancedFilter `json:"advancedFilters,omitempty"`
}
@@ -2288,6 +2727,15 @@ func (esf *EventSubscriptionFilter) UnmarshalJSON(body []byte) error {
}
esf.IsSubjectCaseSensitive = &isSubjectCaseSensitive
}
+ case "enableAdvancedFilteringOnArrays":
+ if v != nil {
+ var enableAdvancedFilteringOnArrays bool
+ err = json.Unmarshal(*v, &enableAdvancedFilteringOnArrays)
+ if err != nil {
+ return err
+ }
+ esf.EnableAdvancedFilteringOnArrays = &enableAdvancedFilteringOnArrays
+ }
case "advancedFilters":
if v != nil {
advancedFilters, err := unmarshalBasicAdvancedFilterArray(*v)
@@ -2896,9 +3344,9 @@ type EventType struct {
*EventTypeProperties `json:"properties,omitempty"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -2981,6 +3429,14 @@ type EventTypesListResult struct {
Value *[]EventType `json:"value,omitempty"`
}
+// ExtendedLocation definition of an Extended Location
+type ExtendedLocation struct {
+ // Name - Fully qualified name of the extended location.
+ Name *string `json:"name,omitempty"`
+ // Type - Type of the extended location.
+ Type *string `json:"type,omitempty"`
+}
+
// ExtensionTopic event grid Extension Topic. This is used for getting Event Grid related metrics for Azure
// resources.
type ExtensionTopic struct {
@@ -2989,9 +3445,9 @@ type ExtensionTopic struct {
*ExtensionTopicProperties `json:"properties,omitempty"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -3168,20 +3624,54 @@ func (hcesd *HybridConnectionEventSubscriptionDestination) UnmarshalJSON(body []
type HybridConnectionEventSubscriptionDestinationProperties struct {
// ResourceID - The Azure Resource ID of an hybrid connection that is the destination of an event subscription.
ResourceID *string `json:"resourceId,omitempty"`
+ // DeliveryAttributeMappings - Delivery attribute details.
+ DeliveryAttributeMappings *[]BasicDeliveryAttributeMapping `json:"deliveryAttributeMappings,omitempty"`
}
-// IdentityInfo the identity information for the resource.
-type IdentityInfo struct {
- // Type - The type of managed identity used. The type 'SystemAssigned, UserAssigned' includes both an implicitly created identity and a set of user-assigned identities. The type 'None' will remove any identity. Possible values include: 'IdentityTypeNone', 'IdentityTypeSystemAssigned', 'IdentityTypeUserAssigned', 'IdentityTypeSystemAssignedUserAssigned'
- Type IdentityType `json:"type,omitempty"`
- // PrincipalID - The principal ID of resource identity.
- PrincipalID *string `json:"principalId,omitempty"`
- // TenantID - The tenant ID of resource.
- TenantID *string `json:"tenantId,omitempty"`
- // UserAssignedIdentities - The list of user identities associated with the resource. The user identity dictionary key references will be ARM resource ids in the form:
- // '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'.
- // This property is currently not used and reserved for future usage.
- UserAssignedIdentities map[string]*UserIdentityProperties `json:"userAssignedIdentities"`
+// UnmarshalJSON is the custom unmarshaler for HybridConnectionEventSubscriptionDestinationProperties struct.
+func (hcesdp *HybridConnectionEventSubscriptionDestinationProperties) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "resourceId":
+ if v != nil {
+ var resourceID string
+ err = json.Unmarshal(*v, &resourceID)
+ if err != nil {
+ return err
+ }
+ hcesdp.ResourceID = &resourceID
+ }
+ case "deliveryAttributeMappings":
+ if v != nil {
+ deliveryAttributeMappings, err := unmarshalBasicDeliveryAttributeMappingArray(*v)
+ if err != nil {
+ return err
+ }
+ hcesdp.DeliveryAttributeMappings = &deliveryAttributeMappings
+ }
+ }
+ }
+
+ return nil
+}
+
+// IdentityInfo the identity information for the resource.
+type IdentityInfo struct {
+ // Type - The type of managed identity used. The type 'SystemAssigned, UserAssigned' includes both an implicitly created identity and a set of user-assigned identities. The type 'None' will remove any identity. Possible values include: 'IdentityTypeNone', 'IdentityTypeSystemAssigned', 'IdentityTypeUserAssigned', 'IdentityTypeSystemAssignedUserAssigned'
+ Type IdentityType `json:"type,omitempty"`
+ // PrincipalID - The principal ID of resource identity.
+ PrincipalID *string `json:"principalId,omitempty"`
+ // TenantID - The tenant ID of resource.
+ TenantID *string `json:"tenantId,omitempty"`
+ // UserAssignedIdentities - The list of user identities associated with the resource. The user identity dictionary key references will be ARM resource ids in the form:
+ // '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'.
+ // This property is currently not used and reserved for future usage.
+ UserAssignedIdentities map[string]*UserIdentityProperties `json:"userAssignedIdentities"`
}
// MarshalJSON is the custom marshaler for IdentityInfo.
@@ -3288,6 +3778,258 @@ func (ism InputSchemaMapping) AsBasicInputSchemaMapping() (BasicInputSchemaMappi
return &ism, true
}
+// IsNotNullAdvancedFilter isNotNull Advanced Filter.
+type IsNotNullAdvancedFilter struct {
+ // Key - The field/property in the event based on which you want to filter.
+ Key *string `json:"key,omitempty"`
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
+ OperatorType OperatorType `json:"operatorType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) MarshalJSON() ([]byte, error) {
+ innaf.OperatorType = OperatorTypeIsNotNull
+ objectMap := make(map[string]interface{})
+ if innaf.Key != nil {
+ objectMap["key"] = innaf.Key
+ }
+ if innaf.OperatorType != "" {
+ objectMap["operatorType"] = innaf.OperatorType
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return &innaf, true
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for IsNotNullAdvancedFilter.
+func (innaf IsNotNullAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &innaf, true
+}
+
+// IsNullOrUndefinedAdvancedFilter isNullOrUndefined Advanced Filter.
+type IsNullOrUndefinedAdvancedFilter struct {
+ // Key - The field/property in the event based on which you want to filter.
+ Key *string `json:"key,omitempty"`
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
+ OperatorType OperatorType `json:"operatorType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) MarshalJSON() ([]byte, error) {
+ inouaf.OperatorType = OperatorTypeIsNullOrUndefined
+ objectMap := make(map[string]interface{})
+ if inouaf.Key != nil {
+ objectMap["key"] = inouaf.Key
+ }
+ if inouaf.OperatorType != "" {
+ objectMap["operatorType"] = inouaf.OperatorType
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return &inouaf, true
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for IsNullOrUndefinedAdvancedFilter.
+func (inouaf IsNullOrUndefinedAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &inouaf, true
+}
+
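The long runs of `AsXxxAdvancedFilter` methods implement the SDK's discriminated-union pattern: every concrete filter answers every conversion question, returning `(nil, false)` for all but its own type. A compact sketch with two toy filter types (names are illustrative, not the generated ones):

```go
package main

import "fmt"

// basicFilter mirrors the BasicAdvancedFilter idea: one AsXxx method per
// concrete implementation.
type basicFilter interface {
	AsIsNotNull() (*isNotNull, bool)
	AsIsNullOrUndefined() (*isNullOrUndefined, bool)
}

type isNotNull struct{ Key string }
type isNullOrUndefined struct{ Key string }

func (f isNotNull) AsIsNotNull() (*isNotNull, bool)                 { return &f, true }
func (f isNotNull) AsIsNullOrUndefined() (*isNullOrUndefined, bool) { return nil, false }

func (f isNullOrUndefined) AsIsNotNull() (*isNotNull, bool)                 { return nil, false }
func (f isNullOrUndefined) AsIsNullOrUndefined() (*isNullOrUndefined, bool) { return &f, true }

func main() {
	var f basicFilter = isNullOrUndefined{Key: "data.tag"}
	// Only the matching conversion succeeds.
	if concrete, ok := f.AsIsNullOrUndefined(); ok {
		fmt.Println("matched:", concrete.Key)
	}
	if _, ok := f.AsIsNotNull(); !ok {
		fmt.Println("no match for IsNotNull")
	}
}
```

Compared to a bare Go type switch, this generated shape gives callers a stable method set that grows compatibly as new operator types (like `IsNotNull` here) are added to the union.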
// JSONField this is used to express the source of an input schema mapping for a single target field in the
// Event Grid Event schema. This is currently used in the mappings for the 'id', 'topic' and 'eventtime'
// properties. This represents a field in the input event schema.
@@ -3401,7 +4143,7 @@ type NumberGreaterThanAdvancedFilter struct {
Value *float64 `json:"value,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -3481,6 +4223,41 @@ func (ngtaf NumberGreaterThanAdvancedFilter) AsStringContainsAdvancedFilter() (*
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanAdvancedFilter.
+func (ngtaf NumberGreaterThanAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanAdvancedFilter.
+func (ngtaf NumberGreaterThanAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanAdvancedFilter.
+func (ngtaf NumberGreaterThanAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanAdvancedFilter.
+func (ngtaf NumberGreaterThanAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanAdvancedFilter.
+func (ngtaf NumberGreaterThanAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanAdvancedFilter.
+func (ngtaf NumberGreaterThanAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanAdvancedFilter.
+func (ngtaf NumberGreaterThanAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanAdvancedFilter.
func (ngtaf NumberGreaterThanAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -3497,7 +4274,7 @@ type NumberGreaterThanOrEqualsAdvancedFilter struct {
Value *float64 `json:"value,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -3577,6 +4354,41 @@ func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsStringContainsAdvancedF
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanOrEqualsAdvancedFilter.
+func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanOrEqualsAdvancedFilter.
+func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanOrEqualsAdvancedFilter.
+func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanOrEqualsAdvancedFilter.
+func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanOrEqualsAdvancedFilter.
+func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanOrEqualsAdvancedFilter.
+func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanOrEqualsAdvancedFilter.
+func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for NumberGreaterThanOrEqualsAdvancedFilter.
func (ngtoeaf NumberGreaterThanOrEqualsAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -3593,7 +4405,7 @@ type NumberInAdvancedFilter struct {
Values *[]float64 `json:"values,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -3673,6 +4485,41 @@ func (niaf NumberInAdvancedFilter) AsStringContainsAdvancedFilter() (*StringCont
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberInAdvancedFilter.
+func (niaf NumberInAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberInAdvancedFilter.
+func (niaf NumberInAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberInAdvancedFilter.
+func (niaf NumberInAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberInAdvancedFilter.
+func (niaf NumberInAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberInAdvancedFilter.
+func (niaf NumberInAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for NumberInAdvancedFilter.
+func (niaf NumberInAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for NumberInAdvancedFilter.
+func (niaf NumberInAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for NumberInAdvancedFilter.
func (niaf NumberInAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -3683,28 +4530,159 @@ func (niaf NumberInAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter,
return &niaf, true
}
-// NumberLessThanAdvancedFilter numberLessThan Advanced Filter.
-type NumberLessThanAdvancedFilter struct {
- // Value - The filter value.
- Value *float64 `json:"value,omitempty"`
+// NumberInRangeAdvancedFilter numberInRange Advanced Filter.
+type NumberInRangeAdvancedFilter struct {
+ // Values - The set of filter values.
+ Values *[][]float64 `json:"values,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
-// MarshalJSON is the custom marshaler for NumberLessThanAdvancedFilter.
-func (nltaf NumberLessThanAdvancedFilter) MarshalJSON() ([]byte, error) {
- nltaf.OperatorType = OperatorTypeNumberLessThan
+// MarshalJSON is the custom marshaler for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) MarshalJSON() ([]byte, error) {
+ niraf.OperatorType = OperatorTypeNumberInRange
objectMap := make(map[string]interface{})
- if nltaf.Value != nil {
- objectMap["value"] = nltaf.Value
+ if niraf.Values != nil {
+ objectMap["values"] = niraf.Values
}
- if nltaf.Key != nil {
- objectMap["key"] = nltaf.Key
+ if niraf.Key != nil {
+ objectMap["key"] = niraf.Key
}
- if nltaf.OperatorType != "" {
- objectMap["operatorType"] = nltaf.OperatorType
+ if niraf.OperatorType != "" {
+ objectMap["operatorType"] = niraf.OperatorType
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return &niraf, true
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for NumberInRangeAdvancedFilter.
+func (niraf NumberInRangeAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &niraf, true
+}
+
+// NumberLessThanAdvancedFilter numberLessThan Advanced Filter.
+type NumberLessThanAdvancedFilter struct {
+ // Value - The filter value.
+ Value *float64 `json:"value,omitempty"`
+ // Key - The field/property in the event based on which you want to filter.
+ Key *string `json:"key,omitempty"`
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
+ OperatorType OperatorType `json:"operatorType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for NumberLessThanAdvancedFilter.
+func (nltaf NumberLessThanAdvancedFilter) MarshalJSON() ([]byte, error) {
+ nltaf.OperatorType = OperatorTypeNumberLessThan
+ objectMap := make(map[string]interface{})
+ if nltaf.Value != nil {
+ objectMap["value"] = nltaf.Value
+ }
+ if nltaf.Key != nil {
+ objectMap["key"] = nltaf.Key
+ }
+ if nltaf.OperatorType != "" {
+ objectMap["operatorType"] = nltaf.OperatorType
}
return json.Marshal(objectMap)
}
@@ -3769,6 +4747,41 @@ func (nltaf NumberLessThanAdvancedFilter) AsStringContainsAdvancedFilter() (*Str
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanAdvancedFilter.
+func (nltaf NumberLessThanAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanAdvancedFilter.
+func (nltaf NumberLessThanAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanAdvancedFilter.
+func (nltaf NumberLessThanAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanAdvancedFilter.
+func (nltaf NumberLessThanAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanAdvancedFilter.
+func (nltaf NumberLessThanAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanAdvancedFilter.
+func (nltaf NumberLessThanAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanAdvancedFilter.
+func (nltaf NumberLessThanAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanAdvancedFilter.
func (nltaf NumberLessThanAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -3785,7 +4798,7 @@ type NumberLessThanOrEqualsAdvancedFilter struct {
Value *float64 `json:"value,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -3865,6 +4878,41 @@ func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsStringContainsAdvancedFilt
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanOrEqualsAdvancedFilter.
+func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanOrEqualsAdvancedFilter.
+func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanOrEqualsAdvancedFilter.
+func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanOrEqualsAdvancedFilter.
+func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanOrEqualsAdvancedFilter.
+func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanOrEqualsAdvancedFilter.
+func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanOrEqualsAdvancedFilter.
+func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for NumberLessThanOrEqualsAdvancedFilter.
func (nltoeaf NumberLessThanOrEqualsAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -3881,7 +4929,7 @@ type NumberNotInAdvancedFilter struct {
Values *[]float64 `json:"values,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -3961,6 +5009,41 @@ func (nniaf NumberNotInAdvancedFilter) AsStringContainsAdvancedFilter() (*String
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInAdvancedFilter.
+func (nniaf NumberNotInAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInAdvancedFilter.
+func (nniaf NumberNotInAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInAdvancedFilter.
+func (nniaf NumberNotInAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInAdvancedFilter.
+func (nniaf NumberNotInAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInAdvancedFilter.
+func (nniaf NumberNotInAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInAdvancedFilter.
+func (nniaf NumberNotInAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInAdvancedFilter.
+func (nniaf NumberNotInAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInAdvancedFilter.
func (nniaf NumberNotInAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -3971,6 +5054,137 @@ func (nniaf NumberNotInAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFil
return &nniaf, true
}
+// NumberNotInRangeAdvancedFilter numberNotInRange Advanced Filter.
+type NumberNotInRangeAdvancedFilter struct {
+ // Values - The set of filter values.
+ Values *[][]float64 `json:"values,omitempty"`
+ // Key - The field/property in the event based on which you want to filter.
+ Key *string `json:"key,omitempty"`
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
+ OperatorType OperatorType `json:"operatorType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) MarshalJSON() ([]byte, error) {
+ nniraf.OperatorType = OperatorTypeNumberNotInRange
+ objectMap := make(map[string]interface{})
+ if nniraf.Values != nil {
+ objectMap["values"] = nniraf.Values
+ }
+ if nniraf.Key != nil {
+ objectMap["key"] = nniraf.Key
+ }
+ if nniraf.OperatorType != "" {
+ objectMap["operatorType"] = nniraf.OperatorType
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return &nniraf, true
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for NumberNotInRangeAdvancedFilter.
+func (nniraf NumberNotInRangeAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &nniraf, true
+}
+
// Operation represents an operation returned by the GetOperations request
type Operation struct {
// Name - Name of the operation
@@ -4007,15 +5221,17 @@ type PartnerNamespace struct {
autorest.Response `json:"-"`
// PartnerNamespaceProperties - Properties of the partner namespace.
*PartnerNamespaceProperties `json:"properties,omitempty"`
+ // SystemData - READ-ONLY; The system metadata relating to this resource.
+ SystemData *SystemData `json:"systemData,omitempty"`
// Location - Location of the resource.
Location *string `json:"location,omitempty"`
// Tags - Tags of the resource.
Tags map[string]*string `json:"tags"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -4052,6 +5268,15 @@ func (pn *PartnerNamespace) UnmarshalJSON(body []byte) error {
}
pn.PartnerNamespaceProperties = &partnerNamespaceProperties
}
+ case "systemData":
+ if v != nil {
+ var systemData SystemData
+ err = json.Unmarshal(*v, &systemData)
+ if err != nil {
+ return err
+ }
+ pn.SystemData = &systemData
+ }
case "location":
if v != nil {
var location string
@@ -4440,15 +5665,17 @@ type PartnerRegistration struct {
autorest.Response `json:"-"`
// PartnerRegistrationProperties - Properties of the partner registration.
*PartnerRegistrationProperties `json:"properties,omitempty"`
+ // SystemData - READ-ONLY; The system metadata relating to this resource.
+ SystemData *SystemData `json:"systemData,omitempty"`
// Location - Location of the resource.
Location *string `json:"location,omitempty"`
// Tags - Tags of the resource.
Tags map[string]*string `json:"tags"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -4485,6 +5712,15 @@ func (pr *PartnerRegistration) UnmarshalJSON(body []byte) error {
}
pr.PartnerRegistrationProperties = &partnerRegistrationProperties
}
+ case "systemData":
+ if v != nil {
+ var systemData SystemData
+ err = json.Unmarshal(*v, &systemData)
+ if err != nil {
+ return err
+ }
+ pr.SystemData = &systemData
+ }
case "location":
if v != nil {
var location string
@@ -4838,15 +6074,19 @@ type PartnerTopic struct {
autorest.Response `json:"-"`
// PartnerTopicProperties - Properties of the partner topic.
*PartnerTopicProperties `json:"properties,omitempty"`
+ // Identity - Identity information for the resource.
+ Identity *IdentityInfo `json:"identity,omitempty"`
+ // SystemData - READ-ONLY; The system metadata relating to this resource.
+ SystemData *SystemData `json:"systemData,omitempty"`
// Location - Location of the resource.
Location *string `json:"location,omitempty"`
// Tags - Tags of the resource.
Tags map[string]*string `json:"tags"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -4856,6 +6096,9 @@ func (pt PartnerTopic) MarshalJSON() ([]byte, error) {
if pt.PartnerTopicProperties != nil {
objectMap["properties"] = pt.PartnerTopicProperties
}
+ if pt.Identity != nil {
+ objectMap["identity"] = pt.Identity
+ }
if pt.Location != nil {
objectMap["location"] = pt.Location
}
@@ -4883,6 +6126,24 @@ func (pt *PartnerTopic) UnmarshalJSON(body []byte) error {
}
pt.PartnerTopicProperties = &partnerTopicProperties
}
+ case "identity":
+ if v != nil {
+ var identity IdentityInfo
+ err = json.Unmarshal(*v, &identity)
+ if err != nil {
+ return err
+ }
+ pt.Identity = &identity
+ }
+ case "systemData":
+ if v != nil {
+ var systemData SystemData
+ err = json.Unmarshal(*v, &systemData)
+ if err != nil {
+ return err
+ }
+ pt.SystemData = &systemData
+ }
case "location":
if v != nil {
var location string
@@ -5293,9 +6554,9 @@ type PartnerTopicType struct {
*PartnerTopicTypeProperties `json:"properties,omitempty"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -5411,9 +6672,9 @@ type PrivateEndpointConnection struct {
*PrivateEndpointConnectionProperties `json:"properties,omitempty"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -5736,9 +6997,9 @@ type PrivateLinkResource struct {
*PrivateLinkResourceProperties `json:"properties,omitempty"`
// ID - Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - Name of the resource
+ // Name - Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - Type of the resource
+ // Type - Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -5983,9 +7244,9 @@ func NewPrivateLinkResourcesListResultPage(cur PrivateLinkResourcesListResult, g
type Resource struct {
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -6108,15 +7369,49 @@ func (sbqesd *ServiceBusQueueEventSubscriptionDestination) UnmarshalJSON(body []
type ServiceBusQueueEventSubscriptionDestinationProperties struct {
// ResourceID - The Azure Resource Id that represents the endpoint of the Service Bus destination of an event subscription.
ResourceID *string `json:"resourceId,omitempty"`
+ // DeliveryAttributeMappings - Delivery attribute details.
+ DeliveryAttributeMappings *[]BasicDeliveryAttributeMapping `json:"deliveryAttributeMappings,omitempty"`
}
-// ServiceBusTopicEventSubscriptionDestination information about the service bus topic destination for an
-// event subscription.
-type ServiceBusTopicEventSubscriptionDestination struct {
- // ServiceBusTopicEventSubscriptionDestinationProperties - Service Bus Topic Properties of the event subscription destination.
- *ServiceBusTopicEventSubscriptionDestinationProperties `json:"properties,omitempty"`
- // EndpointType - Possible values include: 'EndpointTypeEventSubscriptionDestination', 'EndpointTypeWebHook', 'EndpointTypeEventHub', 'EndpointTypeStorageQueue', 'EndpointTypeHybridConnection', 'EndpointTypeServiceBusQueue', 'EndpointTypeServiceBusTopic', 'EndpointTypeAzureFunction'
- EndpointType EndpointType `json:"endpointType,omitempty"`
+// UnmarshalJSON is the custom unmarshaler for ServiceBusQueueEventSubscriptionDestinationProperties struct.
+func (sbqesdp *ServiceBusQueueEventSubscriptionDestinationProperties) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "resourceId":
+ if v != nil {
+ var resourceID string
+ err = json.Unmarshal(*v, &resourceID)
+ if err != nil {
+ return err
+ }
+ sbqesdp.ResourceID = &resourceID
+ }
+ case "deliveryAttributeMappings":
+ if v != nil {
+ deliveryAttributeMappings, err := unmarshalBasicDeliveryAttributeMappingArray(*v)
+ if err != nil {
+ return err
+ }
+ sbqesdp.DeliveryAttributeMappings = &deliveryAttributeMappings
+ }
+ }
+ }
+
+ return nil
+}
+
+// ServiceBusTopicEventSubscriptionDestination information about the service bus topic destination for an
+// event subscription.
+type ServiceBusTopicEventSubscriptionDestination struct {
+ // ServiceBusTopicEventSubscriptionDestinationProperties - Service Bus Topic Properties of the event subscription destination.
+ *ServiceBusTopicEventSubscriptionDestinationProperties `json:"properties,omitempty"`
+ // EndpointType - Possible values include: 'EndpointTypeEventSubscriptionDestination', 'EndpointTypeWebHook', 'EndpointTypeEventHub', 'EndpointTypeStorageQueue', 'EndpointTypeHybridConnection', 'EndpointTypeServiceBusQueue', 'EndpointTypeServiceBusTopic', 'EndpointTypeAzureFunction'
+ EndpointType EndpointType `json:"endpointType,omitempty"`
}
// MarshalJSON is the custom marshaler for ServiceBusTopicEventSubscriptionDestination.
@@ -6215,6 +7510,136 @@ func (sbtesd *ServiceBusTopicEventSubscriptionDestination) UnmarshalJSON(body []
type ServiceBusTopicEventSubscriptionDestinationProperties struct {
// ResourceID - The Azure Resource Id that represents the endpoint of the Service Bus Topic destination of an event subscription.
ResourceID *string `json:"resourceId,omitempty"`
+ // DeliveryAttributeMappings - Delivery attribute details.
+ DeliveryAttributeMappings *[]BasicDeliveryAttributeMapping `json:"deliveryAttributeMappings,omitempty"`
+}
+
+// UnmarshalJSON is the custom unmarshaler for ServiceBusTopicEventSubscriptionDestinationProperties struct.
+func (sbtesdp *ServiceBusTopicEventSubscriptionDestinationProperties) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "resourceId":
+ if v != nil {
+ var resourceID string
+ err = json.Unmarshal(*v, &resourceID)
+ if err != nil {
+ return err
+ }
+ sbtesdp.ResourceID = &resourceID
+ }
+ case "deliveryAttributeMappings":
+ if v != nil {
+ deliveryAttributeMappings, err := unmarshalBasicDeliveryAttributeMappingArray(*v)
+ if err != nil {
+ return err
+ }
+ sbtesdp.DeliveryAttributeMappings = &deliveryAttributeMappings
+ }
+ }
+ }
+
+ return nil
+}
+
+// StaticDeliveryAttributeMapping static delivery attribute mapping details.
+type StaticDeliveryAttributeMapping struct {
+ // StaticDeliveryAttributeMappingProperties - Properties of static delivery attribute mapping.
+ *StaticDeliveryAttributeMappingProperties `json:"properties,omitempty"`
+ // Name - Name of the delivery attribute or header.
+ Name *string `json:"name,omitempty"`
+ // Type - Possible values include: 'TypeDeliveryAttributeMapping', 'TypeStatic', 'TypeDynamic'
+ Type Type `json:"type,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for StaticDeliveryAttributeMapping.
+func (sdam StaticDeliveryAttributeMapping) MarshalJSON() ([]byte, error) {
+ sdam.Type = TypeStatic
+ objectMap := make(map[string]interface{})
+ if sdam.StaticDeliveryAttributeMappingProperties != nil {
+ objectMap["properties"] = sdam.StaticDeliveryAttributeMappingProperties
+ }
+ if sdam.Name != nil {
+ objectMap["name"] = sdam.Name
+ }
+ if sdam.Type != "" {
+ objectMap["type"] = sdam.Type
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsStaticDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for StaticDeliveryAttributeMapping.
+func (sdam StaticDeliveryAttributeMapping) AsStaticDeliveryAttributeMapping() (*StaticDeliveryAttributeMapping, bool) {
+ return &sdam, true
+}
+
+// AsDynamicDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for StaticDeliveryAttributeMapping.
+func (sdam StaticDeliveryAttributeMapping) AsDynamicDeliveryAttributeMapping() (*DynamicDeliveryAttributeMapping, bool) {
+ return nil, false
+}
+
+// AsDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for StaticDeliveryAttributeMapping.
+func (sdam StaticDeliveryAttributeMapping) AsDeliveryAttributeMapping() (*DeliveryAttributeMapping, bool) {
+ return nil, false
+}
+
+// AsBasicDeliveryAttributeMapping is the BasicDeliveryAttributeMapping implementation for StaticDeliveryAttributeMapping.
+func (sdam StaticDeliveryAttributeMapping) AsBasicDeliveryAttributeMapping() (BasicDeliveryAttributeMapping, bool) {
+ return &sdam, true
+}
+
+// UnmarshalJSON is the custom unmarshaler for StaticDeliveryAttributeMapping struct.
+func (sdam *StaticDeliveryAttributeMapping) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "properties":
+ if v != nil {
+ var staticDeliveryAttributeMappingProperties StaticDeliveryAttributeMappingProperties
+ err = json.Unmarshal(*v, &staticDeliveryAttributeMappingProperties)
+ if err != nil {
+ return err
+ }
+ sdam.StaticDeliveryAttributeMappingProperties = &staticDeliveryAttributeMappingProperties
+ }
+ case "name":
+ if v != nil {
+ var name string
+ err = json.Unmarshal(*v, &name)
+ if err != nil {
+ return err
+ }
+ sdam.Name = &name
+ }
+ case "type":
+ if v != nil {
+ var typeVar Type
+ err = json.Unmarshal(*v, &typeVar)
+ if err != nil {
+ return err
+ }
+ sdam.Type = typeVar
+ }
+ }
+ }
+
+ return nil
+}
+
+// StaticDeliveryAttributeMappingProperties properties of static delivery attribute mapping.
+type StaticDeliveryAttributeMappingProperties struct {
+ // Value - Value of the delivery attribute.
+ Value *string `json:"value,omitempty"`
+ // IsSecret - Boolean flag to tell if the attribute contains sensitive information.
+ IsSecret *bool `json:"isSecret,omitempty"`
}
// StorageBlobDeadLetterDestination information about the storage blob based dead letter destination.
@@ -6400,6 +7825,8 @@ type StorageQueueEventSubscriptionDestinationProperties struct {
ResourceID *string `json:"resourceId,omitempty"`
// QueueName - The name of the Storage queue under a storage account that is the destination of an event subscription.
QueueName *string `json:"queueName,omitempty"`
+ // QueueMessageTimeToLiveInSeconds - Storage queue message time to live in seconds.
+ QueueMessageTimeToLiveInSeconds *int64 `json:"queueMessageTimeToLiveInSeconds,omitempty"`
}
// StringBeginsWithAdvancedFilter stringBeginsWith Advanced Filter.
@@ -6408,7 +7835,7 @@ type StringBeginsWithAdvancedFilter struct {
Values *[]string `json:"values,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -6488,6 +7915,41 @@ func (sbwaf StringBeginsWithAdvancedFilter) AsStringContainsAdvancedFilter() (*S
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringBeginsWithAdvancedFilter.
+func (sbwaf StringBeginsWithAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringBeginsWithAdvancedFilter.
+func (sbwaf StringBeginsWithAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringBeginsWithAdvancedFilter.
+func (sbwaf StringBeginsWithAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringBeginsWithAdvancedFilter.
+func (sbwaf StringBeginsWithAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringBeginsWithAdvancedFilter.
+func (sbwaf StringBeginsWithAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for StringBeginsWithAdvancedFilter.
+func (sbwaf StringBeginsWithAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for StringBeginsWithAdvancedFilter.
+func (sbwaf StringBeginsWithAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringBeginsWithAdvancedFilter.
func (sbwaf StringBeginsWithAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -6504,7 +7966,7 @@ type StringContainsAdvancedFilter struct {
Values *[]string `json:"values,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -6584,6 +8046,41 @@ func (scaf StringContainsAdvancedFilter) AsStringContainsAdvancedFilter() (*Stri
return &scaf, true
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringContainsAdvancedFilter.
+func (scaf StringContainsAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringContainsAdvancedFilter.
+func (scaf StringContainsAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringContainsAdvancedFilter.
+func (scaf StringContainsAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringContainsAdvancedFilter.
+func (scaf StringContainsAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringContainsAdvancedFilter.
+func (scaf StringContainsAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for StringContainsAdvancedFilter.
+func (scaf StringContainsAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for StringContainsAdvancedFilter.
+func (scaf StringContainsAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringContainsAdvancedFilter.
func (scaf StringContainsAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -6600,190 +8097,653 @@ type StringEndsWithAdvancedFilter struct {
Values *[]string `json:"values,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
+ OperatorType OperatorType `json:"operatorType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) MarshalJSON() ([]byte, error) {
+ sewaf.OperatorType = OperatorTypeStringEndsWith
+ objectMap := make(map[string]interface{})
+ if sewaf.Values != nil {
+ objectMap["values"] = sewaf.Values
+ }
+ if sewaf.Key != nil {
+ objectMap["key"] = sewaf.Key
+ }
+ if sewaf.OperatorType != "" {
+ objectMap["operatorType"] = sewaf.OperatorType
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+ return &sewaf, true
+}
+
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
+func (sewaf StringEndsWithAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &sewaf, true
+}
+
+// StringInAdvancedFilter stringIn Advanced Filter.
+type StringInAdvancedFilter struct {
+ // Values - The set of filter values.
+ Values *[]string `json:"values,omitempty"`
+ // Key - The field/property in the event based on which you want to filter.
+ Key *string `json:"key,omitempty"`
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
+ OperatorType OperatorType `json:"operatorType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) MarshalJSON() ([]byte, error) {
+ siaf.OperatorType = OperatorTypeStringIn
+ objectMap := make(map[string]interface{})
+ if siaf.Values != nil {
+ objectMap["values"] = siaf.Values
+ }
+ if siaf.Key != nil {
+ objectMap["key"] = siaf.Key
+ }
+ if siaf.OperatorType != "" {
+ objectMap["operatorType"] = siaf.OperatorType
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+ return &siaf, true
+}
+
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
+func (siaf StringInAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &siaf, true
+}
+
+// StringNotBeginsWithAdvancedFilter stringNotBeginsWith Advanced Filter.
+type StringNotBeginsWithAdvancedFilter struct {
+ // Values - The set of filter values.
+ Values *[]string `json:"values,omitempty"`
+ // Key - The field/property in the event based on which you want to filter.
+ Key *string `json:"key,omitempty"`
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
+ OperatorType OperatorType `json:"operatorType,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) MarshalJSON() ([]byte, error) {
+ snbwaf.OperatorType = OperatorTypeStringNotBeginsWith
+ objectMap := make(map[string]interface{})
+ if snbwaf.Values != nil {
+ objectMap["values"] = snbwaf.Values
+ }
+ if snbwaf.Key != nil {
+ objectMap["key"] = snbwaf.Key
+ }
+ if snbwaf.OperatorType != "" {
+ objectMap["operatorType"] = snbwaf.OperatorType
+ }
+ return json.Marshal(objectMap)
+}
+
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return &snbwaf, true
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for StringNotBeginsWithAdvancedFilter.
+func (snbwaf StringNotBeginsWithAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &snbwaf, true
+}
+
+// StringNotContainsAdvancedFilter stringNotContains Advanced Filter.
+type StringNotContainsAdvancedFilter struct {
+ // Values - The set of filter values.
+ Values *[]string `json:"values,omitempty"`
+ // Key - The field/property in the event based on which you want to filter.
+ Key *string `json:"key,omitempty"`
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
-// MarshalJSON is the custom marshaler for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) MarshalJSON() ([]byte, error) {
- sewaf.OperatorType = OperatorTypeStringEndsWith
+// MarshalJSON is the custom marshaler for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) MarshalJSON() ([]byte, error) {
+ sncaf.OperatorType = OperatorTypeStringNotContains
objectMap := make(map[string]interface{})
- if sewaf.Values != nil {
- objectMap["values"] = sewaf.Values
+ if sncaf.Values != nil {
+ objectMap["values"] = sncaf.Values
}
- if sewaf.Key != nil {
- objectMap["key"] = sewaf.Key
+ if sncaf.Key != nil {
+ objectMap["key"] = sncaf.Key
}
- if sewaf.OperatorType != "" {
- objectMap["operatorType"] = sewaf.OperatorType
+ if sncaf.OperatorType != "" {
+ objectMap["operatorType"] = sncaf.OperatorType
}
return json.Marshal(objectMap)
}
-// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
return nil, false
}
-// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
return nil, false
}
-// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
return nil, false
}
-// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
return nil, false
}
-// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
return nil, false
}
-// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
return nil, false
}
-// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
return nil, false
}
-// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
return nil, false
}
-// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
return nil, false
}
-// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
return nil, false
}
-// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
- return &sewaf, true
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+ return nil, false
}
-// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
return nil, false
}
-// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
return nil, false
}
-// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for StringEndsWithAdvancedFilter.
-func (sewaf StringEndsWithAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
- return &sewaf, true
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
}
-// StringInAdvancedFilter stringIn Advanced Filter.
-type StringInAdvancedFilter struct {
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return &sncaf, true
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for StringNotContainsAdvancedFilter.
+func (sncaf StringNotContainsAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &sncaf, true
+}
+
+// StringNotEndsWithAdvancedFilter stringNotEndsWith Advanced Filter.
+type StringNotEndsWithAdvancedFilter struct {
// Values - The set of filter values.
Values *[]string `json:"values,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
-// MarshalJSON is the custom marshaler for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) MarshalJSON() ([]byte, error) {
- siaf.OperatorType = OperatorTypeStringIn
+// MarshalJSON is the custom marshaler for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) MarshalJSON() ([]byte, error) {
+ snewaf.OperatorType = OperatorTypeStringNotEndsWith
objectMap := make(map[string]interface{})
- if siaf.Values != nil {
- objectMap["values"] = siaf.Values
+ if snewaf.Values != nil {
+ objectMap["values"] = snewaf.Values
}
- if siaf.Key != nil {
- objectMap["key"] = siaf.Key
+ if snewaf.Key != nil {
+ objectMap["key"] = snewaf.Key
}
- if siaf.OperatorType != "" {
- objectMap["operatorType"] = siaf.OperatorType
+ if snewaf.OperatorType != "" {
+ objectMap["operatorType"] = snewaf.OperatorType
}
return json.Marshal(objectMap)
}
-// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
+// AsNumberInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsNumberInAdvancedFilter() (*NumberInAdvancedFilter, bool) {
return nil, false
}
-// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
+// AsNumberNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsNumberNotInAdvancedFilter() (*NumberNotInAdvancedFilter, bool) {
return nil, false
}
-// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
+// AsNumberLessThanAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsNumberLessThanAdvancedFilter() (*NumberLessThanAdvancedFilter, bool) {
return nil, false
}
-// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
+// AsNumberGreaterThanAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsNumberGreaterThanAdvancedFilter() (*NumberGreaterThanAdvancedFilter, bool) {
return nil, false
}
-// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
+// AsNumberLessThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsNumberLessThanOrEqualsAdvancedFilter() (*NumberLessThanOrEqualsAdvancedFilter, bool) {
return nil, false
}
-// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
+// AsNumberGreaterThanOrEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsNumberGreaterThanOrEqualsAdvancedFilter() (*NumberGreaterThanOrEqualsAdvancedFilter, bool) {
return nil, false
}
-// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
+// AsBoolEqualsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsBoolEqualsAdvancedFilter() (*BoolEqualsAdvancedFilter, bool) {
return nil, false
}
-// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
- return &siaf, true
+// AsStringInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsStringInAdvancedFilter() (*StringInAdvancedFilter, bool) {
+ return nil, false
}
-// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
+// AsStringNotInAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsStringNotInAdvancedFilter() (*StringNotInAdvancedFilter, bool) {
return nil, false
}
-// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
+// AsStringBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsStringBeginsWithAdvancedFilter() (*StringBeginsWithAdvancedFilter, bool) {
return nil, false
}
-// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
+// AsStringEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsStringEndsWithAdvancedFilter() (*StringEndsWithAdvancedFilter, bool) {
return nil, false
}
-// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
+// AsStringContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsStringContainsAdvancedFilter() (*StringContainsAdvancedFilter, bool) {
return nil, false
}
-// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
return nil, false
}
-// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for StringInAdvancedFilter.
-func (siaf StringInAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
- return &siaf, true
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return &snewaf, true
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsBasicAdvancedFilter is the BasicAdvancedFilter implementation for StringNotEndsWithAdvancedFilter.
+func (snewaf StringNotEndsWithAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFilter, bool) {
+ return &snewaf, true
}
// StringNotInAdvancedFilter stringNotIn Advanced Filter.
@@ -6792,7 +8752,7 @@ type StringNotInAdvancedFilter struct {
Values *[]string `json:"values,omitempty"`
// Key - The field/property in the event based on which you want to filter.
Key *string `json:"key,omitempty"`
- // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains'
+ // OperatorType - Possible values include: 'OperatorTypeAdvancedFilter', 'OperatorTypeNumberIn', 'OperatorTypeNumberNotIn', 'OperatorTypeNumberLessThan', 'OperatorTypeNumberGreaterThan', 'OperatorTypeNumberLessThanOrEquals', 'OperatorTypeNumberGreaterThanOrEquals', 'OperatorTypeBoolEquals', 'OperatorTypeStringIn', 'OperatorTypeStringNotIn', 'OperatorTypeStringBeginsWith', 'OperatorTypeStringEndsWith', 'OperatorTypeStringContains', 'OperatorTypeNumberInRange', 'OperatorTypeNumberNotInRange', 'OperatorTypeStringNotBeginsWith', 'OperatorTypeStringNotEndsWith', 'OperatorTypeStringNotContains', 'OperatorTypeIsNullOrUndefined', 'OperatorTypeIsNotNull'
OperatorType OperatorType `json:"operatorType,omitempty"`
}
@@ -6872,6 +8832,41 @@ func (sniaf StringNotInAdvancedFilter) AsStringContainsAdvancedFilter() (*String
return nil, false
}
+// AsNumberInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringNotInAdvancedFilter.
+func (sniaf StringNotInAdvancedFilter) AsNumberInRangeAdvancedFilter() (*NumberInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsNumberNotInRangeAdvancedFilter is the BasicAdvancedFilter implementation for StringNotInAdvancedFilter.
+func (sniaf StringNotInAdvancedFilter) AsNumberNotInRangeAdvancedFilter() (*NumberNotInRangeAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotBeginsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotInAdvancedFilter.
+func (sniaf StringNotInAdvancedFilter) AsStringNotBeginsWithAdvancedFilter() (*StringNotBeginsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotEndsWithAdvancedFilter is the BasicAdvancedFilter implementation for StringNotInAdvancedFilter.
+func (sniaf StringNotInAdvancedFilter) AsStringNotEndsWithAdvancedFilter() (*StringNotEndsWithAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsStringNotContainsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotInAdvancedFilter.
+func (sniaf StringNotInAdvancedFilter) AsStringNotContainsAdvancedFilter() (*StringNotContainsAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNullOrUndefinedAdvancedFilter is the BasicAdvancedFilter implementation for StringNotInAdvancedFilter.
+func (sniaf StringNotInAdvancedFilter) AsIsNullOrUndefinedAdvancedFilter() (*IsNullOrUndefinedAdvancedFilter, bool) {
+ return nil, false
+}
+
+// AsIsNotNullAdvancedFilter is the BasicAdvancedFilter implementation for StringNotInAdvancedFilter.
+func (sniaf StringNotInAdvancedFilter) AsIsNotNullAdvancedFilter() (*IsNotNullAdvancedFilter, bool) {
+ return nil, false
+}
+
// AsAdvancedFilter is the BasicAdvancedFilter implementation for StringNotInAdvancedFilter.
func (sniaf StringNotInAdvancedFilter) AsAdvancedFilter() (*AdvancedFilter, bool) {
return nil, false
@@ -6882,20 +8877,40 @@ func (sniaf StringNotInAdvancedFilter) AsBasicAdvancedFilter() (BasicAdvancedFil
return &sniaf, true
}
+// SystemData metadata pertaining to creation and last modification of the resource.
+type SystemData struct {
+ // CreatedBy - The identity that created the resource.
+ CreatedBy *string `json:"createdBy,omitempty"`
+ // CreatedByType - The type of identity that created the resource. Possible values include: 'User', 'Application', 'ManagedIdentity', 'Key'
+ CreatedByType CreatedByType `json:"createdByType,omitempty"`
+ // CreatedAt - The timestamp of resource creation (UTC).
+ CreatedAt *date.Time `json:"createdAt,omitempty"`
+ // LastModifiedBy - The identity that last modified the resource.
+ LastModifiedBy *string `json:"lastModifiedBy,omitempty"`
+ // LastModifiedByType - The type of identity that last modified the resource. Possible values include: 'User', 'Application', 'ManagedIdentity', 'Key'
+ LastModifiedByType CreatedByType `json:"lastModifiedByType,omitempty"`
+ // LastModifiedAt - The timestamp of resource last modification (UTC)
+ LastModifiedAt *date.Time `json:"lastModifiedAt,omitempty"`
+}
+
// SystemTopic eventGrid System Topic.
type SystemTopic struct {
autorest.Response `json:"-"`
// SystemTopicProperties - Properties of the system topic.
*SystemTopicProperties `json:"properties,omitempty"`
+ // Identity - Identity information for the resource.
+ Identity *IdentityInfo `json:"identity,omitempty"`
+ // SystemData - READ-ONLY; The system metadata relating to this resource.
+ SystemData *SystemData `json:"systemData,omitempty"`
// Location - Location of the resource.
Location *string `json:"location,omitempty"`
// Tags - Tags of the resource.
Tags map[string]*string `json:"tags"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -6905,6 +8920,9 @@ func (st SystemTopic) MarshalJSON() ([]byte, error) {
if st.SystemTopicProperties != nil {
objectMap["properties"] = st.SystemTopicProperties
}
+ if st.Identity != nil {
+ objectMap["identity"] = st.Identity
+ }
if st.Location != nil {
objectMap["location"] = st.Location
}
@@ -6932,6 +8950,24 @@ func (st *SystemTopic) UnmarshalJSON(body []byte) error {
}
st.SystemTopicProperties = &systemTopicProperties
}
+ case "identity":
+ if v != nil {
+ var identity IdentityInfo
+ err = json.Unmarshal(*v, &identity)
+ if err != nil {
+ return err
+ }
+ st.Identity = &identity
+ }
+ case "systemData":
+ if v != nil {
+ var systemData SystemData
+ err = json.Unmarshal(*v, &systemData)
+ if err != nil {
+ return err
+ }
+ st.SystemData = &systemData
+ }
case "location":
if v != nil {
var location string
@@ -7416,6 +9452,8 @@ func (future *SystemTopicsUpdateFuture) result(client SystemTopicsClient) (st Sy
type SystemTopicUpdateParameters struct {
// Tags - Tags of the system topic.
Tags map[string]*string `json:"tags"`
+ // Identity - Resource identity information.
+ Identity *IdentityInfo `json:"identity,omitempty"`
}
// MarshalJSON is the custom marshaler for SystemTopicUpdateParameters.
@@ -7424,6 +9462,9 @@ func (stup SystemTopicUpdateParameters) MarshalJSON() ([]byte, error) {
if stup.Tags != nil {
objectMap["tags"] = stup.Tags
}
+ if stup.Identity != nil {
+ objectMap["identity"] = stup.Identity
+ }
return json.Marshal(objectMap)
}
@@ -7436,15 +9477,19 @@ type Topic struct {
Sku *ResourceSku `json:"sku,omitempty"`
// Identity - Identity information for the resource.
Identity *IdentityInfo `json:"identity,omitempty"`
+ // Kind - Kind of the resource. Possible values include: 'Azure', 'AzureArc'
+ Kind ResourceKind `json:"kind,omitempty"`
+ // ExtendedLocation - Extended location of the resource.
+ ExtendedLocation *ExtendedLocation `json:"extendedLocation,omitempty"`
// Location - Location of the resource.
Location *string `json:"location,omitempty"`
// Tags - Tags of the resource.
Tags map[string]*string `json:"tags"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -7460,6 +9505,12 @@ func (t Topic) MarshalJSON() ([]byte, error) {
if t.Identity != nil {
objectMap["identity"] = t.Identity
}
+ if t.Kind != "" {
+ objectMap["kind"] = t.Kind
+ }
+ if t.ExtendedLocation != nil {
+ objectMap["extendedLocation"] = t.ExtendedLocation
+ }
if t.Location != nil {
objectMap["location"] = t.Location
}
@@ -7505,6 +9556,24 @@ func (t *Topic) UnmarshalJSON(body []byte) error {
}
t.Identity = &identity
}
+ case "kind":
+ if v != nil {
+ var kind ResourceKind
+ err = json.Unmarshal(*v, &kind)
+ if err != nil {
+ return err
+ }
+ t.Kind = kind
+ }
+ case "extendedLocation":
+ if v != nil {
+ var extendedLocation ExtendedLocation
+ err = json.Unmarshal(*v, &extendedLocation)
+ if err != nil {
+ return err
+ }
+ t.ExtendedLocation = &extendedLocation
+ }
case "location":
if v != nil {
var location string
@@ -7934,6 +10003,49 @@ func NewTopicsListResultPage(cur TopicsListResult, getNextPage func(context.Cont
}
}
+// TopicsRegenerateKeyFuture an abstraction for monitoring and retrieving the results of a long-running
+// operation.
+type TopicsRegenerateKeyFuture struct {
+ azure.FutureAPI
+ // Result returns the result of the asynchronous operation.
+ // If the operation has not completed it will return an error.
+ Result func(TopicsClient) (TopicSharedAccessKeys, error)
+}
+
+// UnmarshalJSON is the custom unmarshaler for TopicsRegenerateKeyFuture.
+func (future *TopicsRegenerateKeyFuture) UnmarshalJSON(body []byte) error {
+ var azFuture azure.Future
+ if err := json.Unmarshal(body, &azFuture); err != nil {
+ return err
+ }
+ future.FutureAPI = &azFuture
+ future.Result = future.result
+ return nil
+}
+
+// result is the default implementation for TopicsRegenerateKeyFuture.Result.
+func (future *TopicsRegenerateKeyFuture) result(client TopicsClient) (tsak TopicSharedAccessKeys, err error) {
+ var done bool
+ done, err = future.DoneWithContext(context.Background(), client)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "eventgrid.TopicsRegenerateKeyFuture", "Result", future.Response(), "Polling failure")
+ return
+ }
+ if !done {
+ tsak.Response.Response = future.Response()
+ err = azure.NewAsyncOpIncompleteError("eventgrid.TopicsRegenerateKeyFuture")
+ return
+ }
+ sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+ if tsak.Response.Response, err = future.GetResult(sender); err == nil && tsak.Response.Response.StatusCode != http.StatusNoContent {
+ tsak, err = client.RegenerateKeyResponder(tsak.Response.Response)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "eventgrid.TopicsRegenerateKeyFuture", "Result", tsak.Response.Response, "Failure responding to request")
+ }
+ }
+ return
+}
+
// TopicsUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation.
type TopicsUpdateFuture struct {
azure.FutureAPI
@@ -7983,9 +10095,9 @@ type TopicTypeInfo struct {
*TopicTypeProperties `json:"properties,omitempty"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -8065,6 +10177,8 @@ type TopicTypeProperties struct {
SupportedLocations *[]string `json:"supportedLocations,omitempty"`
// SourceResourceFormat - Source resource format.
SourceResourceFormat *string `json:"sourceResourceFormat,omitempty"`
+ // SupportedScopesForSource - Supported source scopes.
+ SupportedScopesForSource *[]string `json:"supportedScopesForSource,omitempty"`
}
// TopicTypesListResult result of the List Topic Types operation
@@ -8172,9 +10286,9 @@ type TrackedResource struct {
Tags map[string]*string `json:"tags"`
// ID - READ-ONLY; Fully qualified identifier of the resource.
ID *string `json:"id,omitempty"`
- // Name - READ-ONLY; Name of the resource
+ // Name - READ-ONLY; Name of the resource.
Name *string `json:"name,omitempty"`
- // Type - READ-ONLY; Type of the resource
+ // Type - READ-ONLY; Type of the resource.
Type *string `json:"type,omitempty"`
}
@@ -8312,6 +10426,8 @@ type WebHookEventSubscriptionDestinationProperties struct {
AzureActiveDirectoryTenantID *string `json:"azureActiveDirectoryTenantId,omitempty"`
// AzureActiveDirectoryApplicationIDOrURI - The Azure Active Directory Application ID or URI to get the access token that will be included as the bearer token in delivery requests.
AzureActiveDirectoryApplicationIDOrURI *string `json:"azureActiveDirectoryApplicationIdOrUri,omitempty"`
+ // DeliveryAttributeMappings - Delivery attribute details.
+ DeliveryAttributeMappings *[]BasicDeliveryAttributeMapping `json:"deliveryAttributeMappings,omitempty"`
}
// MarshalJSON is the custom marshaler for WebHookEventSubscriptionDestinationProperties.
@@ -8332,5 +10448,85 @@ func (whesdp WebHookEventSubscriptionDestinationProperties) MarshalJSON() ([]byt
if whesdp.AzureActiveDirectoryApplicationIDOrURI != nil {
objectMap["azureActiveDirectoryApplicationIdOrUri"] = whesdp.AzureActiveDirectoryApplicationIDOrURI
}
+ if whesdp.DeliveryAttributeMappings != nil {
+ objectMap["deliveryAttributeMappings"] = whesdp.DeliveryAttributeMappings
+ }
return json.Marshal(objectMap)
}
+
+// UnmarshalJSON is the custom unmarshaler for WebHookEventSubscriptionDestinationProperties struct.
+func (whesdp *WebHookEventSubscriptionDestinationProperties) UnmarshalJSON(body []byte) error {
+ var m map[string]*json.RawMessage
+ err := json.Unmarshal(body, &m)
+ if err != nil {
+ return err
+ }
+ for k, v := range m {
+ switch k {
+ case "endpointUrl":
+ if v != nil {
+ var endpointURL string
+ err = json.Unmarshal(*v, &endpointURL)
+ if err != nil {
+ return err
+ }
+ whesdp.EndpointURL = &endpointURL
+ }
+ case "endpointBaseUrl":
+ if v != nil {
+ var endpointBaseURL string
+ err = json.Unmarshal(*v, &endpointBaseURL)
+ if err != nil {
+ return err
+ }
+ whesdp.EndpointBaseURL = &endpointBaseURL
+ }
+ case "maxEventsPerBatch":
+ if v != nil {
+ var maxEventsPerBatch int32
+ err = json.Unmarshal(*v, &maxEventsPerBatch)
+ if err != nil {
+ return err
+ }
+ whesdp.MaxEventsPerBatch = &maxEventsPerBatch
+ }
+ case "preferredBatchSizeInKilobytes":
+ if v != nil {
+ var preferredBatchSizeInKilobytes int32
+ err = json.Unmarshal(*v, &preferredBatchSizeInKilobytes)
+ if err != nil {
+ return err
+ }
+ whesdp.PreferredBatchSizeInKilobytes = &preferredBatchSizeInKilobytes
+ }
+ case "azureActiveDirectoryTenantId":
+ if v != nil {
+ var azureActiveDirectoryTenantID string
+ err = json.Unmarshal(*v, &azureActiveDirectoryTenantID)
+ if err != nil {
+ return err
+ }
+ whesdp.AzureActiveDirectoryTenantID = &azureActiveDirectoryTenantID
+ }
+ case "azureActiveDirectoryApplicationIdOrUri":
+ if v != nil {
+ var azureActiveDirectoryApplicationIDOrURI string
+ err = json.Unmarshal(*v, &azureActiveDirectoryApplicationIDOrURI)
+ if err != nil {
+ return err
+ }
+ whesdp.AzureActiveDirectoryApplicationIDOrURI = &azureActiveDirectoryApplicationIDOrURI
+ }
+ case "deliveryAttributeMappings":
+ if v != nil {
+ deliveryAttributeMappings, err := unmarshalBasicDeliveryAttributeMappingArray(*v)
+ if err != nil {
+ return err
+ }
+ whesdp.DeliveryAttributeMappings = &deliveryAttributeMappings
+ }
+ }
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/operations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/operations.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/operations.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/operations.go
index ae5c5102207d5..bb954bf1dc12c 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/operations.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/operations.go
@@ -66,7 +66,7 @@ func (client OperationsClient) List(ctx context.Context) (result OperationsListR
// ListPreparer prepares the List request.
func (client OperationsClient) ListPreparer(ctx context.Context) (*http.Request, error) {
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnernamespaces.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnernamespaces.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnernamespaces.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnernamespaces.go
index d177f32128f46..37295ccfe6710 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnernamespaces.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnernamespaces.go
@@ -71,11 +71,12 @@ func (client PartnerNamespacesClient) CreateOrUpdatePreparer(ctx context.Context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
+ partnerNamespaceInfo.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
@@ -151,7 +152,7 @@ func (client PartnerNamespacesClient) DeletePreparer(ctx context.Context, resour
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -235,7 +236,7 @@ func (client PartnerNamespacesClient) GetPreparer(ctx context.Context, resourceG
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -322,7 +323,7 @@ func (client PartnerNamespacesClient) ListByResourceGroupPreparer(ctx context.Co
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -450,7 +451,7 @@ func (client PartnerNamespacesClient) ListBySubscriptionPreparer(ctx context.Con
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -569,7 +570,7 @@ func (client PartnerNamespacesClient) ListSharedAccessKeysPreparer(ctx context.C
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -652,7 +653,7 @@ func (client PartnerNamespacesClient) RegenerateKeyPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -724,7 +725,7 @@ func (client PartnerNamespacesClient) UpdatePreparer(ctx context.Context, resour
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnerregistrations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnerregistrations.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnerregistrations.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnerregistrations.go
index 70b4c7d7edaad..e3e09f052e054 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnerregistrations.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnerregistrations.go
@@ -77,11 +77,12 @@ func (client PartnerRegistrationsClient) CreateOrUpdatePreparer(ctx context.Cont
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
+ partnerRegistrationInfo.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
@@ -155,7 +156,7 @@ func (client PartnerRegistrationsClient) DeletePreparer(ctx context.Context, res
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -230,7 +231,7 @@ func (client PartnerRegistrationsClient) GetPreparer(ctx context.Context, resour
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -297,7 +298,7 @@ func (client PartnerRegistrationsClient) List(ctx context.Context) (result Partn
// ListPreparer prepares the List request.
func (client PartnerRegistrationsClient) ListPreparer(ctx context.Context) (*http.Request, error) {
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -384,7 +385,7 @@ func (client PartnerRegistrationsClient) ListByResourceGroupPreparer(ctx context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -512,7 +513,7 @@ func (client PartnerRegistrationsClient) ListBySubscriptionPreparer(ctx context.
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -632,7 +633,7 @@ func (client PartnerRegistrationsClient) UpdatePreparer(ctx context.Context, res
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnertopiceventsubscriptions.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnertopiceventsubscriptions.go
similarity index 86%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnertopiceventsubscriptions.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnertopiceventsubscriptions.go
index 75ccde2d1f55e..6d3a64f28cbee 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnertopiceventsubscriptions.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnertopiceventsubscriptions.go
@@ -74,11 +74,12 @@ func (client PartnerTopicEventSubscriptionsClient) CreateOrUpdatePreparer(ctx co
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
+ eventSubscriptionInfo.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
@@ -157,7 +158,7 @@ func (client PartnerTopicEventSubscriptionsClient) DeletePreparer(ctx context.Co
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -244,7 +245,7 @@ func (client PartnerTopicEventSubscriptionsClient) GetPreparer(ctx context.Conte
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -275,6 +276,85 @@ func (client PartnerTopicEventSubscriptionsClient) GetResponder(resp *http.Respo
return
}
+// GetDeliveryAttributes get all delivery attributes for an event subscription of a partner topic.
+// Parameters:
+// resourceGroupName - the name of the resource group within the user's subscription.
+// partnerTopicName - name of the partner topic.
+// eventSubscriptionName - name of the event subscription to be created. Event subscription names must be
+// between 3 and 100 characters in length and use alphanumeric letters only.
+func (client PartnerTopicEventSubscriptionsClient) GetDeliveryAttributes(ctx context.Context, resourceGroupName string, partnerTopicName string, eventSubscriptionName string) (result DeliveryAttributeListResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/PartnerTopicEventSubscriptionsClient.GetDeliveryAttributes")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetDeliveryAttributesPreparer(ctx, resourceGroupName, partnerTopicName, eventSubscriptionName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "eventgrid.PartnerTopicEventSubscriptionsClient", "GetDeliveryAttributes", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetDeliveryAttributesSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "eventgrid.PartnerTopicEventSubscriptionsClient", "GetDeliveryAttributes", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetDeliveryAttributesResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "eventgrid.PartnerTopicEventSubscriptionsClient", "GetDeliveryAttributes", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetDeliveryAttributesPreparer prepares the GetDeliveryAttributes request.
+func (client PartnerTopicEventSubscriptionsClient) GetDeliveryAttributesPreparer(ctx context.Context, resourceGroupName string, partnerTopicName string, eventSubscriptionName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "eventSubscriptionName": autorest.Encode("path", eventSubscriptionName),
+ "partnerTopicName": autorest.Encode("path", partnerTopicName),
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ }
+
+ const APIVersion = "2020-10-15-preview"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsPost(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/partnerTopics/{partnerTopicName}/eventSubscriptions/{eventSubscriptionName}/getDeliveryAttributes", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetDeliveryAttributesSender sends the GetDeliveryAttributes request. The method will close the
+// http.Response Body if it receives an error.
+func (client PartnerTopicEventSubscriptionsClient) GetDeliveryAttributesSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// GetDeliveryAttributesResponder handles the response to the GetDeliveryAttributes request. The method always
+// closes the http.Response Body.
+func (client PartnerTopicEventSubscriptionsClient) GetDeliveryAttributesResponder(resp *http.Response) (result DeliveryAttributeListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
// GetFullURL get the full endpoint URL for an event subscription of a partner topic.
// Parameters:
// resourceGroupName - the name of the resource group within the user's subscription.
@@ -323,7 +403,7 @@ func (client PartnerTopicEventSubscriptionsClient) GetFullURLPreparer(ctx contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -412,7 +492,7 @@ func (client PartnerTopicEventSubscriptionsClient) ListByPartnerTopicPreparer(ct
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -528,7 +608,7 @@ func (client PartnerTopicEventSubscriptionsClient) UpdatePreparer(ctx context.Co
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnertopics.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnertopics.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnertopics.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnertopics.go
index d71eb15e0e5c1..d653420fad186 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/partnertopics.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/partnertopics.go
@@ -75,7 +75,7 @@ func (client PartnerTopicsClient) ActivatePreparer(ctx context.Context, resource
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -151,7 +151,7 @@ func (client PartnerTopicsClient) DeactivatePreparer(ctx context.Context, resour
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -220,7 +220,7 @@ func (client PartnerTopicsClient) DeletePreparer(ctx context.Context, resourceGr
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -304,7 +304,7 @@ func (client PartnerTopicsClient) GetPreparer(ctx context.Context, resourceGroup
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -391,7 +391,7 @@ func (client PartnerTopicsClient) ListByResourceGroupPreparer(ctx context.Contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -519,7 +519,7 @@ func (client PartnerTopicsClient) ListBySubscriptionPreparer(ctx context.Context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -639,7 +639,7 @@ func (client PartnerTopicsClient) UpdatePreparer(ctx context.Context, resourceGr
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/privateendpointconnections.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/privateendpointconnections.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/privateendpointconnections.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/privateendpointconnections.go
index ca9b1cb137f08..82c4ccd967da4 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/privateendpointconnections.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/privateendpointconnections.go
@@ -73,7 +73,7 @@ func (client PrivateEndpointConnectionsClient) DeletePreparer(ctx context.Contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -161,7 +161,7 @@ func (client PrivateEndpointConnectionsClient) GetPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -252,7 +252,7 @@ func (client PrivateEndpointConnectionsClient) ListByResourcePreparer(ctx contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -369,7 +369,7 @@ func (client PrivateEndpointConnectionsClient) UpdatePreparer(ctx context.Contex
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/privatelinkresources.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/privatelinkresources.go
similarity index 99%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/privatelinkresources.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/privatelinkresources.go
index 4c7d0298e544a..a440a34c60e33 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/privatelinkresources.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/privatelinkresources.go
@@ -80,7 +80,7 @@ func (client PrivateLinkResourcesClient) GetPreparer(ctx context.Context, resour
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -171,7 +171,7 @@ func (client PrivateLinkResourcesClient) ListByResourcePreparer(ctx context.Cont
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/systemtopiceventsubscriptions.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/systemtopiceventsubscriptions.go
similarity index 86%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/systemtopiceventsubscriptions.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/systemtopiceventsubscriptions.go
index 50faebb4e141a..be7f0955309dd 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/systemtopiceventsubscriptions.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/systemtopiceventsubscriptions.go
@@ -74,11 +74,12 @@ func (client SystemTopicEventSubscriptionsClient) CreateOrUpdatePreparer(ctx con
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
+ eventSubscriptionInfo.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
@@ -157,7 +158,7 @@ func (client SystemTopicEventSubscriptionsClient) DeletePreparer(ctx context.Con
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -244,7 +245,7 @@ func (client SystemTopicEventSubscriptionsClient) GetPreparer(ctx context.Contex
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -275,6 +276,85 @@ func (client SystemTopicEventSubscriptionsClient) GetResponder(resp *http.Respon
return
}
+// GetDeliveryAttributes get all delivery attributes for an event subscription.
+// Parameters:
+// resourceGroupName - the name of the resource group within the user's subscription.
+// systemTopicName - name of the system topic.
+// eventSubscriptionName - name of the event subscription to be created. Event subscription names must be
+// between 3 and 100 characters in length and use alphanumeric letters only.
+func (client SystemTopicEventSubscriptionsClient) GetDeliveryAttributes(ctx context.Context, resourceGroupName string, systemTopicName string, eventSubscriptionName string) (result DeliveryAttributeListResult, err error) {
+ if tracing.IsEnabled() {
+ ctx = tracing.StartSpan(ctx, fqdn+"/SystemTopicEventSubscriptionsClient.GetDeliveryAttributes")
+ defer func() {
+ sc := -1
+ if result.Response.Response != nil {
+ sc = result.Response.Response.StatusCode
+ }
+ tracing.EndSpan(ctx, sc, err)
+ }()
+ }
+ req, err := client.GetDeliveryAttributesPreparer(ctx, resourceGroupName, systemTopicName, eventSubscriptionName)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "eventgrid.SystemTopicEventSubscriptionsClient", "GetDeliveryAttributes", nil, "Failure preparing request")
+ return
+ }
+
+ resp, err := client.GetDeliveryAttributesSender(req)
+ if err != nil {
+ result.Response = autorest.Response{Response: resp}
+ err = autorest.NewErrorWithError(err, "eventgrid.SystemTopicEventSubscriptionsClient", "GetDeliveryAttributes", resp, "Failure sending request")
+ return
+ }
+
+ result, err = client.GetDeliveryAttributesResponder(resp)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "eventgrid.SystemTopicEventSubscriptionsClient", "GetDeliveryAttributes", resp, "Failure responding to request")
+ return
+ }
+
+ return
+}
+
+// GetDeliveryAttributesPreparer prepares the GetDeliveryAttributes request.
+func (client SystemTopicEventSubscriptionsClient) GetDeliveryAttributesPreparer(ctx context.Context, resourceGroupName string, systemTopicName string, eventSubscriptionName string) (*http.Request, error) {
+ pathParameters := map[string]interface{}{
+ "eventSubscriptionName": autorest.Encode("path", eventSubscriptionName),
+ "resourceGroupName": autorest.Encode("path", resourceGroupName),
+ "subscriptionId": autorest.Encode("path", client.SubscriptionID),
+ "systemTopicName": autorest.Encode("path", systemTopicName),
+ }
+
+ const APIVersion = "2020-10-15-preview"
+ queryParameters := map[string]interface{}{
+ "api-version": APIVersion,
+ }
+
+ preparer := autorest.CreatePreparer(
+ autorest.AsPost(),
+ autorest.WithBaseURL(client.BaseURI),
+ autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/systemTopics/{systemTopicName}/eventSubscriptions/{eventSubscriptionName}/getDeliveryAttributes", pathParameters),
+ autorest.WithQueryParameters(queryParameters))
+ return preparer.Prepare((&http.Request{}).WithContext(ctx))
+}
+
+// GetDeliveryAttributesSender sends the GetDeliveryAttributes request. The method will close the
+// http.Response Body if it receives an error.
+func (client SystemTopicEventSubscriptionsClient) GetDeliveryAttributesSender(req *http.Request) (*http.Response, error) {
+ return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+}
+
+// GetDeliveryAttributesResponder handles the response to the GetDeliveryAttributes request. The method always
+// closes the http.Response Body.
+func (client SystemTopicEventSubscriptionsClient) GetDeliveryAttributesResponder(resp *http.Response) (result DeliveryAttributeListResult, err error) {
+ err = autorest.Respond(
+ resp,
+ azure.WithErrorUnlessStatusCode(http.StatusOK),
+ autorest.ByUnmarshallingJSON(&result),
+ autorest.ByClosing())
+ result.Response = autorest.Response{Response: resp}
+ return
+}
+
// GetFullURL get the full endpoint URL for an event subscription of a system topic.
// Parameters:
// resourceGroupName - the name of the resource group within the user's subscription.
@@ -323,7 +403,7 @@ func (client SystemTopicEventSubscriptionsClient) GetFullURLPreparer(ctx context
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -412,7 +492,7 @@ func (client SystemTopicEventSubscriptionsClient) ListBySystemTopicPreparer(ctx
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -528,7 +608,7 @@ func (client SystemTopicEventSubscriptionsClient) UpdatePreparer(ctx context.Con
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/systemtopics.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/systemtopics.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/systemtopics.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/systemtopics.go
index 429156947dd5b..8f46083e9385f 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/systemtopics.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/systemtopics.go
@@ -69,11 +69,12 @@ func (client SystemTopicsClient) CreateOrUpdatePreparer(ctx context.Context, res
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
+ systemTopicInfo.SystemData = nil
preparer := autorest.CreatePreparer(
autorest.AsContentType("application/json; charset=utf-8"),
autorest.AsPut(),
@@ -149,7 +150,7 @@ func (client SystemTopicsClient) DeletePreparer(ctx context.Context, resourceGro
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -233,7 +234,7 @@ func (client SystemTopicsClient) GetPreparer(ctx context.Context, resourceGroupN
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -320,7 +321,7 @@ func (client SystemTopicsClient) ListByResourceGroupPreparer(ctx context.Context
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -448,7 +449,7 @@ func (client SystemTopicsClient) ListBySubscriptionPreparer(ctx context.Context,
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -561,7 +562,7 @@ func (client SystemTopicsClient) UpdatePreparer(ctx context.Context, resourceGro
"systemTopicName": autorest.Encode("path", systemTopicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/topics.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/topics.go
similarity index 97%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/topics.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/topics.go
index af92a6cf21906..eb348d9bde28b 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/topics.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/topics.go
@@ -70,7 +70,7 @@ func (client TopicsClient) CreateOrUpdatePreparer(ctx context.Context, resourceG
"topicName": autorest.Encode("path", topicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -150,7 +150,7 @@ func (client TopicsClient) DeletePreparer(ctx context.Context, resourceGroupName
"topicName": autorest.Encode("path", topicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -234,7 +234,7 @@ func (client TopicsClient) GetPreparer(ctx context.Context, resourceGroupName st
"topicName": autorest.Encode("path", topicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -321,7 +321,7 @@ func (client TopicsClient) ListByResourceGroupPreparer(ctx context.Context, reso
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -449,7 +449,7 @@ func (client TopicsClient) ListBySubscriptionPreparer(ctx context.Context, filte
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -572,7 +572,7 @@ func (client TopicsClient) ListEventTypesPreparer(ctx context.Context, resourceG
"subscriptionId": autorest.Encode("path", client.SubscriptionID),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -648,7 +648,7 @@ func (client TopicsClient) ListSharedAccessKeysPreparer(ctx context.Context, res
"topicName": autorest.Encode("path", topicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -684,13 +684,13 @@ func (client TopicsClient) ListSharedAccessKeysResponder(resp *http.Response) (r
// resourceGroupName - the name of the resource group within the user's subscription.
// topicName - name of the topic.
// regenerateKeyRequest - request body to regenerate key.
-func (client TopicsClient) RegenerateKey(ctx context.Context, resourceGroupName string, topicName string, regenerateKeyRequest TopicRegenerateKeyRequest) (result TopicSharedAccessKeys, err error) {
+func (client TopicsClient) RegenerateKey(ctx context.Context, resourceGroupName string, topicName string, regenerateKeyRequest TopicRegenerateKeyRequest) (result TopicsRegenerateKeyFuture, err error) {
if tracing.IsEnabled() {
ctx = tracing.StartSpan(ctx, fqdn+"/TopicsClient.RegenerateKey")
defer func() {
sc := -1
- if result.Response.Response != nil {
- sc = result.Response.Response.StatusCode
+ if result.FutureAPI != nil && result.FutureAPI.Response() != nil {
+ sc = result.FutureAPI.Response().StatusCode
}
tracing.EndSpan(ctx, sc, err)
}()
@@ -707,16 +707,9 @@ func (client TopicsClient) RegenerateKey(ctx context.Context, resourceGroupName
return
}
- resp, err := client.RegenerateKeySender(req)
+ result, err = client.RegenerateKeySender(req)
if err != nil {
- result.Response = autorest.Response{Response: resp}
- err = autorest.NewErrorWithError(err, "eventgrid.TopicsClient", "RegenerateKey", resp, "Failure sending request")
- return
- }
-
- result, err = client.RegenerateKeyResponder(resp)
- if err != nil {
- err = autorest.NewErrorWithError(err, "eventgrid.TopicsClient", "RegenerateKey", resp, "Failure responding to request")
+ err = autorest.NewErrorWithError(err, "eventgrid.TopicsClient", "RegenerateKey", nil, "Failure sending request")
return
}
@@ -731,7 +724,7 @@ func (client TopicsClient) RegenerateKeyPreparer(ctx context.Context, resourceGr
"topicName": autorest.Encode("path", topicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -748,8 +741,17 @@ func (client TopicsClient) RegenerateKeyPreparer(ctx context.Context, resourceGr
// RegenerateKeySender sends the RegenerateKey request. The method will close the
// http.Response Body if it receives an error.
-func (client TopicsClient) RegenerateKeySender(req *http.Request) (*http.Response, error) {
- return client.Send(req, azure.DoRetryWithRegistration(client.Client))
+func (client TopicsClient) RegenerateKeySender(req *http.Request) (future TopicsRegenerateKeyFuture, err error) {
+ var resp *http.Response
+ resp, err = client.Send(req, azure.DoRetryWithRegistration(client.Client))
+ if err != nil {
+ return
+ }
+ var azf azure.Future
+ azf, err = azure.NewFutureFromResponse(resp)
+ future.FutureAPI = &azf
+ future.Result = future.result
+ return
}
// RegenerateKeyResponder handles the response to the RegenerateKey request. The method always
@@ -757,7 +759,7 @@ func (client TopicsClient) RegenerateKeySender(req *http.Request) (*http.Respons
func (client TopicsClient) RegenerateKeyResponder(resp *http.Response) (result TopicSharedAccessKeys, err error) {
err = autorest.Respond(
resp,
- azure.WithErrorUnlessStatusCode(http.StatusOK),
+ azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted),
autorest.ByUnmarshallingJSON(&result),
autorest.ByClosing())
result.Response = autorest.Response{Response: resp}
@@ -803,7 +805,7 @@ func (client TopicsClient) UpdatePreparer(ctx context.Context, resourceGroupName
"topicName": autorest.Encode("path", topicName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/topictypes.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/topictypes.go
similarity index 98%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/topictypes.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/topictypes.go
index ca275f3ef5276..23e8cd7bba063 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/topictypes.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/topictypes.go
@@ -72,7 +72,7 @@ func (client TopicTypesClient) GetPreparer(ctx context.Context, topicTypeName st
"topicTypeName": autorest.Encode("path", topicTypeName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -139,7 +139,7 @@ func (client TopicTypesClient) List(ctx context.Context) (result TopicTypesListR
// ListPreparer prepares the List request.
func (client TopicTypesClient) ListPreparer(ctx context.Context) (*http.Request, error) {
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
@@ -212,7 +212,7 @@ func (client TopicTypesClient) ListEventTypesPreparer(ctx context.Context, topic
"topicTypeName": autorest.Encode("path", topicTypeName),
}
- const APIVersion = "2020-04-01-preview"
+ const APIVersion = "2020-10-15-preview"
queryParameters := map[string]interface{}{
"api-version": APIVersion,
}
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/version.go
similarity index 90%
rename from vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/version.go
rename to vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/version.go
index 91a00389d5bdb..96bb0d2f83325 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-04-01-preview/eventgrid/version.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/services/preview/eventgrid/mgmt/2020-10-15-preview/eventgrid/version.go
@@ -10,7 +10,7 @@ import "github.com/Azure/azure-sdk-for-go/version"
// UserAgent returns the UserAgent string to use when sending http.Requests.
func UserAgent() string {
- return "Azure-SDK-For-Go/" + Version() + " eventgrid/2020-04-01-preview"
+ return "Azure-SDK-For-Go/" + Version() + " eventgrid/2020-10-15-preview"
}
// Version returns the semantic version (see http://semver.org) of the client.
diff --git a/vendor/github.com/Azure/azure-sdk-for-go/version/version.go b/vendor/github.com/Azure/azure-sdk-for-go/version/version.go
index 5c99f7cd687e5..2d5192c22a236 100644
--- a/vendor/github.com/Azure/azure-sdk-for-go/version/version.go
+++ b/vendor/github.com/Azure/azure-sdk-for-go/version/version.go
@@ -4,4 +4,4 @@ package version
// Licensed under the MIT License. See License.txt in the project root for license information.
// Number contains the semantic version of this SDK.
-const Number = "v54.0.0"
+const Number = "v54.2.0"
diff --git a/vendor/github.com/hashicorp/go-hclog/README.md b/vendor/github.com/hashicorp/go-hclog/README.md
index 9b6845e98872d..5d56f4b59c3f3 100644
--- a/vendor/github.com/hashicorp/go-hclog/README.md
+++ b/vendor/github.com/hashicorp/go-hclog/README.md
@@ -132,7 +132,7 @@ Alternatively, you may configure the system-wide logger:
```go
// log the standard logger from 'import "log"'
-log.SetOutput(appLogger.Writer(&hclog.StandardLoggerOptions{InferLevels: true}))
+log.SetOutput(appLogger.StandardWriter(&hclog.StandardLoggerOptions{InferLevels: true}))
log.SetPrefix("")
log.SetFlags(0)
diff --git a/vendor/github.com/hashicorp/go-hclog/colorize_unix.go b/vendor/github.com/hashicorp/go-hclog/colorize_unix.go
new file mode 100644
index 0000000000000..44aa9bf2c620c
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-hclog/colorize_unix.go
@@ -0,0 +1,27 @@
+// +build !windows
+
+package hclog
+
+import (
+ "github.com/mattn/go-isatty"
+)
+
+// setColorization will mutate the values of this logger
+// to appropriately configure colorization options. On non-Windows
+// systems it disables color when the output is not a terminal.
+func (l *intLogger) setColorization(opts *LoggerOptions) {
+ switch opts.Color {
+ case ColorOff:
+ fallthrough
+ case ForceColor:
+ return
+ case AutoColor:
+ fi := l.checkWriterIsFile()
+ isUnixTerm := isatty.IsTerminal(fi.Fd())
+ isCygwinTerm := isatty.IsCygwinTerminal(fi.Fd())
+ isTerm := isUnixTerm || isCygwinTerm
+ if !isTerm {
+ l.writer.color = ColorOff
+ }
+ }
+}
diff --git a/vendor/github.com/hashicorp/go-hclog/colorize_windows.go b/vendor/github.com/hashicorp/go-hclog/colorize_windows.go
new file mode 100644
index 0000000000000..23486b6d74f81
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-hclog/colorize_windows.go
@@ -0,0 +1,33 @@
+// +build windows
+
+package hclog
+
+import (
+ "os"
+
+ colorable "github.com/mattn/go-colorable"
+ "github.com/mattn/go-isatty"
+)
+
+// setColorization will mutate the values of this logger
+// to appropriately configure colorization options. It provides
+// a wrapper to the output stream on Windows systems.
+func (l *intLogger) setColorization(opts *LoggerOptions) {
+ switch opts.Color {
+ case ColorOff:
+ return
+ case ForceColor:
+ fi := l.checkWriterIsFile()
+ l.writer.w = colorable.NewColorable(fi)
+ case AutoColor:
+ fi := l.checkWriterIsFile()
+ isUnixTerm := isatty.IsTerminal(os.Stdout.Fd())
+ isCygwinTerm := isatty.IsCygwinTerminal(os.Stdout.Fd())
+ isTerm := isUnixTerm || isCygwinTerm
+ if !isTerm {
+ l.writer.color = ColorOff
+ return
+ }
+ l.writer.w = colorable.NewColorable(fi)
+ }
+}
diff --git a/vendor/github.com/hashicorp/go-hclog/exclude.go b/vendor/github.com/hashicorp/go-hclog/exclude.go
new file mode 100644
index 0000000000000..cfd4307a80351
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-hclog/exclude.go
@@ -0,0 +1,71 @@
+package hclog
+
+import (
+ "regexp"
+ "strings"
+)
+
+// ExcludeByMessage provides a simple way to build a list of log messages that
+// can be queried and matched. This is meant to be used with the Exclude
+// option on LoggerOptions to suppress log messages. It does not hold any
+// mutexes internally, so the expected usage is to Add entries during setup
+// and none after Exclude may be called. Exclude is invoked with a mutex held
+// within the Logger, so it does not need its own. Example usage:
+//
+// f := new(ExcludeByMessage)
+// f.Add("Noisy log message text")
+// appLogger.Exclude = f.Exclude
+type ExcludeByMessage struct {
+ messages map[string]struct{}
+}
+
+// Add a message to be filtered. Do not call this once Exclude may be
+// called; Add is not safe for concurrent use.
+func (f *ExcludeByMessage) Add(msg string) {
+ if f.messages == nil {
+ f.messages = make(map[string]struct{})
+ }
+
+ f.messages[msg] = struct{}{}
+}
+
+// Exclude returns true if the given message should be excluded.
+func (f *ExcludeByMessage) Exclude(level Level, msg string, args ...interface{}) bool {
+ _, ok := f.messages[msg]
+ return ok
+}
+
+// ExcludeByPrefix is a simple type to match a message string that has a common prefix.
+type ExcludeByPrefix string
+
+// Exclude returns true if the message starts with the prefix.
+func (p ExcludeByPrefix) Exclude(level Level, msg string, args ...interface{}) bool {
+ return strings.HasPrefix(msg, string(p))
+}
+
+// ExcludeByRegexp takes a regexp and uses it to match a log message string. If it matches
+// the log entry is excluded.
+type ExcludeByRegexp struct {
+ Regexp *regexp.Regexp
+}
+
+// Exclude the log message if the message string matches the regexp
+func (e ExcludeByRegexp) Exclude(level Level, msg string, args ...interface{}) bool {
+ return e.Regexp.MatchString(msg)
+}
+
+// ExcludeFuncs is a slice of functions that will be called to see if a log
+// entry should be filtered or not. It stops calling functions once at least
+// one returns true.
+type ExcludeFuncs []func(level Level, msg string, args ...interface{}) bool
+
+// Calls each function until one of them returns true
+func (ff ExcludeFuncs) Exclude(level Level, msg string, args ...interface{}) bool {
+ for _, f := range ff {
+ if f(level, msg, args...) {
+ return true
+ }
+ }
+
+ return false
+}
diff --git a/vendor/github.com/hashicorp/go-hclog/global.go b/vendor/github.com/hashicorp/go-hclog/global.go
index 3efc54c12901e..22ebc57d877f7 100644
--- a/vendor/github.com/hashicorp/go-hclog/global.go
+++ b/vendor/github.com/hashicorp/go-hclog/global.go
@@ -20,6 +20,13 @@ var (
// Default returns a globally held logger. This can be a good starting
// place, and then you can use .With() and .Name() to create sub-loggers
// to be used in more specific contexts.
+// The value of the Default logger can be set via SetDefault() or by
+// changing the options in DefaultOptions.
+//
+// This method is goroutine safe, returning a global from memory, but
+// caution should be used if SetDefault() is called at random times
+// in the program as that may result in race conditions and an unexpected
+// Logger being returned.
func Default() Logger {
protect.Do(func() {
// If SetDefault was used before Default() was called, we need to
@@ -41,6 +48,13 @@ func L() Logger {
// to the one given. This allows packages to use the default logger
// and have higher level packages change it to match the execution
// environment. It returns any old default if there is one.
+//
+// NOTE: This is expected to be called early in the program to set up
+// a default logger. As such, it does not attempt to make itself
+// not racy with regard to the value of the default logger. Ergo
+// if it is called in goroutines, you may experience race conditions
+// with other goroutines retrieving the default logger. Basically,
+// don't do that.
func SetDefault(log Logger) Logger {
old := def
def = log
diff --git a/vendor/github.com/hashicorp/go-hclog/go.mod b/vendor/github.com/hashicorp/go-hclog/go.mod
index 0d079a65444c7..b6698c0836fa0 100644
--- a/vendor/github.com/hashicorp/go-hclog/go.mod
+++ b/vendor/github.com/hashicorp/go-hclog/go.mod
@@ -2,6 +2,11 @@ module github.com/hashicorp/go-hclog
require (
github.com/davecgh/go-spew v1.1.1 // indirect
+ github.com/fatih/color v1.7.0
+ github.com/mattn/go-colorable v0.1.4
+ github.com/mattn/go-isatty v0.0.10
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/stretchr/testify v1.2.2
)
+
+go 1.13
diff --git a/vendor/github.com/hashicorp/go-hclog/go.sum b/vendor/github.com/hashicorp/go-hclog/go.sum
index e03ee77d9e3b1..3a656dfd9c971 100644
--- a/vendor/github.com/hashicorp/go-hclog/go.sum
+++ b/vendor/github.com/hashicorp/go-hclog/go.sum
@@ -1,6 +1,18 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
+github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
+github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA=
+github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
+github.com/mattn/go-isatty v0.0.8 h1:HLtExJ+uU2HOZ+wI0Tt5DtUDrx8yhUqDcp7fYERX4CE=
+github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
+github.com/mattn/go-isatty v0.0.10 h1:qxFzApOv4WsAL965uUPIsXzAKCZxN2p9UqdhFS4ZW10=
+github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223 h1:DH4skfRX4EBpamg7iV4ZlCpblAHI6s6TDM39bFZumv8=
+golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be h1:QAcqgptGM8IQBC9K/RC4o+O9YmqEm0diQn9QmZw/0mU=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
diff --git a/vendor/github.com/hashicorp/go-hclog/interceptlogger.go b/vendor/github.com/hashicorp/go-hclog/interceptlogger.go
new file mode 100644
index 0000000000000..08a6677eb7567
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-hclog/interceptlogger.go
@@ -0,0 +1,246 @@
+package hclog
+
+import (
+ "io"
+ "log"
+ "sync"
+ "sync/atomic"
+)
+
+var _ Logger = &interceptLogger{}
+
+type interceptLogger struct {
+ Logger
+
+ mu *sync.Mutex
+ sinkCount *int32
+ Sinks map[SinkAdapter]struct{}
+}
+
+func NewInterceptLogger(opts *LoggerOptions) InterceptLogger {
+ intercept := &interceptLogger{
+ Logger: New(opts),
+ mu: new(sync.Mutex),
+ sinkCount: new(int32),
+ Sinks: make(map[SinkAdapter]struct{}),
+ }
+
+ atomic.StoreInt32(intercept.sinkCount, 0)
+
+ return intercept
+}
+
+func (i *interceptLogger) Log(level Level, msg string, args ...interface{}) {
+ i.Logger.Log(level, msg, args...)
+ if atomic.LoadInt32(i.sinkCount) == 0 {
+ return
+ }
+
+ i.mu.Lock()
+ defer i.mu.Unlock()
+ for s := range i.Sinks {
+ s.Accept(i.Name(), level, msg, i.retrieveImplied(args...)...)
+ }
+}
+
+// Emit the message and args at TRACE level to log and sinks
+func (i *interceptLogger) Trace(msg string, args ...interface{}) {
+ i.Logger.Trace(msg, args...)
+ if atomic.LoadInt32(i.sinkCount) == 0 {
+ return
+ }
+
+ i.mu.Lock()
+ defer i.mu.Unlock()
+ for s := range i.Sinks {
+ s.Accept(i.Name(), Trace, msg, i.retrieveImplied(args...)...)
+ }
+}
+
+// Emit the message and args at DEBUG level to log and sinks
+func (i *interceptLogger) Debug(msg string, args ...interface{}) {
+ i.Logger.Debug(msg, args...)
+ if atomic.LoadInt32(i.sinkCount) == 0 {
+ return
+ }
+
+ i.mu.Lock()
+ defer i.mu.Unlock()
+ for s := range i.Sinks {
+ s.Accept(i.Name(), Debug, msg, i.retrieveImplied(args...)...)
+ }
+}
+
+// Emit the message and args at INFO level to log and sinks
+func (i *interceptLogger) Info(msg string, args ...interface{}) {
+ i.Logger.Info(msg, args...)
+ if atomic.LoadInt32(i.sinkCount) == 0 {
+ return
+ }
+
+ i.mu.Lock()
+ defer i.mu.Unlock()
+ for s := range i.Sinks {
+ s.Accept(i.Name(), Info, msg, i.retrieveImplied(args...)...)
+ }
+}
+
+// Emit the message and args at WARN level to log and sinks
+func (i *interceptLogger) Warn(msg string, args ...interface{}) {
+ i.Logger.Warn(msg, args...)
+ if atomic.LoadInt32(i.sinkCount) == 0 {
+ return
+ }
+
+ i.mu.Lock()
+ defer i.mu.Unlock()
+ for s := range i.Sinks {
+ s.Accept(i.Name(), Warn, msg, i.retrieveImplied(args...)...)
+ }
+}
+
+// Emit the message and args at ERROR level to log and sinks
+func (i *interceptLogger) Error(msg string, args ...interface{}) {
+ i.Logger.Error(msg, args...)
+ if atomic.LoadInt32(i.sinkCount) == 0 {
+ return
+ }
+
+ i.mu.Lock()
+ defer i.mu.Unlock()
+ for s := range i.Sinks {
+ s.Accept(i.Name(), Error, msg, i.retrieveImplied(args...)...)
+ }
+}
+
+func (i *interceptLogger) retrieveImplied(args ...interface{}) []interface{} {
+ top := i.Logger.ImpliedArgs()
+
+ cp := make([]interface{}, len(top)+len(args))
+ copy(cp, top)
+ copy(cp[len(top):], args)
+
+ return cp
+}
+
+// Create a new sub-Logger that has a name descending from the current name.
+// This is used to create a subsystem specific Logger.
+// Registered sinks will subscribe to these messages as well.
+func (i *interceptLogger) Named(name string) Logger {
+ var sub interceptLogger
+
+ sub = *i
+
+ sub.Logger = i.Logger.Named(name)
+
+ return &sub
+}
+
+// Create a new sub-Logger with an explicit name. This ignores the current
+// name. This is used to create a standalone logger that doesn't fall
+// within the normal hierarchy. Registered sinks will subscribe
+// to these messages as well.
+func (i *interceptLogger) ResetNamed(name string) Logger {
+ var sub interceptLogger
+
+ sub = *i
+
+ sub.Logger = i.Logger.ResetNamed(name)
+
+ return &sub
+}
+
+// Create a new sub-Logger that has a name descending from the current name.
+// This is used to create a subsystem specific Logger.
+// Registered sinks will subscribe to these messages as well.
+func (i *interceptLogger) NamedIntercept(name string) InterceptLogger {
+ var sub interceptLogger
+
+ sub = *i
+
+ sub.Logger = i.Logger.Named(name)
+
+ return &sub
+}
+
+// Create a new sub-Logger with an explicit name. This ignores the current
+// name. This is used to create a standalone logger that doesn't fall
+// within the normal hierarchy. Registered sinks will subscribe
+// to these messages as well.
+func (i *interceptLogger) ResetNamedIntercept(name string) InterceptLogger {
+ var sub interceptLogger
+
+ sub = *i
+
+ sub.Logger = i.Logger.ResetNamed(name)
+
+ return &sub
+}
+
+// Return a sub-Logger for which every emitted log message will contain
+// the given key/value pairs. This is used to create a context specific
+// Logger.
+func (i *interceptLogger) With(args ...interface{}) Logger {
+ var sub interceptLogger
+
+ sub = *i
+
+ sub.Logger = i.Logger.With(args...)
+
+ return &sub
+}
+
+// RegisterSink attaches a SinkAdapter to the interceptLogger's sinks.
+func (i *interceptLogger) RegisterSink(sink SinkAdapter) {
+ i.mu.Lock()
+ defer i.mu.Unlock()
+
+ i.Sinks[sink] = struct{}{}
+
+ atomic.AddInt32(i.sinkCount, 1)
+}
+
+// DeregisterSink removes a SinkAdapter from the interceptLogger's sinks.
+func (i *interceptLogger) DeregisterSink(sink SinkAdapter) {
+ i.mu.Lock()
+ defer i.mu.Unlock()
+
+ delete(i.Sinks, sink)
+
+ atomic.AddInt32(i.sinkCount, -1)
+}
+
+// Create a *log.Logger that will send its data through this Logger. This
+// allows packages that expect to be using the standard library to log to
+// actually use this logger, which will also send to any registered sinks.
+func (i *interceptLogger) StandardLoggerIntercept(opts *StandardLoggerOptions) *log.Logger {
+ if opts == nil {
+ opts = &StandardLoggerOptions{}
+ }
+
+ return log.New(i.StandardWriterIntercept(opts), "", 0)
+}
+
+func (i *interceptLogger) StandardWriterIntercept(opts *StandardLoggerOptions) io.Writer {
+ return &stdlogAdapter{
+ log: i,
+ inferLevels: opts.InferLevels,
+ forceLevel: opts.ForceLevel,
+ }
+}
+
+func (i *interceptLogger) ResetOutput(opts *LoggerOptions) error {
+ if or, ok := i.Logger.(OutputResettable); ok {
+ return or.ResetOutput(opts)
+ } else {
+ return nil
+ }
+}
+
+func (i *interceptLogger) ResetOutputWithFlush(opts *LoggerOptions, flushable Flushable) error {
+ if or, ok := i.Logger.(OutputResettable); ok {
+ return or.ResetOutputWithFlush(opts, flushable)
+ } else {
+ return nil
+ }
+}
diff --git a/vendor/github.com/hashicorp/go-hclog/intlogger.go b/vendor/github.com/hashicorp/go-hclog/intlogger.go
index 219656c4cb3a9..7158125de2a2a 100644
--- a/vendor/github.com/hashicorp/go-hclog/intlogger.go
+++ b/vendor/github.com/hashicorp/go-hclog/intlogger.go
@@ -4,10 +4,13 @@ import (
"bytes"
"encoding"
"encoding/json"
+ "errors"
"fmt"
"io"
"log"
+ "os"
"reflect"
+ "regexp"
"runtime"
"sort"
"strconv"
@@ -15,6 +18,8 @@ import (
"sync"
"sync/atomic"
"time"
+
+ "github.com/fatih/color"
)
// TimeFormat to use for logging. This is a version of RFC3339 that contains
@@ -32,6 +37,14 @@ var (
Warn: "[WARN] ",
Error: "[ERROR]",
}
+
+ _levelToColor = map[Level]*color.Color{
+ Debug: color.New(color.FgHiWhite),
+ Trace: color.New(color.FgHiGreen),
+ Info: color.New(color.FgHiBlue),
+ Warn: color.New(color.FgHiYellow),
+ Error: color.New(color.FgHiRed),
+ }
)
// Make sure that intLogger is a Logger
@@ -45,17 +58,29 @@ type intLogger struct {
name string
timeFormat string
- // This is a pointer so that it's shared by any derived loggers, since
+ // This is an interface so that it's shared by any derived loggers, since
// those derived loggers share the bufio.Writer as well.
- mutex *sync.Mutex
+ mutex Locker
writer *writer
level *int32
implied []interface{}
+
+ exclude func(level Level, msg string, args ...interface{}) bool
}
// New returns a configured logger.
func New(opts *LoggerOptions) Logger {
+ return newLogger(opts)
+}
+
+// NewSinkAdapter returns a SinkAdapter with configured settings
+// defined by LoggerOptions
+func NewSinkAdapter(opts *LoggerOptions) SinkAdapter {
+ return newLogger(opts)
+}
+
+func newLogger(opts *LoggerOptions) *intLogger {
if opts == nil {
opts = &LoggerOptions{}
}
@@ -81,11 +106,16 @@ func New(opts *LoggerOptions) Logger {
name: opts.Name,
timeFormat: TimeFormat,
mutex: mutex,
- writer: newWriter(output),
+ writer: newWriter(output, opts.Color),
level: new(int32),
+ exclude: opts.Exclude,
}
- if opts.TimeFormat != "" {
+ l.setColorization(opts)
+
+ if opts.DisableTime {
+ l.timeFormat = ""
+ } else if opts.TimeFormat != "" {
l.timeFormat = opts.TimeFormat
}
@@ -96,7 +126,7 @@ func New(opts *LoggerOptions) Logger {
// Log a message and a set of key/value pairs if the given level is at
// or more severe that the threshold configured in the Logger.
-func (l *intLogger) Log(level Level, msg string, args ...interface{}) {
+func (l *intLogger) log(name string, level Level, msg string, args ...interface{}) {
if level < Level(atomic.LoadInt32(l.level)) {
return
}
@@ -106,10 +136,14 @@ func (l *intLogger) Log(level Level, msg string, args ...interface{}) {
l.mutex.Lock()
defer l.mutex.Unlock()
+ if l.exclude != nil && l.exclude(level, msg, args...) {
+ return
+ }
+
if l.json {
- l.logJSON(t, level, msg, args...)
+ l.logJSON(t, name, level, msg, args...)
} else {
- l.log(t, level, msg, args...)
+ l.logPlain(t, name, level, msg, args...)
}
l.writer.Flush(level)
@@ -144,10 +178,14 @@ func trimCallerPath(path string) string {
return path[idx+1:]
}
+var logImplFile = regexp.MustCompile(`.+intlogger.go|.+interceptlogger.go$`)
+
// Non-JSON logging format function
-func (l *intLogger) log(t time.Time, level Level, msg string, args ...interface{}) {
- l.writer.WriteString(t.Format(l.timeFormat))
- l.writer.WriteByte(' ')
+func (l *intLogger) logPlain(t time.Time, name string, level Level, msg string, args ...interface{}) {
+ if len(l.timeFormat) > 0 {
+ l.writer.WriteString(t.Format(l.timeFormat))
+ l.writer.WriteByte(' ')
+ }
s, ok := _levelToBracket[level]
if ok {
@@ -156,8 +194,18 @@ func (l *intLogger) log(t time.Time, level Level, msg string, args ...interface{
l.writer.WriteString("[?????]")
}
+ offset := 3
if l.caller {
- if _, file, line, ok := runtime.Caller(3); ok {
+ // Check if the caller is inside our package and inside
+ // a logger implementation file
+ if _, file, _, ok := runtime.Caller(3); ok {
+ match := logImplFile.MatchString(file)
+ if match {
+ offset = 4
+ }
+ }
+
+ if _, file, line, ok := runtime.Caller(offset); ok {
l.writer.WriteByte(' ')
l.writer.WriteString(trimCallerPath(file))
l.writer.WriteByte(':')
@@ -168,8 +216,8 @@ func (l *intLogger) log(t time.Time, level Level, msg string, args ...interface{
l.writer.WriteByte(' ')
- if l.name != "" {
- l.writer.WriteString(l.name)
+ if name != "" {
+ l.writer.WriteString(name)
l.writer.WriteString(": ")
}
@@ -186,7 +234,8 @@ func (l *intLogger) log(t time.Time, level Level, msg string, args ...interface{
args = args[:len(args)-1]
stacktrace = cs
} else {
- args = append(args, "")
+ extra := args[len(args)-1]
+ args = append(args[:len(args)-1], MissingKey, extra)
}
}
@@ -222,6 +271,12 @@ func (l *intLogger) log(t time.Time, level Level, msg string, args ...interface{
val = strconv.FormatUint(uint64(st), 10)
case uint8:
val = strconv.FormatUint(uint64(st), 10)
+ case Hex:
+ val = "0x" + strconv.FormatUint(uint64(st), 16)
+ case Octal:
+ val = "0" + strconv.FormatUint(uint64(st), 8)
+ case Binary:
+ val = "0b" + strconv.FormatUint(uint64(st), 2)
case CapturedStacktrace:
stacktrace = st
continue FOR
@@ -238,7 +293,12 @@ func (l *intLogger) log(t time.Time, level Level, msg string, args ...interface{
}
l.writer.WriteByte(' ')
- l.writer.WriteString(args[i].(string))
+ switch st := args[i].(type) {
+ case string:
+ l.writer.WriteString(st)
+ default:
+ l.writer.WriteString(fmt.Sprintf("%s", st))
+ }
l.writer.WriteByte('=')
if !raw && strings.ContainsAny(val, " \t\n\r") {
@@ -298,8 +358,8 @@ func (l *intLogger) renderSlice(v reflect.Value) string {
}
// JSON logging function
-func (l *intLogger) logJSON(t time.Time, level Level, msg string, args ...interface{}) {
- vals := l.jsonMapEntry(t, level, msg)
+func (l *intLogger) logJSON(t time.Time, name string, level Level, msg string, args ...interface{}) {
+ vals := l.jsonMapEntry(t, name, level, msg)
args = append(l.implied, args...)
if args != nil && len(args) > 0 {
@@ -309,16 +369,12 @@ func (l *intLogger) logJSON(t time.Time, level Level, msg string, args ...interf
args = args[:len(args)-1]
vals["stacktrace"] = cs
} else {
- args = append(args, "")
+ extra := args[len(args)-1]
+ args = append(args[:len(args)-1], MissingKey, extra)
}
}
for i := 0; i < len(args); i = i + 2 {
- if _, ok := args[i].(string); !ok {
- // As this is the logging function not much we can do here
- // without injecting into logs...
- continue
- }
val := args[i+1]
switch sv := val.(type) {
case error:
@@ -334,14 +390,22 @@ func (l *intLogger) logJSON(t time.Time, level Level, msg string, args ...interf
val = fmt.Sprintf(sv[0].(string), sv[1:]...)
}
- vals[args[i].(string)] = val
+ var key string
+
+ switch st := args[i].(type) {
+ case string:
+ key = st
+ default:
+ key = fmt.Sprintf("%s", st)
+ }
+ vals[key] = val
}
}
err := json.NewEncoder(l.writer).Encode(vals)
if err != nil {
if _, ok := err.(*json.UnsupportedTypeError); ok {
- plainVal := l.jsonMapEntry(t, level, msg)
+ plainVal := l.jsonMapEntry(t, name, level, msg)
plainVal["@warn"] = errJsonUnsupportedTypeMsg
json.NewEncoder(l.writer).Encode(plainVal)
@@ -349,7 +413,7 @@ func (l *intLogger) logJSON(t time.Time, level Level, msg string, args ...interf
}
}
-func (l intLogger) jsonMapEntry(t time.Time, level Level, msg string) map[string]interface{} {
+func (l intLogger) jsonMapEntry(t time.Time, name string, level Level, msg string) map[string]interface{} {
vals := map[string]interface{}{
"@message": msg,
"@timestamp": t.Format("2006-01-02T15:04:05.000000Z07:00"),
@@ -373,8 +437,8 @@ func (l intLogger) jsonMapEntry(t time.Time, level Level, msg string) map[string
vals["@level"] = levelStr
- if l.name != "" {
- vals["@module"] = l.name
+ if name != "" {
+ vals["@module"] = name
}
if l.caller {
@@ -385,29 +449,34 @@ func (l intLogger) jsonMapEntry(t time.Time, level Level, msg string) map[string
return vals
}
+// Emit the message and args at the provided level
+func (l *intLogger) Log(level Level, msg string, args ...interface{}) {
+ l.log(l.Name(), level, msg, args...)
+}
+
// Emit the message and args at DEBUG level
func (l *intLogger) Debug(msg string, args ...interface{}) {
- l.Log(Debug, msg, args...)
+ l.log(l.Name(), Debug, msg, args...)
}
// Emit the message and args at TRACE level
func (l *intLogger) Trace(msg string, args ...interface{}) {
- l.Log(Trace, msg, args...)
+ l.log(l.Name(), Trace, msg, args...)
}
// Emit the message and args at INFO level
func (l *intLogger) Info(msg string, args ...interface{}) {
- l.Log(Info, msg, args...)
+ l.log(l.Name(), Info, msg, args...)
}
// Emit the message and args at WARN level
func (l *intLogger) Warn(msg string, args ...interface{}) {
- l.Log(Warn, msg, args...)
+ l.log(l.Name(), Warn, msg, args...)
}
// Emit the message and args at ERROR level
func (l *intLogger) Error(msg string, args ...interface{}) {
- l.Log(Error, msg, args...)
+ l.log(l.Name(), Error, msg, args...)
}
// Indicate that the logger would emit TRACE level logs
@@ -435,12 +504,17 @@ func (l *intLogger) IsError() bool {
return Level(atomic.LoadInt32(l.level)) <= Error
}
+const MissingKey = "EXTRA_VALUE_AT_END"
+
// Return a sub-Logger for which every emitted log message will contain
// the given key/value pairs. This is used to create a context specific
// Logger.
func (l *intLogger) With(args ...interface{}) Logger {
+ var extra interface{}
+
if len(args)%2 != 0 {
- panic("With() call requires paired arguments")
+ extra = args[len(args)-1]
+ args = args[:len(args)-1]
}
sl := *l
@@ -473,6 +547,10 @@ func (l *intLogger) With(args ...interface{}) Logger {
sl.implied = append(sl.implied, result[k])
}
+ if extra != nil {
+ sl.implied = append(sl.implied, MissingKey, extra)
+ }
+
return &sl
}
@@ -501,6 +579,41 @@ func (l *intLogger) ResetNamed(name string) Logger {
return &sl
}
+func (l *intLogger) ResetOutput(opts *LoggerOptions) error {
+ if opts.Output == nil {
+ return errors.New("given output is nil")
+ }
+
+ l.mutex.Lock()
+ defer l.mutex.Unlock()
+
+ return l.resetOutput(opts)
+}
+
+func (l *intLogger) ResetOutputWithFlush(opts *LoggerOptions, flushable Flushable) error {
+ if opts.Output == nil {
+ return errors.New("given output is nil")
+ }
+ if flushable == nil {
+ return errors.New("flushable is nil")
+ }
+
+ l.mutex.Lock()
+ defer l.mutex.Unlock()
+
+ if err := flushable.Flush(); err != nil {
+ return err
+ }
+
+ return l.resetOutput(opts)
+}
+
+func (l *intLogger) resetOutput(opts *LoggerOptions) error {
+ l.writer = newWriter(opts.Output, opts.Color)
+ l.setColorization(opts)
+ return nil
+}
+
// Update the logging level on-the-fly. This will affect all subloggers as
// well.
func (l *intLogger) SetLevel(level Level) {
@@ -525,3 +638,28 @@ func (l *intLogger) StandardWriter(opts *StandardLoggerOptions) io.Writer {
forceLevel: opts.ForceLevel,
}
}
+
+// checkWriterIsFile checks whether the underlying io.Writer is a file, and
+// panics if not. For use by colorization.
+func (l *intLogger) checkWriterIsFile() *os.File {
+ fi, ok := l.writer.w.(*os.File)
+ if !ok {
+ panic("Cannot enable coloring of non-file Writers")
+ }
+ return fi
+}
+
+// Accept implements the SinkAdapter interface
+func (i *intLogger) Accept(name string, level Level, msg string, args ...interface{}) {
+ i.log(name, level, msg, args...)
+}
+
+// ImpliedArgs returns the logger's implied args
+func (i *intLogger) ImpliedArgs() []interface{} {
+ return i.implied
+}
+
+// Name returns the logger's name
+func (i *intLogger) Name() string {
+ return i.name
+}
diff --git a/vendor/github.com/hashicorp/go-hclog/logger.go b/vendor/github.com/hashicorp/go-hclog/logger.go
index 080ed79996687..8d5eed76e50e8 100644
--- a/vendor/github.com/hashicorp/go-hclog/logger.go
+++ b/vendor/github.com/hashicorp/go-hclog/logger.go
@@ -5,7 +5,6 @@ import (
"log"
"os"
"strings"
- "sync"
)
var (
@@ -53,6 +52,33 @@ func Fmt(str string, args ...interface{}) Format {
return append(Format{str}, args...)
}
+// A simple shortcut to format numbers in hex when displayed with the normal
+// text output. For example: L.Info("header value", Hex(17))
+type Hex int
+
+// A simple shortcut to format numbers in octal when displayed with the normal
+// text output. For example: L.Info("perms", Octal(17))
+type Octal int
+
+// A simple shortcut to format numbers in binary when displayed with the normal
+// text output. For example: L.Info("bits", Binary(17))
+type Binary int
+
+// ColorOption expresses how the output should be colored, if at all.
+type ColorOption uint8
+
+const (
+ // ColorOff is the default coloration, and does not
+ // inject color codes into the io.Writer.
+ ColorOff ColorOption = iota
+ // AutoColor checks if the io.Writer is a tty,
+ // and if so enables coloring.
+ AutoColor
+ // ForceColor will enable coloring, regardless of whether
+ // the io.Writer is a tty or not.
+ ForceColor
+)
+
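A standalone mirror of how these wrapper types render in the text output; the `render` helper below is illustrative, but the prefixes match the `Hex`/`Octal`/`Binary` cases added to `log()`:

```go
package main

import (
	"fmt"
	"strconv"
)

// Wrapper types shaped like the hclog shortcuts.
type (
	Hex    int
	Octal  int
	Binary int
)

// render mimics the value-formatting cases added to the text logger:
// each wrapper gets the conventional base prefix.
func render(v interface{}) string {
	switch st := v.(type) {
	case Hex:
		return "0x" + strconv.FormatUint(uint64(st), 16)
	case Octal:
		return "0" + strconv.FormatUint(uint64(st), 8)
	case Binary:
		return "0b" + strconv.FormatUint(uint64(st), 2)
	default:
		return fmt.Sprint(v)
	}
}

func main() {
	fmt.Println(render(Hex(17)), render(Octal(17)), render(Binary(17)))
}
```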
// LevelFromString returns a Level type for the named log level, or "NoLevel" if
// the level string is invalid. This facilitates setting the log level via
// config or environment variable by name in a predictable way.
@@ -75,11 +101,33 @@ func LevelFromString(levelStr string) Level {
}
}
+func (l Level) String() string {
+ switch l {
+ case Trace:
+ return "trace"
+ case Debug:
+ return "debug"
+ case Info:
+ return "info"
+ case Warn:
+ return "warn"
+ case Error:
+ return "error"
+ case NoLevel:
+ return "none"
+ default:
+ return "unknown"
+ }
+}
+
// Logger describes the interface that must be implemented by all loggers.
type Logger interface {
// Args are alternating key, val pairs
// keys must be strings
// vals can be any type, but display is implementation specific
+ // Emit a message and key/value pairs at a provided log level
+ Log(level Level, msg string, args ...interface{})
+
// Emit a message and key/value pairs at the TRACE level
Trace(msg string, args ...interface{})
@@ -111,9 +159,15 @@ type Logger interface {
// Indicate if ERROR logs would be emitted. This and the other Is* guards
IsError() bool
+ // ImpliedArgs returns the key/value pairs set via With
+ ImpliedArgs() []interface{}
+
// Creates a sublogger that will always have the given key/value pairs
With(args ...interface{}) Logger
+ // Name returns the name of the logger
+ Name() string
+
// Create a logger that will prepend the name string on the front of all messages.
// If the logger already has a name, the new value will be appended to the current
// name. That way, a major subsystem can use this to decorate all its own logs
@@ -162,8 +216,10 @@ type LoggerOptions struct {
// Where to write the logs to. Defaults to os.Stderr if nil
Output io.Writer
- // An optional mutex pointer in case Output is shared
- Mutex *sync.Mutex
+ // An optional Locker in case Output is shared. This can be a sync.Mutex or
+ // a NoopLocker if the caller wants control over output, e.g. for batching
+ // log lines.
+ Mutex Locker
// Control if the output should be in JSON.
JSONFormat bool
@@ -173,4 +229,99 @@ type LoggerOptions struct {
// The time format to use instead of the default
TimeFormat string
+
+ // Control whether or not to display the time at all. This is required
+ // because setting TimeFormat to empty assumes the default format.
+ DisableTime bool
+
+ // Color the output. On Windows, colored logs are only available for io.Writers that
+ // are concretely instances of *os.File.
+ Color ColorOption
+
+ // A function which is called with the log information; if it returns true,
+ // the line should not be logged. This is useful when interacting with a
+ // system whose log messages you wish to suppress (because they are too
+ // noisy, etc.).
+ Exclude func(level Level, msg string, args ...interface{}) bool
+}
+
+// InterceptLogger describes the interface for using a logger
+// that can register different output sinks.
+// This is useful for sending lower level log messages
+// to a different output while keeping the root logger
+// at a higher one.
+type InterceptLogger interface {
+ // Logger is the root logger for an InterceptLogger
+ Logger
+
+ // RegisterSink adds a SinkAdapter to the InterceptLogger
+ RegisterSink(sink SinkAdapter)
+
+ // DeregisterSink removes a SinkAdapter from the InterceptLogger
+ DeregisterSink(sink SinkAdapter)
+
+ // Create an InterceptLogger that will prepend the name string on the front of all messages.
+ // If the logger already has a name, the new value will be appended to the current
+ // name. That way, a major subsystem can use this to decorate all its own logs
+ // without losing context.
+ NamedIntercept(name string) InterceptLogger
+
+ // Create an InterceptLogger that will prepend the name string on the front of all messages.
+ // This sets the name of the logger to the value directly, unlike Named which honors
+ // the current name as well.
+ ResetNamedIntercept(name string) InterceptLogger
+
+ // Return a value that conforms to the stdlib log.Logger interface
+ StandardLoggerIntercept(opts *StandardLoggerOptions) *log.Logger
+
+ // Return a value that conforms to io.Writer, which can be passed into log.SetOutput()
+ StandardWriterIntercept(opts *StandardLoggerOptions) io.Writer
}
+
+// SinkAdapter describes the interface that must be implemented
+// in order to Register a new sink to an InterceptLogger
+type SinkAdapter interface {
+ Accept(name string, level Level, msg string, args ...interface{})
+}
+
+// Flushable represents a method for flushing an output buffer. It can be used
+// when resetting the logger to a new output, in order to flush the writes to
+// the existing output beforehand.
+type Flushable interface {
+ Flush() error
+}
+
+// OutputResettable provides ways to swap the output in use at runtime
+type OutputResettable interface {
+ // ResetOutput swaps the current output writer with the one given in the
+ // opts. Color options given in opts will be used for the new output.
+ ResetOutput(opts *LoggerOptions) error
+
+ // ResetOutputWithFlush swaps the current output writer with the one given
+ // in the opts, first calling Flush on the given Flushable. Color options
+ // given in opts will be used for the new output.
+ ResetOutputWithFlush(opts *LoggerOptions, flushable Flushable) error
+}
+
+// Locker is used for locking output. If not set when creating a logger, a
+// sync.Mutex will be used internally.
+type Locker interface {
+ // Lock is called when the output is going to be changed or written to
+ Lock()
+
+ // Unlock is called when the operation that called Lock() completes
+ Unlock()
+}
+
+// NoopLocker implements Locker but does nothing. This is useful if the client
+// wants tight control over locking, in order to provide grouping of log
+// entries or other functionality.
+type NoopLocker struct{}
+
+// Lock does nothing
+func (n NoopLocker) Lock() {}
+
+// Unlock does nothing
+func (n NoopLocker) Unlock() {}
+
+var _ Locker = (*NoopLocker)(nil)
diff --git a/vendor/github.com/hashicorp/go-hclog/nulllogger.go b/vendor/github.com/hashicorp/go-hclog/nulllogger.go
index 7ad6b351eb8cf..bc14f77080757 100644
--- a/vendor/github.com/hashicorp/go-hclog/nulllogger.go
+++ b/vendor/github.com/hashicorp/go-hclog/nulllogger.go
@@ -15,6 +15,8 @@ func NewNullLogger() Logger {
type nullLogger struct{}
+func (l *nullLogger) Log(level Level, msg string, args ...interface{}) {}
+
func (l *nullLogger) Trace(msg string, args ...interface{}) {}
func (l *nullLogger) Debug(msg string, args ...interface{}) {}
@@ -35,8 +37,12 @@ func (l *nullLogger) IsWarn() bool { return false }
func (l *nullLogger) IsError() bool { return false }
+func (l *nullLogger) ImpliedArgs() []interface{} { return []interface{}{} }
+
func (l *nullLogger) With(args ...interface{}) Logger { return l }
+func (l *nullLogger) Name() string { return "" }
+
func (l *nullLogger) Named(name string) Logger { return l }
func (l *nullLogger) ResetNamed(name string) Logger { return l }
diff --git a/vendor/github.com/hashicorp/go-hclog/stdlog.go b/vendor/github.com/hashicorp/go-hclog/stdlog.go
index 044a4696088fc..f35d875d327ae 100644
--- a/vendor/github.com/hashicorp/go-hclog/stdlog.go
+++ b/vendor/github.com/hashicorp/go-hclog/stdlog.go
@@ -2,6 +2,7 @@ package hclog
import (
"bytes"
+ "log"
"strings"
)
@@ -25,36 +26,10 @@ func (s *stdlogAdapter) Write(data []byte) (int, error) {
_, str := s.pickLevel(str)
// Log at the forced level
- switch s.forceLevel {
- case Trace:
- s.log.Trace(str)
- case Debug:
- s.log.Debug(str)
- case Info:
- s.log.Info(str)
- case Warn:
- s.log.Warn(str)
- case Error:
- s.log.Error(str)
- default:
- s.log.Info(str)
- }
+ s.dispatch(str, s.forceLevel)
} else if s.inferLevels {
level, str := s.pickLevel(str)
- switch level {
- case Trace:
- s.log.Trace(str)
- case Debug:
- s.log.Debug(str)
- case Info:
- s.log.Info(str)
- case Warn:
- s.log.Warn(str)
- case Error:
- s.log.Error(str)
- default:
- s.log.Info(str)
- }
+ s.dispatch(str, level)
} else {
s.log.Info(str)
}
@@ -62,6 +37,23 @@ func (s *stdlogAdapter) Write(data []byte) (int, error) {
return len(data), nil
}
+func (s *stdlogAdapter) dispatch(str string, level Level) {
+ switch level {
+ case Trace:
+ s.log.Trace(str)
+ case Debug:
+ s.log.Debug(str)
+ case Info:
+ s.log.Info(str)
+ case Warn:
+ s.log.Warn(str)
+ case Error:
+ s.log.Error(str)
+ default:
+ s.log.Info(str)
+ }
+}
+
// Detect, based on conventions, what log level this is.
func (s *stdlogAdapter) pickLevel(str string) (Level, string) {
switch {
@@ -81,3 +73,23 @@ func (s *stdlogAdapter) pickLevel(str string) (Level, string) {
return Info, str
}
}
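`pickLevel` keys off the bracketed prefixes that stdlib loggers conventionally write. A standalone sketch of that convention (the exact prefix set here is illustrative; the real function also recognizes short forms like `[ERR]`):

```go
package main

import (
	"fmt"
	"strings"
)

// inferLevel mirrors the prefix convention pickLevel relies on:
// "[WARN] msg" -> ("warn", "msg"), defaulting to info.
func inferLevel(str string) (level, rest string) {
	for _, p := range []string{"TRACE", "DEBUG", "INFO", "WARN", "ERROR"} {
		prefix := "[" + p + "] "
		if strings.HasPrefix(str, prefix) {
			return strings.ToLower(p), strings.TrimPrefix(str, prefix)
		}
	}
	return "info", str
}

func main() {
	fmt.Println(inferLevel("[WARN] disk almost full"))
}
```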
+
+type logWriter struct {
+ l *log.Logger
+}
+
+func (l *logWriter) Write(b []byte) (int, error) {
+ l.l.Println(string(bytes.TrimRight(b, " \n\t")))
+ return len(b), nil
+}
+
+// Takes a standard library logger and returns a Logger that will write to it
+func FromStandardLogger(l *log.Logger, opts *LoggerOptions) Logger {
+ var dl LoggerOptions = *opts
+
+ // Let the wrapped log.Logger supply timestamps
+ dl.DisableTime = true
+ dl.Output = &logWriter{l}
+
+ return New(&dl)
+}
diff --git a/vendor/github.com/hashicorp/go-hclog/writer.go b/vendor/github.com/hashicorp/go-hclog/writer.go
index 7e8ec729da8eb..421a1f06c0ba8 100644
--- a/vendor/github.com/hashicorp/go-hclog/writer.go
+++ b/vendor/github.com/hashicorp/go-hclog/writer.go
@@ -6,19 +6,27 @@ import (
)
type writer struct {
- b bytes.Buffer
- w io.Writer
+ b bytes.Buffer
+ w io.Writer
+ color ColorOption
}
-func newWriter(w io.Writer) *writer {
- return &writer{w: w}
+func newWriter(w io.Writer, color ColorOption) *writer {
+ return &writer{w: w, color: color}
}
func (w *writer) Flush(level Level) (err error) {
+ var unwritten = w.b.Bytes()
+
+ if w.color != ColorOff {
+ color := _levelToColor[level]
+ unwritten = []byte(color.Sprintf("%s", unwritten))
+ }
+
if lw, ok := w.w.(LevelWriter); ok {
- _, err = lw.LevelWrite(level, w.b.Bytes())
+ _, err = lw.LevelWrite(level, unwritten)
} else {
- _, err = w.w.Write(w.b.Bytes())
+ _, err = w.w.Write(unwritten)
}
w.b.Reset()
return err
diff --git a/vendor/github.com/hashicorp/go-plugin/go.mod b/vendor/github.com/hashicorp/go-plugin/go.mod
index f0115b782a17a..4e182e6258f56 100644
--- a/vendor/github.com/hashicorp/go-plugin/go.mod
+++ b/vendor/github.com/hashicorp/go-plugin/go.mod
@@ -4,7 +4,7 @@ go 1.13
require (
github.com/golang/protobuf v1.3.4
- github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd
+ github.com/hashicorp/go-hclog v0.14.1
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb
github.com/jhump/protoreflect v1.6.0
github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77
diff --git a/vendor/github.com/hashicorp/go-plugin/go.sum b/vendor/github.com/hashicorp/go-plugin/go.sum
index 5d497615f5d8b..56062044ee41a 100644
--- a/vendor/github.com/hashicorp/go-plugin/go.sum
+++ b/vendor/github.com/hashicorp/go-plugin/go.sum
@@ -4,8 +4,12 @@ github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
+github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
@@ -17,12 +21,17 @@ github.com/golang/protobuf v1.3.4 h1:87PNWwrRvUSnqS4dlcBU/ftvOIBep4sYuBLlh6rX2wk
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
-github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd h1:rNuUHR+CvK1IS89MMtcF0EpcVMZtjKfPRp4MEmt/aTs=
-github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI=
+github.com/hashicorp/go-hclog v0.14.1 h1:nQcJDQwIAGnmoUWp8ubocEX40cCml/17YkF6csQLReU=
+github.com/hashicorp/go-hclog v0.14.1/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb h1:b5rjCoWHc7eqmAS4/qyk21ZsHyb6Mxv/jykxvNTkU4M=
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/jhump/protoreflect v1.6.0 h1:h5jfMVslIg6l29nsMs0D8Wj17RDVdNYti0vDN/PZZoE=
github.com/jhump/protoreflect v1.6.0/go.mod h1:eaTn3RZAmMBcV0fifFvlm6VHNz3wSkYyXYWUh7ymB74=
+github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA=
+github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
+github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
+github.com/mattn/go-isatty v0.0.10 h1:qxFzApOv4WsAL965uUPIsXzAKCZxN2p9UqdhFS4ZW10=
+github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77 h1:7GoSOOW2jpsfkntVKaS2rAr1TJqfcxotyaUcuxoZSzg=
github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw=
@@ -31,6 +40,7 @@ github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZb
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
@@ -52,6 +62,9 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be h1:QAcqgptGM8IQBC9K/RC4o+O9YmqEm0diQn9QmZw/0mU=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
diff --git a/vendor/github.com/hashicorp/go-plugin/grpc_stdio.go b/vendor/github.com/hashicorp/go-plugin/grpc_stdio.go
index 6231a9fd625c6..a582181505fe6 100644
--- a/vendor/github.com/hashicorp/go-plugin/grpc_stdio.go
+++ b/vendor/github.com/hashicorp/go-plugin/grpc_stdio.go
@@ -136,12 +136,12 @@ func (c *grpcStdioClient) Run(stdout, stderr io.Writer) {
status.Code(err) == codes.Canceled ||
status.Code(err) == codes.Unimplemented ||
err == context.Canceled {
- c.log.Warn("received EOF, stopping recv loop", "err", err)
+ c.log.Debug("received EOF, stopping recv loop", "err", err)
return
}
c.log.Error("error receiving data", "err", err)
- continue
+ return
}
// Determine our output writer based on channel
diff --git a/vendor/github.com/hashicorp/go-plugin/server.go b/vendor/github.com/hashicorp/go-plugin/server.go
index 002d6080d4fa6..80f0ac396a4d9 100644
--- a/vendor/github.com/hashicorp/go-plugin/server.go
+++ b/vendor/github.com/hashicorp/go-plugin/server.go
@@ -9,7 +9,6 @@ import (
"fmt"
"io"
"io/ioutil"
- "log"
"net"
"os"
"os/signal"
@@ -260,9 +259,6 @@ func Serve(opts *ServeConfig) {
// start with default version in the handshake config
protoVersion, protoType, pluginSet := protocolVersion(opts)
- // Logging goes to the original stderr
- log.SetOutput(os.Stderr)
-
logger := opts.Logger
if logger == nil {
// internal logger to os.Stderr
diff --git a/vendor/github.com/mattn/go-colorable/colorable_appengine.go b/vendor/github.com/mattn/go-colorable/colorable_appengine.go
index 1f28d773d748a..0b0aef83700cf 100644
--- a/vendor/github.com/mattn/go-colorable/colorable_appengine.go
+++ b/vendor/github.com/mattn/go-colorable/colorable_appengine.go
@@ -9,7 +9,7 @@ import (
_ "github.com/mattn/go-isatty"
)
-// NewColorable return new instance of Writer which handle escape sequence.
+// NewColorable returns a new instance of Writer which handles escape sequences.
func NewColorable(file *os.File) io.Writer {
if file == nil {
panic("nil passed instead of *os.File to NewColorable()")
@@ -18,12 +18,12 @@ func NewColorable(file *os.File) io.Writer {
return file
}
-// NewColorableStdout return new instance of Writer which handle escape sequence for stdout.
+// NewColorableStdout returns a new instance of Writer which handles escape sequences for stdout.
func NewColorableStdout() io.Writer {
return os.Stdout
}
-// NewColorableStderr return new instance of Writer which handle escape sequence for stderr.
+// NewColorableStderr returns a new instance of Writer which handles escape sequences for stderr.
func NewColorableStderr() io.Writer {
return os.Stderr
}
diff --git a/vendor/github.com/mattn/go-colorable/colorable_others.go b/vendor/github.com/mattn/go-colorable/colorable_others.go
index 887f203dc7faa..3fb771dcca293 100644
--- a/vendor/github.com/mattn/go-colorable/colorable_others.go
+++ b/vendor/github.com/mattn/go-colorable/colorable_others.go
@@ -10,7 +10,7 @@ import (
_ "github.com/mattn/go-isatty"
)
-// NewColorable return new instance of Writer which handle escape sequence.
+// NewColorable returns a new instance of Writer which handles escape sequences.
func NewColorable(file *os.File) io.Writer {
if file == nil {
panic("nil passed instead of *os.File to NewColorable()")
@@ -19,12 +19,12 @@ func NewColorable(file *os.File) io.Writer {
return file
}
-// NewColorableStdout return new instance of Writer which handle escape sequence for stdout.
+// NewColorableStdout returns a new instance of Writer which handles escape sequences for stdout.
func NewColorableStdout() io.Writer {
return os.Stdout
}
-// NewColorableStderr return new instance of Writer which handle escape sequence for stderr.
+// NewColorableStderr returns a new instance of Writer which handles escape sequences for stderr.
func NewColorableStderr() io.Writer {
return os.Stderr
}
diff --git a/vendor/github.com/mattn/go-colorable/colorable_windows.go b/vendor/github.com/mattn/go-colorable/colorable_windows.go
index 404e10ca02b14..1bd628f25c0c8 100644
--- a/vendor/github.com/mattn/go-colorable/colorable_windows.go
+++ b/vendor/github.com/mattn/go-colorable/colorable_windows.go
@@ -81,7 +81,7 @@ var (
procCreateConsoleScreenBuffer = kernel32.NewProc("CreateConsoleScreenBuffer")
)
-// Writer provide colorable Writer to the console
+// Writer provides a colorable Writer to the console
type Writer struct {
out io.Writer
handle syscall.Handle
@@ -91,7 +91,7 @@ type Writer struct {
rest bytes.Buffer
}
-// NewColorable return new instance of Writer which handle escape sequence from File.
+// NewColorable returns a new instance of Writer which handles escape sequences from a File.
func NewColorable(file *os.File) io.Writer {
if file == nil {
panic("nil passed instead of *os.File to NewColorable()")
@@ -106,12 +106,12 @@ func NewColorable(file *os.File) io.Writer {
return file
}
-// NewColorableStdout return new instance of Writer which handle escape sequence for stdout.
+// NewColorableStdout returns a new instance of Writer which handles escape sequences for stdout.
func NewColorableStdout() io.Writer {
return NewColorable(os.Stdout)
}
-// NewColorableStderr return new instance of Writer which handle escape sequence for stderr.
+// NewColorableStderr returns a new instance of Writer which handles escape sequences for stderr.
func NewColorableStderr() io.Writer {
return NewColorable(os.Stderr)
}
@@ -414,7 +414,15 @@ func doTitleSequence(er *bytes.Reader) error {
return nil
}
-// Write write data on console
+// atoiWithDefault returns strconv.Atoi(s) unless s == "", in which case it returns def.
+func atoiWithDefault(s string, def int) (int, error) {
+ if s == "" {
+ return def, nil
+ }
+ return strconv.Atoi(s)
+}
+
+// Write writes data to the console
func (w *Writer) Write(data []byte) (n int, err error) {
var csbi consoleScreenBufferInfo
procGetConsoleScreenBufferInfo.Call(uintptr(w.handle), uintptr(unsafe.Pointer(&csbi)))
@@ -500,7 +508,7 @@ loop:
switch m {
case 'A':
- n, err = strconv.Atoi(buf.String())
+ n, err = atoiWithDefault(buf.String(), 1)
if err != nil {
continue
}
@@ -508,7 +516,7 @@ loop:
csbi.cursorPosition.y -= short(n)
procSetConsoleCursorPosition.Call(uintptr(handle), *(*uintptr)(unsafe.Pointer(&csbi.cursorPosition)))
case 'B':
- n, err = strconv.Atoi(buf.String())
+ n, err = atoiWithDefault(buf.String(), 1)
if err != nil {
continue
}
@@ -516,7 +524,7 @@ loop:
csbi.cursorPosition.y += short(n)
procSetConsoleCursorPosition.Call(uintptr(handle), *(*uintptr)(unsafe.Pointer(&csbi.cursorPosition)))
case 'C':
- n, err = strconv.Atoi(buf.String())
+ n, err = atoiWithDefault(buf.String(), 1)
if err != nil {
continue
}
@@ -524,7 +532,7 @@ loop:
csbi.cursorPosition.x += short(n)
procSetConsoleCursorPosition.Call(uintptr(handle), *(*uintptr)(unsafe.Pointer(&csbi.cursorPosition)))
case 'D':
- n, err = strconv.Atoi(buf.String())
+ n, err = atoiWithDefault(buf.String(), 1)
if err != nil {
continue
}
@@ -557,6 +565,9 @@ loop:
if err != nil {
continue
}
+ if n < 1 {
+ n = 1
+ }
procGetConsoleScreenBufferInfo.Call(uintptr(handle), uintptr(unsafe.Pointer(&csbi)))
csbi.cursorPosition.x = short(n - 1)
procSetConsoleCursorPosition.Call(uintptr(handle), *(*uintptr)(unsafe.Pointer(&csbi.cursorPosition)))
@@ -635,6 +646,20 @@ loop:
}
procFillConsoleOutputCharacter.Call(uintptr(handle), uintptr(' '), uintptr(count), *(*uintptr)(unsafe.Pointer(&cursor)), uintptr(unsafe.Pointer(&written)))
procFillConsoleOutputAttribute.Call(uintptr(handle), uintptr(csbi.attributes), uintptr(count), *(*uintptr)(unsafe.Pointer(&cursor)), uintptr(unsafe.Pointer(&written)))
+ case 'X':
+ n := 0
+ if buf.Len() > 0 {
+ n, err = strconv.Atoi(buf.String())
+ if err != nil {
+ continue
+ }
+ }
+ procGetConsoleScreenBufferInfo.Call(uintptr(handle), uintptr(unsafe.Pointer(&csbi)))
+ var cursor coord
+ var written dword
+ cursor = coord{x: csbi.cursorPosition.x, y: csbi.cursorPosition.y}
+ procFillConsoleOutputCharacter.Call(uintptr(handle), uintptr(' '), uintptr(n), *(*uintptr)(unsafe.Pointer(&cursor)), uintptr(unsafe.Pointer(&written)))
+ procFillConsoleOutputAttribute.Call(uintptr(handle), uintptr(csbi.attributes), uintptr(n), *(*uintptr)(unsafe.Pointer(&cursor)), uintptr(unsafe.Pointer(&written)))
case 'm':
procGetConsoleScreenBufferInfo.Call(uintptr(handle), uintptr(unsafe.Pointer(&csbi)))
attr := csbi.attributes
diff --git a/vendor/github.com/mattn/go-colorable/go.mod b/vendor/github.com/mattn/go-colorable/go.mod
index 9d9f42485411b..ef3ca9d4c311a 100644
--- a/vendor/github.com/mattn/go-colorable/go.mod
+++ b/vendor/github.com/mattn/go-colorable/go.mod
@@ -1,3 +1,3 @@
module github.com/mattn/go-colorable
-require github.com/mattn/go-isatty v0.0.5
+require github.com/mattn/go-isatty v0.0.8
diff --git a/vendor/github.com/mattn/go-colorable/noncolorable.go b/vendor/github.com/mattn/go-colorable/noncolorable.go
index 9721e16f4bf4b..95f2c6be25766 100644
--- a/vendor/github.com/mattn/go-colorable/noncolorable.go
+++ b/vendor/github.com/mattn/go-colorable/noncolorable.go
@@ -5,17 +5,17 @@ import (
"io"
)
-// NonColorable hold writer but remove escape sequence.
+// NonColorable holds a writer but removes escape sequences.
type NonColorable struct {
out io.Writer
}
-// NewNonColorable return new instance of Writer which remove escape sequence from Writer.
+// NewNonColorable returns a new instance of Writer which removes escape sequences from the given Writer.
func NewNonColorable(w io.Writer) io.Writer {
return &NonColorable{out: w}
}
-// Write write data on console
+// Write writes data to the console
func (w *NonColorable) Write(data []byte) (n int, err error) {
er := bytes.NewReader(data)
var bw [1]byte
diff --git a/vendor/github.com/mattn/go-isatty/go.mod b/vendor/github.com/mattn/go-isatty/go.mod
index f310320c33f50..a8ddf404fc16f 100644
--- a/vendor/github.com/mattn/go-isatty/go.mod
+++ b/vendor/github.com/mattn/go-isatty/go.mod
@@ -1,3 +1,5 @@
module github.com/mattn/go-isatty
-require golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223
+require golang.org/x/sys v0.0.0-20191008105621-543471e840be
+
+go 1.14
diff --git a/vendor/github.com/mattn/go-isatty/go.sum b/vendor/github.com/mattn/go-isatty/go.sum
index 426c8973c0e23..c141fc53a955d 100644
--- a/vendor/github.com/mattn/go-isatty/go.sum
+++ b/vendor/github.com/mattn/go-isatty/go.sum
@@ -1,2 +1,4 @@
-golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223 h1:DH4skfRX4EBpamg7iV4ZlCpblAHI6s6TDM39bFZumv8=
-golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a h1:aYOabOQFp6Vj6W1F80affTUvO9UxmJRx8K0gsfABByQ=
+golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be h1:QAcqgptGM8IQBC9K/RC4o+O9YmqEm0diQn9QmZw/0mU=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
diff --git a/vendor/github.com/mattn/go-isatty/isatty_android.go b/vendor/github.com/mattn/go-isatty/isatty_android.go
new file mode 100644
index 0000000000000..d3567cb5bf2b7
--- /dev/null
+++ b/vendor/github.com/mattn/go-isatty/isatty_android.go
@@ -0,0 +1,23 @@
+// +build android
+
+package isatty
+
+import (
+ "syscall"
+ "unsafe"
+)
+
+const ioctlReadTermios = syscall.TCGETS
+
+// IsTerminal returns true if the file descriptor is a terminal.
+func IsTerminal(fd uintptr) bool {
+ var termios syscall.Termios
+ _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, fd, ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
+ return err == 0
+}
+
+// IsCygwinTerminal returns true if the file descriptor is a cygwin or msys2
+// terminal. This is always false in this environment.
+func IsCygwinTerminal(fd uintptr) bool {
+ return false
+}
diff --git a/vendor/github.com/mattn/go-isatty/isatty_others.go b/vendor/github.com/mattn/go-isatty/isatty_others.go
index f02849c56f220..ff714a37615b9 100644
--- a/vendor/github.com/mattn/go-isatty/isatty_others.go
+++ b/vendor/github.com/mattn/go-isatty/isatty_others.go
@@ -1,4 +1,4 @@
-// +build appengine js
+// +build appengine js nacl
package isatty
diff --git a/vendor/github.com/mattn/go-isatty/isatty_plan9.go b/vendor/github.com/mattn/go-isatty/isatty_plan9.go
new file mode 100644
index 0000000000000..bc0a70920f4d1
--- /dev/null
+++ b/vendor/github.com/mattn/go-isatty/isatty_plan9.go
@@ -0,0 +1,22 @@
+// +build plan9
+
+package isatty
+
+import (
+ "syscall"
+)
+
+// IsTerminal returns true if the given file descriptor is a terminal.
+func IsTerminal(fd uintptr) bool {
+ path, err := syscall.Fd2path(fd)
+ if err != nil {
+ return false
+ }
+ return path == "/dev/cons" || path == "/mnt/term/dev/cons"
+}
+
+// IsCygwinTerminal returns true if the file descriptor is a cygwin or msys2
+// terminal. This is always false in this environment.
+func IsCygwinTerminal(fd uintptr) bool {
+ return false
+}
diff --git a/vendor/github.com/mattn/go-isatty/isatty_linux.go b/vendor/github.com/mattn/go-isatty/isatty_tcgets.go
similarity index 91%
rename from vendor/github.com/mattn/go-isatty/isatty_linux.go
rename to vendor/github.com/mattn/go-isatty/isatty_tcgets.go
index e004038ee7044..453b025d0df02 100644
--- a/vendor/github.com/mattn/go-isatty/isatty_linux.go
+++ b/vendor/github.com/mattn/go-isatty/isatty_tcgets.go
@@ -1,5 +1,6 @@
-// +build linux
+// +build linux aix
// +build !appengine
+// +build !android
package isatty
diff --git a/vendor/github.com/mattn/go-isatty/isatty_windows.go b/vendor/github.com/mattn/go-isatty/isatty_windows.go
index af51cbcaa4853..1fa8691540590 100644
--- a/vendor/github.com/mattn/go-isatty/isatty_windows.go
+++ b/vendor/github.com/mattn/go-isatty/isatty_windows.go
@@ -4,6 +4,7 @@
package isatty
import (
+ "errors"
"strings"
"syscall"
"unicode/utf16"
@@ -11,15 +12,18 @@ import (
)
const (
- fileNameInfo uintptr = 2
- fileTypePipe = 3
+ objectNameInfo uintptr = 1
+ fileNameInfo = 2
+ fileTypePipe = 3
)
var (
kernel32 = syscall.NewLazyDLL("kernel32.dll")
+ ntdll = syscall.NewLazyDLL("ntdll.dll")
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
procGetFileInformationByHandleEx = kernel32.NewProc("GetFileInformationByHandleEx")
procGetFileType = kernel32.NewProc("GetFileType")
+ procNtQueryObject = ntdll.NewProc("NtQueryObject")
)
func init() {
@@ -45,7 +49,10 @@ func isCygwinPipeName(name string) bool {
return false
}
- if token[0] != `\msys` && token[0] != `\cygwin` {
+ if token[0] != `\msys` &&
+ token[0] != `\cygwin` &&
+ token[0] != `\Device\NamedPipe\msys` &&
+ token[0] != `\Device\NamedPipe\cygwin` {
return false
}
@@ -68,11 +75,35 @@ func isCygwinPipeName(name string) bool {
return true
}
+// getFileNameByHandle uses the undocumented ntdll NtQueryObject call to get the full file name
+// from a file handle. GetFileInformationByHandleEx is not available before Windows Vista, and
+// some users still run Windows XP, so this is a workaround for them; it also works on
+// Windows Vista through 10.
+// see https://stackoverflow.com/a/18792477 for details
+func getFileNameByHandle(fd uintptr) (string, error) {
+ if procNtQueryObject == nil {
+ return "", errors.New("ntdll.dll: NtQueryObject not supported")
+ }
+
+ var buf [4 + syscall.MAX_PATH]uint16
+ var result int
+ r, _, e := syscall.Syscall6(procNtQueryObject.Addr(), 5,
+ fd, objectNameInfo, uintptr(unsafe.Pointer(&buf)), uintptr(2*len(buf)), uintptr(unsafe.Pointer(&result)), 0)
+ if r != 0 {
+ return "", e
+ }
+ return string(utf16.Decode(buf[4 : 4+buf[0]/2])), nil
+}
+
// IsCygwinTerminal() return true if the file descriptor is a cygwin or msys2
// terminal.
func IsCygwinTerminal(fd uintptr) bool {
if procGetFileInformationByHandleEx == nil {
- return false
+ name, err := getFileNameByHandle(fd)
+ if err != nil {
+ return false
+ }
+ return isCygwinPipeName(name)
}
// Cygwin/msys's pty is a pipe.
diff --git a/vendor/github.com/shopspring/decimal/.gitignore b/vendor/github.com/shopspring/decimal/.gitignore
new file mode 100644
index 0000000000000..8a43ce9d7b6b6
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/.gitignore
@@ -0,0 +1,6 @@
+.git
+*.swp
+
+# IntelliJ
+.idea/
+*.iml
diff --git a/vendor/github.com/shopspring/decimal/.travis.yml b/vendor/github.com/shopspring/decimal/.travis.yml
new file mode 100644
index 0000000000000..55d42b289d09f
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/.travis.yml
@@ -0,0 +1,13 @@
+language: go
+
+go:
+ - 1.7.x
+ - 1.12.x
+ - 1.13.x
+ - tip
+
+install:
+ - go build .
+
+script:
+ - go test -v
diff --git a/vendor/github.com/shopspring/decimal/CHANGELOG.md b/vendor/github.com/shopspring/decimal/CHANGELOG.md
new file mode 100644
index 0000000000000..01ba02feb2c7b
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/CHANGELOG.md
@@ -0,0 +1,19 @@
+## Decimal v1.2.0
+
+#### BREAKING
+- Drop support for Go version older than 1.7 [#172](https://github.com/shopspring/decimal/pull/172)
+
+#### FEATURES
+- Add NewFromInt and NewFromInt32 initializers [#72](https://github.com/shopspring/decimal/pull/72)
+- Add support for Go modules [#157](https://github.com/shopspring/decimal/pull/157)
+- Add BigInt, BigFloat helper methods [#171](https://github.com/shopspring/decimal/pull/171)
+
+#### ENHANCEMENTS
+- Memory usage optimization [#160](https://github.com/shopspring/decimal/pull/160)
+- Updated travis CI golang versions [#156](https://github.com/shopspring/decimal/pull/156)
+- Update documentation [#173](https://github.com/shopspring/decimal/pull/173)
+- Improve code quality [#174](https://github.com/shopspring/decimal/pull/174)
+
+#### BUGFIXES
+- Revert remove insignificant digits [#159](https://github.com/shopspring/decimal/pull/159)
+- Remove 15 interval for RoundCash [#166](https://github.com/shopspring/decimal/pull/166)
diff --git a/vendor/github.com/shopspring/decimal/LICENSE b/vendor/github.com/shopspring/decimal/LICENSE
new file mode 100644
index 0000000000000..ad2148aaf93e3
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/LICENSE
@@ -0,0 +1,45 @@
+The MIT License (MIT)
+
+Copyright (c) 2015 Spring, Inc.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
+
+- Based on https://github.com/oguzbilgic/fpd, which has the following license:
+"""
+The MIT License (MIT)
+
+Copyright (c) 2013 Oguz Bilgic
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
+the Software, and to permit persons to whom the Software is furnished to do so,
+subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
+FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
+IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+"""
diff --git a/vendor/github.com/shopspring/decimal/README.md b/vendor/github.com/shopspring/decimal/README.md
new file mode 100644
index 0000000000000..b70f901593517
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/README.md
@@ -0,0 +1,130 @@
+# decimal
+
+[![Build Status](https://travis-ci.org/shopspring/decimal.png?branch=master)](https://travis-ci.org/shopspring/decimal) [![GoDoc](https://godoc.org/github.com/shopspring/decimal?status.svg)](https://godoc.org/github.com/shopspring/decimal) [![Go Report Card](https://goreportcard.com/badge/github.com/shopspring/decimal)](https://goreportcard.com/report/github.com/shopspring/decimal)
+
+Arbitrary-precision fixed-point decimal numbers in go.
+
+_Note:_ Decimal library can "only" represent numbers with a maximum of 2^31 digits after the decimal point.
+
+## Features
+
+ * The zero-value is 0, and is safe to use without initialization
+ * Addition, subtraction, multiplication with no loss of precision
+ * Division with specified precision
+ * Database/sql serialization/deserialization
+ * JSON and XML serialization/deserialization
+
+## Install
+
+Run `go get github.com/shopspring/decimal`
+
+## Requirements
+
+Decimal library requires Go version `>=1.7`
+
+## Usage
+
+```go
+package main
+
+import (
+ "fmt"
+ "github.com/shopspring/decimal"
+)
+
+func main() {
+ price, err := decimal.NewFromString("136.02")
+ if err != nil {
+ panic(err)
+ }
+
+ quantity := decimal.NewFromInt(3)
+
+ fee, _ := decimal.NewFromString(".035")
+ taxRate, _ := decimal.NewFromString(".08875")
+
+ subtotal := price.Mul(quantity)
+
+ preTax := subtotal.Mul(fee.Add(decimal.NewFromFloat(1)))
+
+ total := preTax.Mul(taxRate.Add(decimal.NewFromFloat(1)))
+
+ fmt.Println("Subtotal:", subtotal) // Subtotal: 408.06
+ fmt.Println("Pre-tax:", preTax) // Pre-tax: 422.3421
+ fmt.Println("Taxes:", total.Sub(preTax)) // Taxes: 37.482861375
+ fmt.Println("Total:", total) // Total: 459.824961375
+ fmt.Println("Tax rate:", total.Sub(preTax).Div(preTax)) // Tax rate: 0.08875
+}
+```
+
+## Documentation
+
+http://godoc.org/github.com/shopspring/decimal
+
+## Production Usage
+
+* [Spring](https://shopspring.com/), since August 14, 2014.
+* If you are using this in production, please let us know!
+
+## FAQ
+
+#### Why don't you just use float64?
+
+Because float64 (or any binary floating point type, actually) can't represent
+numbers such as `0.1` exactly.
+
+Consider this code: http://play.golang.org/p/TQBd4yJe6B You might expect that
+it prints out `10`, but it actually prints `9.999999999999831`. Over time,
+these small errors can really add up!
+
+#### Why don't you just use big.Rat?
+
+big.Rat is fine for representing rational numbers, but Decimal is better for
+representing money. Why? Here's a (contrived) example:
+
+Let's say you use big.Rat, and you have two numbers, x and y, both
+representing 1/3, and you have `z = 1 - x - y = 1/3`. If you print each one
+out, the string output has to stop somewhere (let's say it stops at 3 decimal
+digits, for simplicity), so you'll get 0.333, 0.333, and 0.333. But where did
+the other 0.001 go?
+
+Here's the above example as code: http://play.golang.org/p/lCZZs0w9KE
+
+With Decimal, the strings being printed out represent the number exactly. So,
+if you have `x = y = 1/3` (with precision 3), they will actually be equal to
+0.333, and when you do `z = 1 - x - y`, `z` will be equal to .334. No money is
+unaccounted for!
+
+You still have to be careful. If you want to split a number `N` 3 ways, you
+can't just send `N/3` to three different people. You have to pick one to send
+`N - (2/3*N)` to. That person will receive the fraction of a penny remainder.
+
+But, it is much easier to be careful with Decimal than with big.Rat.
+
+#### Why isn't the API similar to big.Int's?
+
+big.Int's API is built to reduce the number of memory allocations for maximal
+performance. This makes sense for its use-case, but the trade-off is that the
+API is awkward and easy to misuse.
+
+For example, to add two big.Ints, you do: `z := new(big.Int).Add(x, y)`. A
+developer unfamiliar with this API might try to do `z := a.Add(a, b)`. This
+modifies `a` and sets `z` as an alias for `a`, which they might not expect. It
+also modifies any other aliases to `a`.
+
+Here's an example of the subtle bugs you can introduce with big.Int's API:
+https://play.golang.org/p/x2R_78pa8r
+
+In contrast, it's difficult to make such mistakes with Decimal. Decimals
+behave like other Go number types: even though `a = b` will not deep copy
+`b` into `a`, it is impossible to modify a Decimal, since all Decimal methods
+return new Decimals and do not modify the originals. The downside is that
+this causes extra allocations, so Decimal is less performant. My assumption
+is that if you're using Decimals, you probably care more about correctness
+than performance.
+
+## License
+
+The MIT License (MIT)
+
+This is a heavily modified fork of [fpd.Decimal](https://github.com/oguzbilgic/fpd), which was also released under the MIT License.
diff --git a/vendor/github.com/shopspring/decimal/decimal-go.go b/vendor/github.com/shopspring/decimal/decimal-go.go
new file mode 100644
index 0000000000000..9958d6902063f
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/decimal-go.go
@@ -0,0 +1,415 @@
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Multiprecision decimal numbers.
+// For floating-point formatting only; not general purpose.
+// Only operations are assign and (binary) left/right shift.
+// Can do binary floating point in multiprecision decimal precisely
+// because 2 divides 10; cannot do decimal floating point
+// in multiprecision binary precisely.
+
+package decimal
+
+type decimal struct {
+ d [800]byte // digits, big-endian representation
+ nd int // number of digits used
+ dp int // decimal point
+ neg bool // negative flag
+ trunc bool // discarded nonzero digits beyond d[:nd]
+}
+
+func (a *decimal) String() string {
+ n := 10 + a.nd
+ if a.dp > 0 {
+ n += a.dp
+ }
+ if a.dp < 0 {
+ n += -a.dp
+ }
+
+ buf := make([]byte, n)
+ w := 0
+ switch {
+ case a.nd == 0:
+ return "0"
+
+ case a.dp <= 0:
+ // zeros fill space between decimal point and digits
+ buf[w] = '0'
+ w++
+ buf[w] = '.'
+ w++
+ w += digitZero(buf[w : w+-a.dp])
+ w += copy(buf[w:], a.d[0:a.nd])
+
+ case a.dp < a.nd:
+ // decimal point in middle of digits
+ w += copy(buf[w:], a.d[0:a.dp])
+ buf[w] = '.'
+ w++
+ w += copy(buf[w:], a.d[a.dp:a.nd])
+
+ default:
+ // zeros fill space between digits and decimal point
+ w += copy(buf[w:], a.d[0:a.nd])
+ w += digitZero(buf[w : w+a.dp-a.nd])
+ }
+ return string(buf[0:w])
+}
+
+func digitZero(dst []byte) int {
+ for i := range dst {
+ dst[i] = '0'
+ }
+ return len(dst)
+}
+
+// trim trailing zeros from number.
+// (They are meaningless; the decimal point is tracked
+// independent of the number of digits.)
+func trim(a *decimal) {
+ for a.nd > 0 && a.d[a.nd-1] == '0' {
+ a.nd--
+ }
+ if a.nd == 0 {
+ a.dp = 0
+ }
+}
+
+// Assign v to a.
+func (a *decimal) Assign(v uint64) {
+ var buf [24]byte
+
+ // Write reversed decimal in buf.
+ n := 0
+ for v > 0 {
+ v1 := v / 10
+ v -= 10 * v1
+ buf[n] = byte(v + '0')
+ n++
+ v = v1
+ }
+
+ // Reverse again to produce forward decimal in a.d.
+ a.nd = 0
+ for n--; n >= 0; n-- {
+ a.d[a.nd] = buf[n]
+ a.nd++
+ }
+ a.dp = a.nd
+ trim(a)
+}
+
+// Maximum shift that we can do in one pass without overflow.
+// A uint has 32 or 64 bits, and we have to be able to accommodate 9<<k.
+const uintSize = 32 << (^uint(0) >> 63)
+const maxShift = uintSize - 4
+
+// Binary shift right (/ 2) by k bits. k <= maxShift to avoid overflow.
+func rightShift(a *decimal, k uint) {
+ r := 0 // read pointer
+ w := 0 // write pointer
+
+ // Pick up enough leading digits to cover first shift.
+ var n uint
+ for ; n>>k == 0; r++ {
+ if r >= a.nd {
+ if n == 0 {
+ // a == 0; shouldn't get here, but handle anyway.
+ a.nd = 0
+ return
+ }
+ for n>>k == 0 {
+ n = n * 10
+ r++
+ }
+ break
+ }
+ c := uint(a.d[r])
+ n = n*10 + c - '0'
+ }
+ a.dp -= r - 1
+
+ var mask uint = (1 << k) - 1
+
+ // Pick up a digit, put down a digit.
+ for ; r < a.nd; r++ {
+ c := uint(a.d[r])
+ dig := n >> k
+ n &= mask
+ a.d[w] = byte(dig + '0')
+ w++
+ n = n*10 + c - '0'
+ }
+
+ // Put down extra digits.
+ for n > 0 {
+ dig := n >> k
+ n &= mask
+ if w < len(a.d) {
+ a.d[w] = byte(dig + '0')
+ w++
+ } else if dig > 0 {
+ a.trunc = true
+ }
+ n = n * 10
+ }
+
+ a.nd = w
+ trim(a)
+}
+
+// Cheat sheet for left shift: table indexed by shift count giving
+// number of new digits that will be introduced by that shift.
+//
+// For example, leftcheats[4] = {2, "625"}. That means that
+// if we are shifting by 4 (multiplying by 16), it will add 2 digits
+// when the string prefix is "625" through "999", and one fewer digit
+// if the string prefix is "000" through "624".
+//
+// Credit for this trick goes to Ken.
+
+type leftCheat struct {
+ delta int // number of new digits
+ cutoff string // minus one digit if original < a.
+}
+
+var leftcheats = []leftCheat{
+ // Leading digits of 1/2^i = 5^i.
+ // 5^23 is not an exact 64-bit floating point number,
+ // so have to use bc for the math.
+ // Go up to 60 to be large enough for 32bit and 64bit platforms.
+ /*
+ seq 60 | sed 's/^/5^/' | bc |
+ awk 'BEGIN{ print "\t{ 0, \"\" }," }
+ {
+ log2 = log(2)/log(10)
+ printf("\t{ %d, \"%s\" },\t// * %d\n",
+ int(log2*NR+1), $0, 2**NR)
+ }'
+ */
+ {0, ""},
+ {1, "5"}, // * 2
+ {1, "25"}, // * 4
+ {1, "125"}, // * 8
+ {2, "625"}, // * 16
+ {2, "3125"}, // * 32
+ {2, "15625"}, // * 64
+ {3, "78125"}, // * 128
+ {3, "390625"}, // * 256
+ {3, "1953125"}, // * 512
+ {4, "9765625"}, // * 1024
+ {4, "48828125"}, // * 2048
+ {4, "244140625"}, // * 4096
+ {4, "1220703125"}, // * 8192
+ {5, "6103515625"}, // * 16384
+ {5, "30517578125"}, // * 32768
+ {5, "152587890625"}, // * 65536
+ {6, "762939453125"}, // * 131072
+ {6, "3814697265625"}, // * 262144
+ {6, "19073486328125"}, // * 524288
+ {7, "95367431640625"}, // * 1048576
+ {7, "476837158203125"}, // * 2097152
+ {7, "2384185791015625"}, // * 4194304
+ {7, "11920928955078125"}, // * 8388608
+ {8, "59604644775390625"}, // * 16777216
+ {8, "298023223876953125"}, // * 33554432
+ {8, "1490116119384765625"}, // * 67108864
+ {9, "7450580596923828125"}, // * 134217728
+ {9, "37252902984619140625"}, // * 268435456
+ {9, "186264514923095703125"}, // * 536870912
+ {10, "931322574615478515625"}, // * 1073741824
+ {10, "4656612873077392578125"}, // * 2147483648
+ {10, "23283064365386962890625"}, // * 4294967296
+ {10, "116415321826934814453125"}, // * 8589934592
+ {11, "582076609134674072265625"}, // * 17179869184
+ {11, "2910383045673370361328125"}, // * 34359738368
+ {11, "14551915228366851806640625"}, // * 68719476736
+ {12, "72759576141834259033203125"}, // * 137438953472
+ {12, "363797880709171295166015625"}, // * 274877906944
+ {12, "1818989403545856475830078125"}, // * 549755813888
+ {13, "9094947017729282379150390625"}, // * 1099511627776
+ {13, "45474735088646411895751953125"}, // * 2199023255552
+ {13, "227373675443232059478759765625"}, // * 4398046511104
+ {13, "1136868377216160297393798828125"}, // * 8796093022208
+ {14, "5684341886080801486968994140625"}, // * 17592186044416
+ {14, "28421709430404007434844970703125"}, // * 35184372088832
+ {14, "142108547152020037174224853515625"}, // * 70368744177664
+ {15, "710542735760100185871124267578125"}, // * 140737488355328
+ {15, "3552713678800500929355621337890625"}, // * 281474976710656
+ {15, "17763568394002504646778106689453125"}, // * 562949953421312
+ {16, "88817841970012523233890533447265625"}, // * 1125899906842624
+ {16, "444089209850062616169452667236328125"}, // * 2251799813685248
+ {16, "2220446049250313080847263336181640625"}, // * 4503599627370496
+ {16, "11102230246251565404236316680908203125"}, // * 9007199254740992
+ {17, "55511151231257827021181583404541015625"}, // * 18014398509481984
+ {17, "277555756156289135105907917022705078125"}, // * 36028797018963968
+ {17, "1387778780781445675529539585113525390625"}, // * 72057594037927936
+ {18, "6938893903907228377647697925567626953125"}, // * 144115188075855872
+ {18, "34694469519536141888238489627838134765625"}, // * 288230376151711744
+ {18, "173472347597680709441192448139190673828125"}, // * 576460752303423488
+ {19, "867361737988403547205962240695953369140625"}, // * 1152921504606846976
+}
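The `{delta, cutoff}` entries above can be sanity-checked with ordinary integer arithmetic; for example, `leftcheats[4] = {2, "625"}` says that multiplying by 16 adds two digits only when the digit prefix is "625" or higher (an illustrative check, not part of the vendored file):

```go
package main

import "fmt"

// digitCount returns the number of decimal digits in a non-negative n.
func digitCount(n int) int {
	return len(fmt.Sprint(n))
}

func main() {
	fmt.Println(digitCount(625*16) - digitCount(625)) // 2 (10000 vs 625)
	fmt.Println(digitCount(624*16) - digitCount(624)) // 1 (9984 vs 624)
}
```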
+
+// Is the leading prefix of b lexicographically less than s?
+func prefixIsLessThan(b []byte, s string) bool {
+ for i := 0; i < len(s); i++ {
+ if i >= len(b) {
+ return true
+ }
+ if b[i] != s[i] {
+ return b[i] < s[i]
+ }
+ }
+ return false
+}
+
+// Binary shift left (* 2) by k bits. k <= maxShift to avoid overflow.
+func leftShift(a *decimal, k uint) {
+ delta := leftcheats[k].delta
+ if prefixIsLessThan(a.d[0:a.nd], leftcheats[k].cutoff) {
+ delta--
+ }
+
+ r := a.nd // read index
+ w := a.nd + delta // write index
+
+ // Pick up a digit, put down a digit.
+ var n uint
+ for r--; r >= 0; r-- {
+ n += (uint(a.d[r]) - '0') << k
+ quo := n / 10
+ rem := n - 10*quo
+ w--
+ if w < len(a.d) {
+ a.d[w] = byte(rem + '0')
+ } else if rem != 0 {
+ a.trunc = true
+ }
+ n = quo
+ }
+
+ // Put down extra digits.
+ for n > 0 {
+ quo := n / 10
+ rem := n - 10*quo
+ w--
+ if w < len(a.d) {
+ a.d[w] = byte(rem + '0')
+ } else if rem != 0 {
+ a.trunc = true
+ }
+ n = quo
+ }
+
+ a.nd += delta
+ if a.nd >= len(a.d) {
+ a.nd = len(a.d)
+ }
+ a.dp += delta
+ trim(a)
+}
+
+// Binary shift left (k > 0) or right (k < 0).
+func (a *decimal) Shift(k int) {
+ switch {
+ case a.nd == 0:
+ // nothing to do: a == 0
+ case k > 0:
+ for k > maxShift {
+ leftShift(a, maxShift)
+ k -= maxShift
+ }
+ leftShift(a, uint(k))
+ case k < 0:
+ for k < -maxShift {
+ rightShift(a, maxShift)
+ k += maxShift
+ }
+ rightShift(a, uint(-k))
+ }
+}
+
+// If we chop a at nd digits, should we round up?
+func shouldRoundUp(a *decimal, nd int) bool {
+ if nd < 0 || nd >= a.nd {
+ return false
+ }
+ if a.d[nd] == '5' && nd+1 == a.nd { // exactly halfway - round to even
+ // if we truncated, a little higher than what's recorded - always round up
+ if a.trunc {
+ return true
+ }
+ return nd > 0 && (a.d[nd-1]-'0')%2 != 0
+ }
+ // not halfway - digit tells all
+ return a.d[nd] >= '5'
+}
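The halfway case above implements round-half-to-even (banker's rounding). The same rule can be sketched on plain digit strings; shouldRoundUpDigits is a hypothetical helper for illustration, not part of this package:

```go
package main

import "fmt"

// shouldRoundUpDigits mirrors the rule above for a digit string d chopped at
// nd digits: an exact trailing '5' rounds to even; otherwise the next digit
// alone decides.
func shouldRoundUpDigits(d string, nd int) bool {
	if nd < 0 || nd >= len(d) {
		return false
	}
	if d[nd] == '5' && nd+1 == len(d) { // exactly halfway - round to even
		return nd > 0 && (d[nd-1]-'0')%2 != 0
	}
	return d[nd] >= '5'
}

func main() {
	fmt.Println(shouldRoundUpDigits("25", 1)) // false: 2.5 rounds down to even 2
	fmt.Println(shouldRoundUpDigits("35", 1)) // true: 3.5 rounds up to even 4
	fmt.Println(shouldRoundUpDigits("26", 1)) // true: 2.6 is past halfway
}
```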
+
+// Round a to nd digits (or fewer).
+// If nd is zero, it means we're rounding
+// just to the left of the digits, as in
+// 0.09 -> 0.1.
+func (a *decimal) Round(nd int) {
+ if nd < 0 || nd >= a.nd {
+ return
+ }
+ if shouldRoundUp(a, nd) {
+ a.RoundUp(nd)
+ } else {
+ a.RoundDown(nd)
+ }
+}
+
+// Round a down to nd digits (or fewer).
+func (a *decimal) RoundDown(nd int) {
+ if nd < 0 || nd >= a.nd {
+ return
+ }
+ a.nd = nd
+ trim(a)
+}
+
+// Round a up to nd digits (or fewer).
+func (a *decimal) RoundUp(nd int) {
+ if nd < 0 || nd >= a.nd {
+ return
+ }
+
+ // round up
+ for i := nd - 1; i >= 0; i-- {
+ c := a.d[i]
+ if c < '9' { // can stop after this digit
+ a.d[i]++
+ a.nd = i + 1
+ return
+ }
+ }
+
+ // Number is all 9s.
+ // Change to single 1 with adjusted decimal point.
+ a.d[0] = '1'
+ a.nd = 1
+ a.dp++
+}
+
+// Extract integer part, rounded appropriately.
+// No guarantees about overflow.
+func (a *decimal) RoundedInteger() uint64 {
+ if a.dp > 20 {
+ return 0xFFFFFFFFFFFFFFFF
+ }
+ var i int
+ n := uint64(0)
+ for i = 0; i < a.dp && i < a.nd; i++ {
+ n = n*10 + uint64(a.d[i]-'0')
+ }
+ for ; i < a.dp; i++ {
+ n *= 10
+ }
+ if shouldRoundUp(a, a.dp) {
+ n++
+ }
+ return n
+}
diff --git a/vendor/github.com/shopspring/decimal/decimal.go b/vendor/github.com/shopspring/decimal/decimal.go
new file mode 100644
index 0000000000000..801c1a0457a46
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/decimal.go
@@ -0,0 +1,1477 @@
+// Package decimal implements an arbitrary precision fixed-point decimal.
+//
+// The zero-value of a Decimal is 0, as you would expect.
+//
+// The best way to create a new Decimal is to use decimal.NewFromString, ex:
+//
+// n, err := decimal.NewFromString("-123.4567")
+// n.String() // output: "-123.4567"
+//
+// To use Decimal as part of a struct:
+//
+// type Struct struct {
+// Number Decimal
+// }
+//
+// Note: This can "only" represent numbers with a maximum of 2^31 digits after the decimal point.
+package decimal
+
+import (
+ "database/sql/driver"
+ "encoding/binary"
+ "fmt"
+ "math"
+ "math/big"
+ "strconv"
+ "strings"
+)
+
+// DivisionPrecision is the number of decimal places in the result when it
+// doesn't divide exactly.
+//
+// Example:
+//
+// d1 := decimal.NewFromFloat(2).Div(decimal.NewFromFloat(3))
+// d1.String() // output: "0.6666666666666667"
+// d2 := decimal.NewFromFloat(2).Div(decimal.NewFromFloat(30000))
+// d2.String() // output: "0.0000666666666667"
+// d3 := decimal.NewFromFloat(20000).Div(decimal.NewFromFloat(3))
+// d3.String() // output: "6666.6666666666666667"
+// decimal.DivisionPrecision = 3
+// d4 := decimal.NewFromFloat(2).Div(decimal.NewFromFloat(3))
+// d4.String() // output: "0.667"
+//
+var DivisionPrecision = 16
+
+// MarshalJSONWithoutQuotes should be set to true if you want the decimal to
+// be JSON marshaled as a number, instead of as a string.
+// WARNING: this is dangerous for decimals with many digits, since many JSON
+// unmarshallers (ex: Javascript's) will unmarshal JSON numbers to IEEE 754
+// double-precision floating point numbers, which means you can potentially
+// silently lose precision.
+var MarshalJSONWithoutQuotes = false
+
+// Zero constant, to make computations faster.
+// Zero should never be compared with == or != directly, please use decimal.Equal or decimal.Cmp instead.
+var Zero = New(0, 1)
+
+var zeroInt = big.NewInt(0)
+var oneInt = big.NewInt(1)
+var twoInt = big.NewInt(2)
+var fourInt = big.NewInt(4)
+var fiveInt = big.NewInt(5)
+var tenInt = big.NewInt(10)
+var twentyInt = big.NewInt(20)
+
+// Decimal represents a fixed-point decimal. It is immutable.
+// number = value * 10 ^ exp
+type Decimal struct {
+ value *big.Int
+
+ // NOTE(vadim): this must be an int32, because we cast it to float64 during
+ // calculations. If exp is 64 bit, we might lose precision.
+ // If we cared about being able to represent every possible decimal, we
+ // could make exp a *big.Int but it would hurt performance and numbers
+ // like that are unrealistic.
+ exp int32
+}
+
+// New returns a new fixed-point decimal, value * 10 ^ exp.
+func New(value int64, exp int32) Decimal {
+ return Decimal{
+ value: big.NewInt(value),
+ exp: exp,
+ }
+}
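The `number = value * 10 ^ exp` representation above can be illustrated with a toy renderer for non-positive exponents (an assumption-laden sketch, not this package's actual String implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// render prints value * 10^exp for exp <= 0, mirroring the fixed-point
// representation described above with plain integers.
func render(value int64, exp int32) string {
	s := fmt.Sprintf("%d", value)
	neg := strings.HasPrefix(s, "-")
	if neg {
		s = s[1:]
	}
	if frac := int(-exp); frac > 0 {
		// Pad with leading zeros, then insert the decimal point.
		for len(s) <= frac {
			s = "0" + s
		}
		s = s[:len(s)-frac] + "." + s[len(s)-frac:]
	}
	if neg {
		s = "-" + s
	}
	return s
}

func main() {
	fmt.Println(render(12345, -2)) // 123.45
	fmt.Println(render(5, -4))     // 0.0005
	fmt.Println(render(-7, 0))     // -7
}
```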
+
+// NewFromInt converts an int64 to Decimal.
+//
+// Example:
+//
+// NewFromInt(123).String() // output: "123"
+// NewFromInt(-10).String() // output: "-10"
+func NewFromInt(value int64) Decimal {
+ return Decimal{
+ value: big.NewInt(value),
+ exp: 0,
+ }
+}
+
+// NewFromInt32 converts an int32 to Decimal.
+//
+// Example:
+//
+// NewFromInt32(123).String() // output: "123"
+// NewFromInt32(-10).String() // output: "-10"
+func NewFromInt32(value int32) Decimal {
+ return Decimal{
+ value: big.NewInt(int64(value)),
+ exp: 0,
+ }
+}
+
+// NewFromBigInt returns a new Decimal from a big.Int, value * 10 ^ exp
+func NewFromBigInt(value *big.Int, exp int32) Decimal {
+ return Decimal{
+ value: big.NewInt(0).Set(value),
+ exp: exp,
+ }
+}
+
+// NewFromString returns a new Decimal from a string representation.
+// Trailing zeroes are not trimmed.
+//
+// Example:
+//
+// d, err := NewFromString("-123.45")
+// d2, err := NewFromString(".0001")
+// d3, err := NewFromString("1.47000")
+//
+func NewFromString(value string) (Decimal, error) {
+ originalInput := value
+ var intString string
+ var exp int64
+
+ // Check if number is using scientific notation
+ eIndex := strings.IndexAny(value, "Ee")
+ if eIndex != -1 {
+ expInt, err := strconv.ParseInt(value[eIndex+1:], 10, 32)
+ if err != nil {
+ if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
+ return Decimal{}, fmt.Errorf("can't convert %s to decimal: fractional part too long", value)
+ }
+ return Decimal{}, fmt.Errorf("can't convert %s to decimal: exponent is not numeric", value)
+ }
+ value = value[:eIndex]
+ exp = expInt
+ }
+
+ parts := strings.Split(value, ".")
+ if len(parts) == 1 {
+ // There is no decimal point, we can just parse the original string as
+ // an int
+ intString = value
+ } else if len(parts) == 2 {
+ intString = parts[0] + parts[1]
+ expInt := -len(parts[1])
+ exp += int64(expInt)
+ } else {
+ return Decimal{}, fmt.Errorf("can't convert %s to decimal: too many .s", value)
+ }
+
+ dValue := new(big.Int)
+ _, ok := dValue.SetString(intString, 10)
+ if !ok {
+ return Decimal{}, fmt.Errorf("can't convert %s to decimal", value)
+ }
+
+ if exp < math.MinInt32 || exp > math.MaxInt32 {
+ // NOTE(vadim): I doubt a string could realistically be this long
+ return Decimal{}, fmt.Errorf("can't convert %s to decimal: fractional part too long", originalInput)
+ }
+
+ return Decimal{
+ value: dValue,
+ exp: int32(exp),
+ }, nil
+}
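The parsing flow above (strip an optional exponent, drop the decimal point, and fold its position into the exponent) can be sketched standalone; splitIntExp is a hypothetical helper for illustration, not part of the package:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitIntExp mirrors NewFromString above: it returns the digits with the
// decimal point removed, plus the combined base-10 exponent.
func splitIntExp(s string) (string, int64, error) {
	var exp int64
	if i := strings.IndexAny(s, "Ee"); i != -1 {
		e, err := strconv.ParseInt(s[i+1:], 10, 32)
		if err != nil {
			return "", 0, err
		}
		exp = e
		s = s[:i]
	}
	parts := strings.Split(s, ".")
	switch len(parts) {
	case 1:
		return s, exp, nil
	case 2:
		// Each fractional digit shifts the exponent down by one.
		return parts[0] + parts[1], exp - int64(len(parts[1])), nil
	}
	return "", 0, fmt.Errorf("can't convert %s to decimal: too many .s", s)
}

func main() {
	digits, exp, _ := splitIntExp("1.23e2")
	fmt.Println(digits, exp) // "123" and 0, i.e. 123 * 10^0
}
```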
+
+// RequireFromString returns a new Decimal from a string representation
+// or panics if NewFromString would have returned an error.
+//
+// Example:
+//
+//	d := RequireFromString("-123.45")
+//	d2 := RequireFromString(".0001")
+//
+func RequireFromString(value string) Decimal {
+ dec, err := NewFromString(value)
+ if err != nil {
+ panic(err)
+ }
+ return dec
+}
+
+// NewFromFloat converts a float64 to Decimal.
+//
+// The converted number will contain the number of significant digits that can be
+// represented in a float with reliable roundtrip.
+// This is typically 15 digits, but may be more in some cases.
+// See https://www.exploringbinary.com/decimal-precision-of-binary-floating-point-numbers/ for more information.
+//
+// For slightly faster conversion, use NewFromFloatWithExponent where you can specify the precision in absolute terms.
+//
+// NOTE: this will panic on NaN, +/-inf
+func NewFromFloat(value float64) Decimal {
+ if value == 0 {
+ return New(0, 0)
+ }
+ return newFromFloat(value, math.Float64bits(value), &float64info)
+}
+
+// NewFromFloat32 converts a float32 to Decimal.
+//
+// The converted number will contain the number of significant digits that can be
+// represented in a float with reliable roundtrip.
+// This is typically 6-8 digits depending on the input.
+// See https://www.exploringbinary.com/decimal-precision-of-binary-floating-point-numbers/ for more information.
+//
+// For slightly faster conversion, use NewFromFloatWithExponent where you can specify the precision in absolute terms.
+//
+// NOTE: this will panic on NaN, +/-inf
+func NewFromFloat32(value float32) Decimal {
+ if value == 0 {
+ return New(0, 0)
+ }
+ // XOR is workaround for https://github.com/golang/go/issues/26285
+ a := math.Float32bits(value) ^ 0x80808080
+ return newFromFloat(float64(value), uint64(a)^0x80808080, &float32info)
+}
+
+func newFromFloat(val float64, bits uint64, flt *floatInfo) Decimal {
+ if math.IsNaN(val) || math.IsInf(val, 0) {
+ panic(fmt.Sprintf("Cannot create a Decimal from %v", val))
+ }
+	exp := int(bits>>flt.mantbits) & (1<<flt.expbits - 1)
+	mant := bits & (uint64(1)<<flt.mantbits - 1)
+
+	switch exp {
+	case 0:
+		// denormalized
+		exp++
+	default:
+		// add implicit top bit
+		mant |= uint64(1) << flt.mantbits
+	}
+	exp += flt.bias
+
+	var d decimal
+	d.Assign(mant)
+	d.Shift(exp - int(flt.mantbits))
+	d.neg = bits>>(flt.expbits+flt.mantbits) != 0
+
+ roundShortest(&d, mant, exp, flt)
+ // If less than 19 digits, we can do calculation in an int64.
+ if d.nd < 19 {
+ tmp := int64(0)
+ m := int64(1)
+ for i := d.nd - 1; i >= 0; i-- {
+ tmp += m * int64(d.d[i]-'0')
+ m *= 10
+ }
+ if d.neg {
+ tmp *= -1
+ }
+ return Decimal{value: big.NewInt(tmp), exp: int32(d.dp) - int32(d.nd)}
+ }
+ dValue := new(big.Int)
+ dValue, ok := dValue.SetString(string(d.d[:d.nd]), 10)
+ if ok {
+ return Decimal{value: dValue, exp: int32(d.dp) - int32(d.nd)}
+ }
+
+ return NewFromFloatWithExponent(val, int32(d.dp)-int32(d.nd))
+}
+
+// NewFromFloatWithExponent converts a float64 to Decimal, with an arbitrary
+// number of fractional digits.
+//
+// Example:
+//
+//	NewFromFloatWithExponent(123.456, -2).String() // output: "123.46"
+//
+func NewFromFloatWithExponent(value float64, exp int32) Decimal {
+ if math.IsNaN(value) || math.IsInf(value, 0) {
+ panic(fmt.Sprintf("Cannot create a Decimal from %v", value))
+ }
+
+ bits := math.Float64bits(value)
+ mant := bits & (1<<52 - 1)
+ exp2 := int32((bits >> 52) & (1<<11 - 1))
+ sign := bits >> 63
+
+ if exp2 == 0 {
+ // specials
+ if mant == 0 {
+ return Decimal{}
+ }
+ // subnormal
+ exp2++
+ } else {
+ // normal
+ mant |= 1 << 52
+ }
+
+ exp2 -= 1023 + 52
+
+ // normalizing base-2 values
+ for mant&1 == 0 {
+ mant = mant >> 1
+ exp2++
+ }
+
+ // maximum number of fractional base-10 digits to represent 2^N exactly cannot be more than -N if N<0
+ if exp < 0 && exp < exp2 {
+ if exp2 < 0 {
+ exp = exp2
+ } else {
+ exp = 0
+ }
+ }
+
+ // representing 10^M * 2^N as 5^M * 2^(M+N)
+ exp2 -= exp
+
+ temp := big.NewInt(1)
+ dMant := big.NewInt(int64(mant))
+
+ // applying 5^M
+ if exp > 0 {
+ temp = temp.SetInt64(int64(exp))
+ temp = temp.Exp(fiveInt, temp, nil)
+ } else if exp < 0 {
+ temp = temp.SetInt64(-int64(exp))
+ temp = temp.Exp(fiveInt, temp, nil)
+ dMant = dMant.Mul(dMant, temp)
+ temp = temp.SetUint64(1)
+ }
+
+ // applying 2^(M+N)
+ if exp2 > 0 {
+ dMant = dMant.Lsh(dMant, uint(exp2))
+ } else if exp2 < 0 {
+ temp = temp.Lsh(temp, uint(-exp2))
+ }
+
+ // rounding and downscaling
+ if exp > 0 || exp2 < 0 {
+ halfDown := new(big.Int).Rsh(temp, 1)
+ dMant = dMant.Add(dMant, halfDown)
+ dMant = dMant.Quo(dMant, temp)
+ }
+
+ if sign == 1 {
+ dMant = dMant.Neg(dMant)
+ }
+
+ return Decimal{
+ value: dMant,
+ exp: exp,
+ }
+}
+
+// rescale returns a rescaled version of the decimal. Returned
+// decimal may be less precise if the given exponent is bigger
+// than the initial exponent of the Decimal.
+// NOTE: this will truncate, NOT round
+//
+// Example:
+//
+//	d := New(12345, -4)
+//	d2 := d.rescale(-1)
+//	d3 := d2.rescale(-4)
+//	println(d)
+//	println(d2)
+//	println(d3)
+//
+// Output:
+//
+//	1.2345
+//	1.2
+//	1.2000
+//
+func (d Decimal) rescale(exp int32) Decimal {
+ d.ensureInitialized()
+
+ if d.exp == exp {
+ return Decimal{
+ new(big.Int).Set(d.value),
+ d.exp,
+ }
+ }
+
+ // NOTE(vadim): must convert exps to float64 before - to prevent overflow
+ diff := math.Abs(float64(exp) - float64(d.exp))
+ value := new(big.Int).Set(d.value)
+
+ expScale := new(big.Int).Exp(tenInt, big.NewInt(int64(diff)), nil)
+ if exp > d.exp {
+ value = value.Quo(value, expScale)
+ } else if exp < d.exp {
+ value = value.Mul(value, expScale)
+ }
+
+ return Decimal{
+ value: value,
+ exp: exp,
+ }
+}
+
+// Abs returns the absolute value of the decimal.
+func (d Decimal) Abs() Decimal {
+ d.ensureInitialized()
+ d2Value := new(big.Int).Abs(d.value)
+ return Decimal{
+ value: d2Value,
+ exp: d.exp,
+ }
+}
+
+// Add returns d + d2.
+func (d Decimal) Add(d2 Decimal) Decimal {
+ rd, rd2 := RescalePair(d, d2)
+
+ d3Value := new(big.Int).Add(rd.value, rd2.value)
+ return Decimal{
+ value: d3Value,
+ exp: rd.exp,
+ }
+}
+
+// Sub returns d - d2.
+func (d Decimal) Sub(d2 Decimal) Decimal {
+ rd, rd2 := RescalePair(d, d2)
+
+ d3Value := new(big.Int).Sub(rd.value, rd2.value)
+ return Decimal{
+ value: d3Value,
+ exp: rd.exp,
+ }
+}
+
+// Neg returns -d.
+func (d Decimal) Neg() Decimal {
+ d.ensureInitialized()
+ val := new(big.Int).Neg(d.value)
+ return Decimal{
+ value: val,
+ exp: d.exp,
+ }
+}
+
+// Mul returns d * d2.
+func (d Decimal) Mul(d2 Decimal) Decimal {
+ d.ensureInitialized()
+ d2.ensureInitialized()
+
+ expInt64 := int64(d.exp) + int64(d2.exp)
+ if expInt64 > math.MaxInt32 || expInt64 < math.MinInt32 {
+ // NOTE(vadim): better to panic than give incorrect results, as
+ // Decimals are usually used for money
+ panic(fmt.Sprintf("exponent %v overflows an int32!", expInt64))
+ }
+
+ d3Value := new(big.Int).Mul(d.value, d2.value)
+ return Decimal{
+ value: d3Value,
+ exp: int32(expInt64),
+ }
+}
+
+// Shift shifts the decimal in base 10.
+// It shifts left when shift is positive and right if shift is negative.
+// In simpler terms, the given value for shift is added to the exponent
+// of the decimal.
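+//
+// Example:
+//
+//	New(123, 0).Shift(3).String()  // output: "123000"
+//	New(123, 0).Shift(-1).String() // output: "12.3"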
+func (d Decimal) Shift(shift int32) Decimal {
+ d.ensureInitialized()
+ return Decimal{
+ value: new(big.Int).Set(d.value),
+ exp: d.exp + shift,
+ }
+}
+
+// Div returns d / d2. If it doesn't divide exactly, the result will have
+// DivisionPrecision digits after the decimal point.
+func (d Decimal) Div(d2 Decimal) Decimal {
+ return d.DivRound(d2, int32(DivisionPrecision))
+}
+
+// QuoRem does division with remainder.
+// d.QuoRem(d2,precision) returns quotient q and remainder r such that
+// d = d2 * q + r, q an integer multiple of 10^(-precision)
+// 0 <= r < abs(d2) * 10 ^(-precision) if d>=0
+// 0 >= r > -abs(d2) * 10 ^(-precision) if d<0
+// Note that precision<0 is allowed as input.
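+//
+// Example:
+//
+//	q, r := New(10, 0).QuoRem(New(3, 0), 1)
+//	// q == 3.3, r == 0.1 (check: 3 * 3.3 + 0.1 == 10)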
+func (d Decimal) QuoRem(d2 Decimal, precision int32) (Decimal, Decimal) {
+ d.ensureInitialized()
+ d2.ensureInitialized()
+ if d2.value.Sign() == 0 {
+ panic("decimal division by 0")
+ }
+ scale := -precision
+ e := int64(d.exp - d2.exp - scale)
+ if e > math.MaxInt32 || e < math.MinInt32 {
+ panic("overflow in decimal QuoRem")
+ }
+ var aa, bb, expo big.Int
+ var scalerest int32
+ // d = a 10^ea
+ // d2 = b 10^eb
+ if e < 0 {
+ aa = *d.value
+ expo.SetInt64(-e)
+ bb.Exp(tenInt, &expo, nil)
+ bb.Mul(d2.value, &bb)
+ scalerest = d.exp
+ // now aa = a
+ // bb = b 10^(scale + eb - ea)
+ } else {
+ expo.SetInt64(e)
+ aa.Exp(tenInt, &expo, nil)
+ aa.Mul(d.value, &aa)
+ bb = *d2.value
+ scalerest = scale + d2.exp
+		// now aa = a * 10^(ea - eb - scale)
+ // bb = b
+ }
+ var q, r big.Int
+ q.QuoRem(&aa, &bb, &r)
+ dq := Decimal{value: &q, exp: scale}
+ dr := Decimal{value: &r, exp: scalerest}
+ return dq, dr
+}
+
+// DivRound divides and rounds to a given precision
+// i.e. to an integer multiple of 10^(-precision)
+// for a positive quotient digit 5 is rounded up, away from 0
+// if the quotient is negative then digit 5 is rounded down, away from 0
+// Note that precision<0 is allowed as input.
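+//
+// Example:
+//
+//	New(2, 0).DivRound(New(3, 0), 3).String() // output: "0.667"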
+func (d Decimal) DivRound(d2 Decimal, precision int32) Decimal {
+ // QuoRem already checks initialization
+ q, r := d.QuoRem(d2, precision)
+ // the actual rounding decision is based on comparing r*10^precision and d2/2
+ // instead compare 2 r 10 ^precision and d2
+ var rv2 big.Int
+ rv2.Abs(r.value)
+ rv2.Lsh(&rv2, 1)
+ // now rv2 = abs(r.value) * 2
+ r2 := Decimal{value: &rv2, exp: r.exp + precision}
+ // r2 is now 2 * r * 10 ^ precision
+ var c = r2.Cmp(d2.Abs())
+
+ if c < 0 {
+ return q
+ }
+
+ if d.value.Sign()*d2.value.Sign() < 0 {
+ return q.Sub(New(1, -precision))
+ }
+
+ return q.Add(New(1, -precision))
+}
+
+// Mod returns d % d2.
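+//
+// Example:
+//
+//	New(7, 0).Mod(New(3, 0)).String() // output: "1"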
+func (d Decimal) Mod(d2 Decimal) Decimal {
+ quo := d.Div(d2).Truncate(0)
+ return d.Sub(d2.Mul(quo))
+}
+
+// Pow returns d to the power d2
+func (d Decimal) Pow(d2 Decimal) Decimal {
+ var temp Decimal
+ if d2.IntPart() == 0 {
+ return NewFromFloat(1)
+ }
+ temp = d.Pow(d2.Div(NewFromFloat(2)))
+ if d2.IntPart()%2 == 0 {
+ return temp.Mul(temp)
+ }
+ if d2.IntPart() > 0 {
+ return temp.Mul(temp).Mul(d)
+ }
+ return temp.Mul(temp).Div(d)
+}
+
+// Cmp compares the numbers represented by d and d2 and returns:
+//
+//	-1 if d < d2
+//	 0 if d == d2
+//	+1 if d > d2
+//
+func (d Decimal) Cmp(d2 Decimal) int {
+ d.ensureInitialized()
+ d2.ensureInitialized()
+
+ if d.exp == d2.exp {
+ return d.value.Cmp(d2.value)
+ }
+
+ rd, rd2 := RescalePair(d, d2)
+
+ return rd.value.Cmp(rd2.value)
+}
+
+// Equal returns whether the numbers represented by d and d2 are equal.
+func (d Decimal) Equal(d2 Decimal) bool {
+ return d.Cmp(d2) == 0
+}
+
+// Deprecated: Equals is an alias for Equal; use the Equal method instead.
+func (d Decimal) Equals(d2 Decimal) bool {
+ return d.Equal(d2)
+}
+
+// GreaterThan (GT) returns true when d is greater than d2.
+func (d Decimal) GreaterThan(d2 Decimal) bool {
+ return d.Cmp(d2) == 1
+}
+
+// GreaterThanOrEqual (GTE) returns true when d is greater than or equal to d2.
+func (d Decimal) GreaterThanOrEqual(d2 Decimal) bool {
+ cmp := d.Cmp(d2)
+ return cmp == 1 || cmp == 0
+}
+
+// LessThan (LT) returns true when d is less than d2.
+func (d Decimal) LessThan(d2 Decimal) bool {
+ return d.Cmp(d2) == -1
+}
+
+// LessThanOrEqual (LTE) returns true when d is less than or equal to d2.
+func (d Decimal) LessThanOrEqual(d2 Decimal) bool {
+ cmp := d.Cmp(d2)
+ return cmp == -1 || cmp == 0
+}
+
+// Sign returns:
+//
+//	-1 if d < 0
+//	 0 if d == 0
+//	+1 if d > 0
+//
+func (d Decimal) Sign() int {
+ if d.value == nil {
+ return 0
+ }
+ return d.value.Sign()
+}
+
+// IsPositive returns
+//
+//	true if d > 0
+//	false if d == 0
+//	false if d < 0
+func (d Decimal) IsPositive() bool {
+ return d.Sign() == 1
+}
+
+// IsNegative returns
+//
+//	true if d < 0
+//	false if d == 0
+//	false if d > 0
+func (d Decimal) IsNegative() bool {
+ return d.Sign() == -1
+}
+
+// IsZero returns
+//
+//	true if d == 0
+//	false if d > 0
+//	false if d < 0
+func (d Decimal) IsZero() bool {
+ return d.Sign() == 0
+}
+
+// Exponent returns the exponent, or scale component of the decimal.
+func (d Decimal) Exponent() int32 {
+ return d.exp
+}
+
+// Coefficient returns the coefficient of the decimal. It is scaled by 10^Exponent()
+func (d Decimal) Coefficient() *big.Int {
+ d.ensureInitialized()
+ // we copy the coefficient so that mutating the result does not mutate the
+ // Decimal.
+ return big.NewInt(0).Set(d.value)
+}
+
+// IntPart returns the integer component of the decimal.
+func (d Decimal) IntPart() int64 {
+ scaledD := d.rescale(0)
+ return scaledD.value.Int64()
+}
+
+// BigInt returns the integer component of the decimal as a *big.Int.
+func (d Decimal) BigInt() *big.Int {
+ scaledD := d.rescale(0)
+ i := &big.Int{}
+ i.SetString(scaledD.String(), 10)
+ return i
+}
+
+// BigFloat returns the decimal as a *big.Float.
+// Be aware that converting a decimal to a *big.Float may cause a loss of precision.
+func (d Decimal) BigFloat() *big.Float {
+ f := &big.Float{}
+ f.SetString(d.String())
+ return f
+}
+
+// Rat returns a rational number representation of the decimal.
+func (d Decimal) Rat() *big.Rat {
+ d.ensureInitialized()
+ if d.exp <= 0 {
+ // NOTE(vadim): must negate after casting to prevent int32 overflow
+ denom := new(big.Int).Exp(tenInt, big.NewInt(-int64(d.exp)), nil)
+ return new(big.Rat).SetFrac(d.value, denom)
+ }
+
+ mul := new(big.Int).Exp(tenInt, big.NewInt(int64(d.exp)), nil)
+ num := new(big.Int).Mul(d.value, mul)
+ return new(big.Rat).SetFrac(num, oneInt)
+}
+
+// Float64 returns the nearest float64 value for d and a bool indicating
+// whether f represents d exactly.
+// For more details, see the documentation for big.Rat.Float64
+func (d Decimal) Float64() (f float64, exact bool) {
+ return d.Rat().Float64()
+}
+
+// String returns the string representation of the decimal
+// with the fixed point.
+//
+// Example:
+//
+//	d := New(-12345, -3)
+//	println(d.String())
+//
+// Output:
+//
+//	-12.345
+//
+func (d Decimal) String() string {
+ return d.string(true)
+}
+
+// StringFixed returns a rounded fixed-point string with places digits after
+// the decimal point.
+//
+// Example:
+//
+//	NewFromFloat(0).StringFixed(2) // output: "0.00"
+//	NewFromFloat(0).StringFixed(0) // output: "0"
+//	NewFromFloat(5.45).StringFixed(0) // output: "5"
+//	NewFromFloat(5.45).StringFixed(1) // output: "5.5"
+//	NewFromFloat(5.45).StringFixed(2) // output: "5.45"
+//	NewFromFloat(5.45).StringFixed(3) // output: "5.450"
+//	NewFromFloat(545).StringFixed(-1) // output: "550"
+//
+func (d Decimal) StringFixed(places int32) string {
+ rounded := d.Round(places)
+ return rounded.string(false)
+}
+
+// StringFixedBank returns a banker rounded fixed-point string with places digits
+// after the decimal point.
+//
+// Example:
+//
+//	NewFromFloat(0).StringFixedBank(2) // output: "0.00"
+//	NewFromFloat(0).StringFixedBank(0) // output: "0"
+//	NewFromFloat(5.45).StringFixedBank(0) // output: "5"
+//	NewFromFloat(5.45).StringFixedBank(1) // output: "5.4"
+//	NewFromFloat(5.45).StringFixedBank(2) // output: "5.45"
+//	NewFromFloat(5.45).StringFixedBank(3) // output: "5.450"
+//	NewFromFloat(545).StringFixedBank(-1) // output: "540"
+//
+func (d Decimal) StringFixedBank(places int32) string {
+ rounded := d.RoundBank(places)
+ return rounded.string(false)
+}
+
+// StringFixedCash returns a Swedish/Cash rounded fixed-point string. For
+// more details see the documentation at function RoundCash.
+func (d Decimal) StringFixedCash(interval uint8) string {
+ rounded := d.RoundCash(interval)
+ return rounded.string(false)
+}
+
+// Round rounds the decimal to places decimal places.
+// If places < 0, it will round the integer part to the nearest 10^(-places).
+//
+// Example:
+//
+//	NewFromFloat(5.45).Round(1).String() // output: "5.5"
+//	NewFromFloat(545).Round(-1).String() // output: "550"
+//
+func (d Decimal) Round(places int32) Decimal {
+ // truncate to places + 1
+ ret := d.rescale(-places - 1)
+
+ // add sign(d) * 0.5
+ if ret.value.Sign() < 0 {
+ ret.value.Sub(ret.value, fiveInt)
+ } else {
+ ret.value.Add(ret.value, fiveInt)
+ }
+
+ // floor for positive numbers, ceil for negative numbers
+ _, m := ret.value.DivMod(ret.value, tenInt, new(big.Int))
+ ret.exp++
+ if ret.value.Sign() < 0 && m.Cmp(zeroInt) != 0 {
+ ret.value.Add(ret.value, oneInt)
+ }
+
+ return ret
+}
+
+// RoundBank rounds the decimal to places decimal places.
+// If the final digit to round is equidistant from the nearest two integers the
+// rounded value is taken as the even number
+//
+// If places < 0, it will round the integer part to the nearest 10^(-places).
+//
+// Examples:
+//
+//	NewFromFloat(5.45).RoundBank(1).String() // output: "5.4"
+//	NewFromFloat(545).RoundBank(-1).String() // output: "540"
+//	NewFromFloat(5.46).RoundBank(1).String() // output: "5.5"
+//	NewFromFloat(546).RoundBank(-1).String() // output: "550"
+//	NewFromFloat(5.55).RoundBank(1).String() // output: "5.6"
+//	NewFromFloat(555).RoundBank(-1).String() // output: "560"
+//
+func (d Decimal) RoundBank(places int32) Decimal {
+
+ round := d.Round(places)
+ remainder := d.Sub(round).Abs()
+
+ half := New(5, -places-1)
+ if remainder.Cmp(half) == 0 && round.value.Bit(0) != 0 {
+ if round.value.Sign() < 0 {
+ round.value.Add(round.value, oneInt)
+ } else {
+ round.value.Sub(round.value, oneInt)
+ }
+ }
+
+ return round
+}
+
+// RoundCash aka Cash/Penny/öre rounding rounds decimal to a specific
+// interval. The amount payable for a cash transaction is rounded to the nearest
+// multiple of the minimum currency unit available. The following intervals are
+// available: 5, 10, 25, 50 and 100; any other number throws a panic.
+// 5: 5 cent rounding 3.43 => 3.45
+// 10: 10 cent rounding 3.45 => 3.50 (5 gets rounded up)
+// 25: 25 cent rounding 3.41 => 3.50
+// 50: 50 cent rounding 3.75 => 4.00
+// 100: 100 cent rounding 3.50 => 4.00
+// For more details: https://en.wikipedia.org/wiki/Cash_rounding
+func (d Decimal) RoundCash(interval uint8) Decimal {
+ var iVal *big.Int
+ switch interval {
+ case 5:
+ iVal = twentyInt
+ case 10:
+ iVal = tenInt
+ case 25:
+ iVal = fourInt
+ case 50:
+ iVal = twoInt
+ case 100:
+ iVal = oneInt
+ default:
+ panic(fmt.Sprintf("Decimal does not support this Cash rounding interval `%d`. Supported: 5, 10, 25, 50, 100", interval))
+ }
+ dVal := Decimal{
+ value: iVal,
+ }
+
+ // TODO: optimize those calculations to reduce the high allocations (~29 allocs).
+ return d.Mul(dVal).Round(0).Div(dVal).Truncate(2)
+}
+
+// Floor returns the nearest integer value less than or equal to d.
+func (d Decimal) Floor() Decimal {
+ d.ensureInitialized()
+
+ if d.exp >= 0 {
+ return d
+ }
+
+ exp := big.NewInt(10)
+
+ // NOTE(vadim): must negate after casting to prevent int32 overflow
+ exp.Exp(exp, big.NewInt(-int64(d.exp)), nil)
+
+ z := new(big.Int).Div(d.value, exp)
+ return Decimal{value: z, exp: 0}
+}
+
+// Ceil returns the nearest integer value greater than or equal to d.
+func (d Decimal) Ceil() Decimal {
+ d.ensureInitialized()
+
+ if d.exp >= 0 {
+ return d
+ }
+
+ exp := big.NewInt(10)
+
+ // NOTE(vadim): must negate after casting to prevent int32 overflow
+ exp.Exp(exp, big.NewInt(-int64(d.exp)), nil)
+
+ z, m := new(big.Int).DivMod(d.value, exp, new(big.Int))
+ if m.Cmp(zeroInt) != 0 {
+ z.Add(z, oneInt)
+ }
+ return Decimal{value: z, exp: 0}
+}
+
+// Truncate truncates off digits from the number, without rounding.
+//
+// NOTE: precision is the last digit that will not be truncated (must be >= 0).
+//
+// Example:
+//
+//	decimal.NewFromString("123.456").Truncate(2).String() // "123.45"
+//
+func (d Decimal) Truncate(precision int32) Decimal {
+ d.ensureInitialized()
+ if precision >= 0 && -precision > d.exp {
+ return d.rescale(-precision)
+ }
+ return d
+}
+
+// UnmarshalJSON implements the json.Unmarshaler interface.
+func (d *Decimal) UnmarshalJSON(decimalBytes []byte) error {
+ if string(decimalBytes) == "null" {
+ return nil
+ }
+
+ str, err := unquoteIfQuoted(decimalBytes)
+ if err != nil {
+ return fmt.Errorf("error decoding string '%s': %s", decimalBytes, err)
+ }
+
+ decimal, err := NewFromString(str)
+ *d = decimal
+ if err != nil {
+ return fmt.Errorf("error decoding string '%s': %s", str, err)
+ }
+ return nil
+}
+
+// MarshalJSON implements the json.Marshaler interface.
+func (d Decimal) MarshalJSON() ([]byte, error) {
+ var str string
+ if MarshalJSONWithoutQuotes {
+ str = d.String()
+ } else {
+ str = "\"" + d.String() + "\""
+ }
+ return []byte(str), nil
+}
+
+// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. It
+// expects the 4-byte big-endian exponent produced by MarshalBinary, followed
+// by the gob-encoded value.
+func (d *Decimal) UnmarshalBinary(data []byte) error {
+ // Extract the exponent
+ d.exp = int32(binary.BigEndian.Uint32(data[:4]))
+
+ // Extract the value
+ d.value = new(big.Int)
+ return d.value.GobDecode(data[4:])
+}
+
+// MarshalBinary implements the encoding.BinaryMarshaler interface.
+func (d Decimal) MarshalBinary() (data []byte, err error) {
+ // Write the exponent first since it's a fixed size
+ v1 := make([]byte, 4)
+ binary.BigEndian.PutUint32(v1, uint32(d.exp))
+
+ // Add the value
+ var v2 []byte
+ if v2, err = d.value.GobEncode(); err != nil {
+ return
+ }
+
+ // Return the byte array
+ data = append(v1, v2...)
+ return
+}
+
+// Scan implements the sql.Scanner interface for database deserialization.
+func (d *Decimal) Scan(value interface{}) error {
+ // first try to see if the data is stored in database as a Numeric datatype
+ switch v := value.(type) {
+
+ case float32:
+ *d = NewFromFloat(float64(v))
+ return nil
+
+ case float64:
+ // numeric in sqlite3 sends us float64
+ *d = NewFromFloat(v)
+ return nil
+
+ case int64:
+ // at least in sqlite3 when the value is 0 in db, the data is sent
+ // to us as an int64 instead of a float64 ...
+ *d = New(v, 0)
+ return nil
+
+ default:
+ // default is trying to interpret value stored as string
+ str, err := unquoteIfQuoted(v)
+ if err != nil {
+ return err
+ }
+ *d, err = NewFromString(str)
+ return err
+ }
+}
+
+// Value implements the driver.Valuer interface for database serialization.
+func (d Decimal) Value() (driver.Value, error) {
+ return d.String(), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface for XML
+// deserialization.
+func (d *Decimal) UnmarshalText(text []byte) error {
+ str := string(text)
+
+ dec, err := NewFromString(str)
+ *d = dec
+ if err != nil {
+ return fmt.Errorf("error decoding string '%s': %s", str, err)
+ }
+
+ return nil
+}
+
+// MarshalText implements the encoding.TextMarshaler interface for XML
+// serialization.
+func (d Decimal) MarshalText() (text []byte, err error) {
+ return []byte(d.String()), nil
+}
+
+// GobEncode implements the gob.GobEncoder interface for gob serialization.
+func (d Decimal) GobEncode() ([]byte, error) {
+ return d.MarshalBinary()
+}
+
+// GobDecode implements the gob.GobDecoder interface for gob serialization.
+func (d *Decimal) GobDecode(data []byte) error {
+ return d.UnmarshalBinary(data)
+}
+
+// StringScaled first scales the decimal then calls .String() on it.
+// NOTE: buggy, unintuitive, and DEPRECATED! Use StringFixed instead.
+func (d Decimal) StringScaled(exp int32) string {
+ return d.rescale(exp).String()
+}
+
+func (d Decimal) string(trimTrailingZeros bool) string {
+ if d.exp >= 0 {
+ return d.rescale(0).value.String()
+ }
+
+ abs := new(big.Int).Abs(d.value)
+ str := abs.String()
+
+ var intPart, fractionalPart string
+
+ // NOTE(vadim): this cast to int will cause bugs if d.exp == INT_MIN
+ // and you are on a 32-bit machine. Won't fix this super-edge case.
+ dExpInt := int(d.exp)
+ if len(str) > -dExpInt {
+ intPart = str[:len(str)+dExpInt]
+ fractionalPart = str[len(str)+dExpInt:]
+ } else {
+ intPart = "0"
+
+ num0s := -dExpInt - len(str)
+ fractionalPart = strings.Repeat("0", num0s) + str
+ }
+
+ if trimTrailingZeros {
+ i := len(fractionalPart) - 1
+ for ; i >= 0; i-- {
+ if fractionalPart[i] != '0' {
+ break
+ }
+ }
+ fractionalPart = fractionalPart[:i+1]
+ }
+
+ number := intPart
+ if len(fractionalPart) > 0 {
+ number += "." + fractionalPart
+ }
+
+ if d.value.Sign() < 0 {
+ return "-" + number
+ }
+
+ return number
+}
+
+func (d *Decimal) ensureInitialized() {
+ if d.value == nil {
+ d.value = new(big.Int)
+ }
+}
+
+// Min returns the smallest Decimal that was passed in the arguments.
+//
+// To call this function with an array, you must do:
+//
+//	Min(arr[0], arr[1:]...)
+//
+// This makes it harder to accidentally call Min with 0 arguments.
+func Min(first Decimal, rest ...Decimal) Decimal {
+ ans := first
+ for _, item := range rest {
+ if item.Cmp(ans) < 0 {
+ ans = item
+ }
+ }
+ return ans
+}
+
+// Max returns the largest Decimal that was passed in the arguments.
+//
+// To call this function with an array, you must do:
+//
+//	Max(arr[0], arr[1:]...)
+//
+// This makes it harder to accidentally call Max with 0 arguments.
+func Max(first Decimal, rest ...Decimal) Decimal {
+ ans := first
+ for _, item := range rest {
+ if item.Cmp(ans) > 0 {
+ ans = item
+ }
+ }
+ return ans
+}
+
+// Sum returns the combined total of the provided first and rest Decimals
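+//
+// Example:
+//
+//	Sum(New(1, 0), New(2, 0), New(3, 0)).String() // output: "6"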
+func Sum(first Decimal, rest ...Decimal) Decimal {
+ total := first
+ for _, item := range rest {
+ total = total.Add(item)
+ }
+
+ return total
+}
+
+// Avg returns the average value of the provided first and rest Decimals
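+//
+// Example:
+//
+//	Avg(New(1, 0), New(2, 0)).String() // output: "1.5"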
+func Avg(first Decimal, rest ...Decimal) Decimal {
+ count := New(int64(len(rest)+1), 0)
+ sum := Sum(first, rest...)
+ return sum.Div(count)
+}
+
+// RescalePair rescales two decimals to a common exponent (the minimal exp of the two decimals).
+func RescalePair(d1 Decimal, d2 Decimal) (Decimal, Decimal) {
+ d1.ensureInitialized()
+ d2.ensureInitialized()
+
+ if d1.exp == d2.exp {
+ return d1, d2
+ }
+
+ baseScale := min(d1.exp, d2.exp)
+ if baseScale != d1.exp {
+ return d1.rescale(baseScale), d2
+ }
+ return d1, d2.rescale(baseScale)
+}
+
+func min(x, y int32) int32 {
+ if x >= y {
+ return y
+ }
+ return x
+}
+
+func unquoteIfQuoted(value interface{}) (string, error) {
+ var bytes []byte
+
+ switch v := value.(type) {
+ case string:
+ bytes = []byte(v)
+ case []byte:
+ bytes = v
+ default:
+ return "", fmt.Errorf("could not convert value '%+v' to byte array of type '%T'",
+ value, value)
+ }
+
+ // If the amount is quoted, strip the quotes
+ if len(bytes) > 2 && bytes[0] == '"' && bytes[len(bytes)-1] == '"' {
+ bytes = bytes[1 : len(bytes)-1]
+ }
+ return string(bytes), nil
+}
+
+// NullDecimal represents a nullable decimal with compatibility for
+// scanning null values from the database.
+type NullDecimal struct {
+ Decimal Decimal
+ Valid bool
+}
+
+// Scan implements the sql.Scanner interface for database deserialization.
+func (d *NullDecimal) Scan(value interface{}) error {
+ if value == nil {
+ d.Valid = false
+ return nil
+ }
+ d.Valid = true
+ return d.Decimal.Scan(value)
+}
+
+// Value implements the driver.Valuer interface for database serialization.
+func (d NullDecimal) Value() (driver.Value, error) {
+ if !d.Valid {
+ return nil, nil
+ }
+ return d.Decimal.Value()
+}
+
+// UnmarshalJSON implements the json.Unmarshaler interface.
+func (d *NullDecimal) UnmarshalJSON(decimalBytes []byte) error {
+ if string(decimalBytes) == "null" {
+ d.Valid = false
+ return nil
+ }
+ d.Valid = true
+ return d.Decimal.UnmarshalJSON(decimalBytes)
+}
+
+// MarshalJSON implements the json.Marshaler interface.
+func (d NullDecimal) MarshalJSON() ([]byte, error) {
+ if !d.Valid {
+ return []byte("null"), nil
+ }
+ return d.Decimal.MarshalJSON()
+}
+
+// Trig functions
+
+// Atan returns the arctangent, in radians, of x.
+func (d Decimal) Atan() Decimal {
+ if d.Equal(NewFromFloat(0.0)) {
+ return d
+ }
+ if d.GreaterThan(NewFromFloat(0.0)) {
+ return d.satan()
+ }
+ return d.Neg().satan().Neg()
+}
+
+func (d Decimal) xatan() Decimal {
+ P0 := NewFromFloat(-8.750608600031904122785e-01)
+ P1 := NewFromFloat(-1.615753718733365076637e+01)
+ P2 := NewFromFloat(-7.500855792314704667340e+01)
+ P3 := NewFromFloat(-1.228866684490136173410e+02)
+ P4 := NewFromFloat(-6.485021904942025371773e+01)
+ Q0 := NewFromFloat(2.485846490142306297962e+01)
+ Q1 := NewFromFloat(1.650270098316988542046e+02)
+ Q2 := NewFromFloat(4.328810604912902668951e+02)
+ Q3 := NewFromFloat(4.853903996359136964868e+02)
+ Q4 := NewFromFloat(1.945506571482613964425e+02)
+ z := d.Mul(d)
+ b1 := P0.Mul(z).Add(P1).Mul(z).Add(P2).Mul(z).Add(P3).Mul(z).Add(P4).Mul(z)
+ b2 := z.Add(Q0).Mul(z).Add(Q1).Mul(z).Add(Q2).Mul(z).Add(Q3).Mul(z).Add(Q4)
+ z = b1.Div(b2)
+ z = d.Mul(z).Add(d)
+ return z
+}
+
+// satan reduces its argument (known to be positive)
+// to the range [0, 0.66] and calls xatan.
+func (d Decimal) satan() Decimal {
+ Morebits := NewFromFloat(6.123233995736765886130e-17) // pi/2 = PIO2 + Morebits
+ Tan3pio8 := NewFromFloat(2.41421356237309504880) // tan(3*pi/8)
+ pi := NewFromFloat(3.14159265358979323846264338327950288419716939937510582097494459)
+
+ if d.LessThanOrEqual(NewFromFloat(0.66)) {
+ return d.xatan()
+ }
+ if d.GreaterThan(Tan3pio8) {
+ return pi.Div(NewFromFloat(2.0)).Sub(NewFromFloat(1.0).Div(d).xatan()).Add(Morebits)
+ }
+ return pi.Div(NewFromFloat(4.0)).Add((d.Sub(NewFromFloat(1.0)).Div(d.Add(NewFromFloat(1.0)))).xatan()).Add(NewFromFloat(0.5).Mul(Morebits))
+}
+
+// sin coefficients
+var _sin = [...]Decimal{
+ NewFromFloat(1.58962301576546568060e-10), // 0x3de5d8fd1fd19ccd
+ NewFromFloat(-2.50507477628578072866e-8), // 0xbe5ae5e5a9291f5d
+ NewFromFloat(2.75573136213857245213e-6), // 0x3ec71de3567d48a1
+ NewFromFloat(-1.98412698295895385996e-4), // 0xbf2a01a019bfdf03
+ NewFromFloat(8.33333333332211858878e-3), // 0x3f8111111110f7d0
+ NewFromFloat(-1.66666666666666307295e-1), // 0xbfc5555555555548
+}
+
+// Sin returns the sine of the radian argument x.
+func (d Decimal) Sin() Decimal {
+ PI4A := NewFromFloat(7.85398125648498535156e-1) // 0x3fe921fb40000000, Pi/4 split into three parts
+ PI4B := NewFromFloat(3.77489470793079817668e-8) // 0x3e64442d00000000,
+ PI4C := NewFromFloat(2.69515142907905952645e-15) // 0x3ce8469898cc5170,
+ M4PI := NewFromFloat(1.273239544735162542821171882678754627704620361328125) // 4/pi
+
+ if d.Equal(NewFromFloat(0.0)) {
+ return d
+ }
+ // make argument positive but save the sign
+ sign := false
+ if d.LessThan(NewFromFloat(0.0)) {
+ d = d.Neg()
+ sign = true
+ }
+
+ j := d.Mul(M4PI).IntPart() // integer part of x/(Pi/4), as integer for tests on the phase angle
+ y := NewFromFloat(float64(j)) // integer part of x/(Pi/4), as float
+
+ // map zeros to origin
+ if j&1 == 1 {
+ j++
+ y = y.Add(NewFromFloat(1.0))
+ }
+ j &= 7 // octant modulo 2Pi radians (360 degrees)
+ // reflect in x axis
+ if j > 3 {
+ sign = !sign
+ j -= 4
+ }
+ z := d.Sub(y.Mul(PI4A)).Sub(y.Mul(PI4B)).Sub(y.Mul(PI4C)) // Extended precision modular arithmetic
+ zz := z.Mul(z)
+
+ if j == 1 || j == 2 {
+ w := zz.Mul(zz).Mul(_cos[0].Mul(zz).Add(_cos[1]).Mul(zz).Add(_cos[2]).Mul(zz).Add(_cos[3]).Mul(zz).Add(_cos[4]).Mul(zz).Add(_cos[5]))
+ y = NewFromFloat(1.0).Sub(NewFromFloat(0.5).Mul(zz)).Add(w)
+ } else {
+ y = z.Add(z.Mul(zz).Mul(_sin[0].Mul(zz).Add(_sin[1]).Mul(zz).Add(_sin[2]).Mul(zz).Add(_sin[3]).Mul(zz).Add(_sin[4]).Mul(zz).Add(_sin[5])))
+ }
+ if sign {
+ y = y.Neg()
+ }
+ return y
+}
+
+// cos coefficients
+var _cos = [...]Decimal{
+ NewFromFloat(-1.13585365213876817300e-11), // 0xbda8fa49a0861a9b
+ NewFromFloat(2.08757008419747316778e-9), // 0x3e21ee9d7b4e3f05
+ NewFromFloat(-2.75573141792967388112e-7), // 0xbe927e4f7eac4bc6
+ NewFromFloat(2.48015872888517045348e-5), // 0x3efa01a019c844f5
+ NewFromFloat(-1.38888888888730564116e-3), // 0xbf56c16c16c14f91
+ NewFromFloat(4.16666666666665929218e-2), // 0x3fa555555555554b
+}
+
+// Cos returns the cosine of the radian argument x.
+func (d Decimal) Cos() Decimal {
+
+ PI4A := NewFromFloat(7.85398125648498535156e-1) // 0x3fe921fb40000000, Pi/4 split into three parts
+ PI4B := NewFromFloat(3.77489470793079817668e-8) // 0x3e64442d00000000,
+ PI4C := NewFromFloat(2.69515142907905952645e-15) // 0x3ce8469898cc5170,
+ M4PI := NewFromFloat(1.273239544735162542821171882678754627704620361328125) // 4/pi
+
+ // make argument positive
+ sign := false
+ if d.LessThan(NewFromFloat(0.0)) {
+ d = d.Neg()
+ }
+
+ j := d.Mul(M4PI).IntPart() // integer part of x/(Pi/4), as integer for tests on the phase angle
+ y := NewFromFloat(float64(j)) // integer part of x/(Pi/4), as float
+
+ // map zeros to origin
+ if j&1 == 1 {
+ j++
+ y = y.Add(NewFromFloat(1.0))
+ }
+ j &= 7 // octant modulo 2Pi radians (360 degrees)
+ // reflect in x axis
+ if j > 3 {
+ sign = !sign
+ j -= 4
+ }
+ if j > 1 {
+ sign = !sign
+ }
+
+ z := d.Sub(y.Mul(PI4A)).Sub(y.Mul(PI4B)).Sub(y.Mul(PI4C)) // Extended precision modular arithmetic
+ zz := z.Mul(z)
+
+ if j == 1 || j == 2 {
+ y = z.Add(z.Mul(zz).Mul(_sin[0].Mul(zz).Add(_sin[1]).Mul(zz).Add(_sin[2]).Mul(zz).Add(_sin[3]).Mul(zz).Add(_sin[4]).Mul(zz).Add(_sin[5])))
+ } else {
+ w := zz.Mul(zz).Mul(_cos[0].Mul(zz).Add(_cos[1]).Mul(zz).Add(_cos[2]).Mul(zz).Add(_cos[3]).Mul(zz).Add(_cos[4]).Mul(zz).Add(_cos[5]))
+ y = NewFromFloat(1.0).Sub(NewFromFloat(0.5).Mul(zz)).Add(w)
+ }
+ if sign {
+ y = y.Neg()
+ }
+ return y
+}
+
+var _tanP = [...]Decimal{
+ NewFromFloat(-1.30936939181383777646e+4), // 0xc0c992d8d24f3f38
+ NewFromFloat(1.15351664838587416140e+6), // 0x413199eca5fc9ddd
+ NewFromFloat(-1.79565251976484877988e+7), // 0xc1711fead3299176
+}
+var _tanQ = [...]Decimal{
+ NewFromFloat(1.00000000000000000000e+0),
+ NewFromFloat(1.36812963470692954678e+4), //0x40cab8a5eeb36572
+ NewFromFloat(-1.32089234440210967447e+6), //0xc13427bc582abc96
+ NewFromFloat(2.50083801823357915839e+7), //0x4177d98fc2ead8ef
+ NewFromFloat(-5.38695755929454629881e+7), //0xc189afe03cbe5a31
+}
+
+// Tan returns the tangent of the radian argument x.
+func (d Decimal) Tan() Decimal {
+
+ PI4A := NewFromFloat(7.85398125648498535156e-1) // 0x3fe921fb40000000, Pi/4 split into three parts
+ PI4B := NewFromFloat(3.77489470793079817668e-8) // 0x3e64442d00000000,
+ PI4C := NewFromFloat(2.69515142907905952645e-15) // 0x3ce8469898cc5170,
+ M4PI := NewFromFloat(1.273239544735162542821171882678754627704620361328125) // 4/pi
+
+ if d.Equal(NewFromFloat(0.0)) {
+ return d
+ }
+
+ // make argument positive but save the sign
+ sign := false
+ if d.LessThan(NewFromFloat(0.0)) {
+ d = d.Neg()
+ sign = true
+ }
+
+ j := d.Mul(M4PI).IntPart() // integer part of x/(Pi/4), as integer for tests on the phase angle
+ y := NewFromFloat(float64(j)) // integer part of x/(Pi/4), as float
+
+ // map zeros to origin
+ if j&1 == 1 {
+ j++
+ y = y.Add(NewFromFloat(1.0))
+ }
+
+ z := d.Sub(y.Mul(PI4A)).Sub(y.Mul(PI4B)).Sub(y.Mul(PI4C)) // Extended precision modular arithmetic
+ zz := z.Mul(z)
+
+ if zz.GreaterThan(NewFromFloat(1e-14)) {
+ w := zz.Mul(_tanP[0].Mul(zz).Add(_tanP[1]).Mul(zz).Add(_tanP[2]))
+ x := zz.Add(_tanQ[1]).Mul(zz).Add(_tanQ[2]).Mul(zz).Add(_tanQ[3]).Mul(zz).Add(_tanQ[4])
+ y = z.Add(z.Mul(w.Div(x)))
+ } else {
+ y = z
+ }
+ if j&2 == 2 {
+ y = NewFromFloat(-1.0).Div(y)
+ }
+ if sign {
+ y = y.Neg()
+ }
+ return y
+}
diff --git a/vendor/github.com/shopspring/decimal/go.mod b/vendor/github.com/shopspring/decimal/go.mod
new file mode 100644
index 0000000000000..ae1b7aa3c7058
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/go.mod
@@ -0,0 +1,3 @@
+module github.com/shopspring/decimal
+
+go 1.13
diff --git a/vendor/github.com/shopspring/decimal/rounding.go b/vendor/github.com/shopspring/decimal/rounding.go
new file mode 100644
index 0000000000000..8008f55cb9801
--- /dev/null
+++ b/vendor/github.com/shopspring/decimal/rounding.go
@@ -0,0 +1,119 @@
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Multiprecision decimal numbers.
+// For floating-point formatting only; not general purpose.
+// Only operations are assign and (binary) left/right shift.
+// Can do binary floating point in multiprecision decimal precisely
+// because 2 divides 10; cannot do decimal floating point
+// in multiprecision binary precisely.
+
+package decimal
+
+type floatInfo struct {
+ mantbits uint
+ expbits uint
+ bias int
+}
+
+var float32info = floatInfo{23, 8, -127}
+var float64info = floatInfo{52, 11, -1023}
+
+// roundShortest rounds d (= mant * 2^exp) to the shortest number of digits
+// that will let the original floating point value be precisely reconstructed.
+func roundShortest(d *decimal, mant uint64, exp int, flt *floatInfo) {
+ // If mantissa is zero, the number is zero; stop now.
+ if mant == 0 {
+ d.nd = 0
+ return
+ }
+
+ // Compute upper and lower such that any decimal number
+ // between upper and lower (possibly inclusive)
+ // will round to the original floating point number.
+
+ // We may see at once that the number is already shortest.
+ //
+ // Suppose d is not denormal, so that 2^exp <= d < 10^dp.
+ // The closest shorter number is at least 10^(dp-nd) away.
+ // The lower/upper bounds computed below are at distance
+ // at most 2^(exp-mantbits).
+ //
+ // So the number is already shortest if 10^(dp-nd) > 2^(exp-mantbits),
+ // or equivalently log2(10)*(dp-nd) > exp-mantbits.
+ // It is true if 332/100*(dp-nd) >= exp-mantbits (log2(10) > 3.32).
+ minexp := flt.bias + 1 // minimum possible exponent
+ if exp > minexp && 332*(d.dp-d.nd) >= 100*(exp-int(flt.mantbits)) {
+ // The number is already shortest.
+ return
+ }
+
+ // d = mant << (exp - mantbits)
+ // Next highest floating point number is mant+1 << exp-mantbits.
+ // Our upper bound is halfway between, mant*2+1 << exp-mantbits-1.
+ upper := new(decimal)
+ upper.Assign(mant*2 + 1)
+ upper.Shift(exp - int(flt.mantbits) - 1)
+
+ // d = mant << (exp - mantbits)
+ // Next lowest floating point number is mant-1 << exp-mantbits,
+ // unless mant-1 drops the significant bit and exp is not the minimum exp,
+ // in which case the next lowest is mant*2-1 << exp-mantbits-1.
+ // Either way, call it mantlo << explo-mantbits.
+ // Our lower bound is halfway between, mantlo*2+1 << explo-mantbits-1.
+ var mantlo uint64
+ var explo int
+	if mant > 1<

+~> **NOTE:** You can only do this step **ONCE** after the upgrade of the `provider`. If the `explicit_resource_order` table exists in the state file and you run the `apply` command again it will apply all of the detected configuration file changes to the `azurerm_frontdoor` resource in Azure. If you feel you have done this step in error you will need to remove the `explicit_resource_order` table from your state file; however, modifying the state file is not advised and is intended for advanced users, administrators, and IT professionals only.
+
+## Import Your azurerm_frontdoor_custom_https_configuration Settings Into Your State File
+
+At this point, you have successfully upgraded your provider and modified your configuration file to be **v2.58.0** compliant. Next, you need to import your frontend endpoint `custom_https_configuration` settings into new `azurerm_frontdoor_custom_https_configuration` resources. To do this, add an `azurerm_frontdoor_custom_https_configuration` resource stub for each frontend endpoint to the end of your configuration file, so the resources can be imported into the state file without causing an error during `apply`. For this example, the stubs look like this:
+
+```hcl
+resource "azurerm_frontdoor_custom_https_configuration" "default" {
+}
+
+resource "azurerm_frontdoor_custom_https_configuration" "custom" {
+}
+```
+
+Once you have added these definitions to your configuration file, run the following commands to import the new resources into your state file:
+
+```
+terraform import azurerm_frontdoor_custom_https_configuration.default /subscriptions/{subscription}/resourceGroups/{resourceGroup}/providers/Microsoft.Network/frontDoors/{frontDoor}/customHttpsConfiguration/default
+
+terraform import azurerm_frontdoor_custom_https_configuration.custom /subscriptions/{subscription}/resourceGroups/{resourceGroup}/providers/Microsoft.Network/frontDoors/{frontDoor}/customHttpsConfiguration/custom
+```
+
+The output from these commands should look something like this:
+
+```
+> terraform import azurerm_frontdoor_custom_https_configuration.default /subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor/customHttpsConfiguration/default
+azurerm_frontdoor_custom_https_configuration.default: Importing from ID "/subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor/customHttpsConfiguration/default"...
+azurerm_frontdoor_custom_https_configuration.default: Import prepared!
+ Prepared azurerm_frontdoor_custom_https_configuration for import
+azurerm_frontdoor_custom_https_configuration.default: Refreshing state... [id=/subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor/customHttpsConfiguration/default]
+
+Import successful!
+
+The resources that were imported are shown above. These resources are now in
+your Terraform state and will henceforth be managed by Terraform.
+```
+
+```
+> terraform import azurerm_frontdoor_custom_https_configuration.custom /subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor/customHttpsConfiguration/custom
+azurerm_frontdoor_custom_https_configuration.custom: Importing from ID "/subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor/customHttpsConfiguration/custom"...
+azurerm_frontdoor_custom_https_configuration.custom: Import prepared!
+ Prepared azurerm_frontdoor_custom_https_configuration for import
+azurerm_frontdoor_custom_https_configuration.custom: Refreshing state... [id=/subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor/customHttpsConfiguration/custom]
+
+Import successful!
+
+The resources that were imported are shown above. These resources are now in
+your Terraform state and will henceforth be managed by Terraform.
+```
+
+Once the `azurerm_frontdoor_custom_https_configuration` stubs have been imported into your state file, you will need to update the stubs with the correct values that exist in Azure for your Front Door resource. In this example we will update our `azurerm_frontdoor_custom_https_configuration` stubs to have these configuration settings:
+
+```hcl
+resource "azurerm_frontdoor_custom_https_configuration" "default" {
+ frontend_endpoint_id = "${azurerm_frontdoor.example.id}/frontendEndpoints/${local.default_frontend_name}"
+ custom_https_provisioning_enabled = false
+}
+
+resource "azurerm_frontdoor_custom_https_configuration" "custom" {
+ frontend_endpoint_id = "${azurerm_frontdoor.example.id}/frontendEndpoints/${local.custom_frontend_name}"
+ custom_https_provisioning_enabled = true
+
+ custom_https_configuration {
+ certificate_source = "FrontDoor"
+ }
+}
+```
+
+Now that you have updated the `azurerm_frontdoor_custom_https_configuration` resources with the correct values, the last thing you need to do is run a `terraform plan` command; you will then be back in a state where Terraform can again manage your `azurerm_frontdoor` resource. If all of the steps were followed correctly, the output from the `plan` should look something like this:
+
+```
+> terraform plan
+
+Refreshing Terraform state in-memory prior to plan...
+The refreshed state will be used to calculate this plan, but will not be
+persisted to local or remote state storage.
+
+azurerm_resource_group.example: Refreshing state... [id=/subscriptions/XXXXXX/resourceGroups/example-rg]
+azurerm_frontdoor.example: Refreshing state... [id=/subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor]
+azurerm_frontdoor_custom_https_configuration.default: Refreshing state... [id=/subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor/customHttpsConfiguration/default]
+azurerm_frontdoor_custom_https_configuration.custom: Refreshing state... [id=/subscriptions/XXXXXX/resourceGroups/example-rg/providers/Microsoft.Network/frontDoors/exampleFrontdoor/customHttpsConfiguration/custom]
+
+------------------------------------------------------------------------
+
+No changes. Infrastructure is up-to-date.
+
+This means that Terraform did not detect any differences between your
+configuration and real physical resources that exist. As a result, no
+actions need to be performed.
+```
diff --git a/website/docs/guides/service_principal_client_certificate.html.markdown b/website/docs/guides/service_principal_client_certificate.html.markdown
index 25ff25fb66389..7e15305e3169a 100644
--- a/website/docs/guides/service_principal_client_certificate.html.markdown
+++ b/website/docs/guides/service_principal_client_certificate.html.markdown
@@ -73,7 +73,7 @@ To associate the public portion of the Client Certificate (the `*.crt` file) wit
The Public Key associated with the generated Certificate can be uploaded by selecting **Upload Certificate**, selecting the file which should be uploaded (in the example above, that'd be `service-principal.crt`) - and then hit **Add**.
-### Allowing the Service Principal to manage the Subscription
+### Allowing the Service Principal to manage the Subscription
Now that we've created the Application within Azure Active Directory and assigned the certificate we're using for authentication, we can now grant the Application permissions to manage the Subscription via its linked Service Principal. To do this, [navigate to the **Subscriptions** blade within the Azure Portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade), select the Subscription you wish to use, then click **Access Control (IAM)** and finally **Add** > **Add role assignment**.
diff --git a/website/docs/r/application_gateway.html.markdown b/website/docs/r/application_gateway.html.markdown
index be5e45c41405f..893ef9fcb20dc 100644
--- a/website/docs/r/application_gateway.html.markdown
+++ b/website/docs/r/application_gateway.html.markdown
@@ -256,9 +256,7 @@ A `frontend_ip_configuration` block supports the following:
* `private_ip_address` - (Optional) The Private IP Address to use for the Application Gateway.
-* `public_ip_address_id` - (Optional) The ID of a Public IP Address which the Application Gateway should use.
-
--> **NOTE:** The Allocation Method for this Public IP Address should be set to `Dynamic`.
+* `public_ip_address_id` - (Optional) The ID of a Public IP Address which the Application Gateway should use. The allocation method for the Public IP Address depends on the `sku` of this Application Gateway. Please refer to the [Azure documentation for public IP addresses](https://docs.microsoft.com/en-us/azure/virtual-network/public-ip-addresses#application-gateways) for details.
* `private_ip_address_allocation` - (Optional) The Allocation Method for the Private IP Address. Possible values are `Dynamic` and `Static`.
diff --git a/website/docs/r/consumption_budget_resource_group.html.markdown b/website/docs/r/consumption_budget_resource_group.html.markdown
new file mode 100644
index 0000000000000..5238694dc2f15
--- /dev/null
+++ b/website/docs/r/consumption_budget_resource_group.html.markdown
@@ -0,0 +1,192 @@
+---
+subcategory: "Consumption"
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_consumption_budget_resource_group"
+description: |-
+ Manages a Resource Group Consumption Budget.
+---
+
+# azurerm_consumption_budget_resource_group
+
+Manages a Resource Group Consumption Budget.
+
+## Example Usage
+
+```hcl
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "example" {
+ name = "example"
+ location = "eastus"
+}
+
+resource "azurerm_monitor_action_group" "example" {
+ name = "example"
+ resource_group_name = azurerm_resource_group.example.name
+ short_name = "example"
+}
+
+resource "azurerm_consumption_budget_resource_group" "example" {
+ name = "example"
+ resource_group_id = azurerm_resource_group.example.id
+
+ amount = 1000
+ time_grain = "Monthly"
+
+ time_period {
+ start_date = "2020-11-01T00:00:00Z"
+ end_date = "2020-12-01T00:00:00Z"
+ }
+
+ filter {
+ dimension {
+ name = "ResourceId"
+ values = [
+ azurerm_monitor_action_group.example.id,
+ ]
+ }
+
+ tag {
+ name = "foo"
+ values = [
+ "bar",
+ "baz",
+ ]
+ }
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+
+ contact_groups = [
+ azurerm_monitor_action_group.example.id,
+ ]
+
+ contact_roles = [
+ "Owner",
+ ]
+ }
+
+ notification {
+ enabled = false
+ threshold = 100.0
+ operator = "GreaterThan"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+```
+
+## Arguments Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name which should be used for this Resource Group Consumption Budget. Changing this forces a new Resource Group Consumption Budget to be created.
+
+* `resource_group_id` - (Required) The ID of the Resource Group to create the consumption budget for, in the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroup1`. Changing this forces a new Resource Group Consumption Budget to be created.
+
+* `amount` - (Required) The total amount of cost to track with the budget.
+
+* `time_grain` - (Optional) The time covered by a budget. Tracking of the amount will be reset based on the time grain. Must be one of `Monthly`, `Quarterly`, `Annually`, `BillingMonth`, `BillingQuarter`, or `BillingYear`. Defaults to `Monthly`.
+
+* `time_period` - (Required) A `time_period` block as defined below.
+
+* `notification` - (Required) One or more `notification` blocks as defined below.
+
+* `filter` - (Optional) A `filter` block as defined below.
+
+---
+
+A `filter` block supports the following:
+
+* `dimension` - (Optional) One or more `dimension` blocks as defined below to filter the budget on.
+
+* `tag` - (Optional) One or more `tag` blocks as defined below to filter the budget on.
+
+* `not` - (Optional) A `not` block as defined below to filter the budget on.
+
+---
+
+A `not` block supports the following:
+
+* `dimension` - (Optional) One `dimension` block as defined below to filter the budget on. Conflicts with `tag`.
+
+* `tag` - (Optional) One `tag` block as defined below to filter the budget on. Conflicts with `dimension`.
+
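As an illustrative sketch (resource group name assumed), a `not` block nests inside `filter` like this, so the budget tracks cost for everything that does not match the nested `dimension` or `tag`:

```hcl
filter {
  not {
    dimension {
      name   = "ResourceGroupName"
      values = ["example-rg"]
    }
  }
}
```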
+---
+
+A `notification` block supports the following:
+
+* `operator` - (Required) The comparison operator for the notification. Must be one of `EqualTo`, `GreaterThan`, or `GreaterThanOrEqualTo`.
+
+* `threshold` - (Required) Threshold value associated with a notification. The notification is sent when the cost exceeds the threshold. The value is always a percentage and must be between 0 and 1000.
+
+* `contact_emails` - (Optional) Specifies a list of email addresses to send the budget notification to when the threshold is exceeded.
+
+* `contact_groups` - (Optional) Specifies a list of Action Group IDs to send the budget notification to when the threshold is exceeded.
+
+* `contact_roles` - (Optional) Specifies a list of contact roles to send the budget notification to when the threshold is exceeded.
+
+* `enabled` - (Optional) Should the notification be enabled?
+
+---
+
+A `dimension` block supports the following:
+
+* `name` - (Required) The name of the column to use for the filter. The allowed values are `ChargeType`, `Frequency`, `InvoiceId`, `Meter`, `MeterCategory`, `MeterSubCategory`, `PartNumber`, `PricingModel`, `Product`, `ProductOrderId`, `ProductOrderName`, `PublisherType`, `ReservationId`, `ReservationName`, `ResourceGroupName`, `ResourceGuid`, `ResourceId`, `ResourceLocation`, `ResourceType`, `ServiceFamily`, `ServiceName`, `UnitOfMeasure`.
+
+* `operator` - (Optional) The operator to use for comparison. The only allowed value is `In`.
+
+* `values` - (Required) Specifies a list of values for the column.
+
+---
+
+A `tag` block supports the following:
+
+* `name` - (Required) The name of the tag to use for the filter.
+
+* `operator` - (Optional) The operator to use for comparison. The only allowed value is `In`.
+
+* `values` - (Required) Specifies a list of values for the tag.
+
+---
+
+A `time_period` block supports the following:
+
+* `start_date` - (Required) The start date for the budget. The start date must be the first of the month and must be less than the end date. The budget start date must be on or after June 1, 2017; a future start date should not be more than twelve months away, and a past start date should fall within the time grain period. Changing this forces a new Resource Group Consumption Budget to be created.
+
+* `end_date` - (Optional) The end date for the budget. If not set this will be 10 years after the start date.
+
+## Attributes Reference
+
+In addition to the Arguments listed above - the following Attributes are exported:
+
+* `id` - The ID of the Resource Group Consumption Budget.
+
+## Timeouts
+
+The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
+
+* `create` - (Defaults to 30 minutes) Used when creating the Resource Group Consumption Budget.
+* `read` - (Defaults to 5 minutes) Used when retrieving the Resource Group Consumption Budget.
+* `update` - (Defaults to 30 minutes) Used when updating the Resource Group Consumption Budget.
+* `delete` - (Defaults to 30 minutes) Used when deleting the Resource Group Consumption Budget.
+
+## Import
+
+Resource Group Consumption Budgets can be imported using the `resource id`, e.g.
+
+```shell
+terraform import azurerm_consumption_budget_resource_group.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroup1/providers/Microsoft.Consumption/budgets/resourceGroup1
+```
diff --git a/website/docs/r/consumption_budget_subscription.html.markdown b/website/docs/r/consumption_budget_subscription.html.markdown
new file mode 100644
index 0000000000000..25f8ae57d7d03
--- /dev/null
+++ b/website/docs/r/consumption_budget_subscription.html.markdown
@@ -0,0 +1,191 @@
+---
+subcategory: "Consumption"
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_consumption_budget_subscription"
+description: |-
+ Manages a Subscription Consumption Budget.
+---
+
+# azurerm_consumption_budget_subscription
+
+Manages a Subscription Consumption Budget.
+
+## Example Usage
+
+```hcl
+data "azurerm_subscription" "current" {}
+
+resource "azurerm_resource_group" "example" {
+ name = "example"
+ location = "eastus"
+}
+
+resource "azurerm_monitor_action_group" "example" {
+ name = "example"
+ resource_group_name = azurerm_resource_group.example.name
+ short_name = "example"
+}
+
+resource "azurerm_consumption_budget_subscription" "example" {
+ name = "example"
+ subscription_id = data.azurerm_subscription.current.subscription_id
+
+ amount = 1000
+ time_grain = "Monthly"
+
+ time_period {
+ start_date = "2020-11-01T00:00:00Z"
+ end_date = "2020-12-01T00:00:00Z"
+ }
+
+ filter {
+ dimension {
+ name = "ResourceGroupName"
+ values = [
+ azurerm_resource_group.example.name,
+ ]
+ }
+
+ tag {
+ name = "foo"
+ values = [
+ "bar",
+ "baz",
+ ]
+ }
+ }
+
+ notification {
+ enabled = true
+ threshold = 90.0
+ operator = "EqualTo"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+
+ contact_groups = [
+ azurerm_monitor_action_group.example.id,
+ ]
+
+ contact_roles = [
+ "Owner",
+ ]
+ }
+
+ notification {
+ enabled = false
+ threshold = 100.0
+ operator = "GreaterThan"
+
+ contact_emails = [
+ "foo@example.com",
+ "bar@example.com",
+ ]
+ }
+}
+```
+
+## Arguments Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name which should be used for this Subscription Consumption Budget. Changing this forces a new Subscription Consumption Budget to be created.
+
+* `subscription_id` - (Required) The ID of the Subscription to create the Consumption Budget for. Changing this forces a new Subscription Consumption Budget to be created.
+
+* `amount` - (Required) The total amount of cost to track with the budget.
+
+* `time_grain` - (Optional) The time covered by a budget. Tracking of the amount will be reset based on the time grain. Must be one of `Monthly`, `Quarterly`, `Annually`, `BillingMonth`, `BillingQuarter`, or `BillingYear`. Defaults to `Monthly`.
+
+* `time_period` - (Required) A `time_period` block as defined below.
+
+* `notification` - (Required) One or more `notification` blocks as defined below.
+
+* `filter` - (Optional) A `filter` block as defined below.
+
+---
+
+A `filter` block supports the following:
+
+* `dimension` - (Optional) One or more `dimension` blocks as defined below to filter the budget on.
+
+* `tag` - (Optional) One or more `tag` blocks as defined below to filter the budget on.
+
+* `not` - (Optional) A `not` block as defined below to filter the budget on.
+
+---
+
+A `not` block supports the following:
+
+* `dimension` - (Optional) One `dimension` block as defined below to filter the budget on. Conflicts with `tag`.
+
+* `tag` - (Optional) One `tag` block as defined below to filter the budget on. Conflicts with `dimension`.
+
+---
+
+A `notification` block supports the following:
+
+* `operator` - (Required) The comparison operator for the notification. Must be one of `EqualTo`, `GreaterThan`, or `GreaterThanOrEqualTo`.
+
+* `threshold` - (Required) Threshold value associated with a notification. The notification is sent when the cost exceeds the threshold. The value is always a percentage and must be between 0 and 1000.
+
+* `contact_emails` - (Optional) Specifies a list of email addresses to send the budget notification to when the threshold is exceeded.
+
+* `contact_groups` - (Optional) Specifies a list of Action Group IDs to send the budget notification to when the threshold is exceeded.
+
+* `contact_roles` - (Optional) Specifies a list of contact roles to send the budget notification to when the threshold is exceeded.
+
+* `enabled` - (Optional) Should the notification be enabled?
+
+---
+
+A `dimension` block supports the following:
+
+* `name` - (Required) The name of the column to use for the filter. The allowed values are `ChargeType`, `Frequency`, `InvoiceId`, `Meter`, `MeterCategory`, `MeterSubCategory`, `PartNumber`, `PricingModel`, `Product`, `ProductOrderId`, `ProductOrderName`, `PublisherType`, `ReservationId`, `ReservationName`, `ResourceGroupName`, `ResourceGuid`, `ResourceId`, `ResourceLocation`, `ResourceType`, `ServiceFamily`, `ServiceName`, `UnitOfMeasure`.
+
+* `operator` - (Optional) The operator to use for comparison. The only allowed value is `In`.
+
+* `values` - (Required) Specifies a list of values for the column.
+
+---
+
+A `tag` block supports the following:
+
+* `name` - (Required) The name of the tag to use for the filter.
+
+* `operator` - (Optional) The operator to use for comparison. The only allowed value is `In`.
+
+* `values` - (Required) Specifies a list of values for the tag.
+
+---
+
+A `time_period` block supports the following:
+
+* `start_date` - (Required) The start date for the budget. The start date must be the first of the month and must be less than the end date. The budget start date must be on or after June 1, 2017; a future start date should not be more than twelve months away, and a past start date should fall within the time grain period. Changing this forces a new Subscription Consumption Budget to be created.
+
+* `end_date` - (Optional) The end date for the budget. If not set this will be 10 years after the start date.
+
+## Attributes Reference
+
+In addition to the Arguments listed above - the following Attributes are exported:
+
+* `id` - The ID of the Subscription Consumption Budget.
+
+## Timeouts
+
+The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
+
+* `create` - (Defaults to 30 minutes) Used when creating the Subscription Consumption Budget.
+* `read` - (Defaults to 5 minutes) Used when retrieving the Subscription Consumption Budget.
+* `update` - (Defaults to 30 minutes) Used when updating the Subscription Consumption Budget.
+* `delete` - (Defaults to 30 minutes) Used when deleting the Subscription Consumption Budget.
+
+## Import
+
+Subscription Consumption Budgets can be imported using the `resource id`, e.g.
+
+```shell
+terraform import azurerm_consumption_budget_subscription.example /subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Consumption/budgets/subscription1
+```
diff --git a/website/docs/r/container_registry.html.markdown b/website/docs/r/container_registry.html.markdown
index d38068168ec40..f6309fb2612b1 100644
--- a/website/docs/r/container_registry.html.markdown
+++ b/website/docs/r/container_registry.html.markdown
@@ -30,6 +30,51 @@ resource "azurerm_container_registry" "acr" {
admin_enabled = false
georeplication_locations = ["East US", "West Europe"]
}
+```
+
+## Example Usage (Encryption)
+
+```hcl
+resource "azurerm_resource_group" "rg" {
+ name = "example-resources"
+ location = "West Europe"
+}
+
+resource "azurerm_container_registry" "acr" {
+ name = "containerRegistry1"
+ resource_group_name = azurerm_resource_group.rg.name
+ location = azurerm_resource_group.rg.location
+ sku = "Premium"
+
+ identity {
+ type = "UserAssigned"
+ identity_ids = [
+ azurerm_user_assigned_identity.example.id
+ ]
+ }
+
+ encryption {
+ enabled = true
+ key_vault_key_id = data.azurerm_key_vault_key.example.id
+ identity_client_id = azurerm_user_assigned_identity.example.client_id
+ }
+}
+
+resource "azurerm_user_assigned_identity" "example" {
+ resource_group_name = azurerm_resource_group.rg.name
+ location = azurerm_resource_group.rg.location
+
+ name = "registry-uai"
+}
+
+data "azurerm_key_vault_key" "example" {
+ name = "super-secret"
+ key_vault_id = data.azurerm_key_vault.existing.id
+}
```
## Argument Reference
@@ -76,6 +121,10 @@ The following arguments are supported:
* `trust_policy` - (Optional) A `trust_policy` block as documented below.
+* `identity` - (Optional) An `identity` block as documented below.
+
+* `encryption` - (Optional) An `encryption` block as documented below.
+
~> **NOTE:** `quarantine_policy_enabled`, `retention_policy` and `trust_policy` are only supported on resources with the `Premium` SKU.
`georeplications` supports the following:
@@ -118,6 +167,22 @@ The following arguments are supported:
* `enabled` - (Optional) Boolean value that indicates whether the policy is enabled.
+`identity` supports the following:
+
+* `type` - (Required) The type of Managed Identity which should be assigned to the Container Registry. Possible values are `SystemAssigned`, `UserAssigned` and `SystemAssigned, UserAssigned`.
+
+* `identity_ids` - (Optional) A list of User Managed Identity ID's which should be assigned to the Container Registry.
+
+`encryption` supports the following:
+
+* `enabled` - (Optional) Boolean value that indicates whether encryption is enabled.
+
+* `key_vault_key_id` - (Required) The ID of the Key Vault Key.
+
+* `identity_client_id` - (Required) The client ID of the managed identity associated with the encryption key.
+
+~> **NOTE:** The managed identity used in `encryption` also needs to be part of the `identity` block under `identity_ids`.
+
---
## Attributes Reference
diff --git a/website/docs/r/cosmosdb_account.html.markdown b/website/docs/r/cosmosdb_account.html.markdown
index d32b91f71d517..4c62aad392d4d 100644
--- a/website/docs/r/cosmosdb_account.html.markdown
+++ b/website/docs/r/cosmosdb_account.html.markdown
@@ -116,6 +116,10 @@ The following arguments are supported:
* `backup` - (Optional) A `backup` block as defined below.
+* `cors_rule` - (Optional) A `cors_rule` block as defined below.
+
+* `identity` - (Optional) An `identity` block as defined below.
+
---
`consistency_policy` Configures the database consistency and supports the following:
@@ -160,6 +164,26 @@ A `backup` block supports the following:
* `retention_in_hours` - (Optional) The time in hours that each backup is retained. This is configurable only when `type` is `Periodic`. Possible values are between 8 and 720.
+---
+
+A `cors_rule` block supports the following:
+
+* `allowed_headers` - (Required) A list of headers that are allowed to be a part of the cross-origin request.
+
+* `allowed_methods` - (Required) A list of HTTP methods that are allowed to be executed by the origin. Valid options are `DELETE`, `GET`, `HEAD`, `MERGE`, `POST`, `OPTIONS`, `PUT` or `PATCH`.
+
+* `allowed_origins` - (Required) A list of origin domains that will be allowed by CORS.
+
+* `exposed_headers` - (Required) A list of response headers that are exposed to CORS clients.
+
+* `max_age_in_seconds` - (Required) The number of seconds the client should cache a preflight response.
+
+---
+
+An `identity` block supports the following:
+
+* `type` - (Required) Specifies the type of Managed Service Identity that should be configured on this Cosmos Account. The only possible value is `SystemAssigned`.
+
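+A hedged sketch of how the `cors_rule` and `identity` blocks might sit inside an `azurerm_cosmosdb_account` resource (the account's other required arguments are elided, and the values shown are illustrative only):
+
+```hcl
+resource "azurerm_cosmosdb_account" "example" {
+  # ... other required arguments elided ...
+
+  cors_rule {
+    allowed_headers    = ["x-ms-meta-data*"]
+    allowed_methods    = ["GET", "POST"]
+    allowed_origins    = ["https://example.com"]
+    exposed_headers    = ["x-ms-meta-*"]
+    max_age_in_seconds = 200
+  }
+
+  identity {
+    type = "SystemAssigned"
+  }
+}
+```
+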
## Attributes Reference
The following attributes are exported:
@@ -182,6 +206,15 @@ The following attributes are exported:
* `connection_strings` - A list of connection strings available for this CosmosDB account.
+---
+
+An `identity` block exports the following:
+
+* `principal_id` - The Principal ID associated with this Managed Service Identity.
+
+* `tenant_id` - The Tenant ID associated with this Managed Service Identity.
+
## Timeouts
The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
diff --git a/website/docs/r/cosmosdb_cassandra_table.html.markdown b/website/docs/r/cosmosdb_cassandra_table.html.markdown
index 9c12adb5ae50d..520096c055790 100644
--- a/website/docs/r/cosmosdb_cassandra_table.html.markdown
+++ b/website/docs/r/cosmosdb_cassandra_table.html.markdown
@@ -78,6 +78,10 @@ The following arguments are supported:
* `throughput` - (Optional) The throughput of Cassandra KeySpace (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon database creation otherwise it cannot be updated without a manual terraform destroy-apply.
+* `default_ttl` - (Optional) Time to live of the Cosmos DB Cassandra table. Possible values are `-1` or greater, where `-1` means the Cassandra table never expires.
+
+* `analytical_storage_ttl` - (Optional) Time to live of the Analytical Storage. Possible values are `-1` or greater, where `-1` means the Analytical Storage never expires. Changing this forces a new resource to be created.
+
~> **Note:** throughput has a maximum value of `1000000` unless a higher limit is requested via Azure Support
* `autoscale_settings` - (Optional) An `autoscale_settings` block as defined below. This must be set upon database creation otherwise it cannot be updated without a manual terraform destroy-apply.
diff --git a/website/docs/r/cosmosdb_mongo_collection.html.markdown b/website/docs/r/cosmosdb_mongo_collection.html.markdown
index 4f53bf0a3169d..192d82530af2c 100644
--- a/website/docs/r/cosmosdb_mongo_collection.html.markdown
+++ b/website/docs/r/cosmosdb_mongo_collection.html.markdown
@@ -45,6 +45,7 @@ The following arguments are supported:
* `database_name` - (Required) The name of the Cosmos DB Mongo Database in which the Cosmos DB Mongo Collection is created. Changing this forces a new resource to be created.
* `default_ttl_seconds` - (Required) The default Time To Live in seconds. If the value is `-1` or `0`, items are not automatically expired.
* `shard_key` - (Required) The name of the key to partition on for sharding. There must not be any other unique index keys.
+* `analytical_storage_ttl` - (Optional) The default time to live of Analytical Storage for this Mongo Collection. If present and the value is set to `-1`, it is equal to infinity, and items don’t expire by default. If present and the value is set to some number `n` – items will expire `n` seconds after their last modified time.
* `index` - (Optional) One or more `index` blocks as defined below.
* `throughput` - (Optional) The throughput of the MongoDB collection (RU/s). Must be set in increments of `100`. The minimum value is `400`. This must be set upon database creation otherwise it cannot be updated without a manual terraform destroy-apply.
* `autoscale_settings` - (Optional) An `autoscale_settings` block as defined below. This must be set upon database creation otherwise it cannot be updated without a manual terraform destroy-apply. Requires `shard_key` to be set.
diff --git a/website/docs/r/cosmosdb_sql_container.html.markdown b/website/docs/r/cosmosdb_sql_container.html.markdown
index fc810cabfff7d..7d11bf9909977 100644
--- a/website/docs/r/cosmosdb_sql_container.html.markdown
+++ b/website/docs/r/cosmosdb_sql_container.html.markdown
@@ -72,6 +72,8 @@ The following arguments are supported:
* `default_ttl` - (Optional) The default time to live of SQL container. If missing, items are not expired automatically. If present and the value is set to `-1`, it is equal to infinity, and items don’t expire by default. If present and the value is set to some number `n` – items will expire `n` seconds after their last modified time.
+* `analytical_storage_ttl` - (Optional) The default time to live of Analytical Storage for this SQL container. If present and the value is set to `-1`, it is equal to infinity, and items don’t expire by default. If present and the value is set to some number `n` – items will expire `n` seconds after their last modified time.
+
* `conflict_resolution_policy` - (Optional) A `conflict_resolution_policy` blocks as defined below.
---
diff --git a/website/docs/r/data_factory_integration_runtime_azure_ssis.html.markdown b/website/docs/r/data_factory_integration_runtime_azure_ssis.html.markdown
index 8d21669760f2f..17f714916b0d7 100644
--- a/website/docs/r/data_factory_integration_runtime_azure_ssis.html.markdown
+++ b/website/docs/r/data_factory_integration_runtime_azure_ssis.html.markdown
@@ -54,7 +54,7 @@ The following arguments are supported:
* `edition` - (Optional) The Azure-SSIS Integration Runtime edition. Valid values are `Standard` and `Enterprise`. Defaults to `Standard`.
-* `license_type` - (Optional) The type of the license that is used. Valid values are `LicenseIncluded` and `BasePrize`. Defaults to `LicenseIncluded`.
+* `license_type` - (Optional) The type of the license that is used. Valid values are `LicenseIncluded` and `BasePrice`. Defaults to `LicenseIncluded`.
* `catalog_info` - (Optional) A `catalog_info` block as defined below.
diff --git a/website/docs/r/frontdoor.html.markdown b/website/docs/r/frontdoor.html.markdown
index ba2cbbcbc67bb..9f2c8f1fb3136 100644
--- a/website/docs/r/frontdoor.html.markdown
+++ b/website/docs/r/frontdoor.html.markdown
@@ -19,9 +19,9 @@ Below are some of the key scenarios that Azure Front Door Service addresses:
!> **Be Aware:** Azure is rolling out a breaking change on Friday 9th April which may cause issues with the CDN/FrontDoor resources. [More information is available in this Github issue](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11231) - however unfortunately this may necessitate a breaking change to the CDN and FrontDoor resources, more information will be posted [in the Github issue](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11231) as the necessary changes are identified.
-!> **Breaking Provider Change:** The `custom_https_provisioning_enabled` field and the `custom_https_configuration` block have been removed from the `azurerm_frontdoor` resource in the v2.58.0 provider due to changes made by the service team. If you wish to enable the custom https configuration functionality within your `azurerm_frontdoor` resource moving forward you will need to define a separate `azurerm_frontdoor_custom_https_configuration` block in your configuration file.
+!> **BREAKING CHANGE:** The `custom_https_provisioning_enabled` field and the `custom_https_configuration` block have been removed from the `azurerm_frontdoor` resource in the `v2.58.0` provider due to changes made by the service team. If you wish to enable the custom https configuration functionality within your `azurerm_frontdoor` resource moving forward you will need to define a separate `azurerm_frontdoor_custom_https_configuration` block in your configuration file.
-!> **Breaking Behavior Change:** With the release of the v2.58.0 provider, if you run the `apply` command against an existing Front Door resource the changes will not be applied. This will only happen once with preexisting Front Door instances and will not affect newly provisioned Front Door resources. This change in behavior in Terraform is due to an issue where the underlying service teams API is now returning the response JSON out of order from the way it was sent to the resource provider by Terraform causing unexpected discrepancies in the `plan` after the resource has been provisioned. This will only happen one time, to avoid unwanted changes from being provisioned, once the `explicit_resource_order` mapping structure has been persisted to the state file the resource will resume functioning normally.
+!> **BREAKING CHANGE:** With the release of the `v2.58.0` provider, if you run the `apply` command against an existing Front Door resource it **will not** apply the detected changes. Instead it will persist the `explicit_resource_order` mapping structure to the state file. Once this operation has completed the resource will resume functioning normally. This change in behavior in Terraform is due to an issue where the underlying service team's API now returns the response JSON out of order from the way it was sent to the resource via Terraform, causing unexpected discrepancies in the `plan` after the resource has been provisioned. If your pre-existing Front Door instance contains `custom_https_configuration` blocks there are additional steps that will need to be completed to successfully migrate your Front Door onto the `v2.58.0` provider, which [can be found in this guide](../guides/2.58.0-frontdoor-upgrade-guide.html).
## Example Usage
diff --git a/website/docs/r/frontdoor_custom_https_configuration.html.markdown b/website/docs/r/frontdoor_custom_https_configuration.html.markdown
index 5d209a530ae87..b90b369005dca 100644
--- a/website/docs/r/frontdoor_custom_https_configuration.html.markdown
+++ b/website/docs/r/frontdoor_custom_https_configuration.html.markdown
@@ -10,15 +10,13 @@ description: |-
Manages the Custom Https Configuration for an Azure Front Door Frontend Endpoint.
-~> **NOTE:** Custom https configurations for a Front Door Frontend Endpoint can be defined both within [the `azurerm_frontdoor` resource](frontdoor.html) via the `custom_https_configuration` block and by using a separate resource, as described in the following sections.
-
-> **NOTE:** Defining custom https configurations using a separate `azurerm_frontdoor_custom_https_configuration` resource allows for parallel creation/update.
--> **NOTE:** UPCOMING BREAKING CHANGE: In order to address the ordering issue we have changed the design on how to retrieve existing sub resources such as frontend endpoints. Existing design will be deprecated and will result in an incorrect configuration. Please refer to the updated documentation below for more information.
+!> **BREAKING CHANGE:** In order to address the ordering issue we have changed the design on how to retrieve existing sub resources such as frontend endpoints. Existing design will be deprecated and will result in an incorrect configuration. Please refer to the updated documentation below for more information.
-!> **Be Aware:** Azure is rolling out a breaking change on Friday 9th April which may cause issues with the CDN/FrontDoor resources. [More information is available in this Github issue](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11231) - however unfortunately this may necessitate a breaking change to the CDN and FrontDoor resources, more information will be posted [in the Github issue](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11231) as the necessary changes are identified.
+!> **BREAKING CHANGE:** The `resource_group_name` field has been removed as of the `v2.58.0` provider release. If the `resource_group_name` field has been defined in your current `azurerm_frontdoor_custom_https_configuration` resource configuration file please remove it, otherwise you will receive an `An argument named "resource_group_name" is not expected here.` error. If your pre-existing Front Door instance contained inline `custom_https_configuration` blocks there are additional steps that will need to be completed to successfully migrate your Front Door onto the `v2.58.0` provider, which [can be found in this guide](../guides/2.58.0-frontdoor-upgrade-guide.html).
-!> **Breaking Provider Change:** The `resource_group_name` field has been removed as of the v2.58.0 provider release. If the `resource_group_name` field has been defined in your current `azurerm_frontdoor_custom_https_configuration` resource configuration file please remove it else you will receive a `An argument named "resource_group_name" is not expected here.` error.
+!> **Be Aware:** Azure is rolling out a breaking change on Friday 9th April which may cause issues with the CDN/FrontDoor resources. [More information is available in this Github issue](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11231) - however unfortunately this may necessitate a breaking change to the CDN and FrontDoor resources, more information will be posted [in the Github issue](https://github.com/terraform-providers/terraform-provider-azurerm/issues/11231) as the necessary changes are identified.
```hcl
resource "azurerm_resource_group" "example" {
@@ -144,8 +142,8 @@ The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/d
## Import
-Front Door Custom Https Configurations can be imported using the `resource id` of the Frontend Endpoint, e.g.
+Front Door Custom Https Configurations can be imported using the `resource id` of the Front Door Custom Https Configuration, e.g.
```shell
-terraform import azurerm_frontdoor_custom_https_configuration.example_custom_https_1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Network/frontDoors/frontdoor1/frontendEndpoints/endpoint1
+terraform import azurerm_frontdoor_custom_https_configuration.example_custom_https_1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Network/frontDoors/frontdoor1/customHttpsConfiguration/endpoint1
```
diff --git a/website/docs/r/hdinsight_kafka_cluster.html.markdown b/website/docs/r/hdinsight_kafka_cluster.html.markdown
index 76218f631cfee..73957d6a8f63d 100644
--- a/website/docs/r/hdinsight_kafka_cluster.html.markdown
+++ b/website/docs/r/hdinsight_kafka_cluster.html.markdown
@@ -103,7 +103,9 @@ The following arguments are supported:
* `tier` - (Required) Specifies the Tier which should be used for this HDInsight Kafka Cluster. Possible values are `Standard` or `Premium`. Changing this forces a new resource to be created.
-* `min_tls_version` - (Optional) The minimal supported TLS version. Possible values are 1.0, 1.1 or 1.2. Changing this forces a new resource to be created.
+* `min_tls_version` - (Optional) The minimal supported TLS version. Possible values are `1.0`, `1.1` or `1.2`. Changing this forces a new resource to be created.
+
+* `encryption_in_transit_enabled` - (Optional) Whether encryption in transit is enabled for this HDInsight Kafka Cluster. Changing this forces a new resource to be created.
~> **NOTE:** Starting on June 30, 2020, Azure HDInsight will enforce TLS 1.2 or later versions for all HTTPS connections. For more information, see [Azure HDInsight TLS 1.2 Enforcement](https://azure.microsoft.com/en-us/updates/azure-hdinsight-tls-12-enforcement/).
diff --git a/website/docs/r/healthcare_service.html.markdown b/website/docs/r/healthcare_service.html.markdown
index d65d21a92ed79..33f9f9d1fce0b 100644
--- a/website/docs/r/healthcare_service.html.markdown
+++ b/website/docs/r/healthcare_service.html.markdown
@@ -61,6 +61,7 @@ The following arguments are supported:
~> **Please Note** In order to use a `Custom Key` from Key Vault for encryption you must grant Azure Cosmos DB Service access to your key vault. For instructions on how to configure your Key Vault correctly please refer to the [product documentation](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-cmk#add-an-access-policy-to-your-azure-key-vault-instance)
* `cors_configuration` - (Optional) A `cors_configuration` block as defined below.
+* `public_network_access_enabled` - (Optional) Whether public network access is enabled or disabled for this service instance.
* `kind` - (Optional) The type of the service. Values at time of publication are: `fhir`, `fhir-Stu3` and `fhir-R4`. Default value is `fhir`.
* `tags` - (Optional) A mapping of tags to assign to the resource.
diff --git a/website/docs/r/machine_learning_inference_cluster.html.markdown b/website/docs/r/machine_learning_inference_cluster.html.markdown
new file mode 100644
index 0000000000000..9de3e2932a6a4
--- /dev/null
+++ b/website/docs/r/machine_learning_inference_cluster.html.markdown
@@ -0,0 +1,166 @@
+---
+subcategory: "Machine Learning"
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_machine_learning_inference_cluster"
+description: |-
+ Manages a Machine Learning Inference Cluster.
+---
+
+# azurerm_machine_learning_inference_cluster
+
+Manages a Machine Learning Inference Cluster.
+
+~> **NOTE:** The Machine Learning Inference Cluster resource is used to attach an existing AKS cluster to the Machine Learning Workspace, it doesn't create the AKS cluster itself. Therefore it can only be created and deleted, not updated. Any change to the configuration will recreate the resource.
+
+## Example Usage
+
+```hcl
+data "azurerm_client_config" "current" {}
+
+resource "azurerm_resource_group" "example" {
+ name = "example-rg"
+ location = "west europe"
+ tags = {
+ "stage" = "example"
+ }
+}
+
+resource "azurerm_application_insights" "example" {
+ name = "example-ai"
+ location = azurerm_resource_group.example.location
+ resource_group_name = azurerm_resource_group.example.name
+ application_type = "web"
+}
+
+resource "azurerm_key_vault" "example" {
+ name = "example-kv"
+ location = azurerm_resource_group.example.location
+ resource_group_name = azurerm_resource_group.example.name
+ tenant_id = data.azurerm_client_config.current.tenant_id
+
+ sku_name = "standard"
+
+ purge_protection_enabled = true
+}
+
+resource "azurerm_storage_account" "example" {
+ name = "examplesa"
+ location = azurerm_resource_group.example.location
+ resource_group_name = azurerm_resource_group.example.name
+ account_tier = "Standard"
+ account_replication_type = "LRS"
+}
+
+resource "azurerm_machine_learning_workspace" "example" {
+ name = "example-mlw"
+ location = azurerm_resource_group.example.location
+ resource_group_name = azurerm_resource_group.example.name
+ application_insights_id = azurerm_application_insights.example.id
+ key_vault_id = azurerm_key_vault.example.id
+ storage_account_id = azurerm_storage_account.example.id
+
+ identity {
+ type = "SystemAssigned"
+ }
+}
+
+resource "azurerm_virtual_network" "example" {
+ name = "example-vnet"
+ address_space = ["10.1.0.0/16"]
+ location = azurerm_resource_group.example.location
+ resource_group_name = azurerm_resource_group.example.name
+}
+
+resource "azurerm_subnet" "example" {
+ name = "example-subnet"
+ resource_group_name = azurerm_resource_group.example.name
+ virtual_network_name = azurerm_virtual_network.example.name
+ address_prefix = "10.1.0.0/24"
+}
+
+resource "azurerm_kubernetes_cluster" "example" {
+ name = "example-aks"
+ location = azurerm_resource_group.example.location
+ resource_group_name = azurerm_resource_group.example.name
+
+ default_node_pool {
+ name = "default"
+ node_count = 3
+ vm_size = "Standard_D3_v2"
+ vnet_subnet_id = azurerm_subnet.example.id
+ }
+
+ identity {
+ type = "SystemAssigned"
+ }
+}
+
+resource "azurerm_machine_learning_inference_cluster" "example" {
+ name = "example"
+ location = azurerm_resource_group.example.location
+ cluster_purpose = "FastProd"
+ kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
+ description = "This is an example cluster used with Terraform"
+
+ machine_learning_workspace_id = azurerm_machine_learning_workspace.example.id
+
+ tags = {
+ "stage" = "example"
+ }
+}
+```
+
+## Arguments Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name which should be used for this Machine Learning Inference Cluster. Changing this forces a new Machine Learning Inference Cluster to be created.
+
+* `kubernetes_cluster_id` - (Required) The ID of the Kubernetes Cluster. Changing this forces a new Machine Learning Inference Cluster to be created.
+
+* `location` - (Required) The Azure Region where the Machine Learning Inference Cluster should exist. Changing this forces a new Machine Learning Inference Cluster to be created.
+
+* `machine_learning_workspace_id` - (Required) The ID of the Machine Learning Workspace. Changing this forces a new Machine Learning Inference Cluster to be created.
+
+* `cluster_purpose` - (Optional) The purpose of the Inference Cluster. Possible values are `DevTest`, `DenseProd` and `FastProd`. Use `DevTest` for development and testing workloads. Defaults to `FastProd`, which is recommended for production workloads. Changing this forces a new Machine Learning Inference Cluster to be created.
+
+~> **NOTE:** When creating or attaching a cluster, if the cluster will be used for production (`cluster_purpose = "FastProd"`), then it must contain at least 12 virtual CPUs. The number of virtual CPUs can be calculated by multiplying the number of nodes in the cluster by the number of cores provided by the VM size selected. For example, if you use a VM size of "Standard_D3_v2", which has 4 virtual cores, then you should select 3 or greater as the number of nodes.
+
+* `description` - (Optional) The description of the Machine Learning compute.
+
+* `ssl` - (Optional) An `ssl` block as defined below.
+
+* `tags` - (Optional) A mapping of tags which should be assigned to the Machine Learning Inference Cluster. Changing this forces a new Machine Learning Inference Cluster to be created.
+
+---
+
+An `ssl` block supports the following:
+
+* `cert` - (Optional) The certificate for the SSL configuration. Changing this forces a new Machine Learning Inference Cluster to be created.
+
+* `cname` - (Optional) The CNAME of the SSL configuration. Changing this forces a new Machine Learning Inference Cluster to be created.
+
+* `key` - (Optional) The key content for the SSL configuration. Changing this forces a new Machine Learning Inference Cluster to be created.
+
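+For illustration (the certificate and key paths and the CNAME below are placeholders, not working values), an `ssl` block on the inference cluster resource might look like:
+
+```hcl
+resource "azurerm_machine_learning_inference_cluster" "example" {
+  # ... arguments as in the example above ...
+
+  ssl {
+    cert  = file("cert.pem") # PEM-encoded certificate (placeholder path)
+    key   = file("key.pem")  # PEM-encoded private key (placeholder path)
+    cname = "ml.example.com" # placeholder CNAME
+  }
+}
+```
+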
+## Attributes Reference
+
+In addition to the Arguments listed above - the following Attributes are exported:
+
+* `id` - The ID of the Machine Learning Inference Cluster.
+
+## Timeouts
+
+The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
+
+* `create` - (Defaults to 30 minutes) Used when creating the Machine Learning Inference Cluster.
+* `read` - (Defaults to 5 minutes) Used when retrieving the Machine Learning Inference Cluster.
+* `delete` - (Defaults to 30 minutes) Used when deleting the Machine Learning Inference Cluster.
+
+## Import
+
+Machine Learning Inference Clusters can be imported using the `resource id`, e.g.
+
+```shell
+terraform import azurerm_machine_learning_inference_cluster.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resGroup1/providers/Microsoft.MachineLearningServices/workspaces/workspace1/computes/cluster1
+```
diff --git a/website/docs/r/managed_disk.html.markdown b/website/docs/r/managed_disk.html.markdown
index 2b8c0ec8e677b..c2ba45e09f504 100644
--- a/website/docs/r/managed_disk.html.markdown
+++ b/website/docs/r/managed_disk.html.markdown
@@ -117,6 +117,11 @@ The following arguments are supported:
* `storage_account_id` - (Optional) The ID of the Storage Account where the `source_uri` is located. Required when `create_option` is set to `Import`. Changing this forces a new resource to be created.
+* `tier` - (Optional) The disk performance tier to use. Possible values are documented [here](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-change-performance). This feature is currently supported only for premium SSDs.
+
+~> **NOTE:** Changing this value is disruptive if the disk is attached to a Virtual Machine. The VM will be shut down and de-allocated as required by Azure to action the change. Terraform will attempt to start the machine again after the update if it was in a `running` state when the apply was started.
+
* `tags` - (Optional) A mapping of tags to assign to the resource.
* `zones` - (Optional) A collection containing the availability zone to allocate the Managed Disk in.
diff --git a/website/docs/r/media_asset_filter.html.markdown b/website/docs/r/media_asset_filter.html.markdown
index a85fadc4ef97f..6e942737c330d 100644
--- a/website/docs/r/media_asset_filter.html.markdown
+++ b/website/docs/r/media_asset_filter.html.markdown
@@ -143,7 +143,7 @@ A `selection` block supports the following:
A `track_selection` block supports the following:
-* `condition` - (Optional) One or more `condition` blocks as defined above.
+* `condition` - (Required) One or more `condition` blocks as defined above.
## Attributes Reference
diff --git a/website/docs/r/media_services_account.html.markdown b/website/docs/r/media_services_account.html.markdown
index a033514713e10..0da723bbb9dcc 100644
--- a/website/docs/r/media_services_account.html.markdown
+++ b/website/docs/r/media_services_account.html.markdown
@@ -50,6 +50,15 @@ The following arguments are supported:
* `storage_account` - (Required) One or more `storage_account` blocks as defined below.
+* `identity` - (Optional) An `identity` block as defined below.
+
+* `storage_authentication_type` - (Optional) Specifies the storage authentication type. Possible values are `ManagedIdentity` and `System`.
+
+* `key_delivery_access_control` - (Optional) A `key_delivery_access_control` block as defined below.
+
+* `tags` - (Optional) A mapping of tags assigned to the resource.
+
---
A `storage_account` block supports the following:
@@ -60,12 +69,6 @@ A `storage_account` block supports the following:
~> **NOTE:** Whilst multiple `storage_account` blocks can be specified - one of them must be set to the primary
-* `identity` - (Optional) An `identity` block is documented below.
-
-* `storage_authentication_type` - (Optional) Specifies the storage authentication type.
-Possible value is `ManagedIdentity` or `System`.
-
-* `tags` - (Optional) A mapping of tags assigned to the resource.
---
A `identity` block supports the following:
@@ -74,6 +77,14 @@ A `identity` block supports the following:
---
+A `key_delivery_access_control` block supports the following:
+
+* `default_action` - (Optional) The Default Action to use when no rules match from `ip_allow_list`. Possible values are `Allow` and `Deny`.
+
+* `ip_allow_list` - (Optional) One or more IP Addresses, or CIDR Blocks which should be able to access the Key Delivery.
+
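+As a sketch (the other required `azurerm_media_services_account` arguments are elided and the CIDR range is an illustrative assumption), a `key_delivery_access_control` block restricting Key Delivery to a single range might look like:
+
+```hcl
+resource "azurerm_media_services_account" "example" {
+  # ... other required arguments elided ...
+
+  key_delivery_access_control {
+    default_action = "Deny"
+    ip_allow_list  = ["10.0.0.0/24"] # placeholder CIDR block
+  }
+}
+```
+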
+---
+
## Attributes Reference
diff --git a/website/docs/r/monitor_aad_diagnostic_setting.html.markdown b/website/docs/r/monitor_aad_diagnostic_setting.html.markdown
new file mode 100644
index 0000000000000..0a58fd3e38eeb
--- /dev/null
+++ b/website/docs/r/monitor_aad_diagnostic_setting.html.markdown
@@ -0,0 +1,150 @@
+---
+subcategory: "Monitor"
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_monitor_aad_diagnostic_setting"
+description: |-
+ Manages an Azure Active Directory Diagnostic Setting for Azure Monitor.
+---
+
+# azurerm_monitor_aad_diagnostic_setting
+
+Manages an Azure Active Directory Diagnostic Setting for Azure Monitor.
+
+## Example Usage
+
+```hcl
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "example" {
+ name = "example-rg"
+ location = "west europe"
+}
+
+resource "azurerm_storage_account" "example" {
+ name = "examplestorageaccount"
+ resource_group_name = azurerm_resource_group.example.name
+ location = azurerm_resource_group.example.location
+ account_tier = "Standard"
+ account_kind = "StorageV2"
+ account_replication_type = "LRS"
+}
+
+resource "azurerm_monitor_aad_diagnostic_setting" "example" {
+ name = "setting1"
+ storage_account_id = azurerm_storage_account.example.id
+ log {
+ category = "SignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "AuditLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "NonInteractiveUserSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ServicePrincipalSignInLogs"
+ enabled = true
+ retention_policy {
+ enabled = true
+ days = 1
+ }
+ }
+ log {
+ category = "ManagedIdentitySignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ProvisioningLogs"
+ enabled = false
+ retention_policy {}
+ }
+ log {
+ category = "ADFSSignInLogs"
+ enabled = false
+ retention_policy {}
+ }
+}
+```
+
+## Arguments Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name which should be used for this Monitor Azure Active Directory Diagnostic Setting. Changing this forces a new Monitor Azure Active Directory Diagnostic Setting to be created.
+
+* `log` - (Required) One or more `log` blocks as defined below.
+
+~> **Note:** At least one of the `log` blocks must have the `enabled` property set to `true`.
+
+---
+
+* `eventhub_authorization_rule_id` - (Optional) Specifies the ID of an Event Hub Namespace Authorization Rule used to send Diagnostics Data. Changing this forces a new resource to be created.
+
+-> **NOTE:** This can be sourced from [the `azurerm_eventhub_namespace_authorization_rule` resource](eventhub_namespace_authorization_rule.html) and is different from [a `azurerm_eventhub_authorization_rule` resource](eventhub_authorization_rule.html).
+
+* `eventhub_name` - (Optional) Specifies the name of the Event Hub where Diagnostics Data should be sent. If not specified, the default Event Hub will be used. Changing this forces a new resource to be created.
+
+* `log_analytics_workspace_id` - (Optional) Specifies the ID of a Log Analytics Workspace where Diagnostics Data should be sent.
+
+* `storage_account_id` - (Optional) The ID of the Storage Account where logs should be sent. Changing this forces a new resource to be created.
+
+-> **NOTE:** One of `eventhub_authorization_rule_id`, `log_analytics_workspace_id` and `storage_account_id` must be specified.
+
+---
+
+A `log` block supports the following:
+
+* `category` - (Required) The log category for the Azure Active Directory Diagnostic. Possible values are `AuditLogs`, `SignInLogs`, `ADFSSignInLogs`, `ManagedIdentitySignInLogs`, `NonInteractiveUserSignInLogs`, `ProvisioningLogs`, `ServicePrincipalSignInLogs`.
+
+* `retention_policy` - (Required) A `retention_policy` block as defined below.
+
+* `enabled` - (Optional) Is this Diagnostic Log enabled? Defaults to `true`.
+
+---
+
+A `retention_policy` block supports the following:
+
+* `enabled` - (Optional) Is this Retention Policy enabled? Defaults to `false`.
+
+* `days` - (Optional) The number of days for which this Retention Policy should apply. Defaults to `0`.
+
+## Attributes Reference
+
+In addition to the Arguments listed above - the following Attributes are exported:
+
+* `id` - The ID of the Monitor Azure Active Directory Diagnostic Setting.
+
+## Timeouts
+
+The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
+
+* `create` - (Defaults to 5 minutes) Used when creating the Monitor Azure Active Directory Diagnostic Setting.
+* `read` - (Defaults to 5 minutes) Used when retrieving the Monitor Azure Active Directory Diagnostic Setting.
+* `update` - (Defaults to 5 minutes) Used when updating the Monitor Azure Active Directory Diagnostic Setting.
+* `delete` - (Defaults to 5 minutes) Used when deleting the Monitor Azure Active Directory Diagnostic Setting.
+
+## Import
+
+Monitor Azure Active Directory Diagnostic Settings can be imported using the `resource id`, e.g.
+
+```shell
+terraform import azurerm_monitor_aad_diagnostic_setting.example /providers/Microsoft.AADIAM/diagnosticSettings/setting1
+```
diff --git a/website/docs/r/monitor_activity_log_alert.html.markdown b/website/docs/r/monitor_activity_log_alert.html.markdown
index 239208e165300..22e8baf7f9313 100644
--- a/website/docs/r/monitor_activity_log_alert.html.markdown
+++ b/website/docs/r/monitor_activity_log_alert.html.markdown
@@ -102,7 +102,7 @@ A `criteria` block supports the following:
A `service_health` block supports the following:
-* `events` (Optional) Events this alert will monitor Possible values are `Incident`, `Maintenance`, `Informational`, and `ActionRequired`.
+* `events` (Optional) Events this alert will monitor. Possible values are `Incident`, `Maintenance`, `Informational`, `ActionRequired` and `Security`.
* `locations` (Optional) Locations this alert will monitor. For example, `West Europe`. Defaults to `Global`.
* `services` (Optional) Services this alert will monitor. For example, `Activity Logs & Alerts`, `Action Groups`. Defaults to all Services.
diff --git a/website/docs/r/mssql_database.html.markdown b/website/docs/r/mssql_database.html.markdown
index cdad2ff91b5ec..2cff713ab1741 100644
--- a/website/docs/r/mssql_database.html.markdown
+++ b/website/docs/r/mssql_database.html.markdown
@@ -32,7 +32,7 @@ resource "azurerm_storage_account" "example" {
account_replication_type = "LRS"
}
-resource "azurerm_sql_server" "example" {
+resource "azurerm_mssql_server" "example" {
name = "example-sqlserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
@@ -43,7 +43,7 @@ resource "azurerm_sql_server" "example" {
resource "azurerm_mssql_database" "test" {
name = "acctest-db-d"
- server_id = azurerm_sql_server.example.id
+ server_id = azurerm_mssql_server.example.id
collation = "SQL_Latin1_General_CP1_CI_AS"
license_type = "LicenseIncluded"
max_size_gb = 4
diff --git a/website/docs/r/netapp_volume.html.markdown b/website/docs/r/netapp_volume.html.markdown
index 35dc564d51ae6..d76744b35ebef 100644
--- a/website/docs/r/netapp_volume.html.markdown
+++ b/website/docs/r/netapp_volume.html.markdown
@@ -70,6 +70,7 @@ resource "azurerm_netapp_volume" "example" {
service_level = "Premium"
subnet_id = azurerm_subnet.example.id
protocols = ["NFSv4.1"]
+ security_style = "Unix"
storage_quota_in_gb = 100
# When creating volume from a snapshot
@@ -106,6 +107,8 @@ The following arguments are supported:
* `protocols` - (Optional) The target volume protocol expressed as a list. Supported single value include `CIFS`, `NFSv3`, or `NFSv4.1`. If argument is not defined it will default to `NFSv3`. Changing this forces a new resource to be created and data will be lost. Dual protocol scenario is supported for CIFS and NFSv3, for more information, please refer to [Create a dual-protocol volume for Azure NetApp Files](https://docs.microsoft.com/en-us/azure/azure-netapp-files/create-volumes-dual-protocol) document.
+* `security_style` - (Optional) Volume security style. Accepted values are `Unix` and `Ntfs`. If not provided, a single-protocol volume defaults to `Unix` for `NFSv3` or `NFSv4.1` volumes and to `Ntfs` for `CIFS` volumes. A dual-protocol volume defaults to `Ntfs` if this value is not provided.
+
* `subnet_id` - (Required) The ID of the Subnet the NetApp Volume resides in, which must have the `Microsoft.NetApp/volumes` delegation. Changing this forces a new resource to be created.
* `storage_quota_in_gb` - (Required) The maximum Storage Quota allowed for a file system in Gigabytes.
diff --git a/website/docs/r/policy_set_definition.html.markdown b/website/docs/r/policy_set_definition.html.markdown
index 3f94fe61ded98..e14d523170735 100644
--- a/website/docs/r/policy_set_definition.html.markdown
+++ b/website/docs/r/policy_set_definition.html.markdown
@@ -96,7 +96,7 @@ An `policy_definition_group` block supports the following:
* `description` - (Optional) The description of this policy definition group.
-* `additional_metadata_id` - (Optional) The ID of a resource that contains additional metadata about this policy definition group.
+* `additional_metadata_resource_id` - (Optional) The ID of a resource that contains additional metadata about this policy definition group.
## Attributes Reference
diff --git a/website/docs/r/redis_cache.html.markdown b/website/docs/r/redis_cache.html.markdown
index f5e70868a29d0..b6bde38a213b2 100644
--- a/website/docs/r/redis_cache.html.markdown
+++ b/website/docs/r/redis_cache.html.markdown
@@ -69,6 +69,10 @@ The following arguments are supported:
* `redis_configuration` - (Optional) A `redis_configuration` as defined below - with some limitations by SKU - defaults/details are shown below.
+* `replicas_per_master` - (Optional) The number of replicas to create per master for this Redis Cache.
+
+~> **Note:** Configuring the number of replicas per master is only available when using the Premium SKU and cannot be used in conjunction with shards.
+
* `shard_count` - (Optional) *Only available when using the Premium SKU* The number of Shards to create on the Redis Cluster.
* `subnet_id` - (Optional) *Only available when using the Premium SKU* The ID of the Subnet within which the Redis Cache should be deployed. This Subnet must only contain Azure Cache for Redis instances without any other type of resources. Changing this forces a new resource to be created.
@@ -83,6 +87,20 @@ The following arguments are supported:
A `redis_configuration` block supports the following:
+* `aof_backup_enabled` - (Optional) Enable or disable AOF persistence for this Redis Cache.
+* `aof_storage_connection_string_0` - (Optional) First Storage Account connection string for AOF persistence.
+* `aof_storage_connection_string_1` - (Optional) Second Storage Account connection string for AOF persistence.
+
+Example usage:
+
+```hcl
+redis_configuration {
+ aof_backup_enabled = true
+ aof_storage_connection_string_0 = "DefaultEndpointsProtocol=https;BlobEndpoint=${azurerm_storage_account.mystorageaccount.primary_blob_endpoint};AccountName=${azurerm_storage_account.mystorageaccount.name};AccountKey=${azurerm_storage_account.mystorageaccount.primary_access_key}"
+ aof_storage_connection_string_1 = "DefaultEndpointsProtocol=https;BlobEndpoint=${azurerm_storage_account.mystorageaccount.primary_blob_endpoint};AccountName=${azurerm_storage_account.mystorageaccount.name};AccountKey=${azurerm_storage_account.mystorageaccount.secondary_access_key}"
+}
+```
+
* `enable_authentication` - (Optional) If set to `false`, the Redis instance will be accessible without authentication. Defaults to `true`.
-> **NOTE:** `enable_authentication` can only be set to `false` if a `subnet_id` is specified; and only works if there aren't existing instances within the subnet with `enable_authentication` set to `true`.
diff --git a/website/docs/r/redis_enterprise_database.html.markdown b/website/docs/r/redis_enterprise_database.html.markdown
index a50175b19b079..0800c5e6809db 100644
--- a/website/docs/r/redis_enterprise_database.html.markdown
+++ b/website/docs/r/redis_enterprise_database.html.markdown
@@ -70,6 +70,10 @@ In addition to the Arguments listed above - the following Attributes are exporte
* `id` - The ID of the Redis Enterprise Database.
+* `primary_access_key` - The Primary Access Key for the Redis Enterprise Database Instance.
+
+* `secondary_access_key` - The Secondary Access Key for the Redis Enterprise Database Instance.
+
## Timeouts
The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
diff --git a/website/docs/r/servicebus_namespace_disaster_recovery_config.html.markdown b/website/docs/r/servicebus_namespace_disaster_recovery_config.html.markdown
new file mode 100644
index 0000000000000..b6df1d477affb
--- /dev/null
+++ b/website/docs/r/servicebus_namespace_disaster_recovery_config.html.markdown
@@ -0,0 +1,86 @@
+---
+subcategory: "Messaging"
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_servicebus_namespace_disaster_recovery_config"
+description: |-
+ Manages a Disaster Recovery Config for a Service Bus Namespace.
+---
+
+# azurerm_servicebus_namespace_disaster_recovery_config
+
+Manages a Disaster Recovery Config for a Service Bus Namespace.
+
+~> **NOTE:** Disaster Recovery Config is a Premium SKU only capability.
+
+## Example Usage
+
+```hcl
+resource "azurerm_resource_group" "example" {
+ name = "servicebus-replication"
+ location = "West Europe"
+}
+
+resource "azurerm_servicebus_namespace" "primary" {
+ name = "servicebus-primary"
+ location = azurerm_resource_group.example.location
+ resource_group_name = azurerm_resource_group.example.name
+ sku = "Premium"
+ capacity = "1"
+}
+
+resource "azurerm_servicebus_namespace" "secondary" {
+ name = "servicebus-secondary"
+ location = "West US"
+ resource_group_name = azurerm_resource_group.example.name
+ sku = "Premium"
+ capacity = "1"
+}
+
+resource "azurerm_servicebus_namespace_disaster_recovery_config" "example" {
+ name = "servicebus-alias-name"
+ primary_namespace_id = azurerm_servicebus_namespace.primary.id
+ partner_namespace_id = azurerm_servicebus_namespace.secondary.id
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) Specifies the name of the Disaster Recovery Config. This is the alias DNS name that will be created. Changing this forces a new resource to be created.
+
+* `primary_namespace_id` - (Required) The ID of the primary Service Bus Namespace to replicate. Changing this forces a new resource to be created.
+
+* `partner_namespace_id` - (Required) The ID of the Service Bus Namespace to replicate to.
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `id` - The Service Bus Namespace Disaster Recovery Config ID.
+
+* `alias_primary_connection_string` - The alias Primary Connection String for the ServiceBus Namespace.
+
+* `alias_secondary_connection_string` - The alias Secondary Connection String for the ServiceBus Namespace.
+
+* `default_primary_key` - The primary access key for the authorization rule `RootManageSharedAccessKey`.
+
+* `default_secondary_key` - The secondary access key for the authorization rule `RootManageSharedAccessKey`.
+
+## Timeouts
+
+The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
+
+* `create` - (Defaults to 30 minutes) Used when creating the Service Bus Namespace Disaster Recovery Config.
+* `update` - (Defaults to 30 minutes) Used when updating the Service Bus Namespace Disaster Recovery Config.
+* `read` - (Defaults to 5 minutes) Used when retrieving the Service Bus Namespace Disaster Recovery Config.
+* `delete` - (Defaults to 30 minutes) Used when deleting the Service Bus Namespace Disaster Recovery Config.
+
+## Import
+
+Service Bus DR configs can be imported using the `resource id`, e.g.
+
+```shell
+terraform import azurerm_servicebus_namespace_disaster_recovery_config.config1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.ServiceBus/namespaces/namespace1/disasterRecoveryConfigs/config1
+```
diff --git a/website/docs/r/spring_cloud_service.html.markdown b/website/docs/r/spring_cloud_service.html.markdown
index dfc2498815f28..feea089de8b64 100644
--- a/website/docs/r/spring_cloud_service.html.markdown
+++ b/website/docs/r/spring_cloud_service.html.markdown
@@ -158,6 +158,22 @@ The following attributes are exported:
* `outbound_public_ip_addresses` - A list of the outbound Public IP Addresses used by this Spring Cloud Service.
+* `required_network_traffic_rules` - A list of `required_network_traffic_rules` blocks as defined below.
+
+---
+
+A `required_network_traffic_rules` block supports the following:
+
+* `direction` - The direction of required traffic. Possible values are `Inbound`, `Outbound`.
+
+* `fqdns` - The FQDN list of required traffic.
+
+* `ips` - The IP list of required traffic.
+
+* `port` - The port of required traffic.
+
+* `protocol` - The protocol of required traffic.
+
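+The exported blocks can be inspected with an output, for example (the resource name is illustrative):
+
+```hcl
+output "spring_cloud_required_traffic" {
+  value = azurerm_spring_cloud_service.example.required_network_traffic_rules
+}
+```
+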
## Timeouts
The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
diff --git a/website/docs/r/static_site.html.markdown b/website/docs/r/static_site.html.markdown
new file mode 100644
index 0000000000000..b1a33f433af0e
--- /dev/null
+++ b/website/docs/r/static_site.html.markdown
@@ -0,0 +1,60 @@
+---
+subcategory: "App Service (Web Apps)"
+layout: "azurerm"
+page_title: "Azure Resource Manager: azurerm_static_site"
+description: |-
+ Manages a Static Site.
+---
+
+# azurerm_static_site
+
+Manages an App Service Static Site.
+
+-> **NOTE:** After the Static Site is provisioned, you'll need to associate your target repository, which contains your web app, with the Static Site by following the [Azure Static Site document](https://docs.microsoft.com/en-us/azure/static-web-apps/github-actions-workflow).
+
+## Example Usage
+
+```hcl
+resource "azurerm_static_site" "example" {
+ name = "example"
+ resource_group_name = "example"
+ location = "West Europe"
+}
+```
+
+## Arguments Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name which should be used for this Static Web App. Changing this forces a new Static Web App to be created.
+
+* `location` - (Required) The Azure Region where the Static Web App should exist. Changing this forces a new Static Web App to be created.
+
+* `resource_group_name` - (Required) The name of the Resource Group where the Static Web App should exist. Changing this forces a new Static Web App to be created.
+
+## Attributes Reference
+
+In addition to the Arguments listed above - the following Attributes are exported:
+
+* `id` - The ID of the Static Web App.
+
+* `api_key` - The API key of this Static Web App, which can be used by other clients, e.g. GitHub Actions, to interact with it.
+
+* `default_host_name` - The default host name of the Static Web App.
+
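+As a sketch, the API key can be surfaced via an output for use in CI (marked sensitive since it is a credential):
+
+```hcl
+output "static_site_api_key" {
+  value     = azurerm_static_site.example.api_key
+  sensitive = true
+}
+```
+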
+## Timeouts
+
+The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
+
+* `create` - (Defaults to 30 minutes) Used when creating the Static Web App.
+* `read` - (Defaults to 5 minutes) Used when retrieving the Static Web App.
+* `update` - (Defaults to 30 minutes) Used when updating the Static Web App.
+* `delete` - (Defaults to 30 minutes) Used when deleting the Static Web App.
+
+## Import
+
+Static Web Apps can be imported using the `resource id`, e.g.
+
+```shell
+terraform import azurerm_static_site.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Web/staticSites/my-static-site1
+```
\ No newline at end of file
diff --git a/website/docs/r/storage_account.html.markdown b/website/docs/r/storage_account.html.markdown
index 4d2f6eaee7ed4..f87fcfd0555e6 100644
--- a/website/docs/r/storage_account.html.markdown
+++ b/website/docs/r/storage_account.html.markdown
@@ -147,6 +147,8 @@ A `blob_properties` block supports the following:
* `versioning_enabled` - (Optional) Is versioning enabled? Default to `false`.
+* `change_feed_enabled` - (Optional) Should change feed events be enabled for the blob service? Defaults to `false`.
+
* `default_service_version` - (Optional) The API Version which should be used by default for requests to the Data Plane API if an incoming request doesn't specify an API Version. Defaults to `2020-06-12`.
* `last_access_time_enabled` - (Optional) Is the last access time based tracking enabled? Default to `false`.
@@ -243,6 +245,8 @@ any combination of `Logging`, `Metrics`, `AzureServices`, or `None`.
* `ip_rules` - (Optional) List of public IP or IP ranges in CIDR Format. Only IPV4 addresses are allowed. Private IP address ranges (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) are not allowed.
* `virtual_network_subnet_ids` - (Optional) A list of resource ids for subnets.
+* `private_link_access` - (Optional) One or more `private_link_access` blocks as defined below.
+
~> **Note:** If specifying `network_rules`, one of either `ip_rules` or `virtual_network_subnet_ids` must be specified and `default_action` must be set to `Deny`.
~> **NOTE:** Network Rules can be defined either directly on the `azurerm_storage_account` resource, or using the `azurerm_storage_account_network_rules` resource - but the two cannot be used together. If both are used against the same Storage Account, spurious changes will occur. When managing Network Rules using this resource, to change from a `default_action` of `Deny` to `Allow` requires defining, rather than removing, the block.
@@ -253,6 +257,14 @@ any combination of `Logging`, `Metrics`, `AzureServices`, or `None`.
---
+A `private_link_access` block supports the following:
+
+* `endpoint_resource_id` - (Required) The resource ID of the `azurerm_private_endpoint` of the resource access rule.
+
+* `endpoint_tenant_id` - (Optional) The tenant ID of the `azurerm_private_endpoint` of the resource access rule. Defaults to the current tenant ID.
+
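+As a hedged sketch (resource names are illustrative), a `private_link_access` block is set inside `network_rules`:
+
+```hcl
+network_rules {
+  default_action = "Deny"
+  ip_rules       = ["100.0.0.1"]
+
+  private_link_access {
+    endpoint_resource_id = azurerm_private_endpoint.example.id
+  }
+}
+```
+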
+---
+
A `azure_files_authentication` block supports the following:
* `directory_type` - (Required) Specifies the directory service used. Possible values are `AADDS` and `AD`.
diff --git a/website/docs/r/storage_account_network_rules.html.markdown b/website/docs/r/storage_account_network_rules.html.markdown
index 272d45812ffa8..50f42a7529e29 100644
--- a/website/docs/r/storage_account_network_rules.html.markdown
+++ b/website/docs/r/storage_account_network_rules.html.markdown
@@ -88,6 +88,17 @@ The following arguments are supported:
-> **NOTE** User has to explicitly set `virtual_network_subnet_ids` to empty slice (`[]`) to remove it.
+* `private_link_access` - (Optional) One or more `private_link_access` blocks as defined below.
+
+---
+
+A `private_link_access` block supports the following:
+
+* `endpoint_resource_id` - (Required) The resource ID of the `azurerm_private_endpoint` of the resource access rule.
+
+* `endpoint_tenant_id` - (Optional) The tenant ID of the `azurerm_private_endpoint` of the resource access rule. Defaults to the current tenant ID.
+
## Attributes Reference
The following attributes are exported in addition to the arguments listed above:
diff --git a/website/docs/r/virtual_machine_scale_set_extension.html.markdown b/website/docs/r/virtual_machine_scale_set_extension.html.markdown
index cec5f85492c63..283cee0bcee0a 100644
--- a/website/docs/r/virtual_machine_scale_set_extension.html.markdown
+++ b/website/docs/r/virtual_machine_scale_set_extension.html.markdown
@@ -47,6 +47,12 @@ The following arguments are supported:
* `type_handler_version` - (Required) Specifies the version of the extension to use, available versions can be found using the Azure CLI.
+~> **Note:** The `Publisher` and `Type` of Virtual Machine Scale Set Extensions can be found using the Azure CLI, via:
+
+```shell
+$ az vmss extension image list --location westus -o table
+```
+
---
* `auto_upgrade_minor_version` - (Optional) Should the latest version of the Extension be used at Deployment Time, if one is available? This won't auto-update the extension on existing installation. Defaults to `true`.
diff --git a/website/docs/r/virtual_network.html.markdown b/website/docs/r/virtual_network.html.markdown
index 8ce8f975f9391..3514bc5da7f89 100644
--- a/website/docs/r/virtual_network.html.markdown
+++ b/website/docs/r/virtual_network.html.markdown
@@ -93,8 +93,6 @@ The following arguments are supported:
-> **NOTE** Since `subnet` can be configured both inline and via the separate `azurerm_subnet` resource, we have to explicitly set it to empty slice (`[]`) to remove it.
-* `vm_protection_enabled` - (Optional) Whether to enable VM protection for all the subnets in this Virtual Network. Defaults to `false`.
-
* `tags` - (Optional) A mapping of tags to assign to the resource.
---
diff --git a/website/docs/r/windows_virtual_machine_scale_set.html.markdown b/website/docs/r/windows_virtual_machine_scale_set.html.markdown
index d45ba6cf4ec44..0056776f06f90 100644
--- a/website/docs/r/windows_virtual_machine_scale_set.html.markdown
+++ b/website/docs/r/windows_virtual_machine_scale_set.html.markdown
@@ -144,7 +144,7 @@ The following arguments are supported:
* `identity` - (Optional) A `identity` block as defined below.
-* `license_type` - (Optional) Specifies the type of on-premise license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`. Changing this forces a new resource to be created.
+* `license_type` - (Optional) Specifies the type of on-premise license (also known as [Azure Hybrid Use Benefit](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing)) which should be used for this Virtual Machine Scale Set. Possible values are `None`, `Windows_Client` and `Windows_Server`.
* `max_bid_price` - (Optional) The maximum price you're willing to pay for each Virtual Machine in this Scale Set, in US Dollars; which must be greater than the current spot price. If this bid price falls below the current spot price the Virtual Machines in the Scale Set will be evicted using the `eviction_policy`. Defaults to `-1`, which means that each Virtual Machine in the Scale Set should not be evicted for price reasons.