From 9e9725c514639b9a576cf8e69d0f5cf2cbb47cb6 Mon Sep 17 00:00:00 2001 From: Haroon-Dweikat-Ntx Date: Mon, 9 Sep 2024 12:56:11 +0300 Subject: [PATCH] V4 storage containers on v4 temp design 2.0 (#18) * Feat/1.9.3 (#633) Co-authored-by: Abhishekism9450 <32683845+Abhishekism9450@users.noreply.github.com> Co-authored-by: Deepak Muley Co-authored-by: Abhishek * Feat/1.9.4 (#645) Co-authored-by: Frederic M <43849398+fad3t@users.noreply.github.com> Co-authored-by: ArtemProt Co-authored-by: Abhishekism9450 <32683845+Abhishekism9450@users.noreply.github.com> * new tf design * import changes * package name change for fc * package name for fc is foundationCentral * package name to foundationcentral * fixes around acctest * examples folder * v4 design * some fixes after merging * datasource for subnets, vpcs, fips * datasource for pbrs * lint fixes. go error (gomnd, gosimple, golint) * go checks, magic numbers(gomnd) * fix config testcase as base client will differ in sdks * datasource for clusters * lint fixes * resource for subnets * adding go mod for public repo * lint fixes * lint fix * lint fix for client name * test config as client will be different for sdks * adding crud for fips * address groups v4 * service groups * resource for service groups * crud for service groups * CRUD for address groups * data source for network security * CRUD for network security * microseg sdk pointing to internals * datasource for directory services * CRUD for directory service * datasource for saml * CRUD for idp * delete Operation for directory service * CRUD for user groups * datasource for categories * Crud and tcs for categories * crud & test for images * sdk versioning * templates datasource * datasource for template versions * deploy templates * spec for vms * create Ops * CUD ops done * Get VMs * VMs * CRUD for vm disks * CRUD/ds serial ports * cdrom CRUD * insert/eject cdrom * vm actions power * vm shutdown actions * CRUD gpus * missing return in vms * vm clone resource * tcs for 
vm resource * tcs for vms * acc for images * adding more tcs * Vms disk Tcs * tcs for serial port * resource tcs for cdrom * TCs for cdroms * vm clone example and docs * tcs for vm power state * power state testcase * shutdown Tcs * Adding TCs for Gpus * vm clone testcases * fix for argument naming * resource and tests for update guest customization for next boot * guest customization update doc and example * cluster v4 resource * fix for b2 version * data source for storage containers and storage container modules * implement storage containers resources, Create Operation done * fix create operation for storage containers, implement Delete Operation * implement Update Operation for storage containers * acc tests for storage containers * storage container docs for resource and data source * test_config_v4 file for acc tests * extract storage containers into separate package, use new prism sdk instead of internal sdk * implement data source for storage stats info * set the default values for sampling interval and stat type attributes, convert timestamp to string in the flattenValueTimestamp method * Revert "set the default values for sampling interval and stat type attributes, convert timestamp to string in the flattenValueTimestamp method" This reverts commit 2abc8b3b2dc11c453aaa775a0f84ffeaf677246a. 
* set the default values for sampling interval and stat type attributes, convert timestamp to string in the flattenValueTimestamp method * acc test for storage stat info * rename from storage stats to storage container stat * add test case to validate required args * docs for storage container stats info * use internal prism sdk * use internal prism sdk * set the default values for sampling interval and stat type attributes, convert timestamp to string in the flattenValueTimestamp method * exclude vendor * remove all other modules and sdk, update provider resource/data maps, change the name from v4 to v2 * remove all other modules and sdk, update provider resource/data maps, change the name from v4 to v2 * change the storage container stats info from v4 to v2, add examples * change the test_config_v4 from v4 to v2 * Revert "exclude vendor" This reverts commit 5728f1196ec8c29ed0382b757d22342fe64fe89e. --------- Co-authored-by: Abhishek Chaudhary Co-authored-by: Abhishekism9450 <32683845+Abhishekism9450@users.noreply.github.com> Co-authored-by: Deepak Muley Co-authored-by: Abhishek Co-authored-by: Frederic M <43849398+fad3t@users.noreply.github.com> Co-authored-by: ArtemProt Co-authored-by: Gevorg --- .../storage_containers_stats_info_v2/main.tf | 37 + .../terraform.tfvars | 5 + .../variables.tf | 13 + examples/storage_containers_v2/main.tf | 67 ++ .../storage_containers_v2/terraform.tfvars | 5 + examples/storage_containers_v2/variables.tf | 13 + go.mod | 2 + go.sum | 4 + nutanix/config.go | 8 + nutanix/provider/provider.go | 5 + nutanix/sdks/v4/clusters/clusters.go | 37 + nutanix/sdks/v4/prism/prism.go | 6 +- ..._source_nutanix_storage_container_stats.go | 306 ++++++++ .../data_source_nutanix_storge_container.go | 292 +++++++ ...rce_nutanix_storge_container_stats_test.go | 327 ++++++++ ...ta_source_nutanix_storge_container_test.go | 81 ++ .../data_source_nutanix_storge_containers.go | 291 +++++++ ...a_source_nutanix_storge_containers_test.go | 165 ++++ 
.../v2/storagecontainersv2/main_test.go | 45 ++ ...resource_nutanix_storge_containers_test.go | 227 ++++++ .../resource_nutanix_storge_containers_v2.go | 732 ++++++++++++++++++ test_config_v2.json | 12 + ...utanix_storage_stats_info_v2.html.markdown | 91 +++ .../docs/d/storage_container_v2.html.markdown | 80 ++ .../d/storage_containers_v2.html.markdown | 86 ++ .../r/storage_containers_v2.html.markdown | 118 +++ 26 files changed, 3054 insertions(+), 1 deletion(-) create mode 100644 examples/storage_containers_stats_info_v2/main.tf create mode 100644 examples/storage_containers_stats_info_v2/terraform.tfvars create mode 100644 examples/storage_containers_stats_info_v2/variables.tf create mode 100644 examples/storage_containers_v2/main.tf create mode 100644 examples/storage_containers_v2/terraform.tfvars create mode 100644 examples/storage_containers_v2/variables.tf create mode 100644 nutanix/sdks/v4/clusters/clusters.go create mode 100644 nutanix/services/v2/storagecontainersv2/data_source_nutanix_storage_container_stats.go create mode 100644 nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container.go create mode 100644 nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container_stats_test.go create mode 100644 nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container_test.go create mode 100644 nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_containers.go create mode 100644 nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_containers_test.go create mode 100644 nutanix/services/v2/storagecontainersv2/main_test.go create mode 100644 nutanix/services/v2/storagecontainersv2/resource_nutanix_storge_containers_test.go create mode 100644 nutanix/services/v2/storagecontainersv2/resource_nutanix_storge_containers_v2.go create mode 100644 website/docs/d/nutanix_storage_stats_info_v2.html.markdown create mode 100644 website/docs/d/storage_container_v2.html.markdown create mode 100644 
website/docs/d/storage_containers_v2.html.markdown create mode 100644 website/docs/r/storage_containers_v2.html.markdown diff --git a/examples/storage_containers_stats_info_v2/main.tf b/examples/storage_containers_stats_info_v2/main.tf new file mode 100644 index 000000000..edd79221f --- /dev/null +++ b/examples/storage_containers_stats_info_v2/main.tf @@ -0,0 +1,37 @@ +terraform{ + required_providers { + nutanix = { + source = "nutanix/nutanix" + version = "1.3.0" + } + } +} + +#defining nutanix configuration +provider "nutanix"{ + username = var.nutanix_username + password = var.nutanix_password + endpoint = var.nutanix_endpoint + port = 9440 + insecure = true +} + +#pull all clusters data +data "nutanix_clusters" "clusters"{} + +#create local variable pointing to desired cluster +locals { + cluster1 = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] +} + +#pull the storage container stats data +data "nutanix_storage_container_stats_info_v2" "test" { + ext_id = "" + start_time = "" + end_time = "" +} + + diff --git a/examples/storage_containers_stats_info_v2/terraform.tfvars b/examples/storage_containers_stats_info_v2/terraform.tfvars new file mode 100644 index 000000000..511c54417 --- /dev/null +++ b/examples/storage_containers_stats_info_v2/terraform.tfvars @@ -0,0 +1,5 @@ +#define values to the variables to be used in terraform file +nutanix_username = "admin" +nutanix_password = "Nutanix/123456" +nutanix_endpoint = "10.xx.xx.xx" +nutanix_port = 9440 diff --git a/examples/storage_containers_stats_info_v2/variables.tf b/examples/storage_containers_stats_info_v2/variables.tf new file mode 100644 index 000000000..dcd130ec8 --- /dev/null +++ b/examples/storage_containers_stats_info_v2/variables.tf @@ -0,0 +1,13 @@ +#define the type of variables to be used in terraform file +variable "nutanix_username" { + type = string +} +variable "nutanix_password" { + type = string +} 
+variable "nutanix_endpoint" { + type = string +} +variable "nutanix_port" { + type = string +} diff --git a/examples/storage_containers_v2/main.tf b/examples/storage_containers_v2/main.tf new file mode 100644 index 000000000..7d5d65e8c --- /dev/null +++ b/examples/storage_containers_v2/main.tf @@ -0,0 +1,67 @@ +terraform{ + required_providers { + nutanix = { + source = "nutanix/nutanix" + version = "1.3.0" + } + } +} + +#defining nutanix configuration +provider "nutanix"{ + username = var.nutanix_username + password = var.nutanix_password + endpoint = var.nutanix_endpoint + port = 9440 + insecure = true +} + +#pull all clusters data +data "nutanix_clusters" "clusters"{} + +#create local variable pointing to desired cluster +locals { + cluster1 = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] +} + +#creating storage container
 +resource "nutanix_storage_containers_v2" "example" { + name = "" + cluster_ext_id = local.cluster1 + logical_advertised_capacity_bytes = + logical_explicit_reserved_capacity_bytes = + replication_factor = + nfs_whitelist_addresses { + ipv4 { + value = "" + prefix_length = "" + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false +} + +#output the storage container info +output "storage_container" { + value = nutanix_storage_containers_v2.example +} + + +#pull all storage containers data in the system +data "nutanix_storage_containers_v2" "example"{} + + +#pull a storage container's data by ext id +data "nutanix_storage_container_v2" "example"{ + cluster_ext_id = local.cluster1 + storage_container_ext_id = nutanix_storage_containers_v2.example.id +} diff --git a/examples/storage_containers_v2/terraform.tfvars b/examples/storage_containers_v2/terraform.tfvars new file 
mode 100644 index 000000000..511c54417 --- /dev/null +++ b/examples/storage_containers_v2/terraform.tfvars @@ -0,0 +1,5 @@ +#define values to the variables to be used in terraform file +nutanix_username = "admin" +nutanix_password = "Nutanix/123456" +nutanix_endpoint = "10.xx.xx.xx" +nutanix_port = 9440 diff --git a/examples/storage_containers_v2/variables.tf b/examples/storage_containers_v2/variables.tf new file mode 100644 index 000000000..dcd130ec8 --- /dev/null +++ b/examples/storage_containers_v2/variables.tf @@ -0,0 +1,13 @@ +#define the type of variables to be used in terraform file +variable "nutanix_username" { + type = string +} +variable "nutanix_password" { + type = string +} +variable "nutanix_endpoint" { + type = string +} +variable "nutanix_port" { + type = string +} diff --git a/go.mod b/go.mod index cf028b447..9185d6583 100644 --- a/go.mod +++ b/go.mod @@ -14,6 +14,8 @@ require ( github.com/nutanix/ntnx-api-golang-clients/prism-go-client/v4 v4.0.1-beta.1 // github.com/nutanix/ntnx-api-golang-clients/prism-go-client/v4 v4.0.3-alpha.2 github.com/nutanix-core/ntnx-api-golang-sdk-internal/iam-go-client/v16 v16.8.0-5280 + github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16 v16.9.0-8538 + github.com/nutanix-core/ntnx-api-golang-sdk-internal/prism-go-client/v16 v16.9.0-8500 github.com/spf13/cast v1.3.1 github.com/stretchr/testify v1.7.0 gopkg.in/yaml.v2 v2.4.0 diff --git a/go.sum b/go.sum index e06842ac2..26a09d853 100644 --- a/go.sum +++ b/go.sum @@ -454,6 +454,10 @@ github.com/nutanix/ntnx-api-golang-clients/prism-go-client/v4 v4.0.1-beta.1 h1:h github.com/nutanix/ntnx-api-golang-clients/prism-go-client/v4 v4.0.1-beta.1/go.mod h1:Yhk+xD4mN90OKEHnk5ARf97CX5p4+MEC/B/YIVoZeZ0= github.com/nutanix-core/ntnx-api-golang-sdk-internal/iam-go-client/v16 v16.8.0-5280 h1:sYX9SWnyph1+gjibK8kOQNS5WmbdakCVw2kU8/oCWn8= github.com/nutanix-core/ntnx-api-golang-sdk-internal/iam-go-client/v16 v16.8.0-5280/go.mod 
h1:cSEUNcUEpaGpZq3CXj4wSczM3zzPQLzTDfYwhkl0aLQ= +github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16 v16.9.0-8538 h1:cL38XRjDYwwoKMiK9qRXF7ADXO2wxMqI64lpgn2yQR4= +github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16 v16.9.0-8538/go.mod h1:Wt2vo6h0QCGvQGKyY2Tw9OOU0dhhtjRL5nTd0Lx8Gho= +github.com/nutanix-core/ntnx-api-golang-sdk-internal/prism-go-client/v16 v16.9.0-8500 h1:UPGaPcMuM30BTQ6FflAgF5LP/8t8/zVDFIOeZAtXn+8= +github.com/nutanix-core/ntnx-api-golang-sdk-internal/prism-go-client/v16 v16.9.0-8500/go.mod h1:qmOw/29LhPpII8cDmbTL0OF3btwp97ss7nFcQz72NDM= github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw= github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA= github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= diff --git a/nutanix/config.go b/nutanix/config.go index 72aeb55db..956a71e0c 100644 --- a/nutanix/config.go +++ b/nutanix/config.go @@ -21,6 +21,7 @@ import ( "github.com/terraform-providers/terraform-provider-nutanix/nutanix/sdks/v3/foundation" v3 "github.com/terraform-providers/terraform-provider-nutanix/nutanix/sdks/v3/prism" "github.com/terraform-providers/terraform-provider-nutanix/nutanix/sdks/v4/iam" + "github.com/terraform-providers/terraform-provider-nutanix/nutanix/sdks/v4/clusters" ) // Version represents api version @@ -99,6 +100,11 @@ func (c *Config) Client() (*Client, error) { if err != nil { return nil, err } + clustersClient, err := clusters.NewClustersClient(configCreds) + if err != nil { + return nil, err + } + return &Client{ WaitTimeout: c.WaitTimeout, API: v3Client, @@ -110,6 +116,7 @@ func (c *Config) Client() (*Client, error) { PrismAPI: prismClient, MicroSegAPI: microsegClient, IamAPI: iamClient, + ClusterAPI: clustersClient, }, nil } @@ -125,4 +132,5 @@ type Client struct { PrismAPI *prism.Client MicroSegAPI *microseg.Client IamAPI *iam.Client + ClusterAPI *clusters.Client } diff --git 
a/nutanix/provider/provider.go b/nutanix/provider/provider.go index 7affb01b7..15b417e66 100644 --- a/nutanix/provider/provider.go +++ b/nutanix/provider/provider.go @@ -18,6 +18,7 @@ import ( "github.com/terraform-providers/terraform-provider-nutanix/nutanix/services/v1/prism" "github.com/terraform-providers/terraform-provider-nutanix/nutanix/services/v2/networkingv2" "github.com/terraform-providers/terraform-provider-nutanix/nutanix/services/v2/iamv2" + "github.com/terraform-providers/terraform-provider-nutanix/nutanix/services/v2/storagecontainersv2" ) var requiredProviderFields map[string][]string = map[string][]string{ @@ -251,6 +252,9 @@ func Provider() *schema.Provider { "nutanix_users_v2": iamv2.DatasourceNutanixUsersV2(), "nutanix_authorization_policy_v2": iamv2.DatasourceNutanixAuthorizationPolicyV2(), "nutanix_authorization_policies_v2": iamv2.DatasourceNutanixAuthorizationPoliciesV2(), + "nutanix_storage_container_v2": storagecontainersv2.DatasourceNutanixStorageContainerV2(), + "nutanix_storage_containers_v2": storagecontainersv2.DatasourceNutanixStorageContainersV2(), + "nutanix_storage_container_stats_info_v2": storagecontainersv2.DatasourceNutanixStorageStatsInfoV2(), }, ResourcesMap: map[string]*schema.Resource{ "nutanix_virtual_machine": prism.ResourceNutanixVirtualMachine(), @@ -316,6 +320,7 @@ func Provider() *schema.Provider { "nutanix_users_v2": iamv2.ResourceNutanixUserV2(), "nutanix_authorization_policy_v2": iamv2.ResourceNutanixAuthPoliciesV2(), "nutanix_saml_identity_providers_v2": iamv2.ResourceNutanixSamlIdpV2(), + "nutanix_storage_containers_v2": storagecontainersv2.ResourceNutanixStorageContainersV2(), }, ConfigureContextFunc: providerConfigure, } diff --git a/nutanix/sdks/v4/clusters/clusters.go b/nutanix/sdks/v4/clusters/clusters.go new file mode 100644 index 000000000..390626f95 --- /dev/null +++ b/nutanix/sdks/v4/clusters/clusters.go @@ -0,0 +1,37 @@ +package clusters + +import ( + 
"github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/api" + network "github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/client" + + "github.com/terraform-providers/terraform-provider-nutanix/nutanix/client" +) + +type Client struct { + ClusterEntityAPI *api.ClustersApi + StorageContainersAPI *api.StorageContainersApi +} + +func NewClustersClient(credentials client.Credentials) (*Client, error) { + var baseClient *network.ApiClient + + // check if all required fields are present. Else create an empty client + if credentials.Username != "" && credentials.Password != "" && credentials.Endpoint != "" { + pcClient := network.NewApiClient() + + pcClient.Host = credentials.Endpoint + pcClient.Password = credentials.Password + pcClient.Username = credentials.Username + pcClient.Port = 9440 + pcClient.VerifySSL = false + + baseClient = pcClient + } + + f := &Client{ + ClusterEntityAPI: api.NewClustersApi(baseClient), + StorageContainersAPI: api.NewStorageContainersApi(baseClient), + } + + return f, nil +} diff --git a/nutanix/sdks/v4/prism/prism.go b/nutanix/sdks/v4/prism/prism.go index d6b1f7881..ef6511bca 100644 --- a/nutanix/sdks/v4/prism/prism.go +++ b/nutanix/sdks/v4/prism/prism.go @@ -3,11 +3,14 @@ package prism import ( "github.com/nutanix/ntnx-api-golang-clients/prism-go-client/v4/api" prism "github.com/nutanix/ntnx-api-golang-clients/prism-go-client/v4/client" + "github.com/nutanix-core/ntnx-api-golang-sdk-internal/prism-go-client/v16/api" + prism "github.com/nutanix-core/ntnx-api-golang-sdk-internal/prism-go-client/v16/client" "github.com/terraform-providers/terraform-provider-nutanix/nutanix/client" ) type Client struct { - TaskRefAPI *api.TasksApi + TaskRefAPI *api.TasksApi + CategoriesAPIInstance *api.CategoriesApi } func NewPrismClient(credentials client.Credentials) (*Client, error) { @@ -28,6 +31,7 @@ func NewPrismClient(credentials client.Credentials) (*Client, error) { f := &Client{ TaskRefAPI: 
api.NewTasksApi(baseClient), + CategoriesAPIInstance: api.NewCategoriesApi(baseClient), } return f, nil diff --git a/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storage_container_stats.go b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storage_container_stats.go new file mode 100644 index 000000000..87d1c7fe6 --- /dev/null +++ b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storage_container_stats.go @@ -0,0 +1,306 @@ +package storagecontainersv2 + +import ( + "context" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + clustermgmtStats "github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/clustermgmt/v4/stats" + clsstats "github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/common/v1/stats" + + conns "github.com/terraform-providers/terraform-provider-nutanix/nutanix" + "github.com/terraform-providers/terraform-provider-nutanix/utils" +) + +func DatasourceNutanixStorageStatsInfoV2() *schema.Resource { + return &schema.Resource{ + ReadContext: DatasourceNutanixStorageStatsInfoV2Read, + Schema: map[string]*schema.Schema{ + "ext_id": { + Type: schema.TypeString, + Required: true, + }, + "start_time": { + Type: schema.TypeString, + Required: true, + }, + "end_time": { + Type: schema.TypeString, + Required: true, + }, + "sampling_interval": { + Type: schema.TypeInt, + Default: 1, + Optional: true, + }, + "stat_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"AVG", "MIN", "MAX", "LAST", "SUM", "COUNT"}, false), + }, + "tenant_id": { + Type: schema.TypeString, + Computed: true, + }, + "links": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "href": { + Type: schema.TypeString, + Computed: true, + }, + 
"rel": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "controller_num_iops": SchemaForValueTimestamp(), + "controller_io_bandwidth_kbps": SchemaForValueTimestamp(), + "controller_avg_io_latencyu_secs": SchemaForValueTimestamp(), + "controller_num_read_iops": SchemaForValueTimestamp(), + "controller_num_write_iops": SchemaForValueTimestamp(), + "controller_read_io_bandwidth_kbps": SchemaForValueTimestamp(), + "controller_write_io_bandwidth_kbps": SchemaForValueTimestamp(), + "controller_avg_read_io_latencyu_secs": SchemaForValueTimestamp(), + "controller_avg_write_io_latencyu_secs": SchemaForValueTimestamp(), + "storage_reserved_capacity_bytes": SchemaForValueTimestamp(), + "storage_actual_physical_usage_bytes": SchemaForValueTimestamp(), + "data_reduction_saving_ratio_ppm": SchemaForValueTimestamp(), + "data_reduction_total_saving_ratio_ppm": SchemaForValueTimestamp(), + "storage_free_bytes": SchemaForValueTimestamp(), + "storage_capacity_bytes": SchemaForValueTimestamp(), + "data_reduction_saved_bytes": SchemaForValueTimestamp(), + "data_reduction_overall_pre_reduction_bytes": SchemaForValueTimestamp(), + "data_reduction_overall_post_reduction_bytes": SchemaForValueTimestamp(), + "data_reduction_compression_saving_ratio_ppm": SchemaForValueTimestamp(), + "data_reduction_dedup_saving_ratio_ppm": SchemaForValueTimestamp(), + "data_reduction_erasure_coding_saving_ratio_ppm": SchemaForValueTimestamp(), + "data_reduction_thin_provision_saving_ratio_ppm": SchemaForValueTimestamp(), + "data_reduction_clone_saving_ratio_ppm": SchemaForValueTimestamp(), + "data_reduction_snapshot_saving_ratio_ppm": SchemaForValueTimestamp(), + "data_reduction_zero_write_savings_bytes": SchemaForValueTimestamp(), + "controller_read_io_ratio_ppm": SchemaForValueTimestamp(), + "controller_write_io_ratio_ppm": SchemaForValueTimestamp(), + "storage_replication_factor": SchemaForValueTimestamp(), + "storage_usage_bytes": SchemaForValueTimestamp(), + 
"storage_tier_das_sata_usage_bytes": SchemaForValueTimestamp(), + "storage_tier_ssd_usage_bytes": SchemaForValueTimestamp(), + "health": SchemaForValueTimestamp(), + "container_ext_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func SchemaForValueTimestamp() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Computed: true, + }, + "timestamp": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + } +} + +func DatasourceNutanixStorageStatsInfoV2Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.Client).ClusterAPI + + extID := d.Get("ext_id") + startTime := d.Get("start_time") + endTime := d.Get("end_time") + samplingInterval := d.Get("sampling_interval") + + if samplingInterval.(int) <= 0 { + return diag.Errorf("sampling_interval should be greater than 0") + } + + statType := clsstats.DownSamplingOperator(7) // Default value is LAST, Aggregation containing only the last recorded value. 
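The read function above defaults the down-sampling operator to LAST (enum value 7) and then overrides it from the user-facing `stat_type` attribute via a small lookup map. A minimal, self-contained sketch of that mapping is below; the integer values 2-7 are copied from the `subMap` in this diff, and the local `DownSamplingOperator` type is a stand-in for the SDK's enum, not the real import:

```go
package main

import "fmt"

// DownSamplingOperator mirrors the integer enum used by the clustermgmt SDK.
// The numeric values below are assumptions taken from the subMap in this patch.
type DownSamplingOperator int

const (
	OperatorSum   DownSamplingOperator = 2
	OperatorMin   DownSamplingOperator = 3
	OperatorMax   DownSamplingOperator = 4
	OperatorAvg   DownSamplingOperator = 5
	OperatorCount DownSamplingOperator = 6
	OperatorLast  DownSamplingOperator = 7
)

// operatorFor maps the user-facing stat_type string to the SDK enum,
// falling back to LAST when the attribute is unset or unrecognized,
// matching the data source's default behavior.
func operatorFor(statType string) DownSamplingOperator {
	m := map[string]DownSamplingOperator{
		"SUM": OperatorSum, "MIN": OperatorMin, "MAX": OperatorMax,
		"AVG": OperatorAvg, "COUNT": OperatorCount, "LAST": OperatorLast,
	}
	if op, ok := m[statType]; ok {
		return op
	}
	return OperatorLast
}

func main() {
	fmt.Println(operatorFor("AVG"), operatorFor("")) // 5 7
}
```

Using typed constants instead of a raw `map[string]interface{}` also avoids the `pVal.(int)` type assertion the diff performs.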
+ + subMap := map[string]interface{}{ + "SUM": 2, + "MIN": 3, + "MAX": 4, + "AVG": 5, + "COUNT": 6, + "LAST": 7, + } + pVal := subMap[d.Get("stat_type").(string)] + if pVal != nil { + statType = clsstats.DownSamplingOperator(pVal.(int)) + } + resp, err := conn.StorageContainersAPI.GetStorageContainerById(utils.StringPtr(extID.(string))) + if err != nil { + return diag.Errorf("error while fetching Storage Container : %v", err) + } + + // Extract E-Tag Header + etagValue := conn.ClusterEntityAPI.ApiClient.GetEtag(resp) + + args := make(map[string]interface{}) + args["If-Match"] = etagValue + + startTimeVal, err := time.Parse(time.RFC3339, startTime.(string)) + if err != nil { + return diag.Errorf("error while parsing start_time : %v", err) + } + endTimeVal, err := time.Parse(time.RFC3339, endTime.(string)) + if err != nil { + return diag.Errorf("error while parsing end_time : %v", err) + } + + statsResp, err := conn.StorageContainersAPI.GetStorageContainerStats(utils.StringPtr(extID.(string)), &startTimeVal, &endTimeVal, utils.IntPtr(samplingInterval.(int)), &statType, args) + if err != nil { + return diag.Errorf("error while fetching Storage Container : %v", err) + } + + getStatsResp := statsResp.Data.GetValue().(clustermgmtStats.StorageContainerStats) + + if err := d.Set("ext_id", getStatsResp.ExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("tenant_id", getStatsResp.TenantId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("links", flattenLinks(getStatsResp.Links)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("container_ext_id", getStatsResp.ContainerExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_num_iops", flattenValueTimestamp(getStatsResp.ControllerNumIops)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_io_bandwidth_kbps", flattenValueTimestamp(getStatsResp.ControllerIoBandwidthkBps)); err != nil { + return diag.FromErr(err) + } + if err := 
d.Set("controller_avg_io_latencyu_secs", flattenValueTimestamp(getStatsResp.ControllerAvgIoLatencyuSecs)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_num_read_iops", flattenValueTimestamp(getStatsResp.ControllerNumReadIops)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_num_write_iops", flattenValueTimestamp(getStatsResp.ControllerNumWriteIops)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_read_io_bandwidth_kbps", flattenValueTimestamp(getStatsResp.ControllerReadIoBandwidthkBps)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_write_io_bandwidth_kbps", flattenValueTimestamp(getStatsResp.ControllerWriteIoBandwidthkBps)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_avg_read_io_latencyu_secs", flattenValueTimestamp(getStatsResp.ControllerAvgReadIoLatencyuSecs)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_avg_write_io_latencyu_secs", flattenValueTimestamp(getStatsResp.ControllerAvgWriteIoLatencyuSecs)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_reserved_capacity_bytes", flattenValueTimestamp(getStatsResp.StorageReservedCapacityBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_actual_physical_usage_bytes", flattenValueTimestamp(getStatsResp.StorageActualPhysicalUsageBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_saving_ratio_ppm", flattenValueTimestamp(getStatsResp.DataReductionSavingRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_total_saving_ratio_ppm", flattenValueTimestamp(getStatsResp.DataReductionTotalSavingRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_free_bytes", flattenValueTimestamp(getStatsResp.StorageFreeBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_capacity_bytes", 
flattenValueTimestamp(getStatsResp.StorageCapacityBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_saved_bytes", flattenValueTimestamp(getStatsResp.DataReductionSavedBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_overall_pre_reduction_bytes", flattenValueTimestamp(getStatsResp.DataReductionOverallPreReductionBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_overall_post_reduction_bytes", flattenValueTimestamp(getStatsResp.DataReductionOverallPostReductionBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_compression_saving_ratio_ppm", flattenValueTimestamp(getStatsResp.DataReductionCompressionSavingRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_dedup_saving_ratio_ppm", flattenValueTimestamp(getStatsResp.DataReductionDedupSavingRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_erasure_coding_saving_ratio_ppm", flattenValueTimestamp(getStatsResp.DataReductionErasureCodingSavingRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_thin_provision_saving_ratio_ppm", flattenValueTimestamp(getStatsResp.DataReductionThinProvisionSavingRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_clone_saving_ratio_ppm", flattenValueTimestamp(getStatsResp.DataReductionCloneSavingRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_snapshot_saving_ratio_ppm", flattenValueTimestamp(getStatsResp.DataReductionSnapshotSavingRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("data_reduction_zero_write_savings_bytes", flattenValueTimestamp(getStatsResp.DataReductionZeroWriteSavingsBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_read_io_ratio_ppm", 
flattenValueTimestamp(getStatsResp.ControllerReadIoRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("controller_write_io_ratio_ppm", flattenValueTimestamp(getStatsResp.ControllerWriteIoRatioPpm)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_replication_factor", flattenValueTimestamp(getStatsResp.StorageReplicationFactor)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_usage_bytes", flattenValueTimestamp(getStatsResp.StorageUsageBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_tier_das_sata_usage_bytes", flattenValueTimestamp(getStatsResp.StorageTierDasSataUsageBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_tier_ssd_usage_bytes", flattenValueTimestamp(getStatsResp.StorageTierSsdUsageBytes)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("health", flattenValueTimestamp(getStatsResp.Health)); err != nil { + return diag.FromErr(err) + } + + d.SetId(*getStatsResp.ContainerExtId) + return nil +} + +func flattenValueTimestamp(timeIntValuePairs []clsstats.TimeIntValuePair) []map[string]interface{} { + if len(timeIntValuePairs) > 0 { + timeIntValueList := make([]map[string]interface{}, len(timeIntValuePairs)) + + for k, v := range timeIntValuePairs { + timeValuePair := map[string]interface{}{} + if v.Value != nil { + timeValuePair["value"] = v.Value + } + if v.Timestamp != nil { + timeValuePair["timestamp"] = v.Timestamp.Format("2006-01-02T15:04:05Z07:00") + } + + timeIntValueList[k] = timeValuePair + } + return timeIntValueList + } + return nil +} diff --git a/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container.go b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container.go new file mode 100644 index 000000000..e8d0dfb1d --- /dev/null +++ b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container.go @@ -0,0 +1,292 @@ +package storagecontainersv2 + 
+import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + clustermgmt "github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/clustermgmt/v4/config" + + conns "github.com/terraform-providers/terraform-provider-nutanix/nutanix" + "github.com/terraform-providers/terraform-provider-nutanix/utils" +) + +func DatasourceNutanixStorageContainerV2() *schema.Resource { + return &schema.Resource{ + ReadContext: DatasourceNutanixStorageContainerV2Read, + Schema: map[string]*schema.Schema{ + "ext_id": { + Type: schema.TypeString, + Required: true, + }, + "tenant_id": { + Type: schema.TypeString, + Computed: true, + }, + "links": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "href": { + Type: schema.TypeString, + Computed: true, + }, + "rel": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "container_ext_id": { + Type: schema.TypeString, + Computed: true, + }, + "owner_ext_id": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "cluster_ext_id": { + Type: schema.TypeString, + Computed: true, + }, + "storage_pool_ext_id": { + Type: schema.TypeString, + Computed: true, + }, + "is_marked_for_removal": { + Type: schema.TypeBool, + Computed: true, + }, + "max_capacity_bytes": { + Type: schema.TypeInt, + Computed: true, + }, + "logical_explicit_reserved_capacity_bytes": { + Type: schema.TypeInt, + Computed: true, + }, + "logical_implicit_reserved_capacity_bytes": { + Type: schema.TypeInt, + Computed: true, + }, + "logical_advertised_capacity_bytes": { + Type: schema.TypeInt, + Computed: true, + }, + "replication_factor": { + Type: schema.TypeInt, + Computed: true, + }, + "nfs_whitelist_addresses": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ipv4": 
SchemaForValuePrefixLength(), + "ipv6": SchemaForValuePrefixLength(), + "fqdn": SchemaForFqdnValue(), + }, + }, + }, + "is_nfs_whitelist_inherited": { + Type: schema.TypeBool, + Computed: true, + }, + "erasure_code": { + Type: schema.TypeString, + Computed: true, + }, + "is_inline_ec_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "has_higher_ec_fault_domain_preference": { + Type: schema.TypeBool, + Computed: true, + }, + "erasure_code_delay_secs": { + Type: schema.TypeInt, + Computed: true, + }, + "cache_deduplication": { + Type: schema.TypeString, + Computed: true, + }, + "on_disk_dedup": { + Type: schema.TypeString, + Computed: true, + }, + "is_compression_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "compression_delay_secs": { + Type: schema.TypeInt, + Computed: true, + }, + "is_internal": { + Type: schema.TypeBool, + Computed: true, + }, + "is_software_encryption_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "is_encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + "affinity_host_ext_id": { + Type: schema.TypeString, + Computed: true, + }, + "cluster_name": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func DatasourceNutanixStorageContainerV2Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.Client).ClusterAPI + + extID := d.Get("ext_id") + resp, err := conn.StorageContainersAPI.GetStorageContainerById(utils.StringPtr(extID.(string))) + if err != nil { + return diag.Errorf("error while fetching Storage Container : %v", err) + } + + getResp := resp.Data.GetValue().(clustermgmt.StorageContainer) + + if err := d.Set("ext_id", getResp.ExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("tenant_id", getResp.TenantId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("links", flattenLinks(getResp.Links)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("container_ext_id", 
getResp.ContainerExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("owner_ext_id", getResp.OwnerExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("name", getResp.Name); err != nil { + return diag.FromErr(err) + } + if err := d.Set("cluster_ext_id", getResp.ClusterExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_pool_ext_id", getResp.StoragePoolExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_marked_for_removal", getResp.IsMarkedForRemoval); err != nil { + return diag.FromErr(err) + } + if err := d.Set("max_capacity_bytes", getResp.MaxCapacityBytes); err != nil { + return diag.FromErr(err) + } + if err := d.Set("logical_explicit_reserved_capacity_bytes", getResp.LogicalExplicitReservedCapacityBytes); err != nil { + return diag.FromErr(err) + } + if err := d.Set("logical_implicit_reserved_capacity_bytes", getResp.LogicalImplicitReservedCapacityBytes); err != nil { + return diag.FromErr(err) + } + if err := d.Set("logical_advertised_capacity_bytes", getResp.LogicalAdvertisedCapacityBytes); err != nil { + return diag.FromErr(err) + } + if err := d.Set("replication_factor", getResp.ReplicationFactor); err != nil { + return diag.FromErr(err) + } + if err := d.Set("nfs_whitelist_addresses", flattenNfsWhitelistAddresses(getResp.NfsWhitelistAddress)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_nfs_whitelist_inherited", getResp.IsNfsWhitelistInherited); err != nil { + return diag.FromErr(err) + } + if err := d.Set("erasure_code", flattenErasureCodeStatus(getResp.ErasureCode)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_inline_ec_enabled", getResp.IsInlineEcEnabled); err != nil { + return diag.FromErr(err) + } + if err := d.Set("has_higher_ec_fault_domain_preference", getResp.HasHigherEcFaultDomainPreference); err != nil { + return diag.FromErr(err) + } + if err := d.Set("erasure_code_delay_secs", getResp.ErasureCodeDelaySecs); err != 
nil { + return diag.FromErr(err) + } + if err := d.Set("cache_deduplication", flattenCacheDeduplication(getResp.CacheDeduplication)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("on_disk_dedup", flattenOnDiskDedup(getResp.OnDiskDedup)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_compression_enabled", getResp.IsCompressionEnabled); err != nil { + return diag.FromErr(err) + } + if err := d.Set("compression_delay_secs", getResp.CompressionDelaySecs); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_internal", getResp.IsInternal); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_software_encryption_enabled", getResp.IsSoftwareEncryptionEnabled); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_encrypted", getResp.IsEncrypted); err != nil { + return diag.FromErr(err) + } + if err := d.Set("affinity_host_ext_id", getResp.AffinityHostExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("cluster_name", getResp.ClusterName); err != nil { + return diag.FromErr(err) + } + + d.SetId(*getResp.ContainerExtId) + return nil +} + +func SchemaForFqdnValue() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + } +} + +func SchemaForValuePrefixLength() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeString, + Computed: true, + }, + "prefix_length": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + } +} diff --git a/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container_stats_test.go b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container_stats_test.go new file mode 100644 index 000000000..b59f3d455 --- /dev/null +++ 
b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container_stats_test.go @@ -0,0 +1,327 @@ +package storagecontainersv2_test + +import ( + "fmt" + "os" + "regexp" + "testing" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + + acc "github.com/terraform-providers/terraform-provider-nutanix/nutanix/acctest" +) + +const datasourceNameStorageStatsInfo = "data.nutanix_storage_container_stats_info_v2.test" + +func TestAccNutanixStorageStatsInfoV2Datasource_Basic(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + + // Start time is now + startTime := time.Now() + + // End time is two hours later + endTime := startTime.Add(2 * time.Hour) + + // Format the times to RFC3339 format + startTimeFormatted := startTime.UTC().Format(time.RFC3339) + endTimeFormatted := endTime.UTC().Format(time.RFC3339) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainerConfig(filepath) + testStorageStatsDatasourceV2Config(startTimeFormatted, endTimeFormatted), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(datasourceNameStorageStatsInfo, "container_ext_id"), + ), + }, + }, + }) +} + +func TestAccNutanixStorageStatsInfoV2Datasource_SampleInterval(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + + // Start time is now + startTime := time.Now() + + // End time is two hours later + endTime := startTime.Add(2 * time.Hour) + + // Format the times to RFC3339 format + startTimeFormatted := startTime.UTC().Format(time.RFC3339) + endTimeFormatted := endTime.UTC().Format(time.RFC3339) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainerConfig(filepath) + 
testStorageStatsDatasourceV2SampleInterval(startTimeFormatted, endTimeFormatted, 2), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(datasourceNameStorageStatsInfo, "container_ext_id"), + ), + }, + }, + }) +} + +func TestAccNutanixStorageStatsInfoV2Datasource_StatType(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + + // Start time is now + startTime := time.Now() + + // End time is two hours later + endTime := startTime.Add(2 * time.Hour) + + // Format the times to RFC3339 format + startTimeFormatted := startTime.UTC().Format(time.RFC3339) + endTimeFormatted := endTime.UTC().Format(time.RFC3339) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainerConfig(filepath) + testStorageStatsDatasourceV2StatType(startTimeFormatted, endTimeFormatted, "COUNT"), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(datasourceNameStorageStatsInfo, "container_ext_id"), + ), + }, + }, + }) +} + +func TestAccNutanixStorageStatsInfoV2Datasource_InvalidSampleInterval(t *testing.T) { + + // Start time is now + startTime := time.Now() + + // End time is two hours later + endTime := startTime.Add(2 * time.Hour) + + // Format the times to RFC3339 format + startTimeFormatted := startTime.UTC().Format(time.RFC3339) + endTimeFormatted := endTime.UTC().Format(time.RFC3339) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageStatsDatasourceV2InvalidSampleInterval(startTimeFormatted, endTimeFormatted, 0), + ExpectError: regexp.MustCompile("sampling_interval should be greater than 0"), + }, + }, + }) +} + +func TestAccNutanixStorageStatsInfoV2Datasource_InvalidStatType(t *testing.T) { + + // Start time is now + startTime := time.Now() + + // 
End time is two hours later + endTime := startTime.Add(2 * time.Hour) + + // Format the times to RFC3339 format + startTimeFormatted := startTime.UTC().Format(time.RFC3339) + endTimeFormatted := endTime.UTC().Format(time.RFC3339) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageStatsDatasourceV2InvalidStatType(startTimeFormatted, endTimeFormatted, "INVALID"), + ExpectError: regexp.MustCompile("running pre-apply refresh"), + }, + }, + }) +} + +func TestAccNutanixStorageStatsInfoV2Datasource_MissingRequiredArgs(t *testing.T) { + + // Start time is now + startTime := time.Now() + + // End time is two hours later + endTime := startTime.Add(2 * time.Hour) + + // Format the times to RFC3339 format + startTimeFormatted := startTime.UTC().Format(time.RFC3339) + endTimeFormatted := endTime.UTC().Format(time.RFC3339) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageStatsDatasourceV2MissingExtId(startTimeFormatted, endTimeFormatted, "SUM"), + ExpectError: regexp.MustCompile("Missing required argument"), + }, + { + Config: testStorageStatsDatasourceV2MissingStartTime(endTimeFormatted, "SUM"), + ExpectError: regexp.MustCompile("Missing required argument"), + }, + { + Config: testStorageStatsDatasourceV2MissingEndTime(startTimeFormatted, "SUM"), + ExpectError: regexp.MustCompile("Missing required argument"), + }, + }, + }) +} + +func testStorageContainerConfig(filepath string) string { + return fmt.Sprintf(` + + data "nutanix_clusters" "clusters" {} + + locals{ + cluster = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] + config = (jsondecode(file("%s"))) + storage_container = local.config.storage_container + } + + resource 
"nutanix_storage_containers_v2" "test" { + name = local.storage_container.name + cluster_ext_id = local.cluster + logical_advertised_capacity_bytes = local.storage_container.logical_advertised_capacity_bytes + logical_explicit_reserved_capacity_bytes = local.storage_container.logical_explicit_reserved_capacity_bytes + replication_factor = local.storage_container.replication_factor + nfs_whitelist_addresses { + ipv4 { + value = local.storage_container.nfs_whitelist_addresses.ipv4.value + prefix_length = local.storage_container.nfs_whitelist_addresses.ipv4.prefix_length + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false + } + + `, filepath) +} + +func testStorageStatsDatasourceV2Config(startTime, endTime string) string { + return fmt.Sprintf(` + + + data "nutanix_storage_container_stats_info_v2" "test" { + ext_id = nutanix_storage_containers_v2.test.id + start_time = "%s" + end_time = "%s" + depends_on = [nutanix_storage_containers_v2.test] + } + + + `, startTime, endTime) +} + +func testStorageStatsDatasourceV2SampleInterval(startTime, endTime string, sampleInterval int) string { + return fmt.Sprintf(` + + + data "nutanix_storage_container_stats_info_v2" "test" { + ext_id = nutanix_storage_containers_v2.test.id + start_time = "%s" + end_time = "%s" + sampling_interval = %d + depends_on = [nutanix_storage_containers_v2.test] + } + + + `, startTime, endTime, sampleInterval) +} + +func testStorageStatsDatasourceV2StatType(startTime, endTime, statType string) string { + return fmt.Sprintf(` + + data "nutanix_storage_container_stats_info_v2" "test" { + ext_id = nutanix_storage_containers_v2.test.id + start_time = "%s" + end_time = "%s" + stat_type = "%s" + depends_on = [nutanix_storage_containers_v2.test] + } + + `, startTime, endTime, statType) +} + +func 
testStorageStatsDatasourceV2InvalidSampleInterval(startTime, endTime string, sampleInterval int) string { + return fmt.Sprintf(` + + + data "nutanix_storage_container_stats_info_v2" "test" { + ext_id = "000000-0000000000-00000000" + start_time = "%s" + end_time = "%s" + sampling_interval = %d + } + + + `, startTime, endTime, sampleInterval) +} + +func testStorageStatsDatasourceV2InvalidStatType(startTime, endTime, statType string) string { + return fmt.Sprintf(` + + data "nutanix_storage_container_stats_info_v2" "test" { + ext_id = "000000-0000000000-00000000" + start_time = "%s" + end_time = "%s" + stat_type = "%s" + } + + `, startTime, endTime, statType) +} + +func testStorageStatsDatasourceV2MissingExtId(startTime, endTime, statType string) string { + return fmt.Sprintf(` + + data "nutanix_storage_container_stats_info_v2" "test" { + start_time = "%s" + end_time = "%s" + stat_type = "%s" + } + + `, startTime, endTime, statType) +} + +func testStorageStatsDatasourceV2MissingStartTime(endTime, statType string) string { + return fmt.Sprintf(` + + data "nutanix_storage_container_stats_info_v2" "test" { + ext_id = "000000-0000000000-00000000" + end_time = "%s" + stat_type = "%s" + } + + `, endTime, statType) +} + +func testStorageStatsDatasourceV2MissingEndTime(startTime, statType string) string { + return fmt.Sprintf(` + + data "nutanix_storage_container_stats_info_v2" "test" { + ext_id = "000000-0000000000-00000000" + start_time = "%s" + stat_type = "%s" + } + + `, startTime, statType) +} diff --git a/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container_test.go b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container_test.go new file mode 100644 index 000000000..c82d8b404 --- /dev/null +++ b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_container_test.go @@ -0,0 +1,81 @@ +package storagecontainersv2_test + +import ( + "fmt" + "os" + "strconv" + "testing" + + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + + acc "github.com/terraform-providers/terraform-provider-nutanix/nutanix/acctest" +) + +const datasourceName_StorageContainer = "data.nutanix_storage_container_v2.test" + +func TestAccNutanixStorageContainerV2Datasource_Basic(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainerV4Config(filepath), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(datasourceName_StorageContainer, "container_ext_id"), + resource.TestCheckResourceAttr(datasourceName_StorageContainer, "name", testVars.StorageContainer.Name), + resource.TestCheckResourceAttr(datasourceName_StorageContainer, "logical_advertised_capacity_bytes", strconv.Itoa(testVars.StorageContainer.LogicalAdvertisedCapacityBytes)), + resource.TestCheckResourceAttr(datasourceName_StorageContainer, "logical_explicit_reserved_capacity_bytes", strconv.Itoa(testVars.StorageContainer.LogicalExplicitReservedCapacityBytes)), + resource.TestCheckResourceAttr(datasourceName_StorageContainer, "replication_factor", strconv.Itoa(testVars.StorageContainer.ReplicationFactor)), + resource.TestCheckResourceAttr(datasourceName_StorageContainer, "nfs_whitelist_addresses.0.ipv4.0.value", testVars.StorageContainer.NfsWhitelistAddresses.Ipv4.Value), + resource.TestCheckResourceAttr(datasourceName_StorageContainer, "nfs_whitelist_addresses.0.ipv4.0.prefix_length", strconv.Itoa(testVars.StorageContainer.NfsWhitelistAddresses.Ipv4.PrefixLength)), + ), + }, + }, + }) +} + +func testStorageContainerV4Config(filepath string) string { + return fmt.Sprintf(` + data "nutanix_clusters" "clusters" {} + + locals{ + cluster = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != 
"PRISM_CENTRAL" + ][0] + config = (jsondecode(file("%s"))) + storage_container = local.config.storage_container + } + + resource "nutanix_storage_containers_v2" "test" { + name = local.storage_container.name + cluster_ext_id = local.cluster + logical_advertised_capacity_bytes = local.storage_container.logical_advertised_capacity_bytes + logical_explicit_reserved_capacity_bytes = local.storage_container.logical_explicit_reserved_capacity_bytes + replication_factor = local.storage_container.replication_factor + nfs_whitelist_addresses { + ipv4 { + value = local.storage_container.nfs_whitelist_addresses.ipv4.value + prefix_length = local.storage_container.nfs_whitelist_addresses.ipv4.prefix_length + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false + } + + data "nutanix_storage_container_v2" "test" { + ext_id = resource.nutanix_storage_containers_v2.test.id + } + + + `, filepath) +} diff --git a/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_containers.go b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_containers.go new file mode 100644 index 000000000..1b09d6ef4 --- /dev/null +++ b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_containers.go @@ -0,0 +1,291 @@ +package storagecontainersv2 + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + clustermgmt "github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/clustermgmt/v4/config" + clsConfig "github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/common/v1/config" + clsResponse 
"github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/common/v1/response" + + conns "github.com/terraform-providers/terraform-provider-nutanix/nutanix" + "github.com/terraform-providers/terraform-provider-nutanix/utils" +) + +func DatasourceNutanixStorageContainersV2() *schema.Resource { + return &schema.Resource{ + ReadContext: DatasourceNutanixStorageContainersV2Read, + Schema: map[string]*schema.Schema{ + "page": { + Type: schema.TypeInt, + Optional: true, + }, + "limit": { + Type: schema.TypeInt, + Optional: true, + }, + "filter": { + Type: schema.TypeString, + Optional: true, + }, + "order_by": { + Type: schema.TypeString, + Optional: true, + }, + "apply": { + Type: schema.TypeString, + Optional: true, + }, + "select": { + Type: schema.TypeString, + Optional: true, + }, + "storage_containers": { + Type: schema.TypeList, + Computed: true, + Elem: DatasourceNutanixStorageContainerV2(), + }, + }, + } +} + +func DatasourceNutanixStorageContainersV2Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.Client).ClusterAPI + + // initialize query params + var filter, orderBy, selectQ *string + var page, limit *int + + if pagef, ok := d.GetOk("page"); ok { + page = utils.IntPtr(pagef.(int)) + } else { + page = nil + } + if limitf, ok := d.GetOk("limit"); ok { + limit = utils.IntPtr(limitf.(int)) + } else { + limit = nil + } + if filterf, ok := d.GetOk("filter"); ok { + filter = utils.StringPtr(filterf.(string)) + } else { + filter = nil + } + if order, ok := d.GetOk("order_by"); ok { + orderBy = utils.StringPtr(order.(string)) + } else { + orderBy = nil + } + if selectQy, ok := d.GetOk("select"); ok { + selectQ = utils.StringPtr(selectQy.(string)) + } else { + selectQ = nil + } + + resp, err := conn.StorageContainersAPI.ListStorageContainers(page, limit, filter, orderBy, selectQ) + if err != nil { + return diag.Errorf("error while fetching Storage Containers : %v", err) + } + + 
getResp := resp.Data.GetValue().([]clustermgmt.StorageContainer) + + if err := d.Set("storage_containers", flattenStorageContainers(getResp)); err != nil { + return diag.FromErr(err) + } + + d.SetId(resource.UniqueId()) + return nil +} + +func flattenStorageContainers(storageContainers []clustermgmt.StorageContainer) []interface{} { + if len(storageContainers) > 0 { + storageContainersList := make([]interface{}, len(storageContainers)) + + for k, v := range storageContainers { + storageContainer := make(map[string]interface{}) + + storageContainer["ext_id"] = v.ExtId + storageContainer["tenant_id"] = v.TenantId + storageContainer["links"] = flattenLinks(v.Links) + storageContainer["container_ext_id"] = v.ContainerExtId + storageContainer["owner_ext_id"] = v.OwnerExtId + storageContainer["name"] = v.Name + storageContainer["cluster_ext_id"] = v.ClusterExtId + storageContainer["storage_pool_ext_id"] = v.StoragePoolExtId + storageContainer["is_marked_for_removal"] = v.IsMarkedForRemoval + storageContainer["max_capacity_bytes"] = v.MaxCapacityBytes + storageContainer["logical_explicit_reserved_capacity_bytes"] = v.LogicalExplicitReservedCapacityBytes + storageContainer["logical_implicit_reserved_capacity_bytes"] = v.LogicalImplicitReservedCapacityBytes + storageContainer["logical_advertised_capacity_bytes"] = v.LogicalAdvertisedCapacityBytes + storageContainer["replication_factor"] = v.ReplicationFactor + storageContainer["nfs_whitelist_addresses"] = flattenNfsWhitelistAddresses(v.NfsWhitelistAddress) + storageContainer["is_nfs_whitelist_inherited"] = v.IsNfsWhitelistInherited + storageContainer["erasure_code"] = flattenErasureCodeStatus(v.ErasureCode) + storageContainer["is_inline_ec_enabled"] = v.IsInlineEcEnabled + storageContainer["has_higher_ec_fault_domain_preference"] = v.HasHigherEcFaultDomainPreference + storageContainer["erasure_code_delay_secs"] = v.ErasureCodeDelaySecs + 
storageContainer["cache_deduplication"] = flattenCacheDeduplication(v.CacheDeduplication) + storageContainer["on_disk_dedup"] = flattenOnDiskDedup(v.OnDiskDedup) + storageContainer["is_compression_enabled"] = v.IsCompressionEnabled + storageContainer["compression_delay_secs"] = v.CompressionDelaySecs + storageContainer["is_internal"] = v.IsInternal + storageContainer["is_software_encryption_enabled"] = v.IsSoftwareEncryptionEnabled + storageContainer["is_encrypted"] = v.IsEncrypted + storageContainer["cluster_name"] = v.ClusterName + + storageContainersList[k] = storageContainer + } + return storageContainersList + } + return nil +} + +func flattenNfsWhitelistAddresses(pr []clsConfig.IPAddressOrFQDN) []map[string]interface{} { + if len(pr) > 0 { + ips := make([]map[string]interface{}, len(pr)) + + for k, v := range pr { + ip := make(map[string]interface{}) + + if v.Ipv4 != nil { + ip["ipv4"] = flattenIPv4Address(v.Ipv4) + } + if v.Ipv6 != nil { + ip["ipv6"] = flattenIPv6Address(v.Ipv6) + } + if v.Fqdn != nil { + ip["fqdn"] = flattenFQDN(v.Fqdn) + } + ips[k] = ip + } + return ips + } + return nil +} + +func flattenCacheDeduplication(pr *clustermgmt.CacheDeduplication) string { + if pr != nil { + const one, two, three, four = 1, 2, 3, 4 + if *pr == clustermgmt.CacheDeduplication(one) { + return "REDACTED" + } + if *pr == clustermgmt.CacheDeduplication(two) { + return "NONE" + } + if *pr == clustermgmt.CacheDeduplication(three) { + return "OFF" + } + if *pr == clustermgmt.CacheDeduplication(four) { + return "ON" + } + } + return "UNKNOWN" +} + +func flattenErasureCodeStatus(pr *clustermgmt.ErasureCodeStatus) string { + if pr != nil { + const one, two, three, four = 1, 2, 3, 4 + if *pr == clustermgmt.ErasureCodeStatus(one) { + return "REDACTED" + } + if *pr == clustermgmt.ErasureCodeStatus(two) { + return "NONE" + } + if *pr == clustermgmt.ErasureCodeStatus(three) { + return "OFF" + } + if *pr == clustermgmt.ErasureCodeStatus(four) { + return "ON" + } + } + return 
"UNKNOWN" +} + +func flattenOnDiskDedup(pr *clustermgmt.OnDiskDedup) string { + if pr != nil { + const one, two, three, four = 1, 2, 3, 4 + if *pr == clustermgmt.OnDiskDedup(one) { + return "REDACTED" + } + if *pr == clustermgmt.OnDiskDedup(two) { + return "NONE" + } + if *pr == clustermgmt.OnDiskDedup(three) { + return "OFF" + } + if *pr == clustermgmt.OnDiskDedup(four) { + return "POST_PROCESS" + } + } + return "UNKNOWN" +} + +func flattenLinks(pr []clsResponse.ApiLink) []map[string]interface{} { + if len(pr) > 0 { + linkList := make([]map[string]interface{}, len(pr)) + + for k, v := range pr { + links := map[string]interface{}{} + if v.Href != nil { + links["href"] = v.Href + } + if v.Rel != nil { + links["rel"] = v.Rel + } + + linkList[k] = links + } + return linkList + } + return nil +} + +func flattenIPv4Address(pr *clsConfig.IPv4Address) []interface{} { + if pr != nil { + ipv4 := make([]interface{}, 0) + + ip := make(map[string]interface{}) + + ip["value"] = pr.Value + ip["prefix_length"] = pr.PrefixLength + + ipv4 = append(ipv4, ip) + + return ipv4 + } + return nil +} + +func flattenIPv6Address(pr *clsConfig.IPv6Address) []interface{} { + if pr != nil { + ipv6 := make([]interface{}, 0) + + ip := make(map[string]interface{}) + + ip["value"] = pr.Value + ip["prefix_length"] = pr.PrefixLength + + ipv6 = append(ipv6, ip) + + return ipv6 + } + return nil +} + +func flattenFQDN(pr *clsConfig.FQDN) []interface{} { + if pr != nil { + fqdn := make([]interface{}, 0) + + f := make(map[string]interface{}) + + f["value"] = pr.Value + + fqdn = append(fqdn, f) + + return fqdn + } + return nil +} diff --git a/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_containers_test.go b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_containers_test.go new file mode 100644 index 000000000..b94f759b8 --- /dev/null +++ b/nutanix/services/v2/storagecontainersv2/data_source_nutanix_storge_containers_test.go @@ -0,0 +1,165 @@ +package 
storagecontainersv2_test + +import ( + "fmt" + "os" + "strconv" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + + acc "github.com/terraform-providers/terraform-provider-nutanix/nutanix/acctest" +) + +const datasourceNameStorageContainersV4 = "data.nutanix_storage_containers_v2.test" + +func TestAccNutanixStorageContainersV2Datasource_Basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainersV4DatasourceV4Config(), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(datasourceNameStorageContainersV4, "storage_containers.#"), + ), + }, + }, + }) +} + +func TestAccNutanixStorageContainersV2Datasource_WithFilter(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainersV4DatasourceV4WithFilterConfig(filepath), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(datasourceNameStorageContainersV4, "storage_containers.#"), + resource.TestCheckResourceAttr(datasourceNameStorageContainersV4, "storage_containers.#", "1"), + resource.TestCheckResourceAttrSet(datasourceNameStorageContainersV4, "storage_containers.0.container_ext_id"), + resource.TestCheckResourceAttr(datasourceNameStorageContainersV4, "storage_containers.0.name", testVars.StorageContainer.Name), + resource.TestCheckResourceAttr(datasourceNameStorageContainersV4, "storage_containers.0.logical_advertised_capacity_bytes", strconv.Itoa(testVars.StorageContainer.LogicalAdvertisedCapacityBytes)), + resource.TestCheckResourceAttr(datasourceNameStorageContainersV4, "storage_containers.0.logical_explicit_reserved_capacity_bytes", 
strconv.Itoa(testVars.StorageContainer.LogicalExplicitReservedCapacityBytes)), + resource.TestCheckResourceAttr(datasourceNameStorageContainersV4, "storage_containers.0.replication_factor", strconv.Itoa(testVars.StorageContainer.ReplicationFactor)), + resource.TestCheckResourceAttr(datasourceNameStorageContainersV4, "storage_containers.0.nfs_whitelist_addresses.0.ipv4.0.value", testVars.StorageContainer.NfsWhitelistAddresses.Ipv4.Value), + resource.TestCheckResourceAttr(datasourceNameStorageContainersV4, "storage_containers.0.nfs_whitelist_addresses.0.ipv4.0.prefix_length", strconv.Itoa(testVars.StorageContainer.NfsWhitelistAddresses.Ipv4.PrefixLength)), + ), + }, + }, + }) +} + +func TestAccNutanixStorageContainersV2Datasource_WithLimit(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainersV4DatasourceV4WithLimitConfig(filepath), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(datasourceNameStorageContainersV4, "storage_containers.#"), + resource.TestCheckResourceAttr(datasourceNameStorageContainersV4, "storage_containers.#", "1"), + ), + }, + }, + }) +} + +func testStorageContainersV4DatasourceV4Config() string { + return ` + data "nutanix_storage_containers_v2" "test"{} + ` +} + +func testStorageContainersV4DatasourceV4WithFilterConfig(filepath string) string { + return fmt.Sprintf(` + + data "nutanix_clusters" "clusters" {} + + locals{ + cluster = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] + config = (jsondecode(file("%s"))) + storage_container = local.config.storage_container + } + + resource "nutanix_storage_containers_v2" "test" { + name = local.storage_container.name + cluster_ext_id = local.cluster + 
logical_advertised_capacity_bytes = local.storage_container.logical_advertised_capacity_bytes + logical_explicit_reserved_capacity_bytes = local.storage_container.logical_explicit_reserved_capacity_bytes + replication_factor = local.storage_container.replication_factor + nfs_whitelist_addresses { + ipv4 { + value = local.storage_container.nfs_whitelist_addresses.ipv4.value + prefix_length = local.storage_container.nfs_whitelist_addresses.ipv4.prefix_length + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false + } + + data "nutanix_storage_containers_v2" "test" { + filter = "name eq '${local.storage_container.name}'" + depends_on = [nutanix_storage_containers_v2.test] + } + + `, filepath) +} + +func testStorageContainersV4DatasourceV4WithLimitConfig(filepath string) string { + return fmt.Sprintf(` + data "nutanix_clusters" "clusters" {} + + locals{ + cluster = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] + config = (jsondecode(file("%s"))) + storage_container = local.config.storage_container + } + + resource "nutanix_storage_containers_v2" "test" { + name = local.storage_container.name + cluster_ext_id = local.cluster + logical_advertised_capacity_bytes = local.storage_container.logical_advertised_capacity_bytes + logical_explicit_reserved_capacity_bytes = local.storage_container.logical_explicit_reserved_capacity_bytes + replication_factor = local.storage_container.replication_factor + nfs_whitelist_addresses { + ipv4 { + value = local.storage_container.nfs_whitelist_addresses.ipv4.value + prefix_length = local.storage_container.nfs_whitelist_addresses.ipv4.prefix_length + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = 
false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false + } + + data "nutanix_storage_containers_v2" "test" { + limit = 1 + depends_on = [nutanix_storage_containers_v2.test] + } + `, filepath) +} diff --git a/nutanix/services/v2/storagecontainersv2/main_test.go b/nutanix/services/v2/storagecontainersv2/main_test.go new file mode 100644 index 000000000..2c2f5a9cc --- /dev/null +++ b/nutanix/services/v2/storagecontainersv2/main_test.go @@ -0,0 +1,45 @@ +package storagecontainersv2_test + +import ( + "encoding/json" + "log" + "os" + "testing" +) + +type TestConfig struct { + StorageContainer struct { + Name string `json:"name"` + LogicalAdvertisedCapacityBytes int `json:"logical_advertised_capacity_bytes"` + LogicalExplicitReservedCapacityBytes int `json:"logical_explicit_reserved_capacity_bytes"` + ReplicationFactor int `json:"replication_factor"` + NfsWhitelistAddresses struct { + Ipv4 struct { + Value string `json:"value"` + PrefixLength int `json:"prefix_length"` + } `json:"ipv4"` + } `json:"nfs_whitelist_addresses"` + } `json:"storage_container"` +} + +var testVars TestConfig + +func loadVars(filepath string, varStruct interface{}) { + // Read the test configuration from the given file path + configData, err := os.ReadFile(filepath) + if err != nil { + log.Printf("error reading test config file: %s", err.Error()) + os.Exit(1) + } + + err = json.Unmarshal(configData, varStruct) + if err != nil { + log.Printf("error unmarshalling test config file: %s", err.Error()) + os.Exit(1) + } +} +func TestMain(m *testing.M) { + log.Println("Loading test configuration before running tests") + loadVars("../../../../test_config_v2.json", &testVars) + os.Exit(m.Run()) +} diff --git a/nutanix/services/v2/storagecontainersv2/resource_nutanix_storge_containers_test.go b/nutanix/services/v2/storagecontainersv2/resource_nutanix_storge_containers_test.go new file mode 100644 index
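The `TestMain`/`loadVars` pattern in `main_test.go` above loads one shared JSON fixture into a typed struct before any test runs. A minimal standalone sketch of the same idea (the field subset and the inline JSON literal here are illustrative, not the real `test_config_v2.json`):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// TestConfig mirrors the nested shape used by the acceptance tests (trimmed to two fields).
type TestConfig struct {
	StorageContainer struct {
		Name              string `json:"name"`
		ReplicationFactor int    `json:"replication_factor"`
	} `json:"storage_container"`
}

// parseConfig unmarshals raw fixture bytes into the typed config.
func parseConfig(raw []byte) (TestConfig, error) {
	var cfg TestConfig
	err := json.Unmarshal(raw, &cfg)
	return cfg, err
}

func main() {
	// In the real suite the bytes come from os.ReadFile("../../../../test_config_v2.json").
	raw := []byte(`{"storage_container":{"name":"terraform_storage_container_test","replication_factor":1}}`)
	cfg, err := parseConfig(raw)
	if err != nil {
		log.Fatalf("error unmarshalling test config: %s", err)
	}
	fmt.Println(cfg.StorageContainer.Name, cfg.StorageContainer.ReplicationFactor)
}
```

Because the struct uses `json` tags matching the fixture keys exactly, unknown keys in the fixture are simply ignored, which keeps the loader tolerant of config additions.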
000000000..7523a6afc --- /dev/null +++ b/nutanix/services/v2/storagecontainersv2/resource_nutanix_storge_containers_test.go @@ -0,0 +1,227 @@ +package storagecontainersv2_test + +import ( + "fmt" + "os" + "regexp" + "strconv" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + + acc "github.com/terraform-providers/terraform-provider-nutanix/nutanix/acctest" +) + +const resourceNameStorageContainers = "nutanix_storage_containers_v2.test" + +func TestAccNutanixStorageContainersV2Resource_Basic(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccFoundationPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainersResourceConfig(filepath), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(resourceNameStorageContainers, "container_ext_id"), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "name", testVars.StorageContainer.Name), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "logical_advertised_capacity_bytes", strconv.Itoa(testVars.StorageContainer.LogicalAdvertisedCapacityBytes)), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "logical_explicit_reserved_capacity_bytes", strconv.Itoa(testVars.StorageContainer.LogicalExplicitReservedCapacityBytes)), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "replication_factor", strconv.Itoa(testVars.StorageContainer.ReplicationFactor)), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "nfs_whitelist_addresses.0.ipv4.0.value", testVars.StorageContainer.NfsWhitelistAddresses.Ipv4.Value), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "nfs_whitelist_addresses.0.ipv4.0.prefix_length", strconv.Itoa(testVars.StorageContainer.NfsWhitelistAddresses.Ipv4.PrefixLength)), + ), + }, + // test update + { + Config: 
testStorageContainersResourceUpdateConfig(filepath), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(resourceNameStorageContainers, "container_ext_id"), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "name", fmt.Sprintf("%s_updated", testVars.StorageContainer.Name)), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "logical_advertised_capacity_bytes", strconv.Itoa(testVars.StorageContainer.LogicalAdvertisedCapacityBytes)), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "logical_explicit_reserved_capacity_bytes", strconv.Itoa(testVars.StorageContainer.LogicalExplicitReservedCapacityBytes)), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "replication_factor", strconv.Itoa(testVars.StorageContainer.ReplicationFactor)), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "nfs_whitelist_addresses.0.ipv4.0.value", "192.168.15.0"), + resource.TestCheckResourceAttr(resourceNameStorageContainers, "nfs_whitelist_addresses.0.ipv4.0.prefix_length", strconv.Itoa(testVars.StorageContainer.NfsWhitelistAddresses.Ipv4.PrefixLength)), + ), + }, + }, + }) +} + +func TestAccNutanixStorageContainersV2Resource_WithNoClusterExtId(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: testStorageContainersResourceWithoutClusterExtIdConfig(filepath), + ExpectError: regexp.MustCompile("Missing required argument"), + }, + }, + }) +} +func TestAccNutanixStorageContainersV2Resource_WithNoName(t *testing.T) { + path, _ := os.Getwd() + filepath := path + "/../../../../test_config_v2.json" + resource.Test(t, resource.TestCase{ + PreCheck: func() { acc.TestAccPreCheck(t) }, + Providers: acc.TestAccProviders, + Steps: []resource.TestStep{ + { + Config: 
testStorageContainersResourceWithoutNameConfig(filepath), + ExpectError: regexp.MustCompile("Missing required argument"), + }, + }, + }) +} + +func testStorageContainersResourceConfig(filepath string) string { + return fmt.Sprintf(` + + data "nutanix_clusters" "clusters" {} + + locals{ + cluster = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] + config = (jsondecode(file("%s"))) + storage_container = local.config.storage_container + } + + resource "nutanix_storage_containers_v2" "test" { + name = local.storage_container.name + cluster_ext_id = local.cluster + logical_advertised_capacity_bytes = local.storage_container.logical_advertised_capacity_bytes + logical_explicit_reserved_capacity_bytes = local.storage_container.logical_explicit_reserved_capacity_bytes + replication_factor = local.storage_container.replication_factor + nfs_whitelist_addresses { + ipv4 { + value = local.storage_container.nfs_whitelist_addresses.ipv4.value + prefix_length = local.storage_container.nfs_whitelist_addresses.ipv4.prefix_length + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false + }`, filepath) +} + +func testStorageContainersResourceUpdateConfig(filepath string) string { + return fmt.Sprintf(` + + data "nutanix_clusters" "clusters" {} + + locals{ + cluster = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] + config = (jsondecode(file("%s"))) + storage_container = local.config.storage_container + } + + resource "nutanix_storage_containers_v2" "test" { + name = "${local.storage_container.name}_updated" + cluster_ext_id = local.cluster + logical_advertised_capacity_bytes = 
local.storage_container.logical_advertised_capacity_bytes + logical_explicit_reserved_capacity_bytes = local.storage_container.logical_explicit_reserved_capacity_bytes + replication_factor = local.storage_container.replication_factor + nfs_whitelist_addresses { + ipv4 { + value = "192.168.15.0" + prefix_length = local.storage_container.nfs_whitelist_addresses.ipv4.prefix_length + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false + }`, filepath) +} + +func testStorageContainersResourceWithoutNameConfig(filepath string) string { + return fmt.Sprintf(` + + data "nutanix_clusters" "clusters" {} + + locals{ + cluster = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] + config = (jsondecode(file("%s"))) + storage_container = local.config.storage_container + } + + resource "nutanix_storage_containers_v2" "test" { + cluster_ext_id = local.cluster + logical_advertised_capacity_bytes = local.storage_container.logical_advertised_capacity_bytes + logical_explicit_reserved_capacity_bytes = local.storage_container.logical_explicit_reserved_capacity_bytes + replication_factor = local.storage_container.replication_factor + nfs_whitelist_addresses { + ipv4 { + value = local.storage_container.nfs_whitelist_addresses.ipv4.value + prefix_length = local.storage_container.nfs_whitelist_addresses.ipv4.prefix_length + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false + }`, filepath) +} + +func testStorageContainersResourceWithoutClusterExtIdConfig(filepath string) string { + return fmt.Sprintf(` 
+ + data "nutanix_clusters" "clusters" {} + + locals{ + cluster = [ + for cluster in data.nutanix_clusters.clusters.entities : + cluster.metadata.uuid if cluster.service_list[0] != "PRISM_CENTRAL" + ][0] + config = (jsondecode(file("%s"))) + storage_container = local.config.storage_container + } + + resource "nutanix_storage_containers_v2" "test" { + name = local.storage_container.name + logical_advertised_capacity_bytes = local.storage_container.logical_advertised_capacity_bytes + logical_explicit_reserved_capacity_bytes = local.storage_container.logical_explicit_reserved_capacity_bytes + replication_factor = local.storage_container.replication_factor + nfs_whitelist_addresses { + ipv4 { + value = local.storage_container.nfs_whitelist_addresses.ipv4.value + prefix_length = local.storage_container.nfs_whitelist_addresses.ipv4.prefix_length + } + } + erasure_code = "OFF" + is_inline_ec_enabled = false + has_higher_ec_fault_domain_preference = false + cache_deduplication = "OFF" + on_disk_dedup = "OFF" + is_compression_enabled = true + is_internal = false + is_software_encryption_enabled = false + }`, filepath) +} diff --git a/nutanix/services/v2/storagecontainersv2/resource_nutanix_storge_containers_v2.go b/nutanix/services/v2/storagecontainersv2/resource_nutanix_storge_containers_v2.go new file mode 100644 index 000000000..219d555b2 --- /dev/null +++ b/nutanix/services/v2/storagecontainersv2/resource_nutanix_storge_containers_v2.go @@ -0,0 +1,732 @@ +package storagecontainersv2 + +import ( + "context" + "encoding/json" + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + clustermgmtConfig "github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/clustermgmt/v4/config" + clsCommonConfig 
"github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/common/v1/config" + clsPrismConfig "github.com/nutanix-core/ntnx-api-golang-sdk-internal/clustermgmt-go-client/v16/models/prism/v4/config" + prismConfig "github.com/nutanix-core/ntnx-api-golang-sdk-internal/prism-go-client/v16/models/prism/v4/config" + + conns "github.com/terraform-providers/terraform-provider-nutanix/nutanix" + "github.com/terraform-providers/terraform-provider-nutanix/nutanix/sdks/v4/prism" + "github.com/terraform-providers/terraform-provider-nutanix/utils" +) + +func ResourceNutanixStorageContainersV2() *schema.Resource { + return &schema.Resource{ + CreateContext: ResourceNutanixStorageContainersV2Create, + ReadContext: ResourceNutanixStorageContainersV2Read, + UpdateContext: ResourceNutanixStorageContainersV2Update, + DeleteContext: ResourceNutanixStorageContainersV2Delete, + Schema: map[string]*schema.Schema{ + "cluster_ext_id": { + Type: schema.TypeString, + Required: true, + }, + "ext_id": { + Type: schema.TypeString, + Computed: true, + Optional: true, + }, + "tenant_id": { + Type: schema.TypeString, + Computed: true, + }, + "links": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "href": { + Type: schema.TypeString, + Computed: true, + }, + "rel": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "container_ext_id": { + Type: schema.TypeString, + Computed: true, + Optional: true, + }, + "owner_ext_id": { + Type: schema.TypeString, + Computed: true, + Optional: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "storage_pool_ext_id": { + Type: schema.TypeString, + Computed: true, + }, + "is_marked_for_removal": { + Type: schema.TypeBool, + Computed: true, + }, + "max_capacity_bytes": { + Type: schema.TypeInt, + Computed: true, + }, + "logical_explicit_reserved_capacity_bytes": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + }, + 
"logical_implicit_reserved_capacity_bytes": { + Type: schema.TypeInt, + Computed: true, + }, + "logical_advertised_capacity_bytes": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + }, + "replication_factor": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + }, + "nfs_whitelist_addresses": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ipv4": resourceSchemaForValuePrefixLength(), + "ipv6": resourceSchemaForValuePrefixLength(), + "fqdn": resourceSchemaForFqdnValue(), + }, + }, + }, + "erasure_code": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"NONE", "OFF", "ON"}, false), + }, + "is_inline_ec_enabled": { + Type: schema.TypeBool, + Computed: true, + Optional: true, + }, + "has_higher_ec_fault_domain_preference": { + Type: schema.TypeBool, + Computed: true, + Optional: true, + }, + "erasure_code_delay_secs": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + }, + "cache_deduplication": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"NONE", "OFF", "ON"}, false), + }, + "on_disk_dedup": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"NONE", "OFF", "POST_PROCESS"}, false), + }, + "is_compression_enabled": { + Type: schema.TypeBool, + Computed: true, + Optional: true, + }, + "compression_delay_secs": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + }, + "is_internal": { + Type: schema.TypeBool, + Computed: true, + Optional: true, + }, + "is_software_encryption_enabled": { + Type: schema.TypeBool, + Computed: true, + Optional: true, + }, + "is_encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + "affinity_host_ext_id": { + Type: schema.TypeString, + Computed: true, + Optional: true, + }, + "cluster_name": { + Type: schema.TypeString, + 
Computed: true, + }, + "ignore_small_files": { + Type: schema.TypeBool, + Computed: true, + Optional: true, + }, + }, + } +} + +func ResourceNutanixStorageContainersV2Create(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + + conn := meta.(*conns.Client).ClusterAPI + body := &clustermgmtConfig.StorageContainer{} + + clusterExtId := d.Get("cluster_ext_id") + + if extId, ok := d.GetOk("ext_id"); ok { + body.ExtId = utils.StringPtr(extId.(string)) + } + if containerExtId, ok := d.GetOk("container_ext_id"); ok { + body.ContainerExtId = utils.StringPtr(containerExtId.(string)) + } + if ownerExtId, ok := d.GetOk("owner_ext_id"); ok { + body.OwnerExtId = utils.StringPtr(ownerExtId.(string)) + } + if name, ok := d.GetOk("name"); ok { + body.Name = utils.StringPtr(name.(string)) + } + if logicalExplicitReservedCapacityBytes, ok := d.GetOk("logical_explicit_reserved_capacity_bytes"); ok { + body.LogicalExplicitReservedCapacityBytes = utils.Int64Ptr(int64(logicalExplicitReservedCapacityBytes.(int))) + log.Printf("[DEBUG] logicalExplicitReservedCapacityBytes: %v", utils.Int64Ptr(int64(logicalExplicitReservedCapacityBytes.(int)))) + } + if logicalAdvertisedCapacityBytes, ok := d.GetOk("logical_advertised_capacity_bytes"); ok { + body.LogicalAdvertisedCapacityBytes = utils.Int64Ptr(int64(logicalAdvertisedCapacityBytes.(int))) + log.Printf("[DEBUG] logicalAdvertisedCapacityBytes: %v", utils.Int64Ptr(int64(logicalAdvertisedCapacityBytes.(int)))) + } + if replicationFactor, ok := d.GetOk("replication_factor"); ok { + body.ReplicationFactor = utils.IntPtr(replicationFactor.(int)) + log.Printf("[DEBUG] replicationFactor: %v", utils.IntPtr(replicationFactor.(int))) + } + if nfsWhitelistAddresses, ok := d.GetOk("nfs_whitelist_addresses"); ok { + body.NfsWhitelistAddress = expandNfsWhitelistAddresses(nfsWhitelistAddresses) + } + if erasureCode, ok := d.GetOk("erasure_code"); ok { + subMap := map[string]interface{}{ + "NONE": 2, + "OFF": 3, + 
"ON": 4, + } + pVal := subMap[erasureCode.(string)] + p := clustermgmtConfig.ErasureCodeStatus(pVal.(int)) + body.ErasureCode = &p + } + if isInlineEcEnabled, ok := d.GetOk("is_inline_ec_enabled"); ok { + body.IsInlineEcEnabled = utils.BoolPtr(bool(isInlineEcEnabled.(bool))) + } + if hasHigherEcFaultDomainPreference, ok := d.GetOk("has_higher_ec_fault_domain_preference"); ok { + body.HasHigherEcFaultDomainPreference = utils.BoolPtr(bool(hasHigherEcFaultDomainPreference.(bool))) + } + if erasureCodeDelaySecs, ok := d.GetOk("erasure_code_delay_secs"); ok { + body.ErasureCodeDelaySecs = utils.IntPtr(int(erasureCodeDelaySecs.(int))) + } + if cacheDeduplication, ok := d.GetOk("cache_deduplication"); ok { + subMap := map[string]interface{}{ + "NONE": 2, + "OFF": 3, + "ON": 4, + } + pVal := subMap[cacheDeduplication.(string)] + p := clustermgmtConfig.CacheDeduplication(pVal.(int)) + body.CacheDeduplication = &p + } + if onDiskDedup, ok := d.GetOk("on_disk_dedup"); ok { + subMap := map[string]interface{}{ + "NONE": 2, + "OFF": 3, + "POST_PROCESS": 4, + } + pVal := subMap[onDiskDedup.(string)] + p := clustermgmtConfig.OnDiskDedup(pVal.(int)) + body.OnDiskDedup = &p + } + if isCompressionEnabled, ok := d.GetOk("is_compression_enabled"); ok { + body.IsCompressionEnabled = utils.BoolPtr(bool(isCompressionEnabled.(bool))) + } + if compressionDelaySecs, ok := d.GetOk("compression_delay_secs"); ok { + body.CompressionDelaySecs = utils.IntPtr(int(compressionDelaySecs.(int))) + } + if isInternal, ok := d.GetOk("is_internal"); ok { + body.IsInternal = utils.BoolPtr(bool(isInternal.(bool))) + } + if isSoftwareEncryptionEnabled, ok := d.GetOk("is_software_encryption_enabled"); ok { + body.IsSoftwareEncryptionEnabled = utils.BoolPtr(bool(isSoftwareEncryptionEnabled.(bool))) + } + if affinityHostExtId, ok := d.GetOk("affinity_host_ext_id"); ok { + body.AffinityHostExtId = utils.StringPtr(affinityHostExtId.(string)) + } + + jsonBody, _ := json.MarshalIndent(body, "", " ") + 
log.Printf("[DEBUG] create storage container body: %s", string(jsonBody)) + resp, err := conn.StorageContainersAPI.CreateStorageContainer(body, utils.StringPtr(clusterExtId.(string))) + if err != nil { + return diag.Errorf("error while creating storage containers : %v", err) + } + + TaskRef := resp.Data.GetValue().(clsPrismConfig.TaskReference) + taskUUID := TaskRef.ExtId + + taskconn := meta.(*conns.Client).PrismAPI + // Wait for the cluster to be available + stateConf := &resource.StateChangeConf{ + Pending: []string{"PENDING", "RUNNING", "QUEUED"}, + Target: []string{"SUCCEEDED"}, + Refresh: taskStateRefreshPrismTaskGroupFunc(ctx, taskconn, utils.StringValue(taskUUID)), + Timeout: d.Timeout(schema.TimeoutCreate), + } + + if _, errWaitTask := stateConf.WaitForStateContext(ctx); errWaitTask != nil { + return diag.Errorf("error waiting for storage container (%s) to create: %s", utils.StringValue(taskUUID), errWaitTask) + } + + // Get UUID from TASK API + + resourceUUID, err := taskconn.TaskRefAPI.GetTaskById(taskUUID, nil) + if err != nil { + return diag.Errorf("error while fetching storage container UUID : %v", err) + } + rUUID := resourceUUID.Data.GetValue().(prismConfig.Task) + + uuid := rUUID.EntitiesAffected[0].ExtId + d.SetId(*uuid) + + // Delay/sleep for 2 Minute, replication factor is not updated immediately + time.Sleep(2 * time.Minute) + + return ResourceNutanixStorageContainersV2Read(ctx, d, meta) +} + +func ResourceNutanixStorageContainersV2Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + log.Printf("[DEBUG] Reading storage container with ID: %s", d.Id()) + conn := meta.(*conns.Client).ClusterAPI + + resp, err := conn.StorageContainersAPI.GetStorageContainerById(utils.StringPtr(d.Id())) + if err != nil { + return diag.Errorf("error while fetching Storage Container : %v", err) + } + + getResp := resp.Data.GetValue().(clustermgmtConfig.StorageContainer) + + jsonBody, _ := json.MarshalIndent(getResp, "", " ") + 
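The create path above follows a submit-then-poll pattern: fire the API call, poll the returned task with `resource.StateChangeConf` until it reaches a terminal state, then read the affected entity's UUID from the task. A generic sketch of that polling loop, decoupled from the Nutanix SDK (the `fetchStatus` callback and state names are stand-ins for the task API):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForTask polls fetchStatus until a terminal state is reached or attempts run out.
// The state names mirror the StateChangeConf setup: PENDING/RUNNING/QUEUED are pending,
// SUCCEEDED is the target, FAILED/CANCELED are terminal errors.
func waitForTask(fetchStatus func() string, interval time.Duration, maxAttempts int) (string, error) {
	for i := 0; i < maxAttempts; i++ {
		switch status := fetchStatus(); status {
		case "SUCCEEDED":
			return status, nil
		case "FAILED", "CANCELED":
			return status, errors.New("task ended in state " + status)
		default: // PENDING, RUNNING, QUEUED: keep polling
			time.Sleep(interval)
		}
	}
	return "", errors.New("timed out waiting for task")
}

func main() {
	// Simulated task that succeeds on the third poll.
	polls := 0
	status, err := waitForTask(func() string {
		polls++
		if polls < 3 {
			return "RUNNING"
		}
		return "SUCCEEDED"
	}, time.Millisecond, 10)
	fmt.Println(status, err)
}
```

`StateChangeConf` adds timeouts, backoff, and context cancellation on top of this basic loop, which is why the resource delegates to it rather than hand-rolling the polling.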
log.Printf("[DEBUG] read storage container body: %s", string(jsonBody)) + + if err := d.Set("ext_id", getResp.ExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("tenant_id", getResp.TenantId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("links", flattenLinks(getResp.Links)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("container_ext_id", getResp.ContainerExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("owner_ext_id", getResp.OwnerExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("name", getResp.Name); err != nil { + return diag.FromErr(err) + } + if err := d.Set("cluster_ext_id", getResp.ClusterExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("storage_pool_ext_id", getResp.StoragePoolExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_marked_for_removal", getResp.IsMarkedForRemoval); err != nil { + return diag.FromErr(err) + } + if err := d.Set("max_capacity_bytes", getResp.MaxCapacityBytes); err != nil { + return diag.FromErr(err) + } + if err := d.Set("logical_explicit_reserved_capacity_bytes", getResp.LogicalExplicitReservedCapacityBytes); err != nil { + return diag.FromErr(err) + } + if err := d.Set("logical_implicit_reserved_capacity_bytes", getResp.LogicalImplicitReservedCapacityBytes); err != nil { + return diag.FromErr(err) + } + if err := d.Set("logical_advertised_capacity_bytes", getResp.LogicalAdvertisedCapacityBytes); err != nil { + return diag.FromErr(err) + } + if err := d.Set("replication_factor", getResp.ReplicationFactor); err != nil { + return diag.FromErr(err) + } + if err := d.Set("nfs_whitelist_addresses", flattenNfsWhitelistAddresses(getResp.NfsWhitelistAddress)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("erasure_code", flattenErasureCodeStatus(getResp.ErasureCode)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_inline_ec_enabled", getResp.IsInlineEcEnabled); err != 
nil { + return diag.FromErr(err) + } + if err := d.Set("has_higher_ec_fault_domain_preference", getResp.HasHigherEcFaultDomainPreference); err != nil { + return diag.FromErr(err) + } + if err := d.Set("erasure_code_delay_secs", getResp.ErasureCodeDelaySecs); err != nil { + return diag.FromErr(err) + } + if err := d.Set("cache_deduplication", flattenCacheDeduplication(getResp.CacheDeduplication)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("on_disk_dedup", flattenOnDiskDedup(getResp.OnDiskDedup)); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_compression_enabled", getResp.IsCompressionEnabled); err != nil { + return diag.FromErr(err) + } + if err := d.Set("compression_delay_secs", getResp.CompressionDelaySecs); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_internal", getResp.IsInternal); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_software_encryption_enabled", getResp.IsSoftwareEncryptionEnabled); err != nil { + return diag.FromErr(err) + } + if err := d.Set("is_encrypted", getResp.IsEncrypted); err != nil { + return diag.FromErr(err) + } + if err := d.Set("affinity_host_ext_id", getResp.AffinityHostExtId); err != nil { + return diag.FromErr(err) + } + if err := d.Set("cluster_name", getResp.ClusterName); err != nil { + return diag.FromErr(err) + } + return nil +} + +func ResourceNutanixStorageContainersV2Update(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + log.Printf("[DEBUG] Update Storage Container") + conn := meta.(*conns.Client).ClusterAPI + + resp, err := conn.StorageContainersAPI.GetStorageContainerById(utils.StringPtr(d.Id())) + if err != nil { + return diag.Errorf("error while fetching storage container : %v", err) + } + + // Extract E-Tag Header + etagValue := conn.ClusterEntityAPI.ApiClient.GetEtag(resp) + + args := make(map[string]interface{}) + args["If-Match"] = etagValue + + respStorageContainer := 
resp.Data.GetValue().(clustermgmtConfig.StorageContainer) + updateSpec := respStorageContainer + + if d.HasChange("ext_id") { + updateSpec.ExtId = utils.StringPtr(d.Get("ext_id").(string)) + } + if d.HasChange("container_ext_id") { + updateSpec.ContainerExtId = utils.StringPtr(d.Get("container_ext_id").(string)) + } + if d.HasChange("owner_ext_id") { + updateSpec.OwnerExtId = utils.StringPtr(d.Get("owner_ext_id").(string)) + } + if d.HasChange("name") { + updateSpec.Name = utils.StringPtr(d.Get("name").(string)) + } + if d.HasChange("logical_explicit_reserved_capacity_bytes") { + updateSpec.LogicalExplicitReservedCapacityBytes = utils.Int64Ptr(int64(d.Get("logical_explicit_reserved_capacity_bytes").(int))) + } + if d.HasChange("logical_advertised_capacity_bytes") { + updateSpec.LogicalAdvertisedCapacityBytes = utils.Int64Ptr(int64(d.Get("logical_advertised_capacity_bytes").(int))) + } + if d.HasChange("replication_factor") { + updateSpec.ReplicationFactor = utils.IntPtr(d.Get("replication_factor").(int)) + } + if d.HasChange("nfs_whitelist_addresses") { + log.Printf("[DEBUG] nfs_whitelist_addresses: %v", d.Get("nfs_whitelist_addresses")) + updateSpec.NfsWhitelistAddress = expandNfsWhitelistAddresses(d.Get("nfs_whitelist_addresses")) + } + if d.HasChange("erasure_code") { + subMap := map[string]interface{}{ + "NONE": 2, + "OFF": 3, + "ON": 4, + } + pVal := subMap[d.Get("erasure_code").(string)] + p := clustermgmtConfig.ErasureCodeStatus(pVal.(int)) + updateSpec.ErasureCode = &p + } + if d.HasChange("is_inline_ec_enabled") { + updateSpec.IsInlineEcEnabled = utils.BoolPtr(d.Get("is_inline_ec_enabled").(bool)) + } + if d.HasChange("has_higher_ec_fault_domain_preference") { + updateSpec.HasHigherEcFaultDomainPreference = utils.BoolPtr(d.Get("has_higher_ec_fault_domain_preference").(bool)) + } + if d.HasChange("erasure_code_delay_secs") { + updateSpec.ErasureCodeDelaySecs = utils.IntPtr(d.Get("erasure_code_delay_secs").(int)) + } + if d.HasChange("cache_deduplication") { 
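The update path above is a read-modify-write: it fetches the current object, extracts its ETag, and sends it back in an `If-Match` header so the API can reject the write if someone else modified the container in between. A minimal sketch of that optimistic-concurrency check, seen from the server's side (the function and types here are illustrative, not the SDK's):

```go
package main

import (
	"errors"
	"fmt"
)

// applyIfMatch models the If-Match precondition: the update only proceeds
// when the caller's etag still matches the resource's current etag.
func applyIfMatch(currentEtag, ifMatch string, apply func()) error {
	if ifMatch != currentEtag {
		return errors.New("412 precondition failed: etag mismatch")
	}
	apply()
	return nil
}

func main() {
	etag := "v1"
	updated := false
	// Fresh etag: the update is applied.
	if err := applyIfMatch(etag, "v1", func() { updated = true }); err != nil {
		fmt.Println(err)
	}
	fmt.Println("updated:", updated)
	// Stale etag: the update is rejected instead of clobbering a concurrent change.
	fmt.Println(applyIfMatch(etag, "v0", func() { updated = false }))
}
```

This is why the resource calls `GetStorageContainerById` first even though it already knows the ID: the fetch is what produces the ETag passed via `args["If-Match"]`.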
+ subMap := map[string]interface{}{ + "NONE": 2, + "OFF": 3, + "ON": 4, + } + pVal := subMap[d.Get("cache_deduplication").(string)] + p := clustermgmtConfig.CacheDeduplication(pVal.(int)) + updateSpec.CacheDeduplication = &p + } + if d.HasChange("on_disk_dedup") { + subMap := map[string]interface{}{ + "NONE": 2, + "OFF": 3, + "POST_PROCESS": 4, + } + pVal := subMap[d.Get("on_disk_dedup").(string)] + p := clustermgmtConfig.OnDiskDedup(pVal.(int)) + updateSpec.OnDiskDedup = &p + } + if d.HasChange("is_compression_enabled") { + updateSpec.IsCompressionEnabled = utils.BoolPtr(d.Get("is_compression_enabled").(bool)) + } + if d.HasChange("compression_delay_secs") { + updateSpec.CompressionDelaySecs = utils.IntPtr(d.Get("compression_delay_secs").(int)) + } + if d.HasChange("is_internal") { + updateSpec.IsInternal = utils.BoolPtr(d.Get("is_internal").(bool)) + } + if d.HasChange("is_software_encryption_enabled") { + updateSpec.IsSoftwareEncryptionEnabled = utils.BoolPtr(d.Get("is_software_encryption_enabled").(bool)) + } + if d.HasChange("affinity_host_ext_id") { + updateSpec.AffinityHostExtId = utils.StringPtr(d.Get("affinity_host_ext_id").(string)) + } + + updateResp, err := conn.StorageContainersAPI.UpdateStorageContainerById(utils.StringPtr(d.Id()), &updateSpec, args) + if err != nil { + return diag.Errorf("error while updating storage container : %v", err) + } + + TaskRef := updateResp.Data.GetValue().(clsPrismConfig.TaskReference) + taskUUID := TaskRef.ExtId + + taskconn := meta.(*conns.Client).PrismAPI + // Wait for the cluster to be available + stateConf := &resource.StateChangeConf{ + Pending: []string{"PENDING", "RUNNING", "QUEUED"}, + Target: []string{"SUCCEEDED"}, + Refresh: taskStateRefreshPrismTaskGroupFunc(ctx, taskconn, utils.StringValue(taskUUID)), + Timeout: d.Timeout(schema.TimeoutCreate), + } + + if _, errWaitTask := stateConf.WaitForStateContext(ctx); errWaitTask != nil { + return diag.Errorf("error waiting for storage container (%s) to update: %s", 
utils.StringValue(taskUUID), errWaitTask)
+	}
+
+	// The replication factor is not reflected immediately after the update task
+	// succeeds, so wait one minute before reading the resource back.
+	time.Sleep(60 * time.Second)
+	return ResourceNutanixStorageContainersV2Read(ctx, d, meta)
+}
+
+func ResourceNutanixStorageContainersV2Delete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	conn := meta.(*conns.Client).ClusterAPI
+
+	// default value for ignoreSmallFiles is true
+	ignoreSmallFiles := true
+	if ignoreSmallFile, ok := d.GetOk("ignore_small_files"); ok {
+		ignoreSmallFiles = ignoreSmallFile.(bool)
+	}
+
+	resp, err := conn.StorageContainersAPI.DeleteStorageContainerById(utils.StringPtr(d.Id()), utils.BoolPtr(ignoreSmallFiles))
+	if err != nil {
+		return diag.Errorf("error while deleting storage container: %v", err)
+	}
+
+	TaskRef := resp.Data.GetValue().(clsPrismConfig.TaskReference)
+	taskUUID := TaskRef.ExtId
+
+	taskconn := meta.(*conns.Client).PrismAPI
+	// Wait for the storage container delete task to complete
+	stateConf := &resource.StateChangeConf{
+		Pending: []string{"PENDING", "RUNNING", "QUEUED"},
+		Target:  []string{"SUCCEEDED"},
+		Refresh: taskStateRefreshPrismTaskGroupFunc(ctx, taskconn, utils.StringValue(taskUUID)),
+		Timeout: d.Timeout(schema.TimeoutDelete),
+	}
+
+	if _, errWaitTask := stateConf.WaitForStateContext(ctx); errWaitTask != nil {
+		return diag.Errorf("error waiting for storage container (%s) to delete: %s", utils.StringValue(taskUUID), errWaitTask)
+	}
+	return nil
+}
+
+func expandNfsWhitelistAddresses(nfsWhitelistAddresses interface{}) []clsCommonConfig.IPAddressOrFQDN {
+	if nfsWhitelistAddresses == nil {
+		return nil
+	}
+	nfsWhitelistAddressesList := nfsWhitelistAddresses.([]interface{})
+	ips := make([]clsCommonConfig.IPAddressOrFQDN, len(nfsWhitelistAddressesList))
+
+	for i, addr := range nfsWhitelistAddressesList {
+		val := addr.(map[string]interface{})
+		ip := clsCommonConfig.IPAddressOrFQDN{}
+
+		if ipv4, ok := val["ipv4"]; ok && len(ipv4.([]interface{})) > 0 {
+			ip.Ipv4 = expandIPv4Address(ipv4)
+		}
+		if ipv6, ok := val["ipv6"]; ok && len(ipv6.([]interface{})) > 0 {
+			log.Printf("[DEBUG] ipv6: %v", ipv6)
+			ip.Ipv6 = expandIPv6Address(ipv6)
+		}
+		if fqdn, ok := val["fqdn"]; ok && len(fqdn.([]interface{})) > 0 {
+			ip.Fqdn = expandFQDN(fqdn.([]interface{}))
+		}
+		ips[i] = ip
+	}
+	return ips
+}
+
+func resourceSchemaForValuePrefixLength() *schema.Schema {
+	return &schema.Schema{
+		Type:     schema.TypeList,
+		Optional: true,
+		Computed: true,
+		Elem: &schema.Resource{
+			Schema: map[string]*schema.Schema{
+				"value": {
+					Type:     schema.TypeString,
+					Optional: true,
+					Computed: true,
+				},
+				"prefix_length": {
+					Type:     schema.TypeInt,
+					Optional: true,
+					Computed: true,
+				},
+			},
+		},
+	}
+}
+
+func resourceSchemaForFqdnValue() *schema.Schema {
+	return &schema.Schema{
+		Type:     schema.TypeList,
+		Optional: true,
+		Computed: true,
+		Elem: &schema.Resource{
+			Schema: map[string]*schema.Schema{
+				"value": {
+					Type:     schema.TypeString,
+					Optional: true,
+					Computed: true,
+				},
+			},
+		},
+	}
+}
+
+func taskStateRefreshPrismTaskGroupFunc(ctx context.Context, client *prism.Client, taskUUID string) resource.StateRefreshFunc {
+	return func() (interface{}, string, error) {
+		vresp, err := client.TaskRefAPI.GetTaskById(utils.StringPtr(taskUUID), nil)
+		if err != nil {
+			return "", "", fmt.Errorf("error while polling prism task: %v", err)
+		}
+
+		v := vresp.Data.GetValue().(prismConfig.Task)
+
+		if getTaskStatus(v.Status) == "CANCELED" || getTaskStatus(v.Status) == "FAILED" {
+			return v, getTaskStatus(v.Status),
+				fmt.Errorf("error_detail: %s, progress_percentage: %d", utils.StringValue(v.ErrorMessages[0].Message), utils.IntValue(v.ProgressPercentage))
+		}
+		return v, getTaskStatus(v.Status), nil
+	}
+}
+
+func getTaskStatus(pr *prismConfig.TaskStatus) string {
+	if pr != nil {
+		if *pr ==
prismConfig.TaskStatus(6) { + return "FAILED" + } + if *pr == prismConfig.TaskStatus(7) { + return "CANCELED" + } + if *pr == prismConfig.TaskStatus(2) { + return "QUEUED" + } + if *pr == prismConfig.TaskStatus(3) { + return "RUNNING" + } + if *pr == prismConfig.TaskStatus(5) { + return "SUCCEEDED" + } + } + return "UNKNOWN" +} + +func expandIPv4Address(pr interface{}) *clsCommonConfig.IPv4Address { + if pr != nil { + ipv4 := &clsCommonConfig.IPv4Address{} + prI := pr.([]interface{}) + val := prI[0].(map[string]interface{}) + + if value, ok := val["value"]; ok { + ipv4.Value = utils.StringPtr(value.(string)) + } + if prefix, ok := val["prefix_length"]; ok { + ipv4.PrefixLength = utils.IntPtr(prefix.(int)) + } + return ipv4 + } + return nil +} + +func expandIPv6Address(pr interface{}) *clsCommonConfig.IPv6Address { + if pr != nil { + ipv6 := &clsCommonConfig.IPv6Address{} + prI := pr.([]interface{}) + val := prI[0].(map[string]interface{}) + + if value, ok := val["value"]; ok { + ipv6.Value = utils.StringPtr(value.(string)) + } + if prefix, ok := val["prefix_length"]; ok { + ipv6.PrefixLength = utils.IntPtr(prefix.(int)) + } + return ipv6 + } + return nil +} + +func expandFQDN(pr []interface{}) *clsCommonConfig.FQDN { + + if len(pr) > 0 { + fqdn := clsCommonConfig.FQDN{} + val := pr[0].(map[string]interface{}) + if value, ok := val["value"]; ok { + fqdn.Value = utils.StringPtr(value.(string)) + } + + return &fqdn + } + return nil +} diff --git a/test_config_v2.json b/test_config_v2.json index c42f11622..d690c9cde 100644 --- a/test_config_v2.json +++ b/test_config_v2.json @@ -88,5 +88,17 @@ "group_search_type":"NON_RECURSIVE", "white_listed_groups":["test","test_updated"] } + }, + "storage_container":{ + "name": "terraform_storage_container_test", + "logical_advertised_capacity_bytes": 1073741824000, + "logical_explicit_reserved_capacity_bytes": 20, + "replication_factor": 1, + "nfs_whitelist_addresses":{ + "ipv4":{ + "value":"192.168.14.0", + "prefix_length":32 + } 
+ } } }
\ No newline at end of file
diff --git a/website/docs/d/nutanix_storage_stats_info_v2.html.markdown b/website/docs/d/nutanix_storage_stats_info_v2.html.markdown
new file mode 100644
index 000000000..24f68e3d3
--- /dev/null
+++ b/website/docs/d/nutanix_storage_stats_info_v2.html.markdown
@@ -0,0 +1,91 @@
+---
+layout: "nutanix"
+page_title: "NUTANIX: nutanix_storage_container_stats_info_v2"
+sidebar_current: "docs-nutanix-datasource-storage-stats-info-v4"
+description: |-
+  This operation retrieves the stats for a Storage Container.
+---
+
+# nutanix_storage_container_stats_info_v2
+
+Provides a datasource to fetch the stats information of the Storage Container identified by {containerExtId}.
+
+
+## Example Usage
+
+```hcl
+  data "nutanix_storage_container_stats_info_v2" "test"{
+    ext_id            = "{storage_container_ext_id}"
+    start_time        = "2024-08-01T00:00:00Z"
+    end_time          = "2024-08-30T00:00:00Z"
+    sampling_interval = 1
+    stat_type         = "SUM"
+  }
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `ext_id`: (Required) The storage container UUID.
+* `start_time`: (Required) The start time of the period for which stats should be reported. The value should be in extended ISO-8601 format.
+* `end_time`: (Required) The end time of the period for which stats should be reported. The value should be in extended ISO-8601 format.
+* `sampling_interval`: (Optional) The sampling interval in seconds at which statistical data should be collected.
+* `stat_type`: (Optional) The type of aggregation applied to the returned stat values.
+  * available values:
+    * `AVG`: - Aggregation indicating mean or average of all values.
+    * `MIN`: - Aggregation containing lowest of all values.
+    * `MAX`: - Aggregation containing highest of all values.
+    * `LAST`: - Aggregation containing only the last recorded value.
+    * `SUM`: - Aggregation with sum of all values.
+    * `COUNT`: - Aggregation containing total count of values.
+
+## Attribute Reference
+
+The following attributes are exported:
+
+* `ext_id`: - the storage container UUID
+* `tenant_id`: - A globally unique identifier that represents the tenant that owns this entity.
+* `links`: - A HATEOAS style link for the response.
Each link contains a user-friendly name identifying the link and an address for retrieving the particular resource.
+* `container_ext_id`: - the storage container UUID
+* `controller_num_iops`: - Number of I/O per second.
+* `controller_io_bandwidth_kbps`: - Total I/O bandwidth - kB per second.
+* `controller_avg_io_latencyu_secs`: - Average I/O latency in microseconds.
+* `controller_num_read_iops`: - Number of read I/O per second.
+* `controller_num_write_iops`: - Number of write I/O per second.
+* `controller_read_io_bandwidth_kbps`: - Read I/O bandwidth - kB per second.
+* `controller_write_io_bandwidth_kbps`: - Write I/O bandwidth - kB per second.
+* `controller_avg_read_io_latencyu_secs`: - Average read I/O latency in microseconds.
+* `controller_avg_write_io_latencyu_secs`: - Average write I/O latency in microseconds.
+* `storage_reserved_capacity_bytes`: - Implicit physical reserved capacity (aggregated on vDisk level due to thick provisioning) in bytes.
+* `storage_actual_physical_usage_bytes`: - Actual physical disk usage of the container without accounting for the reservation.
+* `data_reduction_saving_ratio_ppm`: - Saving ratio in PPM as a result of Deduplication, Compression and Erasure Coding.
+* `data_reduction_total_saving_ratio_ppm`: - Saving ratio in PPM consisting of Deduplication, Compression, Erasure Coding, Cloning, and Thin Provisioning.
+* `storage_free_bytes`: - Free storage in bytes.
+* `storage_capacity_bytes`: - Storage capacity in bytes.
+* `data_reduction_saved_bytes`: - Storage savings in bytes as a result of all the techniques.
+* `data_reduction_overall_pre_reduction_bytes`: - Usage in bytes before reduction of Deduplication, Compression, Erasure Coding, Cloning, and Thin Provisioning.
+* `data_reduction_overall_post_reduction_bytes`: - Usage in bytes after reduction of Deduplication, Compression, Erasure Coding, Cloning, and Thin Provisioning.
+* `data_reduction_compression_saving_ratio_ppm`: - Saving ratio in PPM as a result of the Compression technique.
+* `data_reduction_dedup_saving_ratio_ppm`: - Saving ratio in PPM as a result of the Deduplication technique.
+* `data_reduction_erasure_coding_saving_ratio_ppm`: - Saving ratio in PPM as a result of the Erasure Coding technique.
+* `data_reduction_thin_provision_saving_ratio_ppm`: - Saving ratio in PPM as a result of the Thin Provisioning technique.
+* `data_reduction_clone_saving_ratio_ppm`: - Saving ratio in PPM as a result of the Cloning technique.
+* `data_reduction_snapshot_saving_ratio_ppm`: - Saving ratio in PPM as a result of the Snapshot technique.
+* `data_reduction_zero_write_savings_bytes`: - Total amount of savings in bytes as a result of zero writes.
+* `controller_read_io_ratio_ppm`: - Ratio of read I/O to total I/O in PPM.
+* `controller_write_io_ratio_ppm`: - Ratio of write I/O to total I/O in PPM.
+* `storage_replication_factor`: - Replication factor of the Container.
+* `storage_usage_bytes`: - Used storage in bytes.
+* `storage_tier_das_sata_usage_bytes`: - Total usage on HDD tier for the Container in bytes.
+* `storage_tier_ssd_usage_bytes`: - Total usage on SSD tier for the Container in bytes.
+* `health`: - Health of the container is represented by an integer value in the range 0-100. A higher value is indicative of better health.
+
+### controller_num_iops, controller_io_bandwidth_kbps, ..., health (stat attributes)
+
+* `value`: Value of the stat at the recorded date and time.
+* `timestamp`: The date and time at which the stat was recorded, in extended ISO-8601 format. For example, a start time of 2022-04-23T01:23:45.678+09:00 would consider all stats starting at 1:23:45.678 on the 23rd of April 2022.
Details around the ISO-8601 format can be found at https://www.iso.org/standard/70907.html
+
+
+
+See detailed information in [Nutanix Storage Containers v4](https://developers.nutanix.com/api-reference?namespace=clustermgmt&version=v4.0.b2).
\ No newline at end of file
diff --git a/website/docs/d/storage_container_v2.html.markdown b/website/docs/d/storage_container_v2.html.markdown
new file mode 100644
index 000000000..8f4834477
--- /dev/null
+++ b/website/docs/d/storage_container_v2.html.markdown
@@ -0,0 +1,80 @@
+---
+layout: "nutanix"
+page_title: "NUTANIX: nutanix_storage_container_v2"
+sidebar_current: "docs-nutanix-datasource-storage-container-v4"
+description: |-
+  This operation retrieves a Storage Container configuration.
+---
+
+# nutanix_storage_container_v2
+
+Provides a datasource to fetch the configuration details of an existing Storage Container identified by {containerExtId}.
+
+## Example Usage
+
+```hcl
+  data "nutanix_storage_container_v2" "test"{
+    ext_id = "{{ storage container uuid }}"
+  }
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `ext_id`: (Required) The storage container UUID.
+
+## Attribute Reference
+
+The following attributes are exported:
+
+* `ext_id`: - the storage container UUID
+* `tenant_id`: - A globally unique identifier that represents the tenant that owns this entity.
+* `links`: - A HATEOAS style link for the response. Each link contains a user-friendly name identifying the link and an address for retrieving the particular resource.
+
+* `container_ext_id`: - the storage container ext id
+* `owner_ext_id`: - owner ext id
+* `name`: Name of the storage container. Note that the name of the Storage Container should be unique per cluster.
+* `cluster_ext_id`: - ext id of the cluster owning the storage container.
+* `storage_pool_ext_id`: - extId of the Storage Pool owning the Storage Container instance.
+* `is_marked_for_removal`: - Indicates if the Storage Container is marked for removal.
This field is set when the Storage Container is about to be destroyed. +* `max_capacity_bytes`: - Maximum physical capacity of the Storage Container in bytes. +* `logical_explicit_reserved_capacity_bytes`: - Total reserved size (in bytes) of the container (set by Admin). This also accounts for the container's replication factor. The actual reserved capacity of the container will be the maximum of explicitReservedCapacity and implicitReservedCapacity. +* `logical_implicit_reserved_capacity_bytes`: - This is the summation of reservations provisioned on all vdisks in the container. The actual reserved capacity of the container will be the maximum of explicitReservedCapacity and implicitReservedCapacity +* `logical_advertised_capacity_bytes`: - Max capacity of the Container as defined by the user. +* `replication_factor`: - Replication factor of the Storage Container. +* `nfs_whitelist_addresses`: - List of NFS addresses which need to be whitelisted. +* `is_nfs_whitelist_inherited`: - Indicates whether the NFS whitelist is inherited from global config. +* `erasure_code`: - Indicates the current status value for Erasure Coding for the Container. available values: `NONE`, `OFF`, `ON` + +* `is_inline_ec_enabled`: - Indicates whether data written to this container should be inline erasure coded or not. This field is only considered when ErasureCoding is enabled. +* `has_higher_ec_fault_domain_preference`: - Indicates whether to prefer a higher Erasure Code fault domain. +* `erasure_code_delay_secs`: - Delay in performing ErasureCode for the current Container instance. +* `cache_deduplication`: - Indicates the current status of Cache Deduplication for the Container. available values: `NONE`, `OFF`, `ON` +* `on_disk_dedup`: - Indicates the current status of Disk Deduplication for the Container. available values: `NONE`, `OFF`, `POST_PROCESS` +* `is_compression_enabled`: - Indicates whether the compression is enabled for the Container. 
+* `compression_delay_secs`: - The compression delay in seconds.
+* `is_internal`: - Indicates whether the Container is internal and is managed by Nutanix.
+* `is_software_encryption_enabled`: - Indicates whether the Container instance has software encryption enabled.
+* `is_encrypted`: - Indicates whether the Container is encrypted or not.
+* `affinity_host_ext_id`: - Affinity host extId for RF 1 Storage Container.
+* `cluster_name`: - Corresponding name of the Cluster owning the Storage Container instance.
+
+
+### nfs_whitelist_addresses
+
+* `ipv4`: Reference to address configuration
+* `ipv6`: Reference to address configuration
+* `fqdn`: Reference to address configuration
+
+### ipv4, ipv6 (Reference to address configuration)
+
+* `value`: value of address
+* `prefix_length`: The prefix length of the network to which this host IPv4/IPv6 address belongs.
+
+### fqdn (Reference to address configuration)
+
+* `value`: value of fqdn address
+
+
+See detailed information in [Nutanix Storage Containers v4](https://developers.nutanix.com/api-reference?namespace=clustermgmt&version=v4.0.b2).
\ No newline at end of file
diff --git a/website/docs/d/storage_containers_v2.html.markdown b/website/docs/d/storage_containers_v2.html.markdown
new file mode 100644
index 000000000..278115659
--- /dev/null
+++ b/website/docs/d/storage_containers_v2.html.markdown
@@ -0,0 +1,86 @@
+---
+layout: "nutanix"
+page_title: "NUTANIX: nutanix_storage_containers_v2"
+sidebar_current: "docs-nutanix-datasource-storage-containers-v4"
+description: |-
+  This operation retrieves a list of the Storage Containers present in the system.
+---
+
+# nutanix_storage_containers_v2
+
+Provides a datasource to list the Storage Containers present in the system.
+
+## Example Usage
+
+```hcl
+  data "nutanix_storage_containers_v2" "test" {}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+
+* `page`: (Optional) A URL query parameter that specifies the page number of the result set. It must be a positive integer between 0 and the maximum number of pages that are available for that resource. Any number out of this range might lead to no results.
+* `limit`: (Optional) A URL query parameter that specifies the total number of records returned in the result set. Must be a positive integer between 1 and 100. Any number out of this range will lead to a validation error. If the limit is not provided, a default value of 50 records will be returned in the result set.
+* `filter`: (Optional) A URL query parameter that allows clients to filter a collection of resources.
+* `order_by`: (Optional) A URL query parameter that allows clients to specify the sort criteria for the returned list of objects. Resources can be sorted in ascending order using asc or descending order using desc. If asc or desc are not specified, the resources will be sorted in ascending order by default.
+* `select`: (Optional) A URL query parameter that allows clients to request a specific set of properties for each entity or complex type. The expression specified with $select must conform to the OData V4.01 URL conventions.
+
+* `storage_containers`: Lists the Storage Containers present in the system.
+
+## Attribute Reference
+
+The following attributes are exported:
+
+* `ext_id`: - the storage container UUID
+* `tenant_id`: - A globally unique identifier that represents the tenant that owns this entity.
+* `links`: - A HATEOAS style link for the response. Each link contains a user-friendly name identifying the link and an address for retrieving the particular resource.
+
+* `container_ext_id`: - the storage container ext id
+* `owner_ext_id`: - owner ext id
+* `name`: Name of the storage container.
Note that the name of Storage Container should be unique per cluster. +* `cluster_ext_id`: - ext id for the cluster owning the storage container. +* `storage_pool_ext_id`: - extId of the Storage Pool owning the Storage Container instance. +* `is_marked_for_removal`: - Indicates if the Storage Container is marked for removal. This field is set when the Storage Container is about to be destroyed. +* `max_capacity_bytes`: - Maximum physical capacity of the Storage Container in bytes. +* `logical_explicit_reserved_capacity_bytes`: - Total reserved size (in bytes) of the container (set by Admin). This also accounts for the container's replication factor. The actual reserved capacity of the container will be the maximum of explicitReservedCapacity and implicitReservedCapacity. +* `logical_implicit_reserved_capacity_bytes`: - This is the summation of reservations provisioned on all vdisks in the container. The actual reserved capacity of the container will be the maximum of explicitReservedCapacity and implicitReservedCapacity +* `logical_advertised_capacity_bytes`: - Max capacity of the Container as defined by the user. +* `replication_factor`: - Replication factor of the Storage Container. +* `nfs_whitelist_addresses`: - List of NFS addresses which need to be whitelisted. +* `is_nfs_whitelist_inherited`: - Indicates whether the NFS whitelist is inherited from global config. +* `erasure_code`: - Indicates the current status value for Erasure Coding for the Container. available values: `NONE`, `OFF`, `ON` + +* `is_inline_ec_enabled`: - Indicates whether data written to this container should be inline erasure coded or not. This field is only considered when ErasureCoding is enabled. +* `has_higher_ec_fault_domain_preference`: - Indicates whether to prefer a higher Erasure Code fault domain. +* `erasure_code_delay_secs`: - Delay in performing ErasureCode for the current Container instance. 
+* `cache_deduplication`: - Indicates the current status of Cache Deduplication for the Container. available values: `NONE`, `OFF`, `ON`
+* `on_disk_dedup`: - Indicates the current status of Disk Deduplication for the Container. available values: `NONE`, `OFF`, `POST_PROCESS`
+* `is_compression_enabled`: - Indicates whether the compression is enabled for the Container.
+* `compression_delay_secs`: - The compression delay in seconds.
+* `is_internal`: - Indicates whether the Container is internal and is managed by Nutanix.
+* `is_software_encryption_enabled`: - Indicates whether the Container instance has software encryption enabled.
+* `is_encrypted`: - Indicates whether the Container is encrypted or not.
+* `affinity_host_ext_id`: - Affinity host extId for RF 1 Storage Container.
+* `cluster_name`: - Corresponding name of the Cluster owning the Storage Container instance.
+
+
+### nfs_whitelist_addresses
+
+* `ipv4`: Reference to address configuration
+* `ipv6`: Reference to address configuration
+* `fqdn`: Reference to address configuration
+
+### ipv4, ipv6 (Reference to address configuration)
+
+* `value`: value of address
+* `prefix_length`: The prefix length of the network to which this host IPv4/IPv6 address belongs.
+
+### fqdn (Reference to address configuration)
+
+* `value`: value of fqdn address
+
+
+
+See detailed information in [Nutanix Storage Containers v4](https://developers.nutanix.com/api-reference?namespace=clustermgmt&version=v4.0.b2).
\ No newline at end of file
diff --git a/website/docs/r/storage_containers_v2.html.markdown b/website/docs/r/storage_containers_v2.html.markdown
new file mode 100644
index 000000000..b5e185465
--- /dev/null
+++ b/website/docs/r/storage_containers_v2.html.markdown
@@ -0,0 +1,118 @@
+---
+layout: "nutanix"
+page_title: "NUTANIX: nutanix_storage_containers_v2"
+sidebar_current: "docs-nutanix-resource-storage-containers-v4"
+description: |-
+  Create a Storage Container.
+---
+
+# nutanix_storage_containers_v2
+
+Provides a Nutanix resource to create a Storage Container.
+
+
+## Example
+
+```hcl
+  resource "nutanix_storage_containers_v2" "test" {
+    name                                     = "{{name of storage container }}"
+    logical_advertised_capacity_bytes        = 1073741824000
+    logical_explicit_reserved_capacity_bytes = 32
+    replication_factor                       = 1
+    nfs_whitelist_addresses {
+      ipv4 {
+        value         = "{{ ipv4 address }}"
+        prefix_length = 32
+      }
+    }
+    erasure_code                          = "OFF"
+    is_inline_ec_enabled                  = false
+    has_higher_ec_fault_domain_preference = false
+    cache_deduplication                   = "OFF"
+    on_disk_dedup                         = "OFF"
+    is_compression_enabled                = true
+    is_internal                           = false
+    is_software_encryption_enabled        = false
+  }
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+
+* `owner_ext_id`: -(Optional) owner ext id
+* `name`: -(Required) Name of the storage container. Note that the name of the Storage Container should be unique per cluster.
+* `logical_explicit_reserved_capacity_bytes`: -(Optional) Total reserved size (in bytes) of the container (set by Admin). This also accounts for the container's replication factor. The actual reserved capacity of the container will be the maximum of explicitReservedCapacity and implicitReservedCapacity.
+* `logical_advertised_capacity_bytes`: -(Optional) Max capacity of the Container as defined by the user.
+* `replication_factor`: -(Optional) Replication factor of the Storage Container.
+* `nfs_whitelist_addresses`: -(Optional) List of NFS addresses which need to be whitelisted.
+* `erasure_code`: -(Optional) Indicates the current status value for Erasure Coding for the Container. available values: `NONE`, `OFF`, `ON`
+* `is_inline_ec_enabled`: -(Optional) Indicates whether data written to this container should be inline erasure coded or not. This field is only considered when ErasureCoding is enabled.
+* `has_higher_ec_fault_domain_preference`: -(Optional) Indicates whether to prefer a higher Erasure Code fault domain.
+* `erasure_code_delay_secs`: -(Optional) Delay in performing ErasureCode for the current Container instance.
+* `cache_deduplication`: -(Optional) Indicates the current status of Cache Deduplication for the Container. available values: `NONE`, `OFF`, `ON`
+* `on_disk_dedup`: -(Optional) Indicates the current status of Disk Deduplication for the Container. available values: `NONE`, `OFF`, `POST_PROCESS`
+* `is_compression_enabled`: -(Optional) Indicates whether the compression is enabled for the Container.
+* `compression_delay_secs`: -(Optional) The compression delay in seconds.
+* `is_internal`: -(Optional) Indicates whether the Container is internal and is managed by Nutanix.
+* `is_software_encryption_enabled`: -(Optional) Indicates whether the Container instance has software encryption enabled.
+* `affinity_host_ext_id`: -(Optional) Affinity host extId for RF 1 Storage Container.
+
+
+
+## Attribute Reference
+
+The following attributes are exported:
+
+* `ext_id`: - the storage container UUID
+* `tenant_id`: - A globally unique identifier that represents the tenant that owns this entity.
+* `links`: - A HATEOAS style link for the response. Each link contains a user-friendly name identifying the link and an address for retrieving the particular resource.
+
+* `container_ext_id`: - the storage container ext id
+* `owner_ext_id`: - owner ext id
+* `name`: Name of the storage container. Note that the name of the Storage Container should be unique per cluster.
+* `cluster_ext_id`: - ext id of the cluster owning the storage container.
+* `storage_pool_ext_id`: - extId of the Storage Pool owning the Storage Container instance.
+* `is_marked_for_removal`: - Indicates if the Storage Container is marked for removal. This field is set when the Storage Container is about to be destroyed.
+* `max_capacity_bytes`: - Maximum physical capacity of the Storage Container in bytes.
+* `logical_explicit_reserved_capacity_bytes`: - Total reserved size (in bytes) of the container (set by Admin).
This also accounts for the container's replication factor. The actual reserved capacity of the container will be the maximum of explicitReservedCapacity and implicitReservedCapacity. +* `logical_implicit_reserved_capacity_bytes`: - This is the summation of reservations provisioned on all vdisks in the container. The actual reserved capacity of the container will be the maximum of explicitReservedCapacity and implicitReservedCapacity +* `logical_advertised_capacity_bytes`: - Max capacity of the Container as defined by the user. +* `replication_factor`: - Replication factor of the Storage Container. +* `nfs_whitelist_addresses`: - List of NFS addresses which need to be whitelisted. +* `is_nfs_whitelist_inherited`: - Indicates whether the NFS whitelist is inherited from global config. +* `erasure_code`: - Indicates the current status value for Erasure Coding for the Container. available values: `NONE`, `OFF`, `ON` + +* `is_inline_ec_enabled`: - Indicates whether data written to this container should be inline erasure coded or not. This field is only considered when ErasureCoding is enabled. +* `has_higher_ec_fault_domain_preference`: - Indicates whether to prefer a higher Erasure Code fault domain. +* `erasure_code_delay_secs`: - Delay in performing ErasureCode for the current Container instance. +* `cache_deduplication`: - Indicates the current status of Cache Deduplication for the Container. available values: `NONE`, `OFF`, `ON` +* `on_disk_dedup`: - Indicates the current status of Disk Deduplication for the Container. available values: `NONE`, `OFF`, `POST_PROCESS` +* `is_compression_enabled`: - Indicates whether the compression is enabled for the Container. +* `compression_delay_secs`: - The compression delay in seconds. +* `is_internal`: - Indicates whether the Container is internal and is managed by Nutanix. +* `is_software_encryption_enabled`: - Indicates whether the Container instance has software encryption enabled. 
+* `is_encrypted`: - Indicates whether the Container is encrypted or not. +* `affinity_host_ext_id`: - Affinity host extId for RF 1 Storage Container. +* `cluster_name`: - Corresponding name of the Cluster owning the Storage Container instance. + + +### nfs_whitelist_addresses + +* `ipv4`: Reference to address configuration +* `ipv6`: Reference to address configuration +* `fqdn`: Reference to address configuration + +### ipv4, ipv6 (Reference to address configuration) + +* `value`: value of address +* `prefix_length`: The prefix length of the network to which this host IPv4/IPv6 address belongs. + +### fqdn (Reference to address configuration) + +* `value`: value of fqdn address + + + +See detailed information in [Nutanix Storage Containers v4](https://developers.nutanix.com/api-reference?namespace=clustermgmt&version=v4.0.b2). \ No newline at end of file