
IC for adding container_configuration to batch pool resource. #3311

Merged · 6 commits · May 12, 2019
Conversation

@tshauck (Contributor) commented Apr 26, 2019

Hi -- this PR adds the container_configuration block to the batch pool resource.

This setting is necessary for the pool to support containerized workloads. There are more settings not included in this PR, e.g. setting a default registry. I'd like to add those at some point, but this seemed like a good intermediate step since this is my first PR, and once Docker is enabled the tasks can still set the registry and other options.

@katbyte (Collaborator) left a comment

Hi @tshauck,

Thank you for the PR. I've left some comments inline that should be addressed before we merge this.

```go
// ExpandBatchPoolContainerConfiguration expands the Batch pool container configuration
func ExpandBatchPoolContainerConfiguration(list []interface{}) (*batch.ContainerConfiguration, error) {
	if len(list) == 0 {
		return nil, fmt.Errorf("Error: container configuration should be defined")
```
@katbyte (Collaborator) commented:
We should just return nil here? I don't think this is an actual error.
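A minimal, self-contained sketch of the suggested behavior — an omitted optional block expands to `nil` rather than an error. The type and function names here are stand-ins, not the provider's actual code:

```go
package main

import "fmt"

// ContainerConfiguration is a stand-in for the Azure SDK's
// batch.ContainerConfiguration type (hypothetical, for illustration).
type ContainerConfiguration struct {
	Type *string
}

// expandContainerConfiguration sketches the reviewer's point: an empty or
// omitted optional block is not an error, so return (nil, nil) instead of
// fmt.Errorf when the list is empty.
func expandContainerConfiguration(list []interface{}) (*ContainerConfiguration, error) {
	if len(list) == 0 {
		return nil, nil // optional block: absence is fine
	}
	raw, ok := list[0].(map[string]interface{})
	if !ok {
		return nil, fmt.Errorf("unexpected container_configuration shape")
	}
	t, _ := raw["type"].(string)
	return &ContainerConfiguration{Type: &t}, nil
}

func main() {
	cfg, err := expandContainerConfiguration(nil)
	fmt.Println(cfg == nil, err == nil) // true true
	cfg, err = expandContainerConfiguration([]interface{}{map[string]interface{}{"type": "DockerCompatible"}})
	if err == nil {
		fmt.Println(*cfg.Type) // DockerCompatible
	}
}
```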

(Comment on website/docs/d/batch_pool.html.markdown, resolved)
```go
@@ -567,6 +588,15 @@ func resourceArmBatchPoolRead(d *schema.ResourceData, meta interface{}) error {
		d.Set("node_agent_sku_id", props.DeploymentConfiguration.VirtualMachineConfiguration.NodeAgentSkuID)
	}

	if props.DeploymentConfiguration != nil &&
```
@katbyte (Collaborator) commented:
Could we change this to:

```go
if dcfg := props.DeploymentConfiguration; dcfg != nil {
	if vmcfg := dcfg.VirtualMachineConfiguration; vmcfg != nil {
		d.Set("container_configuration", azure.FlattenBatchPoolContainerConfiguration(vmcfg.ContainerConfiguration)) // this function should check if it's nil and return nothing if it is
	}
}
```
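The nested-guard style suggested here can be sketched standalone; the struct names below are simplified stand-ins for the SDK types, not the provider's actual definitions:

```go
package main

import "fmt"

// Simplified stand-ins for the SDK's nested optional types (hypothetical).
type ContainerConfiguration struct{ Type string }
type VirtualMachineConfiguration struct {
	ContainerConfiguration *ContainerConfiguration
}
type DeploymentConfiguration struct {
	VirtualMachineConfiguration *VirtualMachineConfiguration
}
type PoolProperties struct {
	DeploymentConfiguration *DeploymentConfiguration
}

// readContainerType walks the optional chain using if-with-short-assignment
// guards, mirroring the reviewer's suggested shape: each level is bound to a
// short name and nil-checked before the next dereference, so a missing level
// never panics.
func readContainerType(props PoolProperties) string {
	if dcfg := props.DeploymentConfiguration; dcfg != nil {
		if vmcfg := dcfg.VirtualMachineConfiguration; vmcfg != nil {
			if ccfg := vmcfg.ContainerConfiguration; ccfg != nil {
				return ccfg.Type
			}
		}
	}
	return ""
}

func main() {
	// Fully nil chain: safe, yields the zero value.
	fmt.Println(readContainerType(PoolProperties{}) == "") // true
	full := PoolProperties{
		DeploymentConfiguration: &DeploymentConfiguration{
			VirtualMachineConfiguration: &VirtualMachineConfiguration{
				ContainerConfiguration: &ContainerConfiguration{Type: "DockerCompatible"},
			},
		},
	}
	fmt.Println(readContainerType(full)) // DockerCompatible
}
```

The short-assignment form also keeps `dcfg`/`vmcfg` scoped to their `if` blocks, which is why it reads better than repeating the full `props.DeploymentConfiguration.VirtualMachineConfiguration...` chain.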

```go
@@ -274,6 +288,17 @@ func dataSourceArmBatchPoolRead(d *schema.ResourceData, meta interface{}) error
		}
	}

	if props.DeploymentConfiguration != nil &&
```
@katbyte (Collaborator) commented:
Same comment as below in the resource: we should nest a bunch of assignments and ifs.

@tshauck (Contributor, author) commented:
Cool, thanks for the comment, looks much better. I made the update and slightly refactored the later code, since it also uses dcfg.VirtualMachineConfiguration.

(Comment on azurerm/data_source_batch_pool.go, resolved)
@tshauck (Contributor, author) commented May 9, 2019

@katbyte Thanks for the review! I tried to address the comments; let me know what you think.

ghost removed the waiting-response label May 9, 2019
@katbyte (Collaborator) left a comment

Thanks for the updates @tshauck,

I've left a couple of minor comments inline, but overall this is looking good!

```go
	result["type"] = *armContainerConfiguration.Type
}

return result
```
@katbyte (Collaborator) commented:

I think this needs to be:

```diff
-	return result
+	return []interface{}{result}
```

to fix:

```
------- Stdout: -------
=== RUN   TestAccDataSourceAzureRMBatchPool_complete
=== PAUSE TestAccDataSourceAzureRMBatchPool_complete
=== CONT  TestAccDataSourceAzureRMBatchPool_complete

------- Stderr: -------
panic: container_configuration: '': source data must be an array or slice, got map
```

```markdown
@@ -128,6 +132,8 @@ The following arguments are supported:

* `certificate` - (Optional) One or more `certificate` blocks that describe the certificates to be installed on each compute node in the pool.

* `container_configuration` - (Optional) The container configuration used in the pool's VMs.
```
@katbyte (Collaborator) commented:

Looks like you're missing the `type` attribute description here.

@tshauck (Contributor, author) commented May 12, 2019

@katbyte Thanks for the comments! I think I've made the changes in the last two commits: https://github.com/terraform-providers/terraform-provider-azurerm/compare/75879f6..f925fdc. I rebased off master, as I was getting some errors that I think were resolved by later commits.

@katbyte (Collaborator) left a comment

Thanks for the updates @tshauck, LGTM now 🚀

katbyte merged commit c7547ba into hashicorp:master May 12, 2019
katbyte added a commit that referenced this pull request May 12, 2019
tshauck deleted the feature/batch-container-settings branch May 14, 2019
ghost commented May 17, 2019

This has been released in version 1.28.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

```hcl
provider "azurerm" {
  version = "~> 1.28.0"
}

# ... other configuration ...
```

ghost commented Jun 12, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators Jun 12, 2019