
provider/openstack: Load Balancing Member Resource #4359

Merged · 1 commit · Jan 31, 2016

Conversation

jtopjian (Contributor)

This commit adds the openstack_lb_member_v1 resource. This resource models a
load balancing member which was previously coupled to the openstack_lb_pool_v1
resource.

By creating an actual member resource, load balancing members can now be
dynamically managed through Terraform.

Fixes #3197

@jtopjian jtopjian changed the title provider/openstack: Load Balancing Member Resource [wip] provider/openstack: Load Balancing Member Resource Dec 17, 2015
jtopjian (Contributor, Author)

This is a first draft. Everything works and all tests pass, but I'm not in a rush to merge it -- I'd first like to see if anyone can test this out as well as get some feedback.

I did not change the existing members block of the openstack_lb_pool_v1 resource. As it stands, it will continue to function as it did before. However, I've noted that it's considered deprecated and that this resource should be used instead. There's no harm in continuing to use the original members block -- it's just not terribly useful.
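For readers migrating, a minimal before/after sketch may help. Attribute names below follow the v1 LBaaS resources of that era; the subnet reference and member fields are illustrative, not copied from the provider docs, so treat them as assumptions.

# Deprecated style: members defined inline on the pool
resource "openstack_lb_pool_v1" "pool_1" {
  name      = "pool_1"
  protocol  = "TCP"
  lb_method = "ROUND_ROBIN"
  subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"

  member {
    address        = "10.0.0.10"
    port           = 80
    admin_state_up = "true"
  }
}

# New style: each member is its own resource, attached via pool_id
resource "openstack_lb_member_v1" "member_1" {
  pool_id        = "${openstack_lb_pool_v1.pool_1.id}"
  address        = "10.0.0.10"
  port           = 80
  admin_state_up = "true"
}

Because each member is now a first-class resource, members can be added, removed, or counted independently of the pool's lifecycle.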

KangyiZengVizio
You are a super genius! Thanks @jtopjian

justinclayton (Contributor)

@jtopjian we will probably be able to test this for you, especially if that gives you confidence to merge this change sooner :-).

jtopjian (Contributor, Author)

@justinclayton That'd be awesome -- thanks!

justinclayton (Contributor)

@jtopjian after some testing it looks like it's pretty much all working, with one nagging exception: I've found a case where adding members seems to destroy and recreate existing members needlessly.

If I define an openstack_lb_member_v1 resource with count, I'll often want to derive address from a set of openstack_compute_instance_v2 resources, like this:

variable num_instances { default = "3" }

resource "openstack_compute_instance_v2" "node_n" {
  count = "${var.num_instances}"
  ...
}

resource "openstack_lb_member_v1" "member_n" {
  count          = "${var.num_instances}"
  pool_id        = "${openstack_lb_pool_v1.pool_1.id}"
  address        = "${element(openstack_compute_instance_v2.node_n.*.network.0.fixed_ip_v4, count.index)}"
  port           = 22
  admin_state_up = "true"
}

This works great, as expected. However, if I bump my num_instances variable to 4, my plan gets a bit longer than anticipated:

+ openstack_compute_instance_v2.node_n.3
    ...

-/+ openstack_lb_member_v1.member_n.0
    address:        "172.16.14.143" => "${element(openstack_compute_instance_v2.node_n.*.network.0.fixed_ip_v4, count.index)}" (forces new resource)
    ...

-/+ openstack_lb_member_v1.member_n.1
    address:        "172.16.14.145" => "${element(openstack_compute_instance_v2.node_n.*.network.0.fixed_ip_v4, count.index)}" (forces new resource)
    ...

-/+ openstack_lb_member_v1.member_n.2
    address:        "172.16.14.144" => "${element(openstack_compute_instance_v2.node_n.*.network.0.fixed_ip_v4, count.index)}" (forces new resource)
    ...

+ openstack_lb_member_v1.member_n.3
    address:        "" => "${element(openstack_compute_instance_v2.node_n.*.network.0.fixed_ip_v4, count.index)}"
    ...

And sure enough, terraform apply produces this:

openstack_lb_member_v1.member_n.0: Destroying...
openstack_lb_member_v1.member_n.1: Destroying...
openstack_lb_member_v1.member_n.2: Destroying...
openstack_lb_member_v1.member_n.0: Destruction complete
openstack_lb_member_v1.member_n.2: Destruction complete
openstack_lb_member_v1.member_n.1: Destruction complete
openstack_compute_instance_v2.node_n.3: Creating...
openstack_lb_member_v1.member_n.0: Creating...
  address:        "" => "172.16.14.143"
  ...
openstack_lb_member_v1.member_n.1: Creating...
  address:        "" => "172.16.14.145"
  ...
openstack_lb_member_v1.member_n.2: Creating...
  address:        "" => "172.16.14.144"
  ...
openstack_lb_member_v1.member_n.3: Creating...
  address:        "" => "172.16.14.146"
  ...
openstack_lb_member_v1.member_n.3: Creation complete
openstack_lb_member_v1.member_n.2: Creation complete
openstack_lb_member_v1.member_n.1: Creation complete
openstack_lb_member_v1.member_n.0: Creation complete

I'm going to dig into the code a bit to see why this might be the case. I fear my old foe non-determinism is somehow to blame. Let me know if you have any ideas in the meantime.

PS - Oddly enough, reducing the count is still handled properly.

jtopjian (Contributor, Author)

@justinclayton Thanks for trying this out! I think you might be running into the bug described in #3885. If it's the same bug, I'll probably wait until it's resolved before merging this feature in, since I expect element to be used quite frequently with this resource.

justinclayton (Contributor)

@jtopjian you're correct, it looks like #3885 is exactly the problem we're having. Seems like element is the real culprit there. Is this something being actively investigated? Looks like there hasn't been movement in almost a month there.

justinclayton (Contributor)

@jtopjian Since it looks like the issue I'm having is in core and not related to any code in this PR, I'm going to say the rest of it works as advertised and we should go ahead and :shipit: .

jtopjian (Contributor, Author) commented Jan 6, 2016

I'm on the fence about it... I have a feeling that element or some type of interpolation is going to be used 9/10 times with this resource. I'd hate to merge it only to have a wave of bug reports upon the next release.

@phinze thoughts?

@jtopjian jtopjian removed the wip label Jan 31, 2016
@jtopjian jtopjian changed the title [wip] provider/openstack: Load Balancing Member Resource provider/openstack: Load Balancing Member Resource Jan 31, 2016
jtopjian (Contributor, Author)

OK, I'm going to go ahead and pull the trigger on this one. Core is definitely aware of the bug, but since it may take some time to clean up, I'd like to move forward with this resource.

jtopjian added a commit that referenced this pull request Jan 31, 2016
provider/openstack: Load Balancing Member Resource
@jtopjian jtopjian merged commit baeaee0 into hashicorp:master Jan 31, 2016
tgagor commented Aug 17, 2016

The count argument on openstack_lb_member_v1 is not documented :-)
@justinclayton thanks for the nice example.

jtopjian (Contributor, Author)

@tgagor count is core to Terraform :)

https://www.terraform.io/docs/configuration/interpolation.html

tgagor commented Aug 17, 2016

Thanks, I was quite sure it was previously documented directly in the resource description.

@jtopjian jtopjian deleted the jtopjian-openstack-lb-member branch March 23, 2017 16:07
ghost commented Apr 15, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 15, 2020
Successfully merging this pull request may close these issues.

openstack_lb_pool_v1, multiple members
5 participants