
Re-architecture of gluster cookbook to support multi-volume per peer, LVM, size declaration. Removal of broken code. Working kitchen testing. #47

Merged: 14 commits, Feb 3, 2016

Conversation

@andyrepton (Collaborator)

Hey @shortdudey123,

There's a lot changed here, so I'll run through it. Rather than declaring bricks at the peer level, bricks are now declared at the volume level, which makes more sense and allows multiple volumes per peer, as well as different volume types per peer. In addition, I've added the lvm cookbook as a dependency, and the cookbook now requires an additional, blank block device to be attached to the servers at boot time (I use Terraform to deploy, so this is relatively simple with cloud-based deployment methods).

By using this method, you can define a size per volume, which can then be extended at a later date: the cookbook auto-resizes the LV when the size is increased, without any downtime. This does mean that the old way of using a whole disk is no longer available, which will prevent people from simply upgrading to this cookbook. To stress: people will probably need to rebuild their Gluster nodes to upgrade to this method, as it's an architectural rewrite.

I've added a commented-out example of the attributes to the attributes file (a rough sketch of the layout is below). I've also bumped the default version from 3.4 to 3.6, as 3.4 breaks on the kitchen setup.
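For illustration, here's a minimal sketch of the new volume-level layout, assuming the attribute names documented in the README; the volume name, peer FQDNs, and values are placeholders rather than the cookbook's actual example:

```ruby
# Hypothetical example of the volume-level attributes; 'gv0' and the peer
# FQDNs are placeholders, not taken from the cookbook's attributes file.
default['gluster']['server']['volumes']['gv0']['peers'] = %w(gluster1.example.com gluster2.example.com)
default['gluster']['server']['volumes']['gv0']['replica_count'] = 2
default['gluster']['server']['volumes']['gv0']['volume_type'] = 'replicated'
default['gluster']['server']['volumes']['gv0']['size'] = '5G'
```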

Some problems still need to be addressed, but I'm working through them:
- Ensuring that the 'master' node doesn't attempt to add a new volume until all of the other nodes have created the LV first: this is a common issue with Chef, and we'd need some sort of locking mechanism to get around it. I'm currently working around it with a service discovery tool, but it would be nice to handle it in the cookbook too (see the sketch after this list).
- You can only create one brick per volume per peer, and it would be nice to support multiple bricks per peer, especially for striped volumes, which gain performance from that layout. This could be solved by changing the server_setup section to use a 'brick name' rather than the volume name for the LV, but I haven't had time to dig into it properly.
- The size is per LV, so for a distributed (or distributed-striped/replicated, etc.) volume the reported volume size will be a multiple of this value; for example, a distributed volume over three peers with a 10G LV each reports 30G. Not sure if we want to add logic that detects that and adjusts the size accordingly.
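As a rough illustration of the kind of gate I mean for the first item (purely hypothetical, not code from this PR): the 'master' could search for peers that have flagged their LV as created and only proceed once all of them report in. The `gluster_lv_created` flag and the volume name below are made up for the example.

```ruby
# Hypothetical locking sketch: assumes each peer sets
# node.normal['gluster']['lv_created'] = true after its LV converges.
volume = 'gv0' # placeholder volume name
peers = node['gluster']['server']['volumes'][volume]['peers']
ready = search(:node, 'gluster_lv_created:true').map { |n| n['fqdn'] }
missing = peers - ready
if missing.empty?
  Chef::Log.info("All peers ready; safe to create volume #{volume}")
  # `gluster volume create ...` would run here
else
  Chef::Log.warn("Deferring volume create; waiting on #{missing.join(', ')}")
end
```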

I have written some monitoring recipes too, which use public/private repos to install Nagios NRPE checks. I'll try to open a PR for them soon.

Finally, I have a request: would you be willing to consider adding me as a collaborator on this project? I've rewritten a large part of it and am actively invested in using it in the future. Being able to discuss potential changes and test them with you before merging is of great interest to me.

Thanks!

Andy

P.S. I suspect the Rake tests will fail; I'll fix them in a minute.

@shortdudey123 (Owner)

Thanks for the big PR!!! I will review / test it over the next couple of days :)

```diff
@@ -18,5 +18,5 @@
 # limitations under the License.
 #

-default['gluster']['version'] = '3.4'
+default['gluster']['version'] = '3.6'
```
@shortdudey123 (Owner)

3.7.6 is the newest version. Should we just bump it all the way up since this is a major rewrite anyway?

@andyrepton (Collaborator Author)

Sounds sensible to me. I didn't realise 3.7 was out of beta... it's been a while since I looked!

@andyrepton (Collaborator Author)

I'm leaving it at 3.6; 3.7 breaks on CentOS 7.1:

```
    (up to date)

================================================================================
Error executing action `install` on resource 'yum_package[glusterfs-server]'
================================================================================

Chef::Exceptions::Exec
----------------------
yum -d0 -e0 -y install glusterfs-server-3.7.6-1.el7 returned 1:
STDOUT:  You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

STDERR: Error: Package: glusterfs-server-3.7.6-1.el7.x86_64 (glusterfs)
    Requires: liburcu-cds.so.1()(64bit)
Error: Package: glusterfs-server-3.7.6-1.el7.x86_64 (glusterfs)
    Requires: liburcu-bp.so.1()(64bit)
```

@shortdudey123 (Owner)

Looks like these are both provided by the userspace-rcu package in EPEL

http://www.rpmfind.net/linux/RPM/epel/7/x86_64/u/userspace-rcu-0.7.9-1.el7.x86_64.html
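(If we did move to 3.7, a minimal workaround sketch might look like the following; this is not part of this PR, and it assumes EPEL is already enabled on the node:)

```ruby
# Hypothetical workaround, not in this PR: install userspace-rcu from EPEL
# before glusterfs-server 3.7 so the liburcu-* dependencies resolve.
package 'userspace-rcu'
```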

@andyrepton (Collaborator Author)

Either way, this can be a new pull request if we would like to push to 3.7; I don't want to get held back testing 3.7 when it's not the focus of this PR.

@shortdudey123 (Owner)

fair enough

@shortdudey123 (Owner)

> would you be willing to consider adding me as a collaborator to this project?

Definitely! Once this PR is merged I will add you.

```diff
@@ -2,6 +2,7 @@
 # Cookbook Name:: gluster
```
@shortdudey123 (Owner)

Can you add this header to the other recipe files as well?

@andyrepton (Collaborator Author)

Great, thanks! And yes, will do.

@shortdudey123 (Owner)

I will test this more over the weekend and make any additional comments. Looks good so far (besides the small comments mentioned above).

@shortdudey123 added this to the v5.0.0 milestone on Nov 14, 2015
@andyrepton (Collaborator Author)

@shortdudey123 Great, thanks for the comments. I'm currently on vacation in St Louis, so I'll take a look on Monday or Tuesday to address them :)

@shortdudey123 (Owner)

@Seth-Karlo sounds good, thanks!

@andyrepton (Collaborator Author)

I have tested using this one-liner:

```sh
for OS in centos-65 centos-71 ubuntu-1204 ubuntu-1404; do kitchen converge replicated\[12]-$OS ; kitchen destroy; done
```

All passed successfully.

@shortdudey123 (Owner)

Will do some additional testing hopefully before the weekend

- `node['gluster']['server']['volumes'][VOLUME_NAME]['peer_names']` - an optional array of Chef node names for peers used in the volume
- `node['gluster']['server']['volumes'][VOLUME_NAME]['peers']` - an array of FQDNs for peers used in the volume
- `node['gluster']['server']['volumes'][VOLUME_NAME]['quota']` - an optional disk quota to set for the volume, such as '10GB'
- `node['gluster']['server']['volumes'][VOLUME_NAME]['replica_count']` - the number of replicas to create
- `node['gluster']['server']['volumes'][VOLUME_NAME]['volume_type']` - the volume type to use; this value can be 'replicated', 'distributed-replicated', 'distributed', 'striped' or 'distributed-striped'
- `node['gluster']['server']['volumes'][VOLUME_NAME]['size']` - the size of the Gluster volume you would like to create, for example '100M' or '5G'; this is passed through to the lvm cookbook
@shortdudey123 (Owner)

You have this in the readme twice :)

@andyrepton (Collaborator Author)

The first occurrence is preceded by 'The absolute minimum configuration is'.

@shortdudey123 (Owner)

👍

@andyrepton (Collaborator Author)

Fixed the server_extend recipe and tested here :) Please let me know how your tests go!

@shortdudey123 (Owner)

The lvm cookbook installs its gems in the execution phase, but you do the `require` in the compilation phase.
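(For context, a generic sketch of the compile-time pattern being discussed; this is not the lvm cookbook's actual code, and the gem name is an assumption based on what that cookbook uses:)

```ruby
# Generic Chef pattern, not code from this PR: install the gem during the
# compile phase so a top-level `require` succeeds at compile time.
chef_gem 'di-ruby-lvm' do # gem name assumed from the lvm cookbook
  compile_time true
end
require 'lvm'
```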

@shortdudey123 (Owner)

@Seth-Karlo hope you enjoyed your holidays!

Any update on getting the lvm gem installed during compilation instead of execution?

@andyrepton (Collaborator Author)

No chance yet; I've moved on to other projects, so I need to find the time and space to get this working.

@shortdudey123 (Owner)

sounds good, thanks!

@shortdudey123 (Owner)

@Seth-Karlo I have a PR (sous-chefs/lvm#89) in to allow compile-time install, so that should take care of the issue above :) (then you would just need to set two attributes to true)
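(The exact attribute keys come from the lvm cookbook once that PR lands; the names below are placeholders I'm guessing at, not verified keys:)

```ruby
# Hypothetical placeholders: check the lvm cookbook's README after
# sous-chefs/lvm#89 for the real attribute names.
default['lvm']['di-ruby-lvm']['compile_time'] = true
default['lvm']['di-ruby-lvm-attrib']['compile_time'] = true
```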

@andyrepton (Collaborator Author)

Hey Grant!

Life is pretty hectic over here, but I will do my best to test it and confirm it from here this week! Thanks for picking it up.

Andy

@shortdudey123 (Owner)

No problem, I am working on a diff that would work :)

@shortdudey123 (Owner)

Added a diff above.
Test Kitchen will fail, since I guess the Gluster repo is in limbo.

From http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/EPEL.repo/epel-7.README:

> RPMs for RHEL and CentOS 6 and 7 are now provided by the CentOS Storage SIG.
>
> Install the SIG repo:
> `yum install centos-release-gluster36`
>
> Install Gluster. While the packages are in testing (about two weeks):
> `yum --enablerepo=centos-gluster36-test install glusterfs-server`
>
> After the packages emerge from testing:
> `yum install glusterfs-server`
>
> For more information please see https://wiki.centos.org/SpecialInterestGroup/Storage

@andyrepton (Collaborator Author)

I tested on the Ubuntu 14.04 box. Looks good from here when I add in your attributes! I've pushed the change up, could you test and confirm it looks good please?

@andyrepton (Collaborator Author)

Hmm, the checks pass for me locally. What version of Rubocop is being used? I'm using the latest ChefDK and GuardClause does not get triggered.

EDIT: Found it. Working on it now.

@shortdudey123 (Owner)

> could you test and confirm it looks good please?

Yup, I will test a couple of different things over the weekend.

> What version of Rubocop is being used?

From the Travis build log: `Using rubocop 0.36.0`

@shortdudey123 (Owner)

@Seth-Karlo Is replicated1-ubuntu-1404 under Test Kitchen working for you? I had to make the following change to get it to work (since the vbox is built with an IDE controller and not SATA).

```diff
diff --git a/test/integration/Vagrantfile.erb b/test/integration/Vagrantfile.erb
index 334c9bd..fc69e63 100644
--- a/test/integration/Vagrantfile.erb
+++ b/test/integration/Vagrantfile.erb
@@ -2,7 +2,7 @@ Vagrant.configure("2") do |c|
   c.vm.box = "<%= config[:box] %>"
   c.vm.box_url = "<%= config[:box_url] %>"

-<% if config[:box] == "opscode-centos-7.1" || config[:box] == "opscode-ubuntu-14.04"
+<% if config[:box] == "opscode-centos-7.1"
      disk_controller = "SATA"
    else
      disk_controller = "IDE"
```

@andyrepton (Collaborator Author)

@shortdudey123 The Ubuntu 1404 box I downloaded on Sunday has SATA, not IDE. Are you using the latest one or a cached one? It's the 1204 box that has an IDE controller, from what I can see.

@shortdudey123 (Owner)

Possibly? Let me delete it and grab it again.

```
$ grep lastStateChange box.ovf
    <vbox:Machine ovf:required="false" version="1.12-macosx" uuid="{32e05b89-6eaf-4516-b785-216ed7719981}" name="packer-ubuntu-14.04-amd64" OSType="Ubuntu_64" snapshotFolder="Snapshots" lastStateChange="2014-10-21T14:52:45Z">
$
```

@andyrepton (Collaborator Author)

Yep, mine is version 1.15:

```
mikasa:chef-gluster andy$ grep lastStateChange ~/.vagrant.d/boxes/opscode-ubuntu-14.04/0/virtualbox/box.ovf
    <vbox:Machine ovf:required="false" version="1.15-macosx" uuid="{4986e285-44b5-4bc5-ac4a-5e70a9184ad8}" name="ubuntu-14.04-amd64" OSType="Ubuntu_64" snapshotFolder="Snapshots" lastStateChange="2015-12-23T23:52:18Z">
```

@shortdudey123 (Owner)

Worked :) Yay for old stuff, haha.

Still had to make two changes to get CentOS to work, though... I'm removing my CentOS 7 box to see if the second change is needed. The first one definitely is, since the repo has moved.

```diff
diff --git a/recipes/repository.rb b/recipes/repository.rb
index 0a7ed75..111bd4b 100644
--- a/recipes/repository.rb
+++ b/recipes/repository.rb
@@ -47,8 +47,15 @@ when 'ubuntu'
     end
   end
 when 'redhat', 'centos'
+  # CentOS 6 and 7 have Gluster in the Storage SIG instead of a gluster hosted repo
+  repo_url = if node['platform_version'].to_i > 5
+               "http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-#{node['gluster']['version']}/"
+             else
+               "http://download.gluster.org/pub/gluster/glusterfs/#{node['gluster']['version']}/LATEST/EPEL.repo/epel-$releasever/$basearch/"
+             end
+
   yum_repository 'glusterfs' do
-    url "http://download.gluster.org/pub/gluster/glusterfs/#{node['gluster']['version']}/LATEST/EPEL.repo/epel-$releasever/$basearch/"
+    url repo_url
     gpgcheck false
     action :create
   end
diff --git a/test/integration/Vagrantfile.erb b/test/integration/Vagrantfile.erb
index 334c9bd..cb6b94c 100644
--- a/test/integration/Vagrantfile.erb
+++ b/test/integration/Vagrantfile.erb
@@ -25,6 +25,11 @@ Vagrant.configure("2") do |c|
   c.vm.network "private_network", ip: "<%= config[:ipaddress] %>"
 <% end %>

+  c.vm.synced_folder ".", "/vagrant", disabled: true
+<% config[:synced_folders].each do |source, destination, options| %>
+  c.vm.synced_folder "<%= source %>", "<%= destination %>", <%= options %>
+<% end %>
+
   c.vm.provider :<%= config[:provider] %> do |p|
     <% if config[:additional_disk] && config[:provider] == 'virtualbox' %>
       <% disk_path = "/tmp/#{rand(36**50).to_s(36)}_disk2.vdi" %>
```

@andyrepton (Collaborator Author)

It's a bit hard for me to copy and paste that in. Could you add it to a branch or a separate PR so I can pull it in? Or would you prefer (as it's your code) to merge this and then immediately merge the CentOS fix on top?

@shortdudey123 (Owner)

Yup, I can put it in my master and then stuff should pass :)

@shortdudey123 (Owner)

Merged in #51, so kitchen tests will pass once this is merged.

Since this is such a large change, I want to test resizing and scaling. Should be able to get that done tonight or tomorrow. Then this thing can be merged in :)

Thank you very much for all the updates!

```diff
@@ -7,3 +7,4 @@
 version '4.0.2'
 depends 'apt', '>= 2.0'
 depends 'yum', '>= 3.0'
+depends 'lvm'
```
@shortdudey123 (Owner)

Needs a version constraint of '>= 1.5.1', since that's when the compile-time attributes were released.
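That is, the metadata line would become:

```ruby
depends 'lvm', '>= 1.5.1'
```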

shortdudey123 added a commit that referenced this pull request Feb 3, 2016
Re-architecture of gluster cookbook to support multi-volume per peer, LVM, size declaration. Removal of broken code. Working kitchen testing.
@shortdudey123 merged commit 13d87c4 into shortdudey123:master on Feb 3, 2016
@shortdudey123 (Owner)

@Seth-Karlo merged and added you as a collaborator on the repo :)

@andyrepton (Collaborator Author)

@shortdudey123 Cool! Thanks very much. Let me know if you'd like me to review any PRs :)
