Implement 'voxpupuli/puppet-mongodb' clusters #10

Closed

jeff1evesque opened this issue Oct 19, 2018 · 14 comments

jeff1evesque commented Oct 19, 2018

We need to implement the puppet-mongodb module to define the configurations necessary for a mongodb cluster.

@jeff1evesque

We can write our own custom manifests, or a series of hiera yaml files using automatic parameter lookup.
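As a minimal sketch of the hiera approach (the data file path, key names, and address are illustrative; the keys follow the module's class::parameter naming so automatic parameter lookup can resolve them):

  # Hypothetical node-level hiera data, e.g. hieradata/nodes/repl1-mongod1.yaml:
  #   mongodb::server::bind_ip:
  #     - '10.0.0.11'
  #   mongodb::server::replset: 'rs1'
  #
  # With automatic parameter lookup, a bare include resolves those keys
  # against the matching class parameters:
  include mongodb::server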

@jeff1evesque

For simplicity of writing code, we'll use the manifests approach, and provide an option to override values via yaml. This will allow us to write a dynamic wrapper around the puppet-mongodb implementation. Specifically, the yaml will be a list of hashes, which can be iterated within the corresponding manifests; otherwise, duplicate directives would have to be defined in the yaml to accomplish a similar result.
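As a rough sketch of that iteration (the lookup key and hash fields are assumptions for illustration):

  # Hypothetical yaml, a list of hashes describing each shard:
  #   mongodb_cluster::shards:
  #     - name: 'rs1'
  #       member: 'rs1/repl1-mongod1:27018'
  #     - name: 'rs2'
  #       member: 'rs2/repl2-mongod1:27018'
  $shards = lookup('mongodb_cluster::shards', Array[Hash], 'first', [])
  $shards.each |Hash $shard| {
    mongodb_shard { $shard['name']:
      member => $shard['member'],
    }
  }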

@jeff1evesque

Actually, we'll use this issue to define the necessary yaml, and refactor an existing module.

@jeff1evesque

Using a dynamic manifest that iterates over a yaml means the enforcement would be constrained to a particular virtual machine. Therefore, we'll revert to a more verbose implementation with separate yaml configurations, so that each virtual machine is allocated its own configuration.

jeff1evesque commented Oct 20, 2018

To test these configurations, we can use VirtualBox to deploy a puppetserver, along with a docker-compose set of containers. Specifically, each container will have a corresponding hostname defined in mongodb_cluster.yaml. This means we'll need to test and tweak the mongodb_cluster repository.

jeff1evesque commented Oct 21, 2018

3fba67d: should create two replicated mongodb shards. I followed the example from the puppet contrib module, but extended it by defining two shards (instead of one) and increasing their replication from two to three instances per shard:

  -> mongodb_shard { 'rs1' :
    member => 'rs1/repl1-mongod1:27018',
    keys   => [{
      'rs1.foo' => {
        'name' => 1,
      }
    }],
  }
  -> mongodb_shard { 'rs2' :
    member => 'rs2/repl2-mongod1:27018',
    keys   => [{
      'rs1.foo' => {
        'name' => 1,
      }
    }],
  }
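
For reference, each replica member behind those shards is its own node definition along these lines (a sketch, assuming the module's shardsvr and replset parameters; hostnames match the placeholders above):

  node 'repl1-mongod1' {
    class { 'mongodb::globals':
      manage_package_repo => true,
    }
    -> class { 'mongodb::server':
      shardsvr => true,
      replset  => 'rs1',
      bind_ip  => [$::ipaddress],
    }
    # declare the replset on one member, once all members are reachable
    -> mongodb_replset { 'rs1':
      members => ['repl1-mongod1:27018', 'repl1-mongod2:27018', 'repl1-mongod3:27018'],
    }
  }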

My previous attempt at creating a wrapper around the official mongodb module is, at minimum, redundant. I think the above configuration more than suffices for deploying a replicated + sharded configuration. However, my personal machine is rather weak; I don't think I have enough memory to test this configuration. I already have a puppetserver deployed on the cloud. I could test 3-4 instances locally on virtualbox, then maybe deploy a temporary t2.large, since I need an extra 4+ vms to complete this setup. There's a nice docker image with a puppet agent baked in. We could deploy some of these containers and run puppet agent -t to sync them to the puppetserver, so the above configurations can succeed.

jeff1evesque commented Oct 23, 2018

The install_puppet_agent_centos script successfully connected to a puppetserver. The consolidated install_puppet_agent should likewise succeed with ubuntu in the mix. We can now relocate the puppet code in this repository to the puppetserver. After adjusting each node name in the site.pp to reflect the target vm, we can enforce the installation of mongodb on each successive agent vm.
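
For instance (a sketch; the node name and class are placeholders, and the node name must match the certname the agent presents):

  # site.pp on the puppetserver:
  node 'repl1-mongod1' {
    include mongodb_cluster   # hypothetical wrapper/profile class from this repo
  }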

@jeff1evesque

The tail end of our puppet agent trace indicates a configuration problem with the site.pp:

...
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Class[Mongodb::Server]: parameter 'bind_ip' expects an Array value, got String (file: /etc/puppetlabs/code/environments/production/manifests/site.pp, line: 5, column: 6) on node xxx-xxx-xxx-xxx
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
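
The fix is to pass bind_ip as an array rather than a bare string (address illustrative):

  class { 'mongodb::server':
    # wrong: bind_ip => '10.0.0.11'
    bind_ip => ['10.0.0.11'],
  }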

@jeff1evesque

5393c0a: the installation mostly succeeds. However, since the other vms in the defined replica-shard are not available (they haven't been provisioned), the puppet agent fails to configure the shard:

[root@xxx-xxx-xxx-xxx jeff1evesque]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for xxx-xxx-xxx-xxx
Info: Applying configuration version '1540345348'
Notice: /Stage[main]/Mongodb::Repo::Yum/Yumrepo[mongodb]/ensure: created
Info: Yumrepo[mongodb](provider=inifile): changing mode of /etc/yum.repos.d/mongodb.repo from 600 to 644
Notice: /Stage[main]/Mongodb::Server::Install/Package[mongodb_server]/ensure: created
Notice: /Stage[main]/Mongodb::Server::Config/File[/etc/mongod.conf]/content:
--- /etc/mongod.conf    2016-03-23 18:03:39.000000000 +0000
+++ /tmp/puppet-file20181024-1567-lskmf9        2018-10-24 01:42:33.398057319 +0000
@@ -1,76 +1,22 @@
-# mongod.conf
+# mongodb.conf - generated from Puppet
+

 #where to log
 logpath=/var/log/mongodb/mongod.log
-
 logappend=true
-
+# Set this option to configure the mongod or mongos process to bind to and
+# listen for connections from applications on this address.
+# You may concatenate a list of comma separated values to bind mongod to multiple IP addresses.
+bind_ip = xxx-xxx-xxx-xxx
 # fork and run in background
 fork=true
-
-#port=27017
-
-dbpath=/var/lib/mongo
-
+dbpath=/var/lib/mongodb
 # location of pidfile
 pidfilepath=/var/run/mongodb/mongod.pid
-
-# Listen to local interface only. Comment out to listen on all interfaces.
-bind_ip=127.0.0.1
-
-# Disables write-ahead journaling
-# nojournal=true
-
-# Enables periodic logging of CPU utilization and I/O wait
-#cpu=true
-
+# Enables journaling
+journal = true
 # Turn on/off security.  Off is currently the default
-#noauth=true
-#auth=true
-
-# Verbose logging output.
-#verbose=true
-
-# Inspect all client data for validity on receipt (useful for
-# developing drivers)
-#objcheck=true
-
-# Enable db quota management
-#quota=true
-
-# Set oplogging level where n is
-#   0=off (default)
-#   1=W
-#   2=R
-#   3=both
-#   7=W+some reads
-#diaglog=0
-
-# Ignore query hints
-#nohints=true
-
-# Enable the HTTP interface (Defaults to port 28017).
-#httpinterface=true
-
-# Turns off server-side scripting.  This will result in greatly limited
-# functionality
-#noscripting=true
-
-# Turns off table scans.  Any query that would do a table scan fails.
-#notablescan=true
-
-# Disable data file preallocation.
-#noprealloc=true
-
-# Specify .ns file size for new databases.
-# nssize=<size>
-
-# Replication Options
+noauth=true
+# Is the mongod instance a configuration server
+configsvr = true

-# in replicated mongo databases, specify the replica set name here
-#replSet=setname
-# maximum size in megabytes for replication operation log
-#oplogSize=1024
-# path to a key file storing authentication info for connections
-# between replica set members
-#keyFile=/path/to/keyfile

Info: Computing checksum on file /etc/mongod.conf
Info: /Stage[main]/Mongodb::Server::Config/File[/etc/mongod.conf]: Filebucketed /etc/mongod.conf to puppet with sum 0aa1300d8c64318b1a7683cb3fee646e
Notice: /Stage[main]/Mongodb::Server::Config/File[/etc/mongod.conf]/content: content changed '{md5}0aa1300d8c64318b1a7683cb3fee646e' to '{md5}ec95d53b0f8864927ed3da873b10ca59'
Notice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb]/ensure: created
Notice: /Stage[main]/Mongodb::Server::Config/File[/var/run/mongodb/mongod.pid]/ensure: created
Info: Class[Mongodb::Server::Config]: Scheduling refresh of Class[Mongodb::Server::Service]
Info: Class[Mongodb::Server::Service]: Scheduling refresh of Service[mongodb]
Notice: /Stage[main]/Mongodb::Server::Service/Service[mongodb]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Mongodb::Server::Service/Service[mongodb]: Unscheduling refresh on Service[mongodb]
Notice: /Stage[main]/Mongodb::Client/Package[mongodb_client]/ensure: created
Notice: /Stage[main]/Mongodb::Mongos::Install/Package[mongodb_mongos]/ensure: created
Notice: /Stage[main]/Mongodb::Mongos::Config/File[/etc/mongodb-shard.conf]/ensure: defined content as '{md5}0409ad279f6fbc1982e743e2ee711f8d'
Info: Class[Mongodb::Mongos::Config]: Scheduling refresh of Class[Mongodb::Mongos::Service]
Info: Class[Mongodb::Mongos::Service]: Scheduling refresh of Service[mongos]
Notice: /Stage[main]/Mongodb::Mongos::Service/File[/etc/sysconfig/mongos]/ensure: defined content as '{md5}0b2a6f509c2021713c4d32d0af3e8750'
Notice: /Stage[main]/Mongodb::Mongos::Service/File[/etc/init.d/mongos]/ensure: defined content as '{md5}9d69735d7aa8e87f31b0e17c91670b70'
Notice: /Stage[main]/Mongodb::Mongos::Service/Service[mongos]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Mongodb::Mongos::Service/Service[mongos]: Unscheduling refresh on Service[mongos]
Notice: /Stage[main]/Main/Node[xxx-xxx-xxx-xxx]/Mongodb_shard[rs1]/ensure: created
Error: /Stage[main]/Main/Node[xxx-xxx-xxx-xxx]/Mongodb_shard[rs1]: Could not evaluate: sh.addShard() failed for shard rs1: couldn't connect to new shard socket exception [CONNECT_ERROR] for rs1/repl1-mongod1:27018
Notice: /Stage[main]/Main/Node[xxx-xxx-xxx-xxx]/Mongodb_shard[rs2]: Dependency Mongodb_shard[rs1] has failures: true
Warning: /Stage[main]/Main/Node[xxx-xxx-xxx-xxx]/Mongodb_shard[rs2]: Skipping because of failed dependencies
Info: Stage[main]: Unscheduling all events on Stage[main]
Notice: Applied catalog in 12.46 seconds

@jeff1evesque

I spun up 3 mongodb instances, each with 1GB memory. However, I need to apply the refactored install_puppet_agent, which reboots the vm at the end. Also, it seems I've reached the limit of my elastic ips, which is somewhat cumbersome, since the ip refreshes each time the vm reboots. Additionally, the site.pp requires each node name to correspond to the vm's hostname. So, I'll likely continue the test tomorrow.

@jeff1evesque

I lied; I wanted to test the refactored install_puppet_agent. Some minor fixes were needed, and we hit a similar problem to earlier: bind_ip needs to be an array, not a string:

...
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Class[Mongodb::Server]: parameter 'bind_ip' expects an Array value, got String (file: /etc/puppetlabs/code/environments/production/manifests/site.pp, line: 35, column: 6) on node xxx-xxx-xxx-xxx
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

@jeff1evesque

a92e52c: after fixing bind_ip in the site.pp for our mongodb instances, we reach results similar to earlier. Specifically, mongodb installed, but since the other replica nodes are not available, creating the defined replica-shard fails:

[root@xxx-xxx-xxx-xxx1 jeff1evesque]# /opt/puppetlabs/bin/puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for xxx-xxx-xxx-xxx1
Info: Applying configuration version '1540348388'
Notice: /Stage[main]/Mongodb::Repo::Yum/Yumrepo[mongodb]/ensure: created
Info: Yumrepo[mongodb](provider=inifile): changing mode of /etc/yum.repos.d/mongodb.repo from 600 to 644
Notice: /Stage[main]/Mongodb::Server::Install/Package[mongodb_server]/ensure: created
Notice: /Stage[main]/Mongodb::Server::Config/File[/etc/mongod.conf]/content:
--- /etc/mongod.conf    2016-03-23 18:03:39.000000000 +0000
+++ /tmp/puppet-file20181024-6520-1lg5hz8       2018-10-24 02:33:11.770260256 +0000
@@ -1,76 +1,24 @@
-# mongod.conf
+# mongodb.conf - generated from Puppet
+

 #where to log
 logpath=/var/log/mongodb/mongod.log
-
 logappend=true
-
+# Set this option to configure the mongod or mongos process to bind to and
+# listen for connections from applications on this address.
+# You may concatenate a list of comma separated values to bind mongod to multiple IP addresses.
+bind_ip = xxx-xxx-xxx-xxx
 # fork and run in background
 fork=true
-
-#port=27017
-
-dbpath=/var/lib/mongo
-
+dbpath=/var/lib/mongodb
 # location of pidfile
 pidfilepath=/var/run/mongodb/mongod.pid
-
-# Listen to local interface only. Comment out to listen on all interfaces.
-bind_ip=127.0.0.1
-
-# Disables write-ahead journaling
-# nojournal=true
-
-# Enables periodic logging of CPU utilization and I/O wait
-#cpu=true
-
+# Enables journaling
+journal = true
 # Turn on/off security.  Off is currently the default
-#noauth=true
-#auth=true
-
-# Verbose logging output.
-#verbose=true
-
-# Inspect all client data for validity on receipt (useful for
-# developing drivers)
-#objcheck=true
-
-# Enable db quota management
-#quota=true
-
-# Set oplogging level where n is
-#   0=off (default)
-#   1=W
-#   2=R
-#   3=both
-#   7=W+some reads
-#diaglog=0
-
-# Ignore query hints
-#nohints=true
-
-# Enable the HTTP interface (Defaults to port 28017).
-#httpinterface=true
-
-# Turns off server-side scripting.  This will result in greatly limited
-# functionality
-#noscripting=true
-
-# Turns off table scans.  Any query that would do a table scan fails.
-#notablescan=true
-
-# Disable data file preallocation.
-#noprealloc=true
-
-# Specify .ns file size for new databases.
-# nssize=<size>
-
-# Replication Options
+noauth=true
+# Is the mongod instance a shard server
+shardsvr = true
+# Configure ReplicaSet membership
+replSet = rs1

-# in replicated mongo databases, specify the replica set name here
-#replSet=setname
-# maximum size in megabytes for replication operation log
-#oplogSize=1024
-# path to a key file storing authentication info for connections
-# between replica set members
-#keyFile=/path/to/keyfile

Info: Computing checksum on file /etc/mongod.conf
Info: /Stage[main]/Mongodb::Server::Config/File[/etc/mongod.conf]: Filebucketed /etc/mongod.conf to puppet with sum 0aa1300d8c64318b1a7683cb3fee646e
Notice: /Stage[main]/Mongodb::Server::Config/File[/etc/mongod.conf]/content: content changed '{md5}0aa1300d8c64318b1a7683cb3fee646e' to '{md5}c79134e998f851bbe857a920079b530a'
Notice: /Stage[main]/Mongodb::Server::Config/File[/var/lib/mongodb]/ensure: created
Notice: /Stage[main]/Mongodb::Server::Config/File[/var/run/mongodb/mongod.pid]/ensure: created
Info: Class[Mongodb::Server::Config]: Scheduling refresh of Class[Mongodb::Server::Service]
Info: Class[Mongodb::Server::Service]: Scheduling refresh of Service[mongodb]
Notice: /Stage[main]/Mongodb::Server::Service/Service[mongodb]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Mongodb::Server::Service/Service[mongodb]: Unscheduling refresh on Service[mongodb]
Notice: /Stage[main]/Mongodb::Client/Package[mongodb_client]/ensure: created
Notice: /Stage[main]/Main/Node[xxx-xxx-xxx-xxx1]/Mongodb_replset[rs1]/ensure: created
Warning: Can't connect to replicaset member repl1-mongod1:27018.
Warning: Can't connect to replicaset member repl1-mongod2:27018.
Warning: Can't connect to replicaset member repl1-mongod3:27018.
Error: /Stage[main]/Main/Node[xxx-xxx-xxx-xxx1]/Mongodb_replset[rs1]: Could not evaluate: Can't connect to any member of replicaset rs1.
Info: Stage[main]: Unscheduling all events on Stage[main]
Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
Notice: Applied catalog in 41.15 seconds

I will resume the tests tomorrow (I need to work on some hw). I just wanted to make sure I tested this case, since we opened an upstream issue and suggested a fix to their sharding.pp example.

@jeff1evesque

Each of our mongodb instances is failing to connect to the rs1 replica set:

[root@xxx-xxx-xxx-xxx jeff1evesque]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for xxx-xxx-xxx-xxx
Info: Applying configuration version '1540429088'
Notice: /Stage[main]/Main/Node[xxx-xxx-xxx-xxx]/Mongodb_replset[rs1]/ensure: created
Warning: Can't connect to replicaset member xxx-xxx-xxx-xxx:27018.
Error: /Stage[main]/Main/Node[xxx-xxx-xxx-xxx]/Mongodb_replset[rs1]: Could not evaluate: rs.initiate() failed for replicaset rs1: couldn't initiate : can't find self in the replset config my port: 27018
Notice: Applied catalog in 12.63 seconds
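
rs.initiate() reports "can't find self" when none of the addresses in the members list resolves to the local mongod on its bound port. A sketch of the shape that avoids it (hostnames hypothetical; each name must resolve, on its own host, to the address mongod binds to on 27018):

  mongodb_replset { 'rs1':
    members => [
      'repl1-mongod1:27018',  # each member must recognize its own entry
      'repl1-mongod2:27018',
      'repl1-mongod3:27018',
    ],
  }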

@jeff1evesque

I deployed a single replicated shard. The 4 vm instances (1 mongos, 3 mongodb) seem good:

[root@xxx-xxx-xxx-xxx jeff1evesque]# mongos --test
2018-10-25T03:42:09.844+0000 shardKeyTest passed
2018-10-25T03:42:09.845+0000 shardObjTest passed
[root@xxx-xxx-xxx-xxx jeff1evesque]#
[root@xxx-xxx-xxx-xxx jeff1evesque]#
[root@xxx-xxx-xxx-xxx jeff1evesque]#
[root@xxx-xxx-xxx-xxx jeff1evesque]#
[root@xxx-xxx-xxx-xxx jeff1evesque]# mongo
MongoDB shell version: 2.6.12
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "version" : 4,
        "minCompatibleVersion" : 4,
        "currentVersion" : 5,
        "clusterId" : ObjectId("5bd1148061d20651b9457a00")
}
  shards:
        {  "_id" : "rs1",  "host" : "rs1/xxx-xxx-xxx-xxx:27018,yyy-yyy-yyy-yyy:27018,zzz-zzz-zzz-zzz:27018" }
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test",  "partitioned" : false,  "primary" : "rs1" }
        {  "_id" : "rs1",  "partitioned" : true,  "primary" : "rs1" }
                rs1.foo
                        shard key: { "name" : 1 }
                        chunks:
                                rs1     1
                        { "name" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 } } on : rs1 Timestamp(1, 0)

mongos>

The only things worth noting:

  • node names in the site.pp have been changed on the puppetserver to reflect each vm's hostname
  • the bind_ip in the site.pp needed to be adjusted
    • further control can limit access by security group

I think this issue is complete. To implement 2+ shards, we would need to uncomment (and possibly add) additional mongodb_shard entries in our mongos node definition, as well as add more mongodb nodes:

node 'mongos-1' {
  class {'mongodb::globals':
    manage_package_repo => true,
  }
  -> class {'mongodb::server':
    configsvr => true,
    bind_ip   => [$::ipaddress],
  }
  -> class {'mongodb::client': }
  -> class {'mongodb::mongos':
    configdb => ["${::ipaddress}:27019"],
  }
  -> mongodb_shard { 'rs1' :
    member => 'rs1/repl1-mongod1:27018',
    keys   => [{
      'rs1.foo' => {
        'name' => 1,
      }
    }],
  }
##  -> mongodb_shard { 'rs2' :
##    member => 'rs2/repl2-mongod1:27018',
##    keys   => [{
##      'rs1.foo' => {
##        'name' => 1,
##      }
##    }],
##  }
}
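
Concretely, the second shard's members would mirror the rs1 node definitions (a sketch under the same assumptions as above; hostnames hypothetical):

  node 'repl2-mongod1' {
    class { 'mongodb::globals':
      manage_package_repo => true,
    }
    -> class { 'mongodb::server':
      shardsvr => true,
      replset  => 'rs2',
      bind_ip  => [$::ipaddress],
    }
    # declare the replset on one member, once all members are reachable
    -> mongodb_replset { 'rs2':
      members => ['repl2-mongod1:27018', 'repl2-mongod2:27018', 'repl2-mongod3:27018'],
    }
  }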

jeff1evesque added a commit that referenced this issue Oct 25, 2018