Adds Vagrant install #4

Open · wants to merge 1 commit into `master`
3 changes: 2 additions & 1 deletion .gitignore
@@ -5,4 +5,5 @@ bower_components
npm-debug.log
newrelic_agent.log
.DS_Store
*.swp
*.swp
.vagrant
40 changes: 40 additions & 0 deletions README.Vagrant.md
@@ -0,0 +1,40 @@
Vagrant Instructions
========================================================

A Vagrant configuration is included for local development of castlemind. To use
it, you will need:

* [VirtualBox](https://www.virtualbox.org/)
* [Ansible](https://ansible.com) >= 1.7
* [Vagrant](https://vagrantup.com) >= 1.6.5

### Installation

$ git clone git@github.com:nic-wolf/castlemind
$ cd castlemind
$ ansible-galaxy install rjzaworski.nodeapp
$ vagrant up

The castlemind application will now be running on a VM at `192.168.32.30`:

$ curl 192.168.32.30

### Development

The host machine's `castlemind` directory is synced to the Vagrant VM at
`/home/vagrant/castlemind`:

$ vagrant ssh
vagrant@precise64:~$ cd ~/castlemind

The application is managed with Upstart; to restart it, SSH into the VM and
run the restart job:

$ vagrant ssh
vagrant@precise64:~$ sudo service castlemind restart

Confirm that one (or both) of the worker processes is up and running:

vagrant@precise64:~$ sudo tail /var/log/upstart/castlemind-worker-1.log
vagrant@precise64:~$ sudo tail /var/log/upstart/castlemind-worker-2.log

31 changes: 31 additions & 0 deletions Vagrantfile
@@ -0,0 +1,31 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = '2'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

# Build a new VM without much memory
config.vm.define 'vagrant_castlemind' do |web|

# Base the VM on Ubuntu Precise (12.04 LTS)
web.vm.box = 'hashicorp/precise64'

# Expose the VM at 192.168.32.30
web.vm.network 'private_network', ip: '192.168.32.30'

# Sync the current host directory into the VM at /home/vagrant/castlemind
web.vm.synced_folder '.', '/home/vagrant/castlemind'

end

# Provision the host(s)
config.vm.provision 'ansible' do |ansible|
ansible.playbook = 'provisioning/playbook.yml'
ansible.groups = {
'castlemind' => ['vagrant_castlemind']
}
end

end

5 changes: 4 additions & 1 deletion newrelic.js
@@ -19,6 +19,9 @@ exports.config = {
* issues with the agent, 'info' and higher will impose the least overhead on
* production applications.
*/
level : 'info'
level : 'info',

filepath: require('path').resolve(__dirname, './newrelic_agent.log')
}
};

51 changes: 51 additions & 0 deletions provisioning/group_vars/castlemind.yml
@@ -0,0 +1,51 @@
---
app_root: /home/vagrant

nodeapp_name: castlemind
nodeapp_user: vagrant
nodeapp_index: '{{ app_root }}/{{ nodeapp_name }}/bin/www'
nodeapp_node_version: 0.10.33
nodeapp_num_workers: 2
nodeapp_env:
NODE_ENV: production
PORT: "`printf '32%02i' $NODEAPP_INDEX`"

nginx_http_params:
gzip_comp_level: 6
gzip_vary: 'on'
gzip_min_length: 1000
gzip_proxied: any
gzip_types: 'text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript'
gzip_buffers: '16 8k'

proxy_cache_path: '/var/cache/nginx levels=1:2 keys_zone=one:8m max_size=3000m inactive=600m'
proxy_temp_path: '/var/tmp'

nginx_upstreams:
- name: app_proxy
servers:
- 127.0.0.1:3201
- 127.0.0.1:3202

nginx_sites:
- server:
file_name: castlemind
listen: 80
root: '{{ app_root }}/{{nodeapp_name}}'

error_page: 404 /errors/404.html
error_page: 500 501 502 503 504 /errors/5xx.html
Author comment:

These errors would be shown if the node process dies but nginx is still handling incoming requests. We don't actually have a public/errors/5xx.html file checked in, but it would be an easy add.


location1:
name: /
proxy_pass: http://app_proxy/
proxy_redirect: 'off'
proxy_read_timeout: 2 # seconds
proxy_set_header: 'Host $host'
proxy_set_header: 'X-Real-IP $remote_addr'
proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
proxy_http_version: 1.1

proxy_cache: 'one'
proxy_cache_key: 'sfs$request_uri$scheme'
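For reference, the backtick-quoted `PORT` expression in `nodeapp_env` above zero-pads the worker index; assuming `NODEAPP_INDEX` is the 1-based worker number (an assumption from context), the results line up with the `app_proxy` upstream entries:

```shell
# Expansion of the PORT template from group_vars/castlemind.yml,
# assuming NODEAPP_INDEX is the 1-based worker index.
printf '32%02i\n' 1   # prints 3201 (worker 1)
printf '32%02i\n' 2   # prints 3202 (worker 2)
```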

8 changes: 8 additions & 0 deletions provisioning/playbook.yml
@@ -0,0 +1,8 @@
---
- hosts: castlemind
sudo: yes
roles:
- nginx
- rjzaworski.nodeapp
- castlemind

4 changes: 4 additions & 0 deletions provisioning/roles/castlemind/tasks/main.yml
@@ -0,0 +1,4 @@
---
- include: ufw.yml
- include: newrelic.yml

6 changes: 6 additions & 0 deletions provisioning/roles/castlemind/tasks/newrelic.yml
@@ -0,0 +1,6 @@
- name: ensure newrelic_agent.log exists
file: path={{ app_root }}/{{ nodeapp_name }}/newrelic_agent.log state=touch owner={{ nodeapp_user }} mode=0644

- name: restart node app
service: name={{ nodeapp_name }} state=restarted
Author comment:

This is brute force; it would be much more graceful to restart using a handler.
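A handler-based restart might look something like this sketch (the file layout is assumed: the handler would live in the role's `handlers/main.yml`, and tasks would notify it instead of restarting unconditionally):

```yaml
# Sketch (assumed location): provisioning/roles/castlemind/handlers/main.yml
---
- name: restart node app
  service: name={{ nodeapp_name }} state=restarted

# The newrelic.yml task would then drop its explicit restart and instead add:
#   notify: restart node app
```

Ansible runs notified handlers once at the end of the play, so several changed tasks would still trigger only a single restart.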


13 changes: 13 additions & 0 deletions provisioning/roles/castlemind/tasks/ufw.yml
@@ -0,0 +1,13 @@
---
- name: (ufw) deny all incoming
ufw: state=enabled policy=deny direction=incoming

- name: (ufw) allow ssh
ufw: rule=allow port=ssh

- name: (ufw) allow www
ufw: rule=allow port=www

- name: (ufw) reload
ufw: state=reloaded

172 changes: 172 additions & 0 deletions provisioning/roles/nginx/README.md
@@ -0,0 +1,172 @@
nginx
=====

Author comment: This is a checked-in copy of the code in bennojoy/nginx#14. It's only copied here to get access to those changes without requiring the joys of git submodules or a forked playbook over on ansible-galaxy.

This role installs and configures the nginx web server. The user can specify
any http configuration parameters they wish to apply to their site. Any number
of sites can be added with configurations of your choice.

Requirements
------------

This role requires Ansible 1.4 or higher; platform requirements are listed in
the metadata file.

Role Variables
--------------

The variables that can be passed to this role, along with brief descriptions,
are listed below.

# The max clients allowed
nginx_max_clients: 512

# A hash of the http parameters. Note that any
# valid nginx http parameters can be added here.
# (see the nginx documentation for details.)
nginx_http_params:
sendfile: "on"
tcp_nopush: "on"
tcp_nodelay: "on"
keepalive_timeout: "65"
access_log: "/var/log/nginx/access.log"
error_log: "/var/log/nginx/error.log"

# A list of hashes that define the servers for nginx,
# as with http parameters. Any valid server parameters
# can be defined here.
nginx_sites:
- server:
file_name: foo
listen: 8080
server_name: localhost
root: "/tmp/site1"
location1: {name: /, try_files: "$uri $uri/ /index.html"}
location2: {name: /images/, try_files: "$uri $uri/ /index.html"}
- server:
file_name: bar
listen: 9090
server_name: ansible
root: "/tmp/site2"
location1: {name: /, try_files: "$uri $uri/ /index.html"}
location2: {name: /images/, try_files: "$uri $uri/ /index.html"}

Examples
========

1) Install nginx with HTTP directives of your choice, but with no sites
configured:

- hosts: all
roles:
- {role: nginx,
nginx_http_params: { sendfile: "on",
access_log: "/var/log/nginx/access.log"},
nginx_sites: none }


2) Install nginx with different HTTP directives than the previous example, but
no sites configured:

- hosts: all
roles:
- {role: nginx,
nginx_http_params: { tcp_nodelay: "on",
error_log: "/var/log/nginx/error.log"},
nginx_sites: none }

Note: Please make sure the HTTP directives passed are valid, as this role
won't check for the validity of the directives. See the nginx documentation
for details.

3) Install nginx and add a site to the configuration.

- hosts: all

roles:
- role: nginx
nginx_http_params:
sendfile: "on"
access_log: "/var/log/nginx/access.log"
nginx_sites:
- server:
file_name: bar
listen: 8080
location1: {name: "/", try_files: "$uri $uri/ /index.html"}
location2: {name: /images/, try_files: "$uri $uri/ /index.html"}

Note: Each site added is represented by a list of hashes, and the generated
configurations are populated in `/etc/nginx/sites-available/` with
corresponding symlinks from `/etc/nginx/sites-enabled/`.

The file name for a specific site configuration is given in the hash with the
key "file_name"; any valid server directives can be added to the hash. For a
location directive, add the key "location" suffixed by a unique number; the
value for each location is a hash, and please make sure its entries are valid
location directives.

4) Install Nginx and add 2 sites (different method)

---
- hosts: all
roles:
- role: nginx
nginx_http_params:
sendfile: "on"
access_log: "/var/log/nginx/access.log"
nginx_sites:
- server:
file_name: foo
listen: 8080
server_name: localhost
root: "/tmp/site1"
location1: {name: /, try_files: "$uri $uri/ /index.html"}
location2: {name: /images/, try_files: "$uri $uri/ /index.html"}
- server:
file_name: bar
listen: 9090
server_name: ansible
root: "/tmp/site2"
location1: {name: /, try_files: "$uri $uri/ /index.html"}
location2: {name: /images/, try_files: "$uri $uri/ /index.html"}


5) Install Nginx as proxy for another app

- hosts: all
roles:
- role: nginx
nginx_upstreams:
- name: app_proxy
server: 127.0.0.1:3200
nginx_sites:
- server:
file_name: foo
server_name: 'ansible'
listen: 8080
location1:
- name: /
proxy_pass: http://app_proxy/
proxy_redirect: 'off'
proxy_set_header: 'Host $host'
proxy_set_header: 'X-Real-IP $remote_addr'
proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'




Dependencies
------------

None

License
-------

BSD

Author Information
------------------

Benno Joy


32 changes: 32 additions & 0 deletions provisioning/roles/nginx/defaults/main.yml
@@ -0,0 +1,32 @@
---

nginx_max_clients: 512

nginx_http_params:
sendfile: "on"
tcp_nopush: "on"
tcp_nodelay: "on"
keepalive_timeout: "65"

nginx_log_dir: "/var/log/nginx"
nginx_access_log_name: "access.log"
nginx_error_log_name: "error.log"
nginx_separete_logs_per_site: False

nginx_upstreams: []

nginx_sites:
- server:
file_name: foo
listen: 8080
server_name: localhost
root: "/tmp/site1"
location1: {name: /, try_files: "$uri $uri/ /index.html"}
location2: {name: /images/, try_files: "$uri $uri/ /index.html"}
- server:
file_name: bar
listen: 9090
server_name: ansible
root: "/tmp/site2"
location1: {name: /, try_files: "$uri $uri/ /index.html"}
location2: {name: /images/, try_files: "$uri $uri/ /index.html"}