# Red Onion Installation Guide
RedOnion is meant to provide an easy way to stand up an NSM sensor on RHEL/CentOS 6.6. If using RHEL, please note that you will need to install all the prerequisites prior to running the redonion_bootstrap.sh script. This is intentional: many users have requirements to pull only from their own Red Hat repositories within their environment and never connect to the "public" Red Hat repositories. Do note that this script will still connect to the ntop and EPEL repositories.
This script will install the following tools:
- PF_RING - installed with DKMS; all tools are built with PF_RING support
- Bro - protocol detection / scripting / intel matching
- Suricata - by default we are only using Suricata for its signature-matching capabilities
- Moloch - full packet capture and a Node.js viewer interface
- Elasticsearch - backend for Moloch
- Emerging Threats Pro or Community rulesets
- Oinkmaster (yes, I'm moving to PulledPork soon)
- Emerging Threats Luajit rulesets - via GitHub
- Splunk Universal Forwarder (if you have a Splunk backend for log aggregation)
- Logstash with Elasticsearch cluster or syslog receiver support
You can install them all, or just a few individually. There is no built-in log aggregation done on the sensor at this time. That may change in the future, but it requires more hardware.
So far I have worked with the hardware guidelines below. For monitoring a line under 100Mbps, the lowest-spec hardware I've run on has been 8 cores without hyperthreading and 32GB of memory. These are just general (slightly overestimated) guidelines.
### Around 100Mbps
- HP DL380 - 2U
- 8 core CPU
- 32GB memory
- 2x 100GB SSD (for OS)
- 16TB disk (FPC storage)
- Broadcom 4 port Gig NIC - management
- Intel 4 port Gig NIC - sniffing

### Anywhere around 500Mbps
- HP DL380 - 2U
- 2x 8 core CPU
- 64GB memory
- 2x 100GB SSD (for OS)
- 24TB disk (FPC storage)
- Broadcom 4 port Gig NIC - management
- Intel 4 port Gig NIC - sniffing

### Up to 1000Mbps
- HP DL360p - 1U + 4U disk arrays
- 2x 10 core CPU
- 96GB memory
- 2x 100GB SSD (for OS)
- 48TB disk (FPC storage - HP P2000 + HP DL2700)
- Broadcom 4 port Gig NIC - management
- Intel 4 port Gig NIC - sniffing
Any "database grade" server will probably be a good start.
# Installation Steps
- Set up and partition the server. Many of the guidelines from the Moloch documentation hold true here. A fresh, fully updated RHEL/CentOS 6.6 install is recommended.
- git clone https://github.com/hadojae/redonion
- Modify the global variables at the top of redonion_bootstrap.sh to fit your deployment.
- Run './redonion_bootstrap.sh -ro' as root.
- Follow any prompts during the script.
- When a full '-ro' install completes, the script will ask whether to start everything up; either start the tools then, or start them later by uncommenting the persistence script in crontab.
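As an illustration, enabling persistence later amounts to uncommenting a crontab entry along these lines (the schedule and log path shown here are assumptions; the /opt/ro_persist.sh path matches the persistence script referenced later in this guide):

```
# hypothetical crontab entry - run the persistence check every 5 minutes
*/5 * * * * /opt/ro_persist.sh >> /var/log/ro_persist.log 2>&1
```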
Notes:
Set manage_ip to the address you want to use in staging, not production. Once you are ready to go to production, you will need to change the IP in these places:
Bro - /opt/bro/etc/node.cfg
Redhat - /etc/sysconfig/network-scripts/ifcfg-eth$i
*If you are not sniffing traffic, you will not see logs generated, but you can still verify that everything starts up properly.
## PF_RING
lsmod | grep pf_ring
- verify that the pf_ring kernel module is loaded
lsmod | grep igb_zc
- if using the igb_zc NIC driver, verify that the correct driver is loaded
cat /proc/net/pf_ring/<tab>
- tab-completing here shows the sockets/processes currently using the kernel module (run this after you have started a few things)
## Bro
/opt/bro/bin/broctl start
/opt/bro/bin/broctl status
- verify that the manager, proxy, and workers are running
/opt/bro/bin/broctl netstats
- verify that the workers are seeing packets
## Suricata
service suricata start
tail -f /opt/suricata/var/log/suricata/suricata.log
- shows the Suricata startup information: any failed signatures, missing software, etc.
tail -f /opt/suricata/var/log/suricata/fast.log
- output of the rule hits
## Moloch
The install script starts everything up. Verify the processes are running:
ps aux | grep elasticsearch
ps aux | grep viewer
ps aux | grep capture
## What the install script does NOT handle
1) CPU Pinning
If you are running on hardware that is close to, or redlining on, your amount of traffic, you will need to do some CPU pinning to keep processes from running into each other and dropping packets.
Make sure you disable the irqbalance service, both now and at boot:
service irqbalance stop
chkconfig irqbalance off
Example: 40 virtualized cores - 1000Mb link - cores 1-40 in htop (really 0-39)
1 and 2 - Server processes / Splunk UF
3 to 11 - Suricata
12 to 17 - Moloch Capture processes via zbalance_ipc
18 and 19 - Moloch Viewer
20 to 30 - Moloch Elasticsearch
31 - Bro Manager
32 - Bro Proxy
33 to 40 - Bro Workers
Example: 32 virtualized cores - 300Mb link - cores 1-32 in htop (really 0-31)
1 and 2 - Server processes / Splunk UF
3 - Bro Manager
4 - Bro Proxy
5 to 8 - Bro Workers
9 to 15 - Suricata
16-25 - Moloch Elasticsearch
26 and 27 - Moloch Capture
28 and 29 - Moloch Viewer
30 to 32 - Server processes
Modify the CPU pinning for these applications in the following places:
Bro - /opt/bro/etc/node.cfg
Suricata - /opt/suricata/etc/suricata/suricata.yml
Moloch-capture - /opt/moloch/bin/run_capture.sh
Moloch Viewer - /opt/moloch/bin/run_viewer.sh
Moloch Elasticsearch - /opt/ro_persist.sh
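For Bro, the pinning goes in node.cfg via BroControl's pin_cpus option. A minimal sketch, loosely following the 40-core example above; the interface name, worker count, and core numbers are illustrative and assume a pf_ring-enabled Bro:

```
# /opt/bro/etc/node.cfg excerpt (hypothetical values)
[manager]
type=manager
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=eth4
lb_method=pf_ring
lb_procs=7
pin_cpus=33,34,35,36,37,38,39
```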
2) Load balancing bigger links with zbalance_ipc
Moloch-capture is a single-threaded process, which causes problems when ingesting a large amount of traffic on a Linux 2.6 kernel - https://github.com/aol/moloch/wiki/FAQ#kernel-and-tpacket_v3-support
The workaround for us is to load balance via software using a pfring application called zbalance_ipc - http://www.ntop.org/pf_ring/how-to-promote-scalability-with-pf_ring-zc-and-n2disk/
zbalance_ipc requires that we use pfring in zero copy (ZC) mode, which is not free unless you are a research or education institution. It costs 149 euro for a single 1G license - http://www.nmon.net/shop/cart.php
Using zbalance_ipc complicates persistence and requires hugepages support, but there is a file in the main repo - ro_persist_zbalance.sh - that can be modified for these purposes. Here is a snippet of that code showing how to start zbalance.
#restart zbalance
echo `date`" - Starting zbalance"
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages # Reserve 1024 2MB hugepages (2GB)
mount -t hugetlbfs none /mnt/huge # Mount the filesystem
/usr/local/bin/zbalance_ipc -d -i zc:eth4 -c 1 -m 1 -n 6 # run zbalance in daemon mode on interface eth4; -c is the cluster id, -m is the hash mode, -n is the number of application instances - in this case 6 moloch-capture instances
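A quick sanity check on the hugepage arithmetic: each hugepage on this kernel is 2048 kB, so echoing 1024 into nr_hugepages reserves 2 GB, not 1 GB. Size the reservation accordingly:

```shell
# verify how much memory a given nr_hugepages setting reserves
pages=1024        # the value echoed into nr_hugepages above
page_kb=2048      # hugepage size on this kernel (hugepages-2048kB)
echo "$(( pages * page_kb / 1024 )) MB reserved"   # prints: 2048 MB reserved
```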
Currently the persistence script checks each moloch-capture instance and looks for specific files. If you are running more or fewer than 6 instances, you will need to modify the persistence file and the config files in /opt/moloch/bin/*-run_capture.sh.
In my testing, Bro and Suricata do not appreciate being load balanced this way. In our production deployments that require zbalance_ipc, we connect two cables to the TAP, each receiving the exact same packets: one feeds Moloch, load balanced with zbalance_ipc, and the other feeds Bro and Suricata.
3) Defining what rulesets you want to use in Suricata
You will need to manually modify the Suricata config if you want to enable/disable rulesets. By default only a few rulesets are enabled. This can be found in $install_dir/suricata/etc/suricata/suricata.yml.
If you want to add another ruleset to Suricata, you will need to tell suricata.yml to load the file and then put the rule file in $install_dir/suricata/etc/suricata/rules.
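As a sketch, enabling an extra ruleset means listing it under rule-files in suricata.yml and dropping the file into the rules directory. The ruleset filenames below are illustrative, and the paths assume install_dir=/opt as used elsewhere in this guide:

```yaml
# suricata.yml excerpt - hypothetical rule filenames
default-rule-path: /opt/suricata/etc/suricata/rules
rule-files:
 - emerging-exploit.rules
 - emerging-trojan.rules
 - local.rules    # your extra ruleset, copied into the rules directory above
```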
# Install Script Troubleshooting
One of the things I have seen cause trouble is rerunning the install script many times. There are a lot of variables to set, and admittedly the docs may not be the most thorough. This build script will eventually be a distribution available in VM and ISO form. If you have run the install script a few times and are getting weird errors, I recommend starting from scratch: reinstall CentOS/RHEL and start from square one.
PF_RING can have odd issues with kernel versions. If you are experiencing issues with the PF_RING install, try dropping down a kernel minor version and see whether that fixes the issue. Also remember the "start from scratch" method if you have been banging your head against the wall for a while. Please post an issue if you need help; I'll be more than happy to help out.
A video of the install is in the works.