
Homer Seven Setup


HOMER 7


Major Changes

  1. The capture server changed from Kamailio to Heplify-Server.
  2. PostgreSQL replaced MySQL due to better partitioning and native JSON support.
  3. Native support for different metrics stacks (Prometheus, Loki, Elasticsearch, ...).
  4. Better APIs.


Table of Contents

  1. HEPlify Capture Agent
  2. PostgreSQL-10
  3. HEPlify-Server
  4. VictoriaMetrics and Prometheus
  5. Homer Web App
  6. Grafana
  7. Loki and Promtail
  8. Useful Information and References

Overview of Components

Quite a few components make up a complete Homer 7 stack; the table of contents above lists the ones covered in this guide for easy reference.

Operating System Note

CentOS 7 was used as the preferred OS for this installation, but the steps should work fine for most Red Hat-based distributions. If you prefer a different Linux distribution, please adjust accordingly and/or feel free to suggest the required edits to make this guide as complete as possible.


The HEPlify Capture Agent

As anyone knows, you can't gather information without someone and/or something listening for it. The heplify capture agent needs to be installed on the system where you would like to capture traffic. It captures the data and sends it to the HEPlify-Server for ingestion, which then sends it out to the other components of the stack.

OS and Hardware Requirements

Hardware: I built this on a physical 1U Supermicro mini server with an Atom processor and a 16GB SSD and it is running just fine. You'll need 2 NICs, one for management, and one for the mirrored port from the switch.

OS Packages:

  • EPEL-Release
  • The Go programming language
  • PCAP Libraries

Install

  1. Install your prerequisites.
    yum install epel-release -y
    yum install go -y
    yum install -y libpcap-devel

  2. Clone the GitHub repo.
    git clone https://github.com/sipcapture/heplify

  3. Run make in the cloned directory.

  4. Move the resulting files to the /opt/heplify path (see the sketch below).

  5. Note: heplify requires root permissions to run.
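For reference, steps 2 through 4 combined might look like the following minimal sketch (the build directory is arbitrary, and the copy targets are assumptions based on this guide's layout):

      cd /usr/local/src                                 # any scratch directory works
      git clone https://github.com/sipcapture/heplify
      cd heplify
      make                                              # builds the heplify binary in this directory
      mkdir -p /opt/heplify
      cp heplify /opt/heplify/                          # the binary itself
      cp -r example /opt/heplify/                       # keeps the sample service file available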

Testing
You should now be able to start the heplify capture by running the heplify executable file. Output should be sent to the screen, and the heplify.log file should show the most recent information.
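For example, assuming the binary lives in /opt/heplify (the interface name and HEPlify-Server address below are placeholders):

      cd /opt/heplify
      ./heplify -i eth1 -hs 10.0.0.20:9060    # placeholder capture interface and server address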

Service Installation

  • Copy the example service file to the proper spot in the file system.
    cp /opt/heplify/example/heplify.service /etc/systemd/system/

  • Modify the executable path in the file to match what you want to be capturing. This is where you would modify it to specify which physical interface to listen on, as well as what server to send the captured packets to.

  • This is what the production hep capture service file looks like. Note that [interface_name] is the system name of the interface which will be listening and will be the monitor destination below.

      [Unit]
      Description=Captures packets from wire and sends them to Homer
      After=network.target
      
      [Service]
      WorkingDirectory=/opt/heplify
      ExecStart=/opt/heplify/heplify -i [interface_name] -hs [ip_of_heplify_server]:9060 -m SIPRTCP
      ExecStop=/bin/kill ${MAINPID}
      Restart=on-failure
      RestartSec=10s
      Type=simple
      
      [Install]
      WantedBy=multi-user.target
    
  • Enable the service.
    systemctl daemon-reload
    systemctl enable heplify
    systemctl start heplify

  • Validate that the service is running by using systemctl status heplify

  • The log is written to /opt/heplify/heplify.log

Setting up the Mirror Port on the Switch

For this guide I used a Cisco switch to connect everything. In order for the HEPlify Capture Agent to receive the data from the VoIP services the traffic needs to be mirrored into the agent. Here are the commands for most Cisco switches.

  • Configure the source for the monitor session. This is the interface or interfaces from which you would like to capture the data. You can add multiples to this list if needed.

monitor session 1 source interface GigabitEthernet 0/0/x

  • Configure the destination for the monitor session. This should be the 2nd NIC port connected to the HEPlify Capture Agent hardware.

monitor session 1 destination interface GigabitEthernet 0/0/y
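On most Cisco IOS switches you can then confirm the session configuration with:

      show monitor session 1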

Here is Cisco's SPAN Guide for reference: SPAN Command Reference


PostgreSQL 10

PostgreSQL is where the HEPlify-Server stores all of the raw data it ingests from the heplify capture agent. My suggestion is to size this appropriately for your environment: a smaller environment will probably not need as many resources as I've specified here, whereas a larger environment will probably require more. Backing the database with fast disks helps as well.

VM Specs

  • Minimum 4 CPUs
  • Minimum 16GB RAM
  • Minimum 1TB storage space

Package Requirements:

  • Epel-Release
    yum install -y epel-release

Note: Make sure your PostgreSQL Data directory is configured to point to a very large data storage space. By default PostgreSQL will put all data into the same directory as the configuration files.

Install and Start PostgreSQL-10

  1. Get the postgresql-10 repo installed.
    rpm -Uvh https://yum.postgresql.org/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm

  2. Install postgresql-10 from the repo.
    yum install postgresql10-server postgresql10 -y

  3. Initialize the postgresql-10 configuration. This sets up the system databases and gets the PostgreSQL server ready to run.
    /usr/pgsql-10/bin/postgresql-10-setup initdb

  4. Verify the version installed.
    /usr/pgsql-10/bin/postgres -V

  5. Enable and start the service.
    systemctl enable postgresql-10
    systemctl start postgresql-10

  6. Set the password on the postgres user account on the database server.
    su -l postgres
    psql
    \password
    \q (to quit)
    exit (leave the postgres account and go back to root)

  7. Modify the connection file to allow inbound connections to the PostgreSQL services.
    vi /var/lib/pgsql/10/data/pg_hba.conf
    Add these lines to the bottom of the file:
    host all all [IP of HEPlify-Server]/32 password
    host all all [IP of homer-app server]/32 password

  8. Modify the postgresql.conf configuration file and set the following variables for a well-running server (a consolidated excerpt appears at the end of this section).

  • I found this PostgreSQL tuning guide useful: http://linuxfinances.info/info/quickstart.html
  • File path: /var/lib/pgsql/10/data/postgresql.conf
  • listen_addresses = '*' under the -Connection Settings- header.
  • shared_buffers = 1024MB
  • effective_cache_size = 12GB
    Note: This will likely be most of the server's physical memory if PostgreSQL is installed by itself.
  • max_locks_per_transaction = 1000
  • data_directory = '[path to data directory]'
  9. Restart the postgresql-10 service to commit the changes.
    systemctl restart postgresql-10

  10. Allow the firewall to accept inbound connections on port 5432 for PostgreSQL clients.
    firewall-cmd --add-port=5432/tcp --permanent
    firewall-cmd --reload
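Putting step 8 together, the edited portion of postgresql.conf would look roughly like this (the data directory is a placeholder; point it at your large storage mount):

      # /var/lib/pgsql/10/data/postgresql.conf (excerpt)
      listen_addresses = '*'                      # under the -Connection Settings- header
      shared_buffers = 1024MB
      effective_cache_size = 12GB                 # roughly the server's available physical memory
      max_locks_per_transaction = 1000
      data_directory = '/var/lib/pgsql/10/data'   # placeholder: your data storage path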


HEPlify-Server

The HEPlify-Server is the "traffic cop" of all of the data that comes into the Homer 7 stack. It is this component which ingests all the data, then sends it out to all the other components in the proper format.

System Requirements:

  • 8 vCPUs
  • 16GB RAM
  • 100GB Storage

Package Requirements

  • epel-release
  • libpcap-devel
  • Go programming language

Installation

  1. Install the required components.
    yum install epel-release libpcap-devel
    yum install go -y

  2. Install the PostgreSQL Client.

  • Get the postgresql-10 repo installed.
    rpm -Uvh https://yum.postgresql.org/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm
  • Install postgresql-10 client.
    yum install postgresql10 -y

Install the Heplify-Server Binaries

  1. cd into /opt

  2. Clone the GitHub repo into the /opt directory.
    git clone https://github.com/sipcapture/heplify-server

  3. Build the binary using Go from within the cloned directory.
    cd /opt/heplify-server
    go build cmd/heplify-server/heplify-server.go

  4. Modify the configuration file to point to the proper Postgres services.
    cp /opt/heplify-server/example/homer7_config/heplify-server.toml /opt/heplify-server/
    vi /opt/heplify-server/heplify-server.toml

  • Set DBAddr to "[ip of postgres]:5432"
  • Set DBUser and DBPass according to what you configured when setting up the PostgreSQL-10 database user.
  • Set DBWorker equal to the number of CPUs in the system.
  • Set ESDiscovery to false if you do not plan on using Elasticsearch.
  • Set LogDbg to "hep,sql"
  • Set LogLvl to "warning"
  • Set DiscardMethod to ["OPTIONS","NOTIFY"]
  • Set PromAddr to "0.0.0.0:9096"

Note: A good guide on all of the HEPlify-Server configuration parameters can be found in the Wiki here: HEPLIFY-SERVER-Settings
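Putting those settings together, the relevant lines of heplify-server.toml might look like this sketch (the address and credentials are placeholders):

      DBAddr        = "10.0.0.10:5432"        # placeholder: IP of your PostgreSQL server
      DBUser        = "postgres"              # placeholder
      DBPass        = "changeme"              # placeholder
      DBWorker      = 8                       # match the number of CPUs in the system
      ESDiscovery   = false                   # when not using Elasticsearch
      LogDbg        = "hep,sql"
      LogLvl        = "warning"
      DiscardMethod = ["OPTIONS","NOTIFY"]
      PromAddr      = "0.0.0.0:9096"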


  5. Set up the system service to run the heplify-server binary when the system loads.
    cp /opt/heplify-server/example/heplify-server.service /etc/systemd/system/
    systemctl daemon-reload
    systemctl enable heplify-server
    systemctl start heplify-server
  • You can verify the service is running with systemctl status heplify-server

  • You can verify that the service is able to connect to the PostgreSQL server by looking at the /opt/heplify-server/heplify-server.log file.

  6. Open the required firewall ports so the heplify capture agent can send data in.
    firewall-cmd --add-port=9060/udp --permanent
    firewall-cmd --reload

Note: If needed, do this same thing on any other firewall to allow the connectivity through. Port 9060/UDP is the port that the HEPlify capture agent sends to by default.

If everything is working properly, the Heplify-Server service is now ready to ingest data from the heplify capture agent. You can verify this by using the tail command to watch the heplify-server.log file; if things are working you will see packets flowing through it.

tail -f /opt/heplify-server/heplify-server.log

You can verify that PostgreSQL is writing data into the database by using the top command; there should be postmaster processes in the output.
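You can also check from the database side. Once heplify-server connects for the first time it creates the homer_config and homer_data databases, so listing the databases is a quick sanity check:

      su -l postgres
      psql -c '\l'    # homer_config and homer_data should appear in the list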


VictoriaMetrics and Prometheus

VictoriaMetrics takes the place of InfluxDB to store the time series data for long periods of time, and Prometheus is used to ingest that data from the HEPlify-server and send it into the VictoriaMetrics database. It makes sense to put both of these on the same server since Prometheus is the process that will feed VictoriaMetrics the data.

System Requirements:

  • 4 vCPUs
  • 8GB RAM
  • 1TB Storage

Package Requirements

  • EPEL
    yum install epel-release -y
  • Go programming language
    yum install go -y

VictoriaMetrics Install

VictoriaMetrics source information: https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/Single-server-VictoriaMetrics#how-to-build-from-sources

  • Please reference this for sizing and scaling information.

Installing with Yum package manager

  1. This installs the victoriametrics binaries into /usr/local/bin
    yum -y install yum-plugin-copr
    yum copr enable antonpatsev/VictoriaMetrics
    yum makecache
    yum -y install victoriametrics

  2. Open the necessary Firewall port(s).
    firewall-cmd --add-port=8428/tcp --permanent
    firewall-cmd --reload

  3. Create a directory to store the VictoriaMetrics database data. If possible this should be placed into large fast storage, and mounted accordingly.
    mkdir /data

  4. Start VictoriaMetrics from the command line. By default I use the following arguments:
    -storageDataPath /data (defines the path to the data directory)
    -retentionPeriod 1 (defines the retention period in months; anything older than this will be deleted)
    victoriametrics -storageDataPath /data -retentionPeriod 1

  5. Create a system service file to run the VictoriaMetrics binaries as a service.

  • Create the file /etc/systemd/system/victoriametrics.service

  • Add the following content.

      [Unit]
      Description=VictoriaMetrics Server
      After=network.target

      [Service]
      WorkingDirectory=/usr/local/bin
      ExecStart=/usr/local/bin/victoriametrics -storageDataPath /data -retentionPeriod 1
      Restart=on-failure
      RestartSec=10s
      Type=simple

      [Install]
      WantedBy=multi-user.target
  6. Start the service.
    systemctl daemon-reload
    systemctl enable victoriametrics
    systemctl start victoriametrics
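As a quick sanity check, VictoriaMetrics serves its own metrics on the same port, so a response here means the service is listening:

      curl -s http://127.0.0.1:8428/metrics | head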

Prometheus Install

For Prometheus to scrape the data from the HEPlify-server services, that server has to be configured to allow connections on the Prometheus port. By default this is port 9096 in the configuration file, defined by the PromAddr variable.

  • Do the following on the HEPlify-server host to allow this connectivity.
    firewall-cmd --add-port=9096/tcp --permanent
    firewall-cmd --reload

This is the document I followed to install Prometheus: https://www.fosslinux.com/10398/how-to-install-and-configure-prometheus-on-centos-7.htm

  1. Download the most recent version of Prometheus from the official download site, https://prometheus.io/download/
  • As of May 2019 this is the command.
    wget https://github.com/prometheus/prometheus/releases/download/v2.10.0/prometheus-2.10.0.linux-amd64.tar.gz

  2. Create a new user that the prometheus service will run under.
    useradd --no-create-home --shell /bin/false prometheus

  3. Create the required directory structure.
    mkdir /etc/prometheus
    mkdir /var/lib/prometheus
    chown prometheus:prometheus /var/lib/prometheus

  4. Extract the prometheus tarball.
    tar -zxvf prometheus-2.10.0.linux-amd64.tar.gz

  5. Copy the Prometheus binaries to /usr/local/bin/
    cp prometheus-2.10.0.linux-amd64/prometheus /usr/local/bin/
    cp prometheus-2.10.0.linux-amd64/promtool /usr/local/bin/

  6. Change the ownership of the copied files to the prometheus user and group.
    chown prometheus:prometheus /usr/local/bin/prometheus
    chown prometheus:prometheus /usr/local/bin/promtool

  7. Copy consoles and console_libraries to /etc/prometheus
    cp -r prometheus-2.10.0.linux-amd64/consoles /etc/prometheus
    cp -r prometheus-2.10.0.linux-amd64/console_libraries /etc/prometheus

  8. Change ownership of the /etc/prometheus directory to the prometheus user and group.
    chown -R prometheus:prometheus /etc/prometheus

  9. Now that the binaries have been installed we need to create the configuration file and set up the service file. You can get a baseline configuration file from the Homer docker image on the HEPlify-server server. Use this as the baseline configuration and change the following variables accordingly (a complete example appears at the end of this section).

  • Path: /opt/heplify-server/docker/hep-prom-graf/prometheus/prometheus.yml

  • Set the external_labels value to whatever makes sense to you. This is the label that the Prometheus service will put on the time series.

  • Set the job_name targets value to the IP address of the HEPlify-server system.

  • Add this to the end of the file to write the data into the VictoriaMetrics storage system.

      remote_write:
        - url: http://127.0.0.1:8428/api/v1/write
          queue_config:
            max_samples_per_send: 10000
  10. Change the ownership of the prometheus.yml file to the prometheus user and group.
    chown prometheus:prometheus /etc/prometheus/prometheus.yml

  11. Create a service file so the prometheus service can run on system startup.
    vi /etc/systemd/system/prometheus.service

      [Unit]
      Description=Prometheus
      Wants=network-online.target
      After=network-online.target victoriametrics.service

      [Service]
      User=prometheus
      Group=prometheus
      Type=simple
      ExecStart=/usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries

      [Install]
      WantedBy=multi-user.target
  12. Start the service and enable it.
    systemctl daemon-reload
    systemctl start prometheus
    systemctl enable prometheus

  13. Make sure the service is running and working using systemctl status prometheus
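For reference, a minimal prometheus.yml assembled from the steps above might look like this sketch (the external label and the heplify-server target IP are placeholders):

      global:
        scrape_interval: 15s
        external_labels:
          monitor: 'homer'                    # placeholder label

      scrape_configs:
        - job_name: 'heplify-server'
          static_configs:
            - targets: ['10.0.0.20:9096']     # placeholder: heplify-server IP and PromAddr port

      remote_write:
        - url: http://127.0.0.1:8428/api/v1/write
          queue_config:
            max_samples_per_send: 10000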


Homer Web App

This is the Web application which displays the Homer WUI to the user.

System Requirements: (A little beefier because Grafana will also be installed on this server.)

  • 2 vCPUs
  • 8GB RAM
  • 40GB Storage

Note: The homer-app runs on Node.js; the required node binaries are installed via NVM in the steps below.

Package Requirements

  • EPEL
    yum install epel-release -y
  • Go programming language
    yum install go -y

  1. Allow the web port through the firewall. By default this is port 80.
    firewall-cmd --add-port=80/tcp --permanent
    firewall-cmd --reload

  2. Clone the GitHub repository to /opt/homer-app
    cd /opt
    git clone https://github.com/sipcapture/homer-app

  3. Install NVM from GitHub (source: https://github.com/nvm-sh/nvm#installation-and-update)
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash

  • Log out of the server connection and reconnect to finish this piece.

  4. Get the current homer-app node version.
    cat /opt/homer-app/.node_version

  5. Install a matching node version accordingly.
    nvm install 8.9.1

  6. Install npm dependencies.
    cd /opt/homer-app
    npm install [email protected]
    npm install [email protected]
    npm install [email protected]
    npm install @babel/[email protected]
    npm install [email protected]
    npm install && npm install -g knex eslint eslint-plugin-html eslint-plugin-json eslint-config-google

  7. Configure the Postgres database connection in the config.js file.
    vi /opt/homer-app/config.js
    Set host: value to the IP address of the PostgreSQL server.
    Set user: and password: values to a user with access to the homer_data and homer_config databases. These are automatically generated the first time you run the heplify-server service.

  • This is an example of what the configuration block will look like for the PostgreSQL configuration.

      const pgsql = {
        host: '[postgre_server_ip]',
        user: '[postgre_user]',
        port: 5432,
        password: '[postgre_password]',
        charset: 'utf8',
        timezone: 'utc',
        pool: {
          afterCreate: function(connection, callback) {
            connection.query('SET timezone = "UTC";', function(err) {
              callback(err, connection);
            });
          },
        },
      };
  8. Build the app bundle using webpack. Do this within the /opt/homer-app directory.
    npm run build

You can now test the install by running npm start within the /opt/homer-app directory. Then browse to the server web address.

  • Default username: admin
  • Default password: sipcapture
  9. Set up the homer-app as a service.

  • Add the following service file in /etc/systemd/system/ with the name homer-app.service:

      [Unit]
      Description=Homer 7 UI
      After=network.target

      [Service]
      WorkingDirectory=/opt/homer-app
      ExecStart=/root/.nvm/versions/node/v8.9.1/bin/node --max_old_space_size=2048 /opt/homer-app/bootstrap.js
      Restart=on-failure
      RestartSec=10s
      Type=simple

      [Install]
      WantedBy=multi-user.target
  10. Reload the systemctl services and start the service.
    systemctl daemon-reload
    systemctl enable homer-app
    systemctl start homer-app

You can now browse to the server IP in order to verify that it's working properly.

Running with Nginx

  1. Install the nginx service.
    yum install nginx -y

If you have SELinux running, make sure you set the following boolean, otherwise nginx will not be able to redirect properly.
setsebool -P httpd_can_network_connect 1

  2. Set the homer-app configuration to use a non-standard web port to stop it from advertising outside the server. Reconfigure the /opt/homer-app/server/config.js file.
  • Under export default { set:
    http_port: 8080
    https_port: 8443
  3. Restart the homer-app service.

  4. Configure nginx:

  5. Create a new configuration file for the homer app under /etc/nginx/conf.d
    vi /etc/nginx/conf.d/homer-app.conf
    Add this code:

      server {
          listen 80;
          server_name <hostname_of_homer>;

          location / {
              proxy_set_header   X-Forwarded-For $remote_addr;
              proxy_set_header   Host $http_host;
              proxy_pass         http://127.0.0.1:8080;
          }
      }
  6. Enable nginx and start it.
    systemctl enable nginx
    systemctl start nginx


Grafana

The Grafana services will be installed on the same server as the Homer WUI server. Grafana is the service capable of reading the VictoriaMetrics data and displaying it in a human-viewable fashion.

Grafana RPM Install Guide: https://grafana.com/docs/installation/rpm/

Installation

Install Via YUM Repository

  1. Add the following to a new file at /etc/yum.repos.d/grafana.repo

     [grafana]
     name=grafana
     baseurl=https://packages.grafana.com/oss/rpm
     repo_gpgcheck=1
     enabled=1
     gpgcheck=1
     gpgkey=https://packages.grafana.com/gpg.key
     sslverify=1
     sslcacert=/etc/pki/tls/certs/ca-bundle.crt
    
  2. Install using yum install grafana

  3. Start the Grafana service and enable it for boot.
    systemctl start grafana-server
    systemctl enable grafana-server

  4. Grafana runs on port 3000 by default, so open this in the firewall.
    firewall-cmd --add-port=3000/tcp --permanent
    firewall-cmd --reload

Initial Connection and Datasource Setup

You can now browse to the Grafana server for the first time. The default username and password are both "admin" and you will be required to set a new admin password on first login.

http://homer-IP:3000

Once connected add the VictoriaMetrics datasource like any other Prometheus datasource.

  • Go to Add Datasource
  • Select Prometheus
  • Accept the default name of "Prometheus".
  • Set the HTTP URL to http://<victoriametrics_ip>:8428
  • Leave all other values at their defaults and click on "Save & Test" to complete.
  • If all goes well you should see a green "Datasource is Working" message.

Loading in the Homer Dashboards

The Homer Grafana dashboards can be found in the Homer Docker image GitHub source.

Source: https://github.com/sipcapture/homer-docker/tree/master/heplify-server/hom7-hep-prom-graf/grafana/provisioning/dashboards

The ones that seem to work best with Homer7 are the following.

  • QOS_RTCP.json
  • SIP_Calls_Registers.json
  • SIP_Error_Rates.json
  • SIP_KPI.json
  • SIP_Methods_Responses.json
  • SIP_Overview.json

To load the dashboards, browse to Dashboards > Manage

  • Click the Import button
  • Click the Upload .json File button
  • Select the .json file to import
  • In the Options window select the Prometheus datasource, then click the Import button to finish.
  • Repeat this process to load in the other dashboards.

Attach Grafana to Postgres

Source: https://grafana.com/docs/features/datasources/postgres/

It is best practice to only allow the Grafana reader user to have SELECT permissions on the PostgreSQL databases. Log into the database server using the psql client and configure a new user account for this. Note that "schema" below refers to the specific database you want to query.
psql
CREATE USER grafanareader WITH PASSWORD 'password';
GRANT CONNECT ON DATABASE homer_data TO grafanareader;
GRANT USAGE ON SCHEMA public TO grafanareader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafanareader;
\q (quit)

Next the pg_hba.conf file needs to be modified to allow connectivity from the Grafana server IP address. If Grafana is being hosted on the same server as the Homer web-app then this is already done.

Open the Grafana dashboard and go to Configuration > Data Sources

  • Click Add data source
  • Select PostgreSQL
  • Give the Data Source a good name
  • Set the PostgreSQL Connection values
  • Host = <ip_of_postgre>:5432
  • Database = homer_data
  • User = grafanareader
  • Password =
  • SSL Mode = disable (set this based on whether you're using SSL certificates for communication)
  • Set the PostgreSQL details value
    Version = 10
  • Click Save & Test

If all of this works you'll receive a green bar with "Database Connection OK"


Loki and Promtail

Loki is Grafana's log ingestion server.

Loki source: https://github.com/grafana/loki#loki-like-prometheus-but-for-logs

System Requirements

  • 2 vCPUs
  • 4GB RAM
  • 500GB Storage

Package Requirements

  • EPEL
    yum install epel-release -y
  • Go programming language
    yum install go -y

Installing Loki

  1. Use Go to grab the Loki sources.
    go get github.com/grafana/loki
    cd $HOME/go/src/github.com/grafana/loki

  2. Get the dependencies and build Loki.
    go get -d ./...
    go build ./cmd/loki

  3. Build promtail next.
    go build ./cmd/promtail

  4. Copy the resulting build tree to /opt/loki
    cp -r $HOME/go/src/github.com/grafana/loki /opt/loki

Running Loki and Promtail

  1. First create the folders in the file system where you want to store the Loki data.
    mkdir /var/loki
    mkdir /var/loki/index /var/loki/chunks

  2. Modify the configuration file in /opt/loki/cmd/loki/loki-local-config.yaml and make the following changes (see the sketch below the note):
    boltdb directory: /var/loki/index
    filesystem directory: /var/loki/chunks

Note: Make sure you modify these so they reside in a large file storage space. If you have a lot of voice traffic these logs will very quickly fill up your disk space.
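For reference, the relevant section of loki-local-config.yaml after those edits would look roughly like this (assuming the default boltdb index and filesystem chunk storage backends):

      storage_config:
        boltdb:
          directory: /var/loki/index
        filesystem:
          directory: /var/loki/chunks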

  3. Run Loki using the command /opt/loki/loki -config.file=/opt/loki/cmd/loki/loki-local-config.yaml. If all goes well this will generate the tables and indexes in the filesystem.

  4. Set up the service files.

  • Create a service file so the Loki service can run on system startup.
    vi /etc/systemd/system/loki.service

      [Unit]
      Description=Grafana Loki
      Wants=network-online.target
      After=network-online.target 
      
      [Service]
      WorkingDirectory=/opt/loki
      ExecStart=/opt/loki/loki -config.file=/opt/loki/cmd/loki/loki-local-config.yaml
      Restart=on-failure
      RestartSec=10s
      Type=simple
      
      [Install]
      WantedBy=multi-user.target
    
  • Start the loki service.
    systemctl daemon-reload
    systemctl start loki
    systemctl enable loki

  • Create a service file so the Promtail service can run on system startup.
    vi /etc/systemd/system/promtail.service

      [Unit]
      Description=Grafana Promtail
      Wants=network-online.target
      After=network-online.target loki.service
      
      [Service]
      WorkingDirectory=/opt/loki
      ExecStart=/opt/loki/promtail -config.file=/opt/loki/cmd/promtail/promtail-local-config.yaml
      Restart=on-failure
      RestartSec=10s
      Type=simple
      
      [Install]
      WantedBy=multi-user.target
    
  • Start the promtail service.
    systemctl daemon-reload
    systemctl start promtail
    systemctl enable promtail

Note: It is important that the Promtail service start After the Loki service, otherwise the datasource may become unstable in Grafana.

  5. Open the firewall ports.
  • Loki needs to have port TCP/3100 open in order to receive data.
    firewall-cmd --add-port=3100/tcp --permanent
    firewall-cmd --reload

Sending Data into Loki

Now that Loki is up and running we need to send data into it. This is done by modifying the heplify-server.toml file on the HEPlify-Server.

  • Set the LokiURL value to "http://<loki_server_ip>:3100/api/prom/push"
  • Set the LokiHEPFilter value to [1,5,100]
  • Restart the heplify-server service.
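The corresponding lines in heplify-server.toml would look roughly like this sketch (the Loki server IP is a placeholder):

      LokiURL       = "http://10.0.0.30:3100/api/prom/push"   # placeholder: your Loki server
      LokiHEPFilter = [1,5,100]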

Connect Grafana to Loki

  • On the Grafana web interface go to Configuration > Data Sources
  1. Click Add data source
  2. Select Loki from the data source type list.
  3. Give the Datasource a name, or accept the default Loki name.
  4. Under HTTP URL enter: http://<loki_server_ip>:3100
  5. Click on Save & Test
    If this works properly you will get a nice green message that states "Data source connected and labels found."

Viewing Data in Grafana

To view the data that Loki generates you need to go to the Explore feature, then select the Loki datasource in the drop-down on the top. Then click on Log labels and select the job ID that you'd like to see.

Connect Homer Web to Loki

Log into the Homer web-app as an admin and go to Preferences.

  • Select advanced
  • Modify the lokiserver connection to point to the IP address of the Loki server.

That's it! If you've followed this guide carefully, you should have a working Homer 7 stack collecting and reporting on data.


Useful Information

Here is some other useful information that I have collected for everyone's reference.

KPIs

Key Performance Indicators, which can be looked up here: https://tools.ietf.org/html/rfc6076

Session Establishment Ratio (SER)

This metric is used to detect the ability of a terminating UA or downstream proxy to successfully establish sessions per new session INVITE requests.

Session Establishment Effectiveness Ratio (SEER)

This metric is complementary to SER, but is intended to exclude the potential effects of an individual user of the target UA from the metric.

Session Completion Ratio (SCR)

A session completion is defined as a SIP dialog that completes without failing due to a lack of response from an intended proxy or UA.

Ineffective Registration Attempts (IRAs)

Ineffective registration attempts are utilized to detect failures or impairments causing the inability of a registrar to receive a UA REGISTER request.

Ineffective Session Attempts (ISAs)

Ineffective session attempts occur when a proxy or agent internally releases a setup request with a failed or overloaded condition.


The Homer Dashboard

The Homer dashboard allows you to grab all the signaling details for a set of calls. Searching is a little non-intuitive. Here are some suggestions to make use of the search features.

  • Set the Time appropriately on the top-left of the screen where it says "Today". My suggestion is to use a specific time frame.

  • Next enter either the To number or the From number in the Search query and click on Search. This will bring up a new window with all of the results from the query.

  • Searches on the To number work best.

  • From the resulting values you can click on the SID to bring up the details from the call.

  • Flow = SIP Ladder

  • QoS = RTCP QoS Values

  • Loki allows you to search the Loki log server for all of the resulting logs and drill down into specific packet information on the RTCP values.

  • Export allows you to export the information into a PCAP file.
