From c2b508835e0192055bc16f0acd31cb851a1e67aa Mon Sep 17 00:00:00 2001 From: Barry O'Donovan Date: Sat, 21 Sep 2024 11:36:55 +0000 Subject: [PATCH] Deployed 38c291d0 to 7.0 with MkDocs 1.6.1 and mike 2.1.3 --- 7.0/features/peeringdb/index.html | 2 +- 7.0/search/search_index.json | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/7.0/features/peeringdb/index.html b/7.0/features/peeringdb/index.html index c5a52886..c1700504 100644 --- a/7.0/features/peeringdb/index.html +++ b/7.0/features/peeringdb/index.html @@ -2348,7 +2348,7 @@

ASN Detail

Existence of PeeringDB Records

On the customer overview page, from IXP Manager v5.0 onwards, we provide an indication (yes/no) as to whether a customer has a PeeringDB record. Generally, it is important for IXPs to encourage their members to create PeeringDB entries to ensure the IXP is properly represented in the PeeringDB database.

-

Whether a customer has a PeeringDB entry is updated daily via the cronjobs.md. If you want to run it manually, run this Artisan command:

+

Whether a customer has a PeeringDB entry is updated daily via the task scheduler. If you want to run it manually, run this Artisan command:

$ php artisan ixp-manager:update-in-peeringdb -vv
 PeeringDB membership updated - before/after/missing: 92/92/17
 
diff --git a/7.0/search/search_index.json b/7.0/search/search_index.json index c70179c7..8c250bde 100644 --- a/7.0/search/search_index.json +++ b/7.0/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to IXP Manager","text":"

IXP Manager is the most trusted IXP platform worldwide.

It is a full-stack management system for Internet eXchange Points (IXPs) which includes an administration and customer portal; provides end-to-end provisioning; and both teaches and implements best practice. It allows IXPs to manage new customers, provision new connections / services and monitor traffic usage. It has a number of provisioning templates, including the ability to generate secure, proven route server configurations, and provides a built-in looking glass for these.

INEX are pleased to release IXP Manager under an open source license (the GNU General Public License v2) which we hope will benefit the wider IXP community, and especially new and small IXPs looking to expand.

Additional information: https://www.ixpmanager.org/

"},{"location":"#other-links","title":"Other Links","text":""},{"location":"#people-behind-ixp-manager","title":"People Behind IXP Manager","text":"

INEX is an Internet eXchange Point and Ireland's IP peering hub. It is a neutral, industry-owned Association, founded in 1996, that provides IP peering facilities for its members. INEX membership is open to all organisations that can benefit from peering their IP traffic. See: https://www.inex.ie/. INEX is IXP Manager's core team and founder.

Principal Authors:

The team at INEX deserves special mention - its CEO, Eileen Gallagher, for her steadfast support and promotion of the project; the INEX Board of Directors - past and present - for their approval to open source the project and their ongoing support of the project; and all team members who contribute real-world feedback, suggestions and all-round support!

See the team page on the IXP Manager website for more details.

We are also grateful for all the individuals who have contributed code, issues, mailing list help and feature requests.

"},{"location":"#sponsors","title":"Sponsors","text":"

IXP Manager is extremely grateful to its existing sponsors and patrons and continues to seek sponsors for ongoing development. If you are interested, please see our call for sponsorship.

"},{"location":"#license","title":"License","text":"

IXP Manager is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2.0 of the License.

IXP Manager is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License v2.0 along with IXP Manager. If not, see:

http://www.gnu.org/licenses/gpl-2.0.html

"},{"location":"#documentation-license","title":"Documentation License","text":"

This documentation is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

"},{"location":"dev/api/","title":"API","text":"

You'll find API documentation when logged into IXP Manager under My Account / API Keys.

"},{"location":"dev/authentication/","title":"Authentication & Session Management (Development Notes)","text":"

Please read the authentication usage instructions first as that provides some key information for understanding the below brief notes.

IXP Manager uses Laravel's standard authentication framework with Laravel's Eloquent ORM.

See also Barry O'Donovan's PHP write-up of the 2FA and user session management changes introduced in v5.3.0 and the v5.2.0 to v5.3.0 diff.

"},{"location":"dev/authentication/#multiple-sessions-remember-me-cookies","title":"Multiple Sessions / Remember Me Cookies","text":"

The default Laravel remember-me functionality uses a single shared token across multiple devices. In practice this never worked well for us (most likely a consequence of using LaravelDoctrine). Regardless, we also wanted to expand the functionality to uniquely identify each session and allow the user to log out other sessions.

"},{"location":"dev/authentication/#sessionguard-and-userprovider","title":"SessionGuard and UserProvider","text":"

Most of the functionality exists in the session guard and user provider classes. We have overridden these here:

Again, the idea is to minimise the changes required to the core Laravel framework (and Laravel Eloquent).
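
As a rough illustration of how such a guard / provider pair is typically wired up in Laravel, registration usually happens in a service provider. The class names and constructor wiring below are illustrative assumptions, not IXP Manager's actual code; only the Auth::provider() / Auth::extend() extension points are standard Laravel:

<?php\n// A hedged sketch only: class names and wiring are illustrative assumptions.\nAuth::provider( 'ixp-users', function( $app, array $config ) {\n    // hypothetical custom user provider\n    return new \\IXP\\Auth\\EloquentUserProvider( $app[ 'hash' ], $config[ 'model' ] );\n});\n\nAuth::extend( 'ixp-session', function( $app, $name, array $config ) {\n    // hypothetical custom session guard wrapping the provider above\n    return new \\IXP\\Auth\\SessionGuard( $name, Auth::createUserProvider( $config[ 'provider' ] ), $app[ 'session.store' ] );\n});\n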

"},{"location":"dev/authentication/#how-it-works","title":"How It Works","text":"

When you log in and check remember me, IXP Manager will create a new UserRememberToken database entry with a unique token and an expiry set to config( 'auth.guards.web.expire' ) (note this is in minutes).
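
As a trivial illustration of that expiry calculation (standard Laravel helpers; the surrounding entity handling is omitted):

<?php\n// illustrative only: 'auth.guards.web.expire' is in minutes, as noted above\n$expiry = now()->addMinutes( config( 'auth.guards.web.expire' ) );\n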

Your browser will be sent a cookie named remember_web_xxxx (where xxxx is random). This cookie contains the encrypted token that was created (UserRememberToken). IXP Manager uses this cookie to create a new authenticated session if your previous session has timed out, etc.

Note that this remember_web_xxxx cookie has an indefinite expiry date - the actual expiring of the remember me session is handled by the expiry field in the UserRememberToken database entry.

When you subsequently make a request to IXP Manager:

  1. the SessionGuard (see the user() method) first tries to retrieve your session via the standard browser session cookie (laravel_session).
  2. If that does not exist or has expired, it then looks for the remember_web_xxxx cookie and, if it exists, validates it and logs you in with a new laravel_session.
"},{"location":"dev/authentication/#two-factor-authentication","title":"Two-Factor Authentication","text":"

Two-factor authentication (2fa) is implemented using the pragmarx/google2fa package via its Laravel bridge antonioribeiro/google2fa-laravel.

The database table for storing a user's secret key is user_2fa. 2FA for a user is enabled if:

  1. there exists a $user->user2FA entity (one to one); and
  2. Auth::getUser()->user2FA->enabled is true.

Once 2fa is enabled, the mechanism for enforcing it is the 2fa middleware. This is applied to all authenticated http web requests via app/Providers/RouteServiceProvider.php.
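
Expressed as code, the two conditions above amount to something like the following (a minimal restatement, not the actual middleware implementation):

<?php\n// a user has 2fa enabled if the one-to-one entity exists and is flagged enabled\n$user = Auth::getUser();\n$twoFaEnabled = $user->user2FA && $user->user2FA->enabled;\n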

"},{"location":"dev/authentication/#avoiding-2fa-on-remember-me-sessions","title":"Avoiding 2fa on Remember Me Sessions","text":"

We use the antonioribeiro/google2fa-laravel bridge's PragmaRX\\Google2FALaravel\\Events\\LoginSucceeded event to update a user's remember me token via the listener IXP\\Listeners\\Auth\\Google2FALoginSucceeded. The update is to set user_remember_tokens.is_2fa_complete to true so that the SessionGuard knows to skip 2fa on these sessions.
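
A hedged sketch of what such a listener might look like follows. How the user and their current remember-me token are retrieved is an assumption for illustration only and not necessarily how IXP Manager implements it:

<?php\nnamespace IXP\\Listeners\\Auth;\n\nuse PragmaRX\\Google2FALaravel\\Events\\LoginSucceeded;\n\nclass Google2FALoginSucceeded\n{\n    public function handle( LoginSucceeded $event ): void\n    {\n        // assumption: the event exposes the authenticated user and we can find\n        // their current remember-me token; mark it as having completed 2fa\n        if( $token = $event->user->currentRememberToken() ) {   // hypothetical accessor\n            $token->is_2fa_complete = true;\n            $token->save();\n        }\n    }\n}\n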

"},{"location":"dev/ci/","title":"Continuous Integration","text":"

IXP Manager grew out of a code base and schema that started in the early '90s, long before test-driven development or behaviour-driven development was fashionable for PHP. However, as IXP Manager is taking over more and more critical configuration tasks, we continue to backfill some automated testing with continuous integration for critical elements.

We use GitHub Actions for continuous integration which is provided free for public repositories.

Our current build status is:

The CI system runs the full suite of tests every time a commit is pushed to GitHub. As such, any build failing states are usually transitory. Official IXP Manager releases are only made when all tests pass.

We use two types of unit tests:

  1. PHPUnit for standard unit tests;
  2. Laravel Dusk for browser-based tests.

We won't be aggressively writing tests for the existing codebase but will add tests as appropriate as we continue development. What follows are some basic instructions on how to set up tests and an overview of (or links to) some of the tests we have implemented.

DISCLAIMER: This is not a tutorial on unit testing, phpunit, Laravel Dusk or anything else. If you have no experience with these tools, please read up on them elsewhere first.

"},{"location":"dev/ci/#setting-up-phpunit-tests","title":"Setting Up PHPUnit Tests","text":"

Documentation by real example can be found via the GitHub Actions workflow files and the CI data directory which contains scripts, database dumps and configurations.

Testing assumes a known good sample database which contains a small mix of customers with different configuration options. The files generated from this database are tested against known good configuration files. You first need to create a database, add a database user, import this testing database and then configure a .env file for testing (see the one we use here).

In MySQL:

CREATE DATABASE ixp_ci CHARACTER SET = 'utf8mb4' COLLATE = 'utf8mb4_unicode_ci';\nGRANT ALL ON `ixp_ci`.* TO `ixp_ci`@`localhost` IDENTIFIED BY 'somepassword';\nFLUSH PRIVILEGES;\n

Then import the sample database:

cat data/ci/ci_test_db.sql.bz2  | mysql -h localhost -u ixp_ci -psomepassword ixp_ci\n

Now, create your .env for testing, such as:

DB_HOST=localhost\nDB_DATABASE=ixp_ci\nDB_USERNAME=ixp_ci\nDB_PASSWORD=somepassword\n

Note that the phpunit.xml file in the root directory has some default settings matching the test database. You should not need to edit these.

"},{"location":"dev/ci/#setting-up-laravel-dusk","title":"Setting Up Laravel Dusk","text":"

Please review the official documentation here.

You need to ensure the development packages for IXP Manager are installed via:

# move to the root directory of IXP Manager\ncd $IXPROOT\ncomposer install --dev\n

You need to set the APP_URL environment variable in your .env file. This value should match the URL you use to access your application in a browser.

"},{"location":"dev/ci/#test-database-notes","title":"Test Database Notes","text":"
  1. The SUPERADMIN username / password is one-way hashed using bcrypt. If you want to log into the frontend of the test database, these details are: travis / travisci.
  2. There are two test CUSTADMIN accounts which can be accessed using username / password: hecustadmin / travisci and imcustadmin / travisci.
  3. There are two test CUSTUSER accounts which can be accessed using username / password: hecustuser / travisci and imcustuser / travisci.
"},{"location":"dev/ci/#running-tests","title":"Running Tests","text":"

In one console session, start the artisan / Laravel web server:

# move to the root directory of IXP Manager\ncd $IXPROOT\nphp artisan serve\n

Then, to kick off all the tests (which include both PHPUnit and Laravel Dusk tests), run:

./vendor/bin/phpunit\n

Sample output:

PHPUnit 7.2.2 by Sebastian Bergmann and contributors.\n\n...............................................................  63 / 144 ( 43%)\n............................................................... 126 / 144 ( 87%)\n..................                                              144 / 144 (100%)\n\nTime: 1.86 minutes, Memory: 103.73MB\n

If you only want to run Laravel Dusk / browser tests, run the following (shown with sample output):

$ php artisan dusk\nPHPUnit 6.5.8 by Sebastian Bergmann and contributors.\n\n..                                                                  2 / 2 (100%)\n\nTime: 12.73 seconds, Memory: 24.00MB\n

If you want to exclude the browser based tests, just exclude that directory as follows:

$ ./vendor/bin/phpunit --filter '/^((?!Tests\\\\Browser).)*$/'\nPHPUnit 7.2.2 by Sebastian Bergmann and contributors.\n\n...............................................................  63 / 142 ( 44%)\n............................................................... 126 / 142 ( 88%)\n................                                                142 / 142 (100%)\n\nTime: 1.59 minutes, Memory: 106.41MB\n

You can also limit tests to specific test suites:

$ ./vendor/bin/phpunit --testsuite 'Dusk / Browser Test Suite'\n$ ./vendor/bin/phpunit --testsuite 'Docstore Test Suite'\n$ ./vendor/bin/phpunit --testsuite 'IXP Manager Test Suite'\n
"},{"location":"dev/cla/","title":"Contributor License Agreement","text":"

Please see the contributing instructions for full details.

Third-party patches are welcomed for adding functionality and fixing bugs. Before they can be accepted into the project, contributors must sign the below Contributor License Agreement (gpg --clearsign inex-cla.txt) and email it to operations (at) inex (dot) ie.

Individual Contributor License Agreement v1.1\n=============================================\n\nInternet Neutral Exchange Association Company Limited By Guarantee\n\nThis document clarifies the terms under which You, the person listed below,\nmay make Contributions \u2014 which may include without limitation, software, bug\nfixes, configuration changes, documentation, or any other materials \u2014 to any\nof the projects owned or managed by Internet Neutral Exchange Association\nCompany Limited By Guarantee, hereinafter known as \"INEX\".\n\nPlease complete the following information about You and the Contributions.\nIf You have questions about these terms, please contact us at\noperations@inex.ie.\n\nYou accept and agree to the following terms and conditions for Your present\nand future Contributions submitted to INEX.  Except for the license granted\nherein to INEX, You reserve all right, title, and interest in and to Your\nContributions.\n\nINEX projects (code, documentation, and any other materials) are released\nunder the terms of the GNU General Public License, v2.0.\n\nYou certify that:\n\n(a) Your Contributions are created in whole or in part by You and You have\nthe right to submit it under the designated license; or\n\n(b) Your Contributions are based upon previous work that, to the best of\nyour knowledge, is covered under an appropriate open source license and You\nhave the right under that license to submit that work with modifications,\nwhether created in whole or in part by You, under the designated license; or\n\n(c) Your Contributions are provided directly to You by some other person who\ncertified (a) or (b) and You have not modified them.\n\n(d) You understand and agree that INEX projects and Your Contributions are\npublic and that a record of the Contributions (including all metadata and\npersonal information You submit with them) is maintained indefinitely and\nmay be redistributed in a manner consistent with INEX's policies and/or the\nrequirements of the GNU General Public License v2.0 where they are relevant.\n\n(e) You are granting Your Contributions to INEX under the terms of the GNU\nGeneral Public License v2.0.\n\nFull Name:\nEmail Addresses:\nDate:\n
"},{"location":"dev/docker/","title":"Docker","text":"

For development purposes, we have both Docker and Vagrant build files.

This page on Docker for IXP Manager development should be read in conjunction with the official IXP Manager Docker repository. While that repository is not for development purposes, the terminology and container descriptions apply here also.

"},{"location":"dev/docker/#tldr-guide-to-get-docker-running","title":"TL;DR Guide to Get Docker Running","text":"

If you want to get IXP Manager with Docker up and running quickly, follow these steps:

  1. Install Docker (see: https://www.docker.com/community-edition)
  2. Clone IXP Manager to a directory:

    git clone https://github.com/inex/IXP-Manager.git ixpmanager\ncd ixpmanager\n
  3. Copy the stock Docker/IXP Manager configuration file and default database:

    cp .env.docker .env\ncp tools/docker/containers/mysql/docker.sql.dist tools/docker/containers/mysql/docker.sql\n
  4. Spin up the Docker containers:

    docker-compose -p ixpm up\n
  5. Access IXP Manager on: http://localhost:8880/

  6. Log in with one of the following username / passwords:

     - Admin user: docker / docker01
     - Customer Admin: as112admin / as112admin
     - Customer User: as112user / as112user
"},{"location":"dev/docker/#general-overview","title":"General Overview","text":"

As the IXP Manager ecosystem grows, it becomes harder and harder to maintain ubiquitous development environments. Docker is ideally suited to solving these issues.

The multi-container Docker environment for developing IXP Manager builds an IXP Manager system which includes:

"},{"location":"dev/docker/#useful-docker-commands","title":"Useful Docker Commands","text":"

The following are a list of Docker commands that are useful in general but also specifically for the IXP Manager Docker environment.

Note that in our examples, we set the Docker project name to ixpm and so that prefix is used in some of the below. We also assume that only one IXP Manager Docker environment is running and so we complete Docker container names with _1.

# show all running containers\ndocker ps\n\n# show all containers - running and stopped\ndocker ps -a\n\n# stop a container:\ndocker stop <container>\n\n# start a container:\ndocker start <container>\n\n# we create an 'ixpmanager' network. Sometimes you need to delete this when\n# stopping / starting the environment\ndocker network rm ixpm_ixpmanager\n\n# you can copy files into a docker container via:\ndocker cp test.sql ixpm_mysql_1:/tmp\n# where 'ixpm_mysql_1' is the container name.\n# reverse it to copy a file out:\ndocker cp ixpm_mysql_1:/tmp/dump.sql .\n\n# I use two useful aliases for stopping and removing all containers:\nalias docker-stop-all='docker stop $(docker ps -a -q)'\nalias docker-rm-all='docker rm $(docker ps -a -q)'\n\n# You can also remove all unused (unattached) volumes:\ndocker volume prune\n# WARNING: you might want to check what will be deleted with:\ndocker volume ls -f dangling=true\n
"},{"location":"dev/docker/#development-use","title":"Development Use","text":"

As above, please note that in our examples, we set the Docker project name to ixpm and so that prefix is used in some of the below. We also assume that only one IXP Manager Docker environment is running and so we complete Docker container names with _1.

For most routine development use, you usually only need two containers:

docker-compose -p ixpm up mysql www\n

If you are sending emails as part of your development process, include the mail catcher:

docker-compose -p ixpm up mysql www mailcatcher\n
"},{"location":"dev/docker/#mysql-database","title":"MySQL Database","text":"

When the mysql container builds, it pre-populates the database with the contents of the SQL file found at tools/docker/containers/mysql/docker.sql. This is not present by default and is ignored by Git (to ensure you do not accidentally commit a production database!).

A default SQL database is bundled and should be placed in this location via:

cp tools/docker/containers/mysql/docker.sql.dist tools/docker/containers/mysql/docker.sql\n

You can put your own database here also. If you do, you will need to rebuild the mysql container:

docker-compose build mysql\n

You can access the MySQL database via:

docker exec -it ixpm_mysql_1 mysql -i ixpmanager\n

And you can get shell access to the container with:

docker exec -it ixpm_mysql_1 bash\n

You can also connect to the MySQL server from your local machine using tools such as the standard MySQL client, TablePlus or Sequel Pro with the following settings:

mysql --protocol=TCP --port 33060 -u root\n
"},{"location":"dev/docker/#web-server","title":"Web Server","text":"

Note that the www container mounts the IXP Manager development directory under /srv/ixpmanager. This means all local code changes are immediately reflected on the Docker web server.

The Dockerfile used to build the www container can be found at tools/docker/containers/www/Dockerfile.

You can access the container with:

docker exec -it ixpm_www_1 bash\n
"},{"location":"dev/docker/#mailcatcher","title":"Mailcatcher","text":"

We include a mailcatcher container which catches all emails IXP Manager sends and displays them on a web frontend. Ensure this container is started by either:

# start all containers:\ndocker-compose -p ixpm up\n# start mailcatcher with (at least) mysql and www:\ndocker-compose -p ixpm up mysql www mailcatcher\n

The .env.docker config contains the following SMTP / mail settings which ensure emails get sent to the mailcatcher:

MAIL_HOST=172.30.201.11\nMAIL_PORT=1025\n

You can then view emails sent on: http://localhost:1080/

"},{"location":"dev/docker/#emulating-switches","title":"Emulating Switches","text":"

ixpm_switch1_1 and ixpm_switch2_1 emulate switches by replaying SNMP dumps from real INEX switches (with some sanitisation). The OIDs for traffic have been replaced with a dynamic function to give varying values.

From the www container, you can interact with these via:

$ docker exec -it ixpm_www_1 bash\n# ping switch1\n...\n64 bytes from switch1 (172.30.201.60): icmp_seq=1 ttl=64 time=0.135 ms\n...\n# ping switch2\n...\n64 bytes from switch2 (172.30.201.61): icmp_seq=1 ttl=64 time=0.150 ms\n...\n\n\n# snmpwalk -On -v 2c -c switch1 switch1 .1.3.6.1.2.1.1.5.0\n.1.3.6.1.2.1.1.5.0 = STRING: \"switch1\"\n# snmpwalk -On -v 2c -c switch2 switch2 .1.3.6.1.2.1.1.5.0\n.1.3.6.1.2.1.1.5.0 = STRING: \"switch2\"\n

You'll note from the above that the hostnames switch1 and switch2 work from the www container. Note also that the SNMP community is the hostname (switch1 or switch2 as appropriate).

The packaged database only contains switch1. This allows you to add the second switch via http://localhost:8880/switch/add-by-snmp by setting the hostname and community to switch2.

If you want to add new customers for testing, add switch2 and then use interfaces Ethernet 2, 8 and 13, and Ethernet 6 and 7 as a LAG, as these have been preset to provide dynamic stats.

"},{"location":"dev/docker/#route-server-and-clients","title":"Route Server and Clients","text":"

The containers include a working route server (ixpm_rs1_1) with 3 IPv4 clients (ixpm_cust-as112_1, ixpm_cust-as42_1, ixpm_cust-as1213_1) and 2 IPv6 clients (ixpm_cust-as1213-v6_1, ixpm_cust-as25441-v6_1). IXP Manager also includes a working looking glass for this with Bird's Eye installed on the route server.

You can access the Bird BGP clients on the five sample customers using the following examples:

# access Bird command line interface:\ndocker exec -it ixpm_cust-as112_1 birdc\n\n# run a specific Bird command\ndocker exec -it ixpm_cust-as112_1 birdc show protocols\n

The route server runs an IPv4 and an IPv6 daemon. These can be accessed via the looking glass at http://127.0.0.1:8881/ or on the command line via:

# ipv4 daemon:\ndocker exec -it ixpm_rs1_1 birdc -s /var/run/bird/bird-rs1-ipv4.ctl\n\n# ipv6 daemon:\ndocker exec -it ixpm_rs1_1 birdc6 -s /var/run/bird/bird-rs1-ipv6.ctl\n

In this container, Bird's Eye can be found at /srv/birdseye with the web server config under /etc/lighttpd/lighttpd.conf.

We include the IXP Manager scripts for updating the route server configuration and reconfiguring Bird:

# get shell access to the container\ndocker exec -it ixpm_rs1_1 bash\n\n# all scripts under the following directory\ncd /usr/local/sbin/\n\n# reconfigure both daemons:\n./api-reconfigure-all-v4.sh\n\n# reconfigure a specific daemon with verbosity:\n./api-reconfigure-v4.sh -d -h rs1-ipv4\n
"},{"location":"dev/docker/#mrtg-grapher","title":"Mrtg / Grapher","text":"

For developing / testing Grapher with Mrtg, we include a container that runs Mrtg via cron from a pre-configured mrtg.conf file.

NB: please ensure to update the GRAPHER_BACKENDS option in .env so it includes mrtg as follows:

GRAPHER_BACKENDS=\"mrtg|dummy\"\n

The configuration file matches the docker.sql configuration and can be seen in the IXP Manager source directory at tools/docker/mrtg/mrtg.cfg.

You can access the Mrtg container via:

docker exec -it ixpm_mrtg_1 sh\n

We also install a script in the root directory of the container that will pull a new configuration from IXP Manager. Run it via:

docker exec -it ixpm_mrtg_1 sh\ncd /\n./update-mrtg-conf.sh\n\n# or without entering the container:\ndocker exec -it ixpm_mrtg_1 /update-mrtg-conf.sh\n

It will replace /etc/mrtg.conf for the next cron run. It also sets the configuration not to run as a daemon as cron is more useful for development.

"},{"location":"dev/docker/#managing-dockerfile","title":"Managing Dockerfile","text":"

In this example, we look at the ixpmanager/www Docker image and update it from base php:7.0-apache to php:7.3-apache:

cd tools/docker/containers/www\n# replace:\n#    FROM php:7.0-apache\n# with:\n#    FROM php:7.3-apache\ndocker build .\n# or tag with: docker build -t ixpmanager/www:v5.0.0\n# then push the tagged image:\ndocker push ixpmanager/www:v5.0.0\n# if you're not logged in, use: docker login\n

Do not forget to update your docker-compose.yml files to reference the new tag.

"},{"location":"dev/docker/#dev-tool-integrations","title":"Dev Tool Integrations","text":"

NB: these tools and integrations are not IXP Manager specific but rather the typical Docker / PHP development tool chain. Please use support forums for the relevant sections / tools rather than contacting the IXP Manager developers directly.

"},{"location":"dev/docker/#php-storm-and-xdebug","title":"PHP Storm and Xdebug","text":"

We are big fans of PhpStorm at IXP Manager DevHQ. One key feature is PhpStorm's integration with PHP Xdebug. We of course also need this to work with Docker.

Some background information on Xdebug is provided below but you are expected to be familiar with the Xdebug documentation on remote debugging.

The way interactive remote debugging works with Xdebug is as follows:

  1. Enable remote debugging on the PHP server (php.ini settings on the www container).
  2. Your browser, with a suitable plugin, includes an Xdebug parameter to signal you want remote debugging started for this request (either a GET, POST or Cookie setting).
  3. PHP Xdebug connects to the configured remote debugger (PhpStorm in our case) allowing you to set break points, step through instructions, view variable contents at a point in time, etc.

To get this working with Docker, we need to work through each of these steps.

1. Enable Remote Debugging in the Docker Container

If you examine the www container Dockerfile in the IXP Manager source under tools/docker/containers/www/Dockerfile, you will see that we install Xdebug and configure it with an INI file along the following lines:

[xdebug]\nzend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20151012/xdebug.so\nxdebug.remote_enable=1\nxdebug.remote_port=9001\nxdebug.remote_autostart=0\nxdebug.idekey=PHPSTORM\nxdebug.profiler_enable=0\nxdebug.profiler_enable_trigger=1\nxdebug.profiler_output_dir=/srv/ixpmanager/storage/tmp\nxdebug.auto_trace=0\nxdebug.trace_enable_trigger=1\nxdebug.trace_output_dir=/srv/ixpmanager/storage/tmp\n

Note that the zend_extension may change as it is dynamically set by the build script. We also chose port 9001 rather than the default of 9000 due to local conflicts with common tool chains.

The one key element missing from the INI above is the remote debugger's IP address. This needs to be set to your development computer's LAN address (there are other options but this works best in practice). Once you know this address (say it's 192.0.2.23), set the following in ${IXPROOT}/.env:

# For PHP xdebug, put in the IP address of your host\nDOCKER_XDEBUG_CONFIG_REMOTE_HOST=192.0.2.23\n

When you start the Docker environment from $IXPROOT using docker-compose with something like:

cd $IXPROOT\ndocker-compose -p ixpm up mysql www\n

then docker-compose will use this setting from the .env file and it will be passed through to Xdebug.

2. Install a Xdebug Plugin on Your Browser

Some recommended plugins from the Xdebug documentation on remote debugging are these: Firefox, Chrome, Safari. It can also be enabled manually using a GET parameter - see the Xdebug documentation.

The only required parameter is the session key. For PhpStorm, the default is PHPSTORM unless you have configured it differently (see step 3 below).

The PHP Xdebug browser plugins allow you to enable debugging on a per request basis. See the Firefox link above to the plugin homepage for screenshots (as of 2018-01 at least).

3. Configure PhpStorm

PhpStorm has its own documentation for Xdebug. The short version to match the above two steps is:

You now need to create a Run/Debug Configuration. This is so you can map file paths on the remote system (www container) to your local development files:

For testing, set a break point in public/index.php and access your development IXP Manager using your new browser plugin. You should be able to step through each statement and - presuming your mappings are correctly set up - step into any file in the project.

Profiling and Function Traces

You may have noticed that, in the Xdebug configuration above, we have also allowed for the triggering of function traces and profiling. The browser plugins should support these - certainly the Firefox one does (leave the trigger key blank in both cases).

When you request an IXP Manager page via Firefox with profiling enabled, you will find the cachegrind file in $IXPROOT/storage/tmp on your own system. You can then view this in PhpStorm via the menu Tools -> Analyze Xdebug Profiler Snapshot....

Function traces can be found in the same directory - these are just text files.

"},{"location":"dev/docs/","title":"Documentation","text":"

From v4 onwards, we use GitHub Pages with MkDocs to build the documentation.

Both the site and the content are hosted on GitHub.

"},{"location":"dev/docs/#contributing-suggesting-errata","title":"Contributing / Suggesting Errata","text":"

We welcome contributions or errata that improve the quality of our documentation. Please use one of the following two channels:

  1. Via the standard GitHub workflow of forking our documentation repository, making your edits in your fork and then opening a pull request.
  2. If you are not familiar with GitHub, then please open an issue on the documentation repository with your suggestions.
"},{"location":"dev/docs/#building-locally","title":"Building Locally","text":"

If you haven't already, install MkDocs. These instructions work on macOS as of 2024:

# create a venv\npython3 -m venv venv\ncd venv\n\n# install mkdocs\n./bin/pip install mkdocs mike pymdown-extensions mkdocs-material mkdocs-git-revision-date-localized-plugin\n

The documentation can then be built locally as follows:

git clone https://github.com/inex/ixp-manager-docs-md.git\ncd ixp-manager-docs-md\n./venv/bin/mkdocs build\n

You can serve them locally with the following and then access them via http://127.0.0.1:8000 -

./venv/bin/mkdocs serve\n

Or to see the full versioning site, use:

./venv/bin/mike serve\n
"},{"location":"dev/docs/#deploying-to-live-site","title":"Deploying to Live Site","text":"

Since September 2024, we use documentation versioning and so you should only be pushing to the latest major.minor release or the dev major.minor version.

Never deploy to historical versions!

As an example, at the time of writing, 6.4.x is the latest release and 7.0 is in development. We did our final push to 6.4 via:

PATH=./venv/bin:$PATH ./venv/bin/mike deploy --push --update-aliases 6.4 latest\n

And all new documentation will be pushed to dev via:

PATH=./venv/bin:$PATH ./venv/bin/mike deploy --push --update-aliases 7.0 dev\n

Once 7.0 is released, we will push a final update to 7.0 updating it to latest:

PATH=./venv/bin:$PATH ./venv/bin/mike deploy --push --update-aliases 7.0 latest\n

And all new documentation will be pushed to dev via:

PATH=./venv/bin:$PATH ./venv/bin/mike deploy --push --update-aliases 7.1 dev\n

Note that PATH=./venv/bin:$PATH is used as mike in turn calls mkdocs which is in this path.

You must be an authorised user for this but we welcome pull requests against the documentation repository!

Do not forget to push your changes to GitHub (if you have push permissions):

git add .\ngit commit -am \"your commit message\"\ngit push\n
"},{"location":"dev/foil/","title":"Foil","text":"

Foil is the view layer of IXP Manager.

"},{"location":"dev/foil/#undefined-variables","title":"Undefined Variables","text":"

Foil is configured to throw an exception if a variable is undefined.

Methods to test a variable include:

<?= isset( $t->aaa ) ? 'b' : 'c' ?>\n// c\n

Methods that do not work include (the null coalescing operator does not stop Foil's accessor from running, so the exception is still thrown):

<?= $t->aaa ?? 'c' ?>\n
"},{"location":"dev/forms/","title":"Forms","text":"

This page collates various notes and best practices for writing HTML forms with IXP Manager and Laravel.

IXP Manager uses the library Former to generate forms. Here are some examples of how to use Former.

"},{"location":"dev/forms/#html5-validation","title":"HTML5 Validation","text":"

Former adds HTML5 validation tags when it creates forms. If you wish to test the PHP code's validation rules, you will need to disable this in development by setting the following .env setting:

FORMER_LIVE_VALIDATION=false\n
"},{"location":"dev/forms/#checkboxes","title":"Checkboxes","text":"

We had some issues using Former's checkboxes correctly - this is why we are providing the correct way to use them.

"},{"location":"dev/forms/#configuration","title":"Configuration","text":"

First make sure that the Former configuration file (config/former.php) is correctly configured:

// Whether checkboxes should always be present in the POST data,\n// no matter if you checked them or not\n'push_checkboxes'         => true,\n\n// The value a checkbox will have in the POST array if unchecked\n'unchecked_value'         => 0,\n
"},{"location":"dev/forms/#view","title":"View","text":"

The following is the structure of a checkbox input:

Note: in this example the checkbox will be unchecked:

Former::checkbox( 'checkbox-name' )\n    ->id( 'checkbox-id' )\n    ->label( 'my-label' )\n    ->text( 'my-text' )\n    ->value( 1 )\n    ->blockHelp( \"Help text\" );\n

To check a checkbox by default add the following function to the checkbox structure above:

    ->check()\n

If the checkbox has to be checked depending on a variable:

    ->check( $myVariableIsChecked ? 1 : 0 )\n

Note: The above case should be an exception and not a common way to populate the checkboxes. To populate the checkboxes correctly you have to do it via the controller as explained below.

"},{"location":"dev/forms/#controller","title":"Controller","text":"

You can populate a form via the controller with the function Former::populate() by the usual method of passing an array of values:

Former::populate([\n    'my-checkbox' => $object->isChecked() ? 1 : 0,\n]);\n
"},{"location":"dev/forms/#mardown-textarea","title":"Mardown Textarea","text":"

IXP Manager uses a Markdown library for its notes input fields. Here are some examples of how to use Markdown.

"},{"location":"dev/forms/#view_1","title":"View","text":"

You will have to use the following HTML structure to be able to add markdown to your textarea:

<div class=\"form-group\">\n    <label for=\"notes\" class=\"control-label col-lg-2 col-sm-4\">Notes</label>\n    <div class=\"col-sm-8\">\n        <ul class=\"nav nav-tabs\">\n            <li role=\"presentation\" class=\"active\">\n                <a class=\"tab-link-body-note\" href=\"#body\">Body</a>\n            </li>\n            <li role=\"presentation\">\n                <a class=\"tab-link-preview-note\" href=\"#preview\">Preview</a>\n            </li>\n        </ul>\n        <br>\n        <div class=\"tab-content\">\n            <div role=\"tabpanel\" class=\"tab-pane active\" id=\"body\">\n                <textarea class=\"form-control\" style=\"font-family:monospace;\" rows=\"20\" id=\"notes\" name=\"notes\"><?= $note_value ?></textarea>\n            </div>\n            <div role=\"tabpanel\" class=\"tab-pane\" id=\"preview\">\n                <div class=\"well well-preview\" style=\"background: rgb(255,255,255);\">\n                    Loading...\n                </div>\n            </div>\n        </div>\n        <br><br>\n    </div>\n</div>\n
The href=\"#body\" from <a id=\"tab-link-body\" class=\"tab-link-body-note\" href=\"#body\">Body</a> have to match with the id=\"body\" from <div role=\"tabpanel\" class=\"tab-pane active\" id=\"body\">.

The same for <a id=\"tab-link-preview\" class=\"tab-link-preview-note\" href=\"#preview\">Preview</a> and <div role=\"tabpanel\" class=\"tab-pane\" id=\"preview\">.

If you want to add more than one textarea with markdown to your page, you will have to make sure that the HTML IDs of the inputs are different, as in the following example:

<div class=\"form-group\">\n    <label for=\"notes\" class=\"control-label col-lg-2 col-sm-4\">Public Notes</label>\n    <div class=\"col-sm-8\">\n\n        <ul class=\"nav nav-tabs\">\n            <li role=\"presentation\" class=\"active\">\n                <a class=\"tab-link-body-note\" href=\"#body1\">Body</a>\n            </li>\n            <li role=\"presentation\">\n                <a class=\"tab-link-preview-note\" href=\"#preview1\">Preview</a>\n            </li>\n        </ul>\n\n        <br>\n\n        <div class=\"tab-content\">\n            <div role=\"tabpanel\" class=\"tab-pane active\" id=\"body1\">\n                <textarea class=\"form-control\" style=\"font-family:monospace;\" rows=\"20\" id=\"notes\" name=\"notes\"><?= $t->notes ?></textarea>\n                <p class=\"help-block\">These notes are visible (but not editable) to the member. You can use markdown here.</p>\n            </div>\n            <div role=\"tabpanel\" class=\"tab-pane\" id=\"preview1\">\n                <div class=\"well well-preview\" style=\"background: rgb(255,255,255);\">\n                    Loading...\n                </div>\n            </div>\n        </div>\n\n        <br><br>\n    </div>\n\n</div>\n\n\n<div class=\"form-group\">\n\n    <label for=\"private_notes\" class=\"control-label col-lg-2 col-sm-4\">Private Notes</label>\n    <div class=\"col-sm-8\">\n\n        <ul class=\"nav nav-tabs\">\n            <li role=\"presentation\" class=\"active\">\n                <a class=\"tab-link-body-note\" href=\"#body2\">Body</a>\n            </li>\n            <li role=\"presentation\">\n                <a class=\"tab-link-preview-note\" href=\"#preview2\">Preview</a>    \n            </li>\n        </ul>\n\n        <br>\n\n        <div class=\"tab-content\">\n            <div role=\"tabpanel\" class=\"tab-pane active\" id=\"body2\">\n                <textarea class=\"form-control\" style=\"font-family:monospace;\" rows=\"20\" id=\"private_notes\" name=\"private_notes\"><?= $t->private_notes ?></textarea>\n                <p class=\"help-block\">These notes are <b>NOT</b> visible to the member. You can use markdown here.</p>\n            </div>\n            <div role=\"tabpanel\" class=\"tab-pane\" id=\"preview2\">\n                <div class=\"well well-preview\" style=\"background: rgb(255,255,255);\">\n                    Loading...\n                </div>\n            </div>\n        </div>\n\n        <br><br>\n    </div>\n\n</div>\n

Note: Please do not change the HTML class of the elements!

"},{"location":"dev/frontend-crud/","title":"Frontend CRUD","text":"

IXP Manager, like many applications, has a lot of tables that need basic CRUD access: Create, Read, Update and Delete (plus list and view). In older versions of IXP Manager (and as yet unupdated code), we used this Zend Framework trait to allow us to rapidly deploy CRUD interfaces.

For IXP Manager >= v4.7, we have duplicated (and improved) this to create a scaffolding framework in Laravel. This page documents that class.

"},{"location":"dev/frontend-crud/#configuration","title":"Configuration","text":"

In any controller extending the Doctrine2Frontend class, a _feInit() method is required which configures the controller and, for example, allows you to set what is displayed for different levels of user privileges.

The primary purpose of this function is to define the anonymous object _feParams (using an object ensures that the view gets a reference to the object and not a copy of a static array at a point in time):

<?php\nprotected function _feInit()\n{\n    $this->view->feParams = $this->_feParams = (object)[\n\n        // the ORM entity object that CRUD operations will affect:\n        'entity'            => InfrastructureEntity::class,\n\n        'pagetitle'         => 'Infrastructures',\n\n        // default is false. If true, add / edit / delete will be disabled\n        'readonly'          => false,\n\n        'titleSingular'     => 'Infrastructure',\n        'nameSingular'      => 'an infrastructure',\n\n        'viewFolderName'    => 'infrastructure',\n\n        'readonly'          => self::$read_only,\n\n        'listColumns' => [\n            // what columns to display in the list view\n            'id'         => [ 'title' => 'DB ID', 'display' => true ],\n            'name'       => 'Name',\n            'shortname'  => 'Shortname'\n        ],\n\n        'listOrderBy'    => 'name',    // how to order columns\n        'listOrderByDir' => 'ASC',     // direction of order columns\n    ];\n\n    // you can then override some of the above for different user privileges (for example)\n    switch( Auth::user() ? Auth::user()->getPrivs() : UserEntity::AUTH_PUBLIC ) {\n\n        case UserEntity::AUTH_SUPERUSER:\n            $this->_feParams->pagetitle = 'Infrastructures (Superuser View)';\n\n            $this->_feParams->listColumns = array_merge(\n                $this->_feParams->listColumns, [\n                    // ...\n                ];\n            );\n            break;\n\n        default:\n            if( php_sapi_name() !== \"cli\" ) {\n                abort( 'error/insufficient-permissions' );\n            }\n    }\n\n    // display the same information in the single object view as the list of objects\n    $this->_feParams->viewColumns = $this->_feParams->listColumns;\n}\n
"},{"location":"dev/frontend-crud/#access-privileges","title":"Access Privileges","text":"

By default, all Doctrine2Frontend controllers can only be accessed by an authenticated super user (Entities\\User::AUTH_SUPERUSER). You can change this by setting the following property on your implementation:

<?php\n/**\n * The minimum privileges required to access this controller.\n *\n * If you set this to less than the superuser, you need to manage privileges and access\n * within your own implementation yourself.\n *\n * @var int\n */\npublic static $minimum_privilege = UserEntity::AUTH_SUPERUSER;\n

If you set this to less than the superuser, you need to manage privileges and access within your own implementation yourself.

This is normally handled in a number of ways:

  1. dedicated Request object utilising the authorize() method;
  2. additional middleware;
  3. per action basis;
  4. in feInit()

The feInit() method would normally look something like the following:

<?php\n// phpunit / artisan trips up here without the cli test:\nif( php_sapi_name() !== 'cli' ) {\n\n    // custom access controls:\n    switch( Auth::check() ? Auth::user()->getPrivs() : UserEntity::AUTH_PUBLIC ) {\n        case UserEntity::AUTH_SUPERUSER:\n            break;\n\n        case UserEntity::AUTH_CUSTUSER:\n            switch( Route::current()->getName() ) {\n                case 'Layer2AddressController@forVlanInterface':\n                    break;\n\n                default:\n                    $this->unauthorized();\n            }\n            break;\n\n        default:\n            $this->unauthorized();\n    }\n}\n

The $this->unauthorized( $url = '', $code = 302 ) calls abort() with the given redirect code and URL. The default parameters will do the right thing.

"},{"location":"dev/frontend-crud/#routing","title":"Routing","text":"

Routes are explicitly defined in Laravel. The Doctrine2Frontend class sets up the standard routes automatically once you add the following to your routes/web.php (or as appropriate) file on a per implementation basis. E.g. for the Infrastructure implementation, we add to routes/web-doctrine2frontend.php:

<?php\nIXP\\Http\\Controllers\\InfrastructureController::routes();\n

Note that by placing the above in routes/web-doctrine2frontend.php, you ensure the appropriate middleware is attached.

This routes() function determines the route prefix using kebab case of the controller name. That is to say: if the controller is CustKitController, the determined prefix is cust-kit. You can override this by setting a $route_prefix class constant in your implementation.
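
For example, a minimal sketch of such an override (this assumes the base class reads it as a static property, which is how the $-prefixed name reads; adjust to however your base class declares it):

<?php\nclass CustKitController extends Doctrine2Frontend\n{\n    // hypothetical override: replaces the default kebab-case prefix of 'cust-kit'\n    protected static $route_prefix = 'customer-kit';\n}\n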

The standard routes added (using infrastructure as an example) are:

If you want to create your own additional routes, create a function as follows in your implementation:

<?php\npublic static function additionalRoutes( $route_prefix ) {}\n

And add routes (using the normal Route::get() / ::post() / etc Laravel methods).
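
A minimal sketch of such an implementation (the extra route and action name are illustrative assumptions):

<?php\npublic static function additionalRoutes( $route_prefix )\n{\n    // hypothetical extra route alongside the standard CRUD routes\n    Route::get( $route_prefix . '/export', 'InfrastructureController@export' )\n        ->name( $route_prefix . '@export' );\n}\n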

If you want to completely change the routes, just override the public static function routes() {} function.

"},{"location":"dev/frontend-crud/#view-templates","title":"View Templates","text":"

All the common view templates for this functionality can be found in the resources/views/frontend directory. You can override any of these with your own by creating a template of the same name and placing it under resources/views/xxx (or resources/skins/skinname/xxx) where xxx is the feParams['viewFolderName'].

"},{"location":"dev/frontend-crud/#read-only","title":"Read Only","text":"

If your controller should be read only (list and view actions, no add, edit or delete) then set the following static member:

<?php\n/**\n * Is this a read only controller?\n *\n * @var boolean\n */\npublic static $read_only = true;\n
"},{"location":"dev/frontend-crud/#actions","title":"Actions","text":"

Each of the typical CRUD actions will be described here.

NB: the best documentation is sometimes the code. Check out the above routes file (routes/web-doctrine2frontend.php) and examine some of the implemented controllers directly.

"},{"location":"dev/frontend-crud/#list","title":"List","text":"

The list action is for listing the contents of a database table in an HTML / DataTables view.

The only requirement of the list action is that the following abstract function is implemented:

<?php\n/**\n * Provide array of table rows for the list action (and view action)\n *\n * @param int $id The `id` of the row to load for `view` action. `null` if `list` action.\n * @return array\n */\nabstract protected function listGetData( $id = null );\n

A sample implementation for the infrastructure controller just calls a Doctrine2 repository function:

<?php\nprotected function listGetData( $id = null ) {\n    return D2EM::getRepository( InfrastructureEntity::class )->getAllForFeList( $this->feParams, $id );\n}\n

The table rows returned in the above array must be associative arrays with keys matching the feParams['listColumns'] definition.
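
For example, given the infrastructure listColumns definition shown earlier (id, name, shortname), listGetData() would return rows shaped like this (the values are illustrative):

<?php\nreturn [\n    [ 'id' => 1, 'name' => 'Peering LAN 1', 'shortname' => 'lan1' ],\n    [ 'id' => 2, 'name' => 'Peering LAN 2', 'shortname' => 'lan2' ],\n];\n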

The list view template optionally includes other templates you can define (where xxx below is the feParams['viewFolderName']):

  1. the list view includes a JavaScript template resources/views/frontend/js/list which activates the DataTables, sets up sorting, etc. You can override this (and include the original if appropriate) if you want to add additional JS functionality.
  2. if the resources/views/xxx/list-preamble template exists, it is included just before the table.
  3. if the resources/views/xxx/list-postamble template exists, it is included just after the table.
  4. if the resources/views/xxx/list-head-override template exists, it will replace the <thead> element of the list table (example).
  5. if the resources/views/xxx/list-row-override template exists, it will replace the <tr> elements of the list table (example).
  6. if the resources/views/xxx/list-empty-message template exists, it will replace the standard information box when a table is empty (example).

The following hooks are available:

"},{"location":"dev/frontend-crud/#view","title":"View","text":"

The view action is for showing a single database row identified by the id passed in the URL.

The only requirement of the view action is that the abstract function listGetData( $id = null ) as used by the list action has been correctly implemented to take an optional ID and return an array with a single element matching that ID.

The table rows returned in the above array must be associative arrays with keys matching the feParams['viewColumns'] definition.

The view template optionally includes other templates you can define (where xxx below is the feParams['viewFolderName']):

  1. an optional JavaScript template resources/views/frontend/js/view.
  2. if the resources/views/xxx/view-preamble template exists, it is included just before the view panel.
  3. if the resources/views/xxx/view-postamble template exists, it is included just after the view panel.
  4. if the resources/views/xxx/view-row-override template exists, it will replace the <tr> element of the view (example).
"},{"location":"dev/frontend-crud/#create-update-form","title":"Create / Update Form","text":"

The presentation of the create / update (also known as add / edit) page is discussed here. Form processing and storage will be dealt with in the next section.

The first required element of this functionality is the implementation of the following abstract function:

<?php\nabstract protected function addEditPrepareForm( $id = null ): array;\n

The use of this function is best explained with reference to an implementation from the infrastructure controller:

<?php\n/**\n * Display the form to add/edit an object\n * @param   int $id ID of the row to edit\n * @return array\n */\nprotected function addEditPrepareForm( $id = null ): array {\n    if( $id !== null ) {\n\n        if( !( $this->object = D2EM::getRepository( InfrastructureEntity::class )->find( $id) ) ) {\n            abort(404);\n        }\n\n        $old = request()->old();\n\n        // we use array_key_exists() here as the array can contain the\n        // key with a null value.\n\n        Former::populate([\n            'name'             => array_key_exists( 'name',      $old ) ? $old['name']      : $this->object->getName(),\n            'shortname'        => array_key_exists( 'shortname', $old ) ? $old['shortname'] : $this->object->getShortname(),\n            'isPrimary'        => array_key_exists( 'isPrimary', $old ) ? $old['isPrimary'] : ( $this->object->getIsPrimary() ?? false ),\n        ]);\n    }\n\n    return [\n        'object'          => $this->object,\n    ];\n}\n

Note from the above:

The next required element is building the actual Former object for display. For this, you must create a custom resources/views/xxx/edit-form template. See, as an example, the infrastructure one under resources/views/infrastructure/edit-form.js.

The add/edit view template optionally includes other templates you can define (where xxx below is the feParams['viewFolderName']):

  1. an optional JavaScript template resources/views/xxx/js/edit.
  2. if the resources/views/xxx/edit-preamble template exists, it is included just before the view panel.
  3. if the resources/views/xxx/edit-postamble template exists, it is included just after the view panel.

You can query the boolean $t->params['isAdd'] in your templates to distinguish between add and edit operations.
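
For example, an edit-form template might vary its heading as follows (a trivial illustration):

<?= $t->params[ 'isAdd' ] ? 'Add' : 'Edit' ?> Infrastructure\n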

"},{"location":"dev/frontend-crud/#create-update-store","title":"Create / Update Store","text":"

Storing the edited / new object requires implementing a single abstract method which manages validation and storage. This is best explained with a practical implementation:

<?php\n/**\n * Function to do the actual validation and storing of the submitted object.\n * @param Request $request\n * @return bool|RedirectResponse\n */\npublic function doStore( Request $request )\n{\n    $validator = Validator::make( $request->all(), [\n        'name'                  => 'required|string|max:255',\n        'shortname'             => 'required|string|max:255',\n    ]);\n\n    if( $validator->fails() ) {\n        return Redirect::back()->withErrors($validator)->withInput();\n    }\n\n    if( $request->input( 'id', false ) ) {\n        if( !( $this->object = D2EM::getRepository( InfrastructureEntity::class )->find( $request->input( 'id' ) ) ) ) {\n            abort(404);\n        }\n    } else {\n        $this->object = new InfrastructureEntity;\n        D2EM::persist( $this->object );\n    }\n\n    $this->object->setName(              $request->input( 'name'         ) );\n    $this->object->setShortname(         $request->input( 'shortname'    ) );\n    $this->object->setIxfIxId(           $request->input( 'ixf_ix_id'    ) ? $request->input( 'ixf_ix_id'    ) : null );\n    $this->object->setPeeringdbIxId(     $request->input( 'pdb_ixp'      ) ? $request->input( 'pdb_ixp'      ) : null );\n    $this->object->setIsPrimary(         $request->input( 'primary'      ) ?? false );\n    $this->object->setIXP(               D2EM::getRepository( IXPEntity::class )->getDefault() );\n\n    D2EM::flush($this->object);\n\n    if( $this->object->getIsPrimary() ) {\n        // reset the rest:\n        /** @var InfrastructureEntity $i */\n        foreach( D2EM::getRepository( InfrastructureEntity::class )->findAll() as $i ) {\n            if( $i->getId() == $this->object->getId() || !$i->getIsPrimary() ) {\n                continue;\n            }\n            $i->setIsPrimary( false );\n        }\n        D2EM::flush();\n    }\n\n    return true;\n}\n

Note from this:

The following hooks are available:

"},{"location":"dev/frontend-crud/#delete","title":"Delete","text":"

Deletes are handled via POST requests and so have Laravel's built-in CSRF protection. The logic is quite simple:

<?php\npublic function delete( Request $request ) {\n\n    if( !( $this->object = D2EM::getRepository( $this->feParams->entity )->find( $request->input( 'id' ) ) ) ) {\n        return abort( '404' );\n    }\n\n    if( $this->preDelete() ) {\n        D2EM::remove( $this->object );\n        D2EM::flush();\n        $this->postFlush( 'delete' );\n        AlertContainer::push( $this->feParams->titleSingular . \" deleted.\", Alert::SUCCESS );\n    }\n\n    return redirect()->action( $this->feParams->defaultController.'@'.$this->feParams->defaultAction );\n}\n

As you can see, it calls a protected function preDelete(): bool {} hook which, if it returns false, causes the delete operation to be abandoned.
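
A hedged sketch of such a hook follows; the relationship check is purely illustrative and not an actual IXP Manager entity method:

<?php\nprotected function preDelete(): bool\n{\n    // hypothetical guard: refuse to delete an object that still has related children\n    if( count( $this->object->getSwitchers() ) ) {   // illustrative relation accessor\n        AlertContainer::push( \"Cannot delete: related switches still exist.\", Alert::DANGER );\n        return false;\n    }\n    return true;\n}\n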

The following hooks are available:

protected function postDeleteRedirect() {} - the Doctrine2Frontend class returns null. Override it to return a valid route name to have the post-delete redirect go somewhere besides /list.
"},{"location":"dev/frontend-crud/#other-hooks","title":"Other Hooks","text":""},{"location":"dev/frontend-crud/#post-flush","title":"Post Flush","text":"

There is a postFlush() hook:

<?php\n/**\n * Optional method to be overridden if a D2F controllers needs to perform post-database flush actions\n *\n * @param string $action Either 'add', 'edit', 'delete'\n * @return bool\n */\nprotected function postFlush( string $action ): bool\n{\n    return true;\n}\n

which is called during some actions with the action name as a parameter: add, edit, delete. This function is called just after the database flush operation.
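
An illustrative override (the cache key is an assumption, used only to show the idea):

<?php\nprotected function postFlush( string $action ): bool\n{\n    // e.g. invalidate a hypothetical cached list after any add / edit / delete\n    Cache::forget( 'd2f-infrastructure-list' );\n    return true;\n}\n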

"},{"location":"dev/grapher/","title":"Grapher","text":""},{"location":"dev/grapher/#outline-of-adding-a-new-graph-type","title":"Outline of Adding a New Graph Type","text":"

This is a quick write-up as I commit a new graph type. To be fleshed out.

Our new graphing backend, Grapher, supports different graph types from different backends. To add a new graph type - let's call it Example - you need to do the following:

  1. Create a graph class for this new type called app/Services/Grapher/Graph/Example.php. This must extend the abstract class app/Services/Grapher/Graph.php.
  2. Add an example() function to app/Services/Grapher.php which instantiates the above graph object.
  3. Update the appropriate backend file(s) (app/Services/Grapher/Backend/xxx) to handle this new graph file. I.e. create the actual implementation for getting the data to process this graph.
  4. Add your graph to the supports() function in the appropriate backends (and the app/Services/Grapher/Backend/Dummy backend).
  5. To serve this graph over HTTP:

     • Create a GET route in app/Providers/GrapherServiceProvider.php
     • Create a function to handle the GET request in app/Http/Controllers/Services/Grapher.php
     • Add functionality to the middleware to process a graph request: app/Http/Middleware/Services/Grapher.php

Here's a great example from a GitHub commit.

"},{"location":"dev/grapher/#adding-a-new-mrtg-graph","title":"Adding a New MRTG Graph","text":"

Here is an example of adding broadcast graphs to MRTG.

"},{"location":"dev/helpders/","title":"Helpers","text":"

Various helpers we use within IXP Manager.

"},{"location":"dev/helpders/#alerts","title":"Alerts","text":"

To show Bootstrap-styled alerts on view (Foil) templates, add them in your controllers as follows:

<?php\n    use IXP\\Utils\\View\\Alert\\Container as AlertContainer;\n    use IXP\\Utils\\View\\Alert\\Alert;\n\n    ...\n\n    AlertContainer::push( '<b>Example:</b> This is a success alert!', Alert::SUCCESS );\n

where the types available are: SUCCESS, INFO (default), DANGER, WARNING.

To then display (all) the alerts, in your foil template add:

<?= $t->alerts() ?>\n

These alerts are HTML-safe as the message is displayed using HTML Purifier's clean() function.

"},{"location":"dev/introduction/","title":"Development Introduction","text":"

We welcome contributions to IXP Manager from the community. The main requirement is that you sign the contributor's license agreement.

"},{"location":"dev/introduction/#core-vs-packages","title":"Core vs Packages","text":"

If you plan to add a significant / large piece of functionality, then please come and talk to us first via the mailing lists. There are two ways to get such contributions into IXP Manager:

  1. added to the core code with our help and guidance; or
  2. as an optional package.

The following is a reply to someone looking to contribute something that didn't fit with IXP Manager's mission; it may also help anyone considering contributing.

We have learned the very (very) hard way to avoid adding non-core functionality into the core of IXP Manager. At INEX, we won't be using XXX in the short to medium term, nor are we aware of IXs that use it.

This means that XXX code will be non-core and not used or tested (or testable easily) by the core IXP Manager developers. This creates a bunch of issues including:

a) becomes a new consideration for IXP Manager updates and schema changes;

b) the IXP Manager issue tracker and mailing list will be the go-to place for people seeking help with this functionality, and we will not be able to provide that;

c) would require assurances of maintainers and support for XXX to the project - I'm not sure that can be given at this stage;

d) large features require documentation: https://docs.ixpmanager.org/

e) past experience has shown us that we often end up having to remove chunks of non-core functionality due to (a), (b) and (c) above, and this is also costly in terms of time.

Now, we really do not want to discourage adding XXX support to IXP Manager - I like the project and you have shown it can work at an IX. It'd be great to have it as part of the IXP Manager tool chain.

One of the advantages of switching from Zend Framework to Laravel has been the ability to have add on functionality by way of packages:

https://laravel.com/docs/5.6/packages

I think this is a perfect way to add XXX support and we can help ensure UI hooks by detecting the package and adding menu options.

This also solves all the issues above:

a) is not an impediment to upgrades: if the XXX package falls behind the pace of IXP Manager development and someone wants XXX support, they just install a version of IXP Manager that is aligned with the XXX package.

b) issue and support wise, having XXX as a package creates a clean line of delineation between IXP Manager and XXX code bases so people can raise issues and questions with the correct project.

c) is mostly answered by (a).

d) documentation becomes the purview of the XXX team and we can provide the appropriate links from ours.

"},{"location":"dev/introduction/#database-orms","title":"Database / ORMs","text":"

As of v6.0.0, IXP Manager uses the Laravel Eloquent ORM, replacing the Doctrine ORM used in previous versions.

a) do not change the schema of any existing table. This would need to be done in IXP Manager core via Eloquent as part of a new release and should be discussed with the core developers.

b) ideally, schema changes would be limited to namespaced (xxx_*) tables (where xxx represents your package / feature).
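
For example, a package adding its own namespaced table might ship a standard Laravel migration along the following lines - a minimal sketch, where the xxx_widgets table name and its columns are purely illustrative:

<?php\n\nuse Illuminate\\Database\\Migrations\\Migration;\nuse Illuminate\\Database\\Schema\\Blueprint;\nuse Illuminate\\Support\\Facades\\Schema;\n\nreturn new class extends Migration\n{\n    /**\n     * Create a package-namespaced table rather than altering IXP Manager's own schema.\n     */\n    public function up(): void\n    {\n        Schema::create( 'xxx_widgets', function( Blueprint $table ) {\n            $table->id();\n            $table->unsignedInteger( 'cust_id' )->index();   // reference core data by ID only\n            $table->string( 'name' );\n            $table->timestamps();\n        });\n    }\n\n    public function down(): void\n    {\n        Schema::dropIfExists( 'xxx_widgets' );\n    }\n};\n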

"},{"location":"dev/looking-glass/","title":"Looking Glass","text":"

IXP Manager has looking glass support allowing IXPs to expose details on route server / collector / AS112 BGP sessions to their members.

As it stands, we have only implemented one looking glass backend: Bird's Eye, a simple, secure microservice with a JSON API for querying Bird (also written by us at INEX).

We have implemented this in IXP Manager as a service so that other backends can be added easily.

Disclaimer: the links and line numbers here are against IXP Manager v4.5.0 and they may have changed since.

"},{"location":"dev/looking-glass/#adding-support-for-additional-lgs","title":"Adding Support for Additional LGs","text":"
  1. An additional API backend needs to be given a constant in Entities\\Router named API_TYPE_XXX where XXX is an appropriate name.

  2. It then needs to have a case: check in app/Services/LookingGlass.php. This needs to instantiate your service provider.

  3. Your service provider must implement the App\\Contracts\\LookingGlass interface.

For a concrete example, see the Bird's Eye implementation.

"},{"location":"dev/release-procedure/","title":"Release Procedure","text":"

DRAFT: in advance of the v4.8.0 release, I am gathering some notes here towards writing a formal release procedure for new minor versions of IXP Manager.

  1. Create a release branch - e.g. release-v5.
  2. Ensure third party libraries have been updated.
  3. Ensure the .env.example has been updated with new options and comments.
  4. Ensure completed release notes on GitHub.
  5. Update the IXP Manager installation script(s) to reference the new branch of IXP Manager.
  6. Update the Docker files to install the new version of IXP Manager.
  7. Update any necessary documentation on https://docs.ixpmanager.org/
  8. Tag the GitHub release.
  9. Ensure proxies match entities.
  10. Ensure production yarn run.
  11. Release announcement.
"},{"location":"dev/telescope/","title":"Telescope","text":"

Laravel Telescope is an elegant debug assistant for the Laravel framework. Telescope provides insight into the requests coming into your application, exceptions, log entries, database queries, queued jobs, mail, notifications, cache operations, scheduled tasks, variable dumps and more. Telescope makes a wonderful companion to your local Laravel development environment.

"},{"location":"dev/users/","title":"Development Notes for User Management","text":"

The project management of this m:n (customer:user) enhancement is an internal Island Bridge Networks issue; this page serves to expose that information, and more, publicly.

Read the official user documentation first as that will answer questions around how it is meant to work.

Here is the schema for customers / users around the time of the v5 release:

One of the goals of this enhancement was to break as little existing functionality as possible. As an example, we would typically get the user's customer relation using $user->getCustomer(), and this was linked via the user.custid column.

In developing this enhancement, we needed to track two things:

  1. the customer the user was currently logged in for;
  2. the customer the user was last logged in for, so that on the next login we could return to that customer by default.

By using the user.custid for this purpose, we retained the ability to call $user->getCustomer() which meant the vast majority of existing code continued to work as expected.

So, the mechanism for switching a user from one customer to another is:

  1. authorization: does a c2u (customer to user) record exist for the user/customer combination;
  2. update user.custid to the target customer ID;
  3. issue a redirect to the home page as the new customer.
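
A hedged sketch of that flow is shown below - the customers() relation, the Customer model binding and the route name are assumptions for illustration and not IXP Manager's actual code:

<?php\n// Illustrative sketch only: switch the customer a logged-in user is acting for.\npublic function switchCustomer( Request $request, Customer $customer )\n{\n    $user = $request->user();\n\n    // 1. authorisation: a customer-to-user (c2u) record must exist for this combination\n    //    (customers() is a hypothetical relation over the c2u table):\n    if( !$user->customers->contains( $customer ) ) {\n        abort( 403, 'You are not associated with that customer.' );\n    }\n\n    // 2. record the switch by updating user.custid to the target customer:\n    $user->custid = $customer->id;\n    $user->save();\n\n    // 3. redirect to the home page as the new customer:\n    return redirect()->route( 'dashboard' );   // hypothetical route name\n}\n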

When logging in, the system will:

"},{"location":"dev/vagrant/","title":"Vagrant","text":"

For development purposes, we have Vagrant build files.

The Vagrantfile was updated for IXP Manager v7.

The entire system is built from a fresh Ubuntu 24.04 installation via the tools/vagrant/bootstrap.sh script. This also installs a systemd service to run tools/vagrant/startup.sh on a reboot to restart the various services.

"},{"location":"dev/vagrant/#quick-vagrant-with-virtualbox","title":"Quick Vagrant with VirtualBox","text":"

Note that the developers use Parallels (see below) and have not tested on VirtualBox for some time.

If you want to get IXP Manager with Vagrant and VirtualBox up and running quickly, follow these steps:

  1. Install Vagrant (see: https://developer.hashicorp.com/vagrant/install)
  2. Install VirtualBox (see: https://www.virtualbox.org/)
  3. Clone IXP Manager to a directory:

    git clone https://github.com/inex/IXP-Manager.git ixpmanager\ncd ixpmanager\n
  4. Edit the Vagrantfile in the root of IXP Manager, delete the config.vm.provider \"parallels\" do |prl| block, and uncomment the config.vm.provider \"virtualbox\" do |vb| block.

  5. Spin up a Vagrant virtual machine:

    vagrant up\n
"},{"location":"dev/vagrant/#quick-vagrant-with-parallels","title":"Quick Vagrant with Parallels","text":"
  1. Install Vagrant (see: https://developer.hashicorp.com/vagrant/install). On MacOS:
    brew tap hashicorp/tap\nbrew install hashicorp/tap/hashicorp-vagrant\n
  2. Install Parallels (see: https://www.parallels.com/)
  3. Install the Parallels provider. E.g., on MacOS when Vagrant is installed via Homebrew:
    vagrant plugin install vagrant-parallels\n
  4. Clone IXP Manager to a directory:
    git clone https://github.com/inex/IXP-Manager.git ixpmanager\ncd ixpmanager\n
  5. Spin up a Vagrant virtual machine:
    vagrant up\n
"},{"location":"dev/vagrant/#next-steps-access-ixp-manager","title":"Next Steps - Access IXP Manager","text":"
  1. Access IXP Manager on: http://localhost:8088/

  2. Log in with one of the following username / passwords:

  3. Admin user: vagrant / Vagrant1 (api key: r8sFfkGamCjrbbLC12yIoCJooIRXzY9CYPaLVz92GFQyGqLq)

  4. Customer Admin: as112 / AS112as112
  5. Customer User: as112user / AS112as112
"},{"location":"dev/vagrant/#vagrant-notes","title":"Vagrant Notes","text":"

Please see Vagrant's own documentation for a full description of how to use it fully.

"},{"location":"dev/vagrant/#database-details","title":"Database Details","text":"

Spinning up Vagrant in the above manner loads a sample database from tools/vagrant/vagrant-base.sql. If you have a preferred development database, place a bzip'd copy of it in the ixpmanager directory called ixpmanager-preferred.sql.bz2 before step 5 above.

"},{"location":"dev/vagrant/#snmp-simulator-and-mrtg","title":"SNMP Simulator and MRTG","text":"

The Vagrant bootstrapping includes installing snmpsim, which makes three \"switches\" matching those in the supplied database available for polling. The source snmpwalks for these are copied from tools/vagrant/snmpwalks to /srv/snmpclients and values can be freely edited there.

Example of polling when ssh'd into vagrant:

snmpwalk -c swi1-fac1-1 -v 2c swi1-fac1-1\nsnmpwalk -c swi1-fac2-1 -v 2c swi1-fac2-1\nsnmpwalk -c swi2-fac1-1 -v 2c swi2-fac1-1\n

As you can see, the community selects the source file - i.e., -c swi1-fac1-1 for /srv/snmpclients/swi1-fac1-1.snmprec. The Vagrant bootstrap file adds these switch names to /etc/hosts also.

The bootstrapping also configures mrtg to run and includes this in the crontab rather than using dummy graphs. The snmp simulator has some randomised elements for some of the interface counters.

"},{"location":"dev/vagrant/#route-server-collector-as112-testbed-and-looking-glass","title":"Route Server / Collector / AS112 Testbed and Looking Glass","text":"

When running vagrant up for the first time, it will create a full route server / collector / AS112 testbed complete with clients:

All Bird instance sockets are located in /var/run/bird/ allowing you to connect to them using birdc -s /var/run/bird/xxx.ctl.

In addition to this, a second Apache virtual host is set up, listening on port 81 locally, providing access to Birdseye installed in /srv/birdseye. The bundled Vagrant database is already configured for this and should work out of the box. All of Birdseye's env files are generated via:

php /vagrant/artisan vagrant:generate-birdseye-configurations\n

Various additional scripts support all of this:

  1. The tools/vagrant/bootstrap.sh file which sets everything up.
  2. tools/vagrant/scripts/refresh-router-testbed.sh will reconfigure all routers.
  3. tools/vagrant/scripts/as112-reconfigure-bird2.sh will (re)configure and start, if necessary, the AS112 Bird instances.
  4. tools/vagrant/scripts/rs-api-reconfigure-all.sh will (re)configure and start, if necessary, the route server Bird instances.
  5. tools/vagrant/scripts/rc-reconfigure.sh will (re)configure and start, if necessary, the route collector Bird instances.

For the clients, we run the following:

mkdir -p /srv/clients\nchown -R vagrant: /srv/clients\nphp /vagrant/artisan vagrant:generate-client-router-configurations\nchmod a+x /srv/clients/start-reload-clients.sh\n/srv/clients/start-reload-clients.sh\n

All router IPs are added to the loopback interface as part of the tools/vagrant/bootstrap.sh (or the startup.sh script on a reboot). There are also necessary entries in /etc/hosts to map router handles to IP addresses. There are two critical Bird BGP configuration options to allow multiple instances to run on the same server and speak with each other:

strict bind yes;\nmultihop;\n
"},{"location":"features/api/","title":"API","text":"

IXP Manager has a number of API endpoints which are documented in the appropriate places throughout the documentation.

Please find details below about authenticating for API access to secured functions.

"},{"location":"features/api/#creating-an-api-key","title":"Creating an API Key","text":"

When logged into IXP Manager, create an API key as follows:

  1. Select My Account on the right hand side of the top menu.
  2. Select API Keys from the My Account menu.
  3. Click the plus / addition icon on the top right of the resultant API Keys page.

Treat your API key as a password and do not paste the URLs below into public websites or other public forums.

"},{"location":"features/api/#api-authentication","title":"API Authentication","text":"

There are two ways to use your API key to authenticate to IXP Manager.

You can test these using the API test endpoint at api/v4/test. For example:

https://ixp.example.com/api/v4/test\n

The plaintext response will also indicate if you are authenticated or not (which can be via existing session or API key).

"},{"location":"features/api/#1-http-header-parameter","title":"1. HTTP Header Parameter","text":"

You can pass your API key in the HTTP request as a request header parameter called X-IXP-Manager-API-Key. For example:

curl -X GET -H \"X-IXP-Manager-API-Key: my-api-key\" https://ixp.example.com/api/v4/test\n
"},{"location":"features/api/#2-as-a-url-parameter","title":"2. As a URL Parameter","text":"

This is a legacy method that is still supported. You can tack your key on as follows:

https://ixp.example.com/api/v4/test?apikey=my-api-key\n
"},{"location":"features/api/#api-key-management","title":"API Key Management","text":"

In IXP Manager v5.1 we introduced some new API Key management features:

API keys are not shown by default; a password must be entered to reveal them. If you wish to show the keys (inadvisable), you can set IXP_FE_API_KEYS_SHOW=true in .env. This is only available to mimic historic functionality and will be removed in the future.

By default, a user can create no more than 10 keys. If you wish to change this, set IXP_FE_API_KEYS_MAX=n in .env where n is an integer value > 0.

"},{"location":"features/as112/","title":"AS112 Service","text":"

Prerequisite Reading: Ensure you first familiarize yourself with the generic documentation on managing and generating router configurations here.

AS112 is a service which provides anycast reverse DNS lookup for several prefixes, namely:

Because these IP addresses are widely used for private networking, many end-user systems are configured to perform reverse DNS lookups for these address ranges. DNS lookups for these ranges should always be null-answered quickly, in order to make sure that DNS retransmits don\u2019t occur (thereby overloading local DNS resolvers), and to prevent end-user systems from hanging due to DNS lookups.

AS112 services are provided around the world by a group of volunteers and, very often, by IXP operators for the benefit of their members.

INEX has always provided an AS112 service to our members on all peering LANs and so it is an integral part of IXP Manager. You can read all about INEX's implementation at https://www.inex.ie/technical/as112-service/ including graphs for the service. Feel free to use our text and examples from that page for your own IXP.

"},{"location":"features/as112/#building-an-as112-service","title":"Building an AS112 Service","text":"

You can find instructions for building an AS112 service in rfc7534. You should also add AS112 redirection using DNAME functionality as per rfc7535.

You will also find a lot more information and how-tos on the official website at: https://www.as112.net/.

Follow these instructions in every way to build the AS112 service on a (virtual) machine with an appropriate DNS server - except skip the installation and configuration of the BGP daemon. When using IXP Manager, you will need to use Bird for this and IXP Manager will create the configuration.

"},{"location":"features/as112/#managing-your-as112-service-from-ixp-manager","title":"Managing Your AS112 Service from IXP Manager","text":"

AS112 is disabled by default in IXP Manager. This really just means UI elements are hidden. To enable these, set the following in your .env file:

IXP_AS112_UI_ACTIVE=true\n

This will add UI elements including:

Enabling the AS112 service simply indicates if a BGP peering session should be created in the AS112 BGP configuration when downloading the AS112 router's BGP configuration from IXP Manager.

"},{"location":"features/as112/#creating-the-pro-bono-as112-customer","title":"Creating the Pro-Bono AS112 Customer","text":"

You need to add the AS112 service as a pro-bono member of your IXP in IXP Manager. Here's INEX's example:

You then need to create an interface for the AS112 service on each peering LAN where the service will be offered. Here again is INEX's example from our peering LAN1 in Dublin:

Note that historically INEX has not used MD5 on our AS112 service. This is because the service dates from over 20 years ago at INEX when MD5 support was not available. There is no reason not to use MD5 on the service if you wish.

"},{"location":"features/as112/#generating-the-bird-configuration","title":"Generating the Bird Configuration","text":"

Please see the router configuration generation for this.

For your AS112 server, we have a sample script(s) for pulling and updating the configuration from IXP Manager. We typically put this in an hourly cron.

"},{"location":"features/as112/#other-notes","title":"Other Notes","text":"

At INEX, we typically have our AS112 service peer with our route servers.

This will happen automatically if you check the Route Server Client on the AS112 VLAN interface configuration (see above screen shot) and also check the AS112 Client checkbox on the VLAN interfaces of your route servers in IXP Manager. Note that in the same way as you create an AS112 pro-bono customer, you should also have a dedicated route server internal customer.

"},{"location":"features/console-servers/","title":"Console Servers","text":"

An IXP would typically have out of band access (for emergencies, firmware upgrades, etc) to critical infrastructure devices by means of a console server.

IXP Manager has Console Servers and Console Server Connections menu options to allow IXP administrators add / edit / delete / list console server devices and, for any such device, record what console server port is connected to what device (as well as connection characteristics such as baud rate).

From IXP Manager v4.8.0 onwards, each of these pages has a notes field which supports markdown to allow you to include sample connection sessions to devices. This is especially useful for rarely used console connections to awkward devices.

"},{"location":"features/console-servers/#improvements-from-v480","title":"Improvements from v4.8.0","text":"

One of the new features of v4.8.0 is fixing the switch database table which until now could hold switches and console servers. This was awkward in practice and we have split these into distinct database tables and menu options.

"},{"location":"features/core-bundles/","title":"Core Bundles","text":"

A core bundle is a link between the IXP's own switches. These are often referred to as trunks, interswitch links (ISLs), core links, etc. IXP Manager has a number of features to support these since v6 was released.

Before continuing with this document, it is critical you have read and understand how IXP Manager represents normal member connections - please read the Customer Connections page before proceeding as the rest of this document assumes that foundational knowledge.

Within IXP Manager, a core bundle represents a link(s) between two switches. This bundle may have one or more links and it may be one of three types:

  1. A layer 2 LACP link (L2-LAG). Where your exchange has more than two switches, a protocol such as spanning tree would operate across these links to prevent loops.

    If you are running just two switches with a single link between them, this is also the option you would choose. We'd typically recommend a protocol such as LACP or UDLD runs across even single links to detect unidirectional link errors.

  2. A layer 3 LAG (L3-LAG) is for one or more aggregated links between two switches when using a routed underlay such as MPLS / VPLS / VXLAN. Each end of the link would have an IP address and participate in a routed core network.

  3. ECMP is similar to the L3-LAG above, except that each individual link in the core bundle has its own IP addressing and traffic distribution across the links is handled via equal-cost multi-path (ECMP) routing.

INEX has been using the core bundles feature internally for some time without issue. We use ECMP extensively and L2-LAGs to a lesser extent. This all ties into our automation. L3-LAGs are mostly untested by us so please open bug reports on GitHub if there are any issues.

Some of the features that core bundles provide and enable include:

"},{"location":"features/core-bundles/#database-representation","title":"Database Representation","text":"

To fully understand IXP Manager's implementation of core bundles, it is important to have an awareness of the database representation of them. This is why reading the customer connections page is important - core bundles have been designed to fit into the existing database representation:

As you'll note, we still have a virtual interface (VI) as the syntactic sugar to represent a link where:

What's new is we've added a new element of syntactic sugar - the core bundle (CB) - and this is used to group the two ends of the link(s) between switches together.

The above may seem quite complex but it works well in practice. Most importantly, IXP Manager guides you through most of the complexity when setting up and managing core bundles. However, it's still important to have a grasp of the above as the user interface does reflect the underlying database schema.

"},{"location":"features/core-bundles/#creating-a-core-bundle","title":"Creating a Core Bundle","text":"

Core bundles can be added like any other object in IXP Manager: find Core Bundles under IXP Admin Actions on the left hand side menu and then, when you are on the core bundles page, select the [+] add button on the top right of the page.

Adding a core bundle is presented with wizard-type functionality. As with most IXP Manager pages, there is extensive context-based help available by clicking the help button at the end of the form.

There's a number of elements to adding a core bundle and we'll take them individually here using an ECMP bundle as an example.

"},{"location":"features/core-bundles/#general-settings","title":"General Settings","text":"

The context help shown in the image explains each element's requirements quite well.

You will notice that we often say informational unless you are provisioning your switches from IXP Manager. This is because many of these settings have no impact within IXP Manager or associated functions such as graphing. The value of entering this information won't be appreciated unless you are provisioning switches via IXP Manager using something such as Napalm.

Manually provisioning a core bundle with 8 x 10Gb ECMP links for a VXLAN underlay requires 16 interface configurations with consistent and correct IP addressing, MTU, BFD, etc. Add to that the 16 BGP neighbor configurations required. This does not scale beyond a handful of switches. We'd argue it barely scales to two switches. Especially when you then need to change the cost / preference settings.

Using IXP Manager, INEX can edit the cost of a core bundle and push it out through our SaltStack + Napalm configuration in a quick and error free manner.

It should also be recognised that the specific meaning of cost, preference, STP, etc. do not need to be taken literally - use them as appropriate for your network design and technology. For example, INEX uses cost for BGP metrics.

Lastly, some elements appear in general settings as they need to be consistent across all links in a core bundle - for example the MTU.

"},{"location":"features/core-bundles/#common-link-settings","title":"Common Link Settings","text":"

Again, the context help shown in the image explains each element's requirements quite well. In addition to those, please note:

"},{"location":"features/core-bundles/#core-links","title":"Core Links","text":"

The final section requires you to add one or more core links and to select the 'a side' and 'b side' ports for each link added.

Note that some elements are core bundle type specific - e.g. as this is an ECMP core bundle, the subnet and BFD can be configured on a per link basis. For a L3-LAG, these are configured as part of the general settings.

There are a number of features to assist with adding large bundles (e.g. when we developed this using INEX as a test case, and before the widespread deployment of 100Gb WDM kit, 8 x 10Gb bundles were not uncommon). When you click Add another core link to the bundle...:

"},{"location":"features/core-bundles/#graphing","title":"Graphing","text":"

Core bundles make graphing inter-switch links really easy - in fact, so long as you already have MRTG graphing configured, you just need to add the bundle, allow MRTG configuration to update and the graphs will appear in the statistics menu.

In fact, you can see a live example from INEX here. If this link yields a 404, it will mean we've since mothballed that specific link. Just browse to the Statistics menu and select Inter-Switch / PoP Graphs for another.

You'll note:

"},{"location":"features/core-bundles/#nagios-monitoring","title":"Nagios Monitoring","text":"

There is an API endpoint for superadmins to get the status of core bundles:

/api/v4/switch/{switchid}/core-bundles-status\n

where {switchid} is the database ID of the switch.

A sample of the JSON output is:

{\n  \"status\": true,\n  \"switchname\": \"swi1-cwt1-4\",\n  \"msgs\": [\n    \"swi1-cwt1-4 - swi1-cwt1-1 OK - 1\\/1 links up\",\n    \"swi1-cwt1-4 - swi1-cwt1-2 OK - 1\\/1 links up\",\n    \"swi1-cwt1-4 - swi1-cwt2-1 OK - 1\\/1 links up\",\n    \"swi1-cwt1-3 - swi1-cwt1-4 OK - 3\\/3 links up\",\n    \"swi1-cwt1-4 - swi1-cwt2-3 OK - 4\\/4 links up\"\n  ]\n}\n

If any individual link has failed, status will return false and an appropriate message will be provided for the relevant link(s):

\"ISSUE: swi1-cwt1-4 - swi1-cwt1-1 has 0\\/1 links up\"\n

Individually disabled core links (via the core bundle UI) will not trigger an alert. If an entire core bundle is disabled in the UI it will be listed as follows:

\"Ignoring swi1-cwt1-4 - swi1-cwt1-1 as core bundle disabled\"\n

As you can see, it returns a msgs[] element for each core bundle indicating the number of core links up.

The Nagios script we use at INEX to check the core bundles on a switch can be found in the GitHub repository here: tools/runtime/nagios/ixp-manager-check-core-bundles.sh.

The Nagios command and service definition is as follows (this is an example - please alter to suit your own environment):

define command {\n    command_name    check_ixpmanager_core_bundles\n    command_line    /path/to/ixp-manager-check-core-bundles.sh -k <API Key> -i $_HOSTDBID$ -u 'https://ixpmanager.example.com'\n}\n\ndefine service {\n    use                  ixp-production-switch-service\n    hostgroup_name       ixp-production-switches\n    service_description  Core Bundles\n    check_command        check_ixpmanager_core_bundles\n}\n

The hostgroup_name and _HOSTDBID come from the Switch Monitoring section in Nagios Monitoring.

NB: Nagios monitoring requires that the Automated Polling / SNMP Updates for switches is working and is working for any switch you want monitored. The Nagios script / API check is a database check. This means if you poll switches every $x minutes (5 by default) and your Nagios script runs the service check every $y minutes (also 5 by default), the maximum delay in notification of a core bundle with issues should be approx. $x + $y minutes.

"},{"location":"features/core-bundles/#creating-weathermaps","title":"Creating Weathermaps","text":"

At INEX, we use Network Weathermap to create the weathermaps on our website.

This isn't something we can document exhaustively as it varies from IXP to IXP. The general approach to take is:

  1. Create a Network Weathermap configuration that works for you.
  2. Use this as a template to automate the configuration using:
    • the API endpoint for core bundles
    • the API endpoint for switches
  3. Use a templating system you are comfortable with to create the configuration.

As an outline of this process, here's the script Nick created for INEX:

#!/bin/sh\n\nPATH=/opt/local/bin:${PATH}\netcdir=/opt/local/etc\n\nAPIKEY=xxxx\nAPIURL=https://ixpmanager.example.com/api/v4\n\ncurl -s -X GET -H \"X-IXP-Manager-API-Key: ${APIKEY}\" \\\n    ${APIURL}/provisioner/corebundle/list.yaml > ${etcdir}/ixp-corebundles.yaml\ncurl -s -X GET -H \"X-IXP-Manager-API-Key: ${APIKEY}\" \\\n    ${APIURL}/provisioner/switch/list.yaml     > ${etcdir}/ixp-switches.yaml\n\nrender-jinja-template.py                       \\\n    --yaml ${etcdir}/ixp-corebundles.yaml      \\\n    --yaml ${etcdir}/ixp-switches.yaml         \\\n    --yaml ${etcdir}/switchpos.yaml   \\\n    --jinja ${etcdir}/ixp-weathermap.conf\n

The switchpos.yaml file is a manual file that contains the x/y coordinates for each switch in the following format:

---\nswitchpos:\n\n  swi1-cls1-1:\n    x: 130\n    y: 40\n\n  swi1-cwt1-1:\n    x: 50\n    y: 100\n

Hopefully this helps - improving this is something that is on our TODO list.

"},{"location":"features/cronjobs/","title":"Cron Jobs - Task Scheduling","text":"

Prior to IXP Manager v5, a number of cron jobs had to be configured manually. From v5.0 onwards, cron jobs are handled by Laravel's task scheduler. As such, you just need a single cron job entry such as:

* * * * *    www-data    cd /path-to-your-ixp-manager && php artisan schedule:run >> /dev/null 2>&1\n

You can see the full schedule in code here (look for the function protected function schedule(Schedule $schedule)).
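
For illustration only (see app/Console/Kernel.php in your installation for the real schedule), entries in that function use Laravel's scheduler API and look something like the following; the exact times and options here are assumptions:

<?php\nprotected function schedule( Schedule $schedule )\n{\n    // illustrative entries only - not IXP Manager's actual schedule:\n    $schedule->command( 'utils:expunge-logs' )->dailyAt( '03:04' );\n    $schedule->command( 'irrdb:update-prefix-db', [ '--quiet' ] )->twiceDaily();\n}\n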

"},{"location":"features/cronjobs/#tasks-referenced-elsewhere","title":"Tasks Referenced Elsewhere","text":"

The following tasks are run via this mechanism and are referenced elsewhere in the documentation:

"},{"location":"features/cronjobs/#other-tasks","title":"Other Tasks","text":""},{"location":"features/cronjobs/#expunging-logs","title":"Expunging Logs","text":"

Some data should not be retained indefinitely for user privacy / GDPR / housekeeping reasons. The utils:expunge-logs command runs daily at 03:04 and currently:

  1. removes user login history older than 6 months;
  2. removes user API keys that expired >3 months ago;
  3. removes expired user remember tokens.
"},{"location":"features/dns-arpa/","title":"DNS / ARPA","text":"

An IXP assigns each customer (an) IP address(es) from the range used on the peering LAN(s). These IP addresses can show up in traceroutes (for example) and both IXPs and customers like to have these resolve to a hostname.

When creating VLAN Interfaces in IXP Manager there is a field called IPv[4/6] Hostname. This is intended for this DNS ARPA purpose. Some customers have specific requirements for these while other smaller customers may not fully understand the use cases. At INEX, we typically default to entries such as:

IXP Manager can generate your ARPA DNS entries for your peering IP space as per the hostnames configured on each VLAN interface and provide them in two ways:

Both of these are explained below.

Note that the API endpoints below can be tested in your browser by directly accessing the URLs when logged in. Otherwise, you need an API key when using them in scripts.

"},{"location":"features/dns-arpa/#as-json","title":"As JSON","text":"

You can use the IXP Manager API to get all ARPA entries for a given VLAN and protocol as a JSON object using the following endpoint format:

https://ixp.example.com/api/v4/dns/arpa/{vlanid}/{protocol}\n

where:

If either of these are invalid, the API will return with a HTTP 404 response.

An example of the JSON response returned is:

[\n    {\n        \"enabled\": true,\n        \"address\": \"192.0.2.67\",\n        \"hostname\": \"cherrie.example.com\",\n        \"arpa\": \"67.2.0.192.in-addr.arpa.\"\n    },\n    ...\n]\n

where:

You can now feed the JSON object into a script to create your own DNS zones appropriate to your DNS infrastructure.

When scripting, we would normally pull the JSON object using something like:

#! /usr/bin/env bash\n\nKEY=\"your-ixp-manager-api-key\"\nURL=\"https://ixp.example.com/api/v4/dns/arpa\"\nVLANIDS=\"1 2\"\nPROTOCOLS=\"4 6\"\n\nfor v in $VLANIDS; do\n    for p in $PROTOCOLS; do\n\n        cmd=\"/usr/local/bin/curl --fail -s             \\\n            -H \\\"X-IXP-Manager-API-Key: ${KEY}\\\"       \\\n            ${URL}/${v}/${p}                           \\\n                >/tmp/dns-arpa-vlanid$v-ipv$p.json.$$\"\n        eval $cmd\n\n        if [[ $? -ne 0 ]]; then\n            echo \"ERROR: non-zero return from DNS ARPA API call for vlan ID $v with protocol $p\"\n            continue\n        fi\n\n        # do something\n\n        rm /tmp/dns-arpa-vlanid$v-ipv$p.json.$$\n    done\ndone\n
"},{"location":"features/dns-arpa/#from-templates","title":"From Templates","text":"

Rather than writing your own scripts to consume the JSON object as above, it may be easier to use the bundled ISC Bind templates or to write your own template for IXP Manager.

You can use the IXP Manager API to get all ARPA entries for a given VLAN and protocol as plain text based on a template by using the following API endpoint:

https://ixp.example.com/api/v4/dns/arpa/{vlanid}/{protocol}/{template}\n

where:

Remember that the included ISC Bind templates can be skinned or you can add custom templates to your skin directory. More detail on this can be found in the dedicated section below.

The bundled ISC Bind templates can be used by setting {template} to bind or bind-full in the above URL. For the example interface in the JSON above, the ISC Bind bind template would yield:

67.2.0.192.in-addr.arpa.       IN   PTR     cherrie.example.com.\n

(note that the terminating period on the hostname is added by the template)

The two bundled templates are:

"},{"location":"features/dns-arpa/#skinning-templating","title":"Skinning / Templating","text":"

You can use skinning to make changes to the bundled ISC Bind template or add your own.

Let's say you wanted to add your own template called mytemplate1 and your skin is named myskin. The best way to proceed is to copy the bundled example:

cd $IXPROOT\nmkdir -p resources/skins/myskin/api/v4/dns\ncp resources/views/api/v4/dns/bind.foil.php resources/skins/myskin/api/v4/dns/mytemplate1.foil.php\n

You can now edit this template as required. The only constraint on the template name is it can only contain characters from the classes a-z, 0-9, -. NB: do not use uppercase characters.

Contribute back - if you write a useful generator, please open a pull request and contribute it back to the project.

The following variables are available in the template:

The following variables are available for each element of the $t->arpa array (essentially the same as the JSON object above): enabled, hostname, address, arpa. See above for a description.

The actual code in the bundled ISC Bind sample is as simple as:

<?php foreach( $t->arpa as $a ): ?>\n<?= trim($a['arpa']) ?>    IN      PTR     <?= trim($a['hostname']) ?>.\n<?php endforeach; ?>\n
"},{"location":"features/dns-arpa/#sample-script","title":"Sample Script","text":"

At INEX, we have (for example) one peering LAN that is a /25 IPv4 network and so is not a zone file in its own right. As such, we make up the zone file using includes. The main zone file looks like:

$TTL 86400\n\n$INCLUDE /usr/local/etc/namedb/zones/soa-0.2.192.in-addr.arpa.inc\n\n$INCLUDE zones/inex-dns-slave-nslist.inc\n\n$INCLUDE zones/reverse-mgmt-hosts-ipv4.include\n$INCLUDE zones/reverse-vlan-12-ipv4.include\n

The SOA file looks like (as you might expect):

@               IN      SOA     ns.example.com.     hostmaster.example.com. (\n                        2017051701      ; Serial\n                        43200           ; Refresh\n                        7200            ; Retry\n                        1209600         ; Expire\n                        7200 )          ; Minimum\n

The reverse-vlan-12-ipv4.include is the output of the ISC Bind bind template above for a given VLAN ID.

We use the sample script update-dns-from-ixp-manager.sh which can be found in this directory to keep this updated ourselves.

"},{"location":"features/docstore/","title":"Document Store","text":"This page refers to features introduced in IXP Manager v5.4 (general document store) and v5.6 (per-member document store).

IXP Manager has two document stores which allow administrators to upload and manage documents. The two types are:

  1. a general document store which allows administrators to make documents generally available for specific user classes (public, customer user, customer admin, superadmin). Example use cases for this are member upgrade forms, distribution of board or management minutes, etc.
  2. A per-member document store which allows administrators to upload documents on a per-member basis. These can be made visible to administrators only or also to users assigned to that specific member. Example use cases for this are member application forms / contracts, completed / signed port upgrade forms, etc.

Both document stores support:

  1. Upload any file type.
  2. Edit uploaded files including name, description, minimum access privilege and replacing the file itself.
  3. Display of text (.txt) and display and parsing of Markdown (.md) files within IXP Manager.
  4. Directory hierarchy allowing the categorisation of files.
  5. Each directory can have explanatory text.
  6. Deletion of files and recursive deletion of directories.
  7. Logging of destructive actions.

Additionally, for the general document store (and non-public documents within it), there is logging and reporting of downloads (total downloads and unique user downloads).

Please note that all actions except for viewing and downloading files are restricted to super users.

The general document store is accessible:

The per-member document store is accessible:

In the following sections, we use screenshots from the general document store but will highlight any specific differences for the per-member document store.

"},{"location":"features/docstore/#directories","title":"Directories","text":"

Directories are database entries to which uploaded files are attached (rather than actual directories on the storage media).

Directories can be created via the Create Directory button on the top right of the document store. See the following image showing the Create Directory form. As usual, contextual help is available via the green Help button.

To create per-member directories: First select the appropriate member from the Members/Customers list on the left-hand menu, then select the Documents tab, then select the Create Directory button on the top right.

Note that you do not need to set any permissions on directories - directories (and directory hierarchy) will only be visible to each user class if they contain files that the specific user class should be able to see.

If you enter a description, you will see a gray section at the top of the directory listing as shown in the following image. The text you enter will be parsed as Markdown and displayed as HTML. If you leave the description blank, no such section will be shown. This is a useful feature to provide context to your users about the documents in a given directory.

When viewing directories in a directory listing, a per-directory context menu is available per the following image and with the following two options:

  1. Edit: allows you to edit a directory's name, location (parent directory) and / or the description.
  2. Delete: this removes the directory and recursively deletes all files and folders within that directory.
"},{"location":"features/docstore/#files","title":"Files","text":"

Files can be uploaded via the Upload File button on the top right of the document store. See the following image showing the Upload File form with the contextual help shown (via the green Help button).

To upload per-member files: First select the appropriate member from the Members/Customers list on the left-hand menu, then select the Documents tab, then select the Upload File button on the top right.

The various fields are well explained by the contextual help above and we will add some additional emphasis here:

NB: for the per-member document store, there is no option to make a file Publicly available.

The following is how a file appears in a directory listing:

CUSTUSER indicates the minimum access permissions for the file. The numbers 19 (10) tell super admins that there have been a total of 19 downloads and 10 of those were unique to individual users in IXP Manager (i.e. some users would have downloaded this file two or more times). Note that only downloads by logged in users are counted. Publicly accessible files downloaded by non-logged in users are not recorded (if you must know this then that information is available from your web server log files). The date, Feb 29, 2020 is the date the file itself (not metadata) was last changed via the Edit option.

NB: there is no logging of files downloaded in the per-member document store. Please see below for more information.

The options in the per-file context menu are:

If you wish to purge the download logs for a file, you will find a Purge all download logs and reset download statistics checkbox on the file edit form. You can check this without making any other changes except to purge the logs.

"},{"location":"features/docstore/#sha256-functionality","title":"SHA256 Functionality","text":"

Checksums can be used to verify the authenticity / integrity of downloaded files. The primary use-case when developing the Document Store at INEX was for official documents from the INEX Board of Directors which they wished to share with the membership - for example minutes of board meetings.

The INEX Board Secretary would provide the PDF documents to the operations team for upload. In parallel, the Secretary would also email the members' mailing list to inform them that a new set of minutes was being uploaded and would include the sha256 checksum in that email. This is a useful way to independently verify the authenticity of official documents and is the reason this feature exists.

When uploading (or changing an uploaded file), if you enter a sha256 checksum, it will be verified on upload (and rejected if it does not match). If you leave it blank, the sha256 checksum will be calculated by IXP Manager.
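
Conceptually the verification is just a checksum comparison; in PHP terms it amounts to something like the following sketch (illustrative only, not the actual IXP Manager code - $uploadedFile and $submittedChecksum are hypothetical variables):

<?php\n// Sketch of the concept only: compare a submitted checksum against the uploaded file.\n$calculated = hash_file( 'sha256', $uploadedFile->getRealPath() );\n\nif( $submittedChecksum !== '' && !hash_equals( $calculated, strtolower( trim( $submittedChecksum ) ) ) ) {\n    // reject the upload - the provided checksum does not match the file\n}\n\n// if no checksum was submitted, store $calculated as the file's sha256 instead\n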

"},{"location":"features/docstore/#download-logs","title":"Download Logs","text":"

The general Document Store logs file downloads / views by user. The total number of downloads and the unique users download count is shown on the directory listing. You can also get access to the individual download logs for each file via its context menu.

Non-unique individual download logs are expunged once they are more than six months old - except for the first download by each user. The user interface presents this information differently so that it is clear when you are looking at a file that is more than six months old.

There are a number of reasons to log file downloads:

  1. We envisage some IXPs will use this to share official documents. E.g. AGM notices. In these cases it is important to know whether a user did - or did not - download such a document.
  2. To provide a measure of interest / feedback in making documents available and thus judge the usefulness of continuing to do so.

However, there is no reasonable need that we can see to retain individual downloads for more than 6 months. As such, these are automatically expunged by the scheduler.

Note also that all we record is user ID, file ID and time. No IP address or other information is recorded.

"},{"location":"features/docstore/#patch-panel-files","title":"Patch Panel Files","text":"

As a useful convenience, the per-member document store presents a virtual directory which collates any patch panel files that have been uploaded to a member's current or past cross connect record.

"},{"location":"features/docstore/#access-considerations","title":"Access Considerations","text":"

IXP Manager generates a complete directory hierarchy for each of the four user classes. As such, users will only see the directory hierarchies that lead to files that are accessible to them. If a leaf directory does not contain any file that a user class can access, then that user class will not see the directory in the listings.

Similarly, users will only see files listed that they can access.

If there are no documents in the Document Store for a specific user class (or public user), then the Document Store menu item will not appear under Customer / Member Information at the top of the page.

If you wish to completely disable the general document store, set the following option in .env:

IXP_FE_FRONTEND_DISABLED_DOCSTORE=true\n

If you wish to completely disable the per-member document store, set the following option in .env:

IXP_FE_FRONTEND_DISABLED_DOCSTORE_CUSTOMER=true\n
"},{"location":"features/docstore/#notes-limitations","title":"Notes & Limitations","text":"

The best way to view the limitations described herein is to understand that the development goals of the document stores were to create something which is simple and secure while consciously avoiding a recreation of Dropbox or a CRM system. We discussed tens of features, dials and knobs for the document stores but chose to not implement them.

  1. No backup / restore / undelete: if you delete a file (or directory) in the web user interface then you will also delete the file as it is stored on disk. This is not a soft-delete option and it does not include a Dropbox-esque undelete for 90 days. If you select and then confirm the deletion of a file or directory, then we assume you have made a deliberate and conscious decision to do just that.

    It should be noted that while IXP Manager does not provide this kind of functionality - it also cannot reliably provide it. As a self-hosted application, it is up to the local system administrators to ensure adequate file system and database backups are in place. Then, with adequate local backup procedures (or the developers expectations that a copy of important documents would also be kept off IXP Manager's document store), restoration of deleted documents is possible.

  2. No editing of files - particularly text and Markdown which are viewable within IXP Manager. We actually tried this but the code and user experience changes required pushed the complexity needle beyond where we wanted this feature to go.

  3. Only superusers can upload / edit / delete files. This won't change for the general document store. We can review it for the per-member document store if the feature is requested.

  4. Because only superusers can upload / edit files, there is no restriction on file types - we expect you to use your own good judgement here. There is also no restriction on file sizes - as a self-hosted application, storage space is your own consideration.

File upload size may be limited by your web server or PHP configuration. For PHP, find which .ini file is in use by running php --ini and then set the following as you wish (example values given):

upload_max_filesize = 100M\npost_max_size = 100M\n

Apache has no limit by default so if running Apache with PHP, just restart Apache (and / or PHP FPM) to apply the above. For Nginx, you need to set this as it has a default upload size of 1MB:

server {\n    ...\n    client_max_body_size 100M;\n}\n

For more information or other web server / PHP combinations, look at the specific documentation for those tools or Google it as this is a common question with many answers.

"},{"location":"features/helpdesk/","title":"Helpdesk Integration","text":"

** WORK IN PROGRESS - DEVELOPMENT NOTES **

As an IXP scales, it will eventually have to replace email support via a simple alias / shared IMAP mailbox with a proper ticketing system. After extensive (and painful!) research, we at INEX chose Zendesk as the system that most matched our budget and required features (1).

While your mileage may vary on this - or you may already have something else - please note that the reference implementation for helpdesk integration on IXP Manager is Zendesk. So, if you haven't already chosen one, Zendesk will provide maximum integration with minimal pain.

Please do not open a feature request for other helpdesk implementations as the authors cannot undertake such additional work. If you wish to have integration with another helpdesk implemented, please consider commercial support.

"},{"location":"features/helpdesk/#features-supported","title":"Features Supported","text":"

IXP Manager currently supports:

Work that is in progress includes:

"},{"location":"features/helpdesk/#configuration","title":"Configuration","text":"

As Zendesk is the only implementation currently, this refers only to Zendesk.

"},{"location":"features/helpdesk/#zendesk","title":"Zendesk","text":"

You need to enable API access to Zendesk as follows:

  1. Log into your Zendesk account
  2. On the bottom left, click the Admin icon
  3. Under Channels select API
  4. Enable the Token Access and add a token

With your Zendesk login and the token from above, edit the .env file in the base directory of IXP Manager and set:

HELPDESK_BACKEND=zendesk\nHELPDESK_ZENDESK_SUBDOMAIN=ixp\nHELPDESK_ZENDESK_TOKEN=yyy\nHELPDESK_ZENDESK_EMAIL=john.doe@example.com\n

You can now test that your settings are correct with: FIXME

"},{"location":"features/helpdesk/#implementation-development","title":"Implementation Development","text":"

The helpdesk implementation in IXP Manager is designed using contracts and service providers. I.e. it is done The Right Way (tm).

The reference implementation is for Zendesk but it's coded to a contract (interface) at app/Contracts/Helpdesk.php.

The actual Zendesk implementation can be found at: app/Services/Helpdesk/Zendesk.php.

The good news here is if you want another helpdesk supported, you just need to:

(1) Actually, Zendesk wasn't our first ticketing system. For a number of years we used Cerb but it didn't stay current in terms of modern HTML UI/UX and it suffered from feature bloat. One requirement for our replacement was a decent API and with Zendesk's API we were able to migrate all our old tickets using this script.

"},{"location":"features/irrdb/","title":"IRRDB Prefixes and ASN Filtering","text":"

Prerequisite Reading: Ensure you first familiarize yourself with the generic documentation on managing and generating router configurations here.

IXP Manager can maintain a list of member route:/route6: prefixes and origin ASNs as registered in IRRDBs in its database and then use these to, for example, generate strict inbound filters on route servers.

"},{"location":"features/irrdb/#setup","title":"Setup","text":"

You need to have set up some IRRDB sources (e.g. RIPE's whois service) under the IXP Admin Actions / IRRDB Configuration on the left hand side menu. If you do not have any entries here, there is a database seeder you can use to install some to start you off:

cd $IXPROOT\n./artisan db:seed --class=IRRDBs\n

BGPQ3 is a very easy and fast way of querying IRRDBs. You first need to install this on your system. On a modern Ubuntu system this is as easy as:

apt install bgpq3\n

Then configure the path to it in your .env file.

# Absolute path to run the bgpq3 utility\n# e.g. IXP_IRRDB_BGPQ3_PATH=/usr/local/bin/bgpq3\nIXP_IRRDB_BGPQ3_PATH=/usr/bin/bgpq3\n
"},{"location":"features/irrdb/#usage","title":"Usage","text":"

To populate (and update) your local IRRDB, run the following commands:

cd $IXPROOT\nphp artisan irrdb:update-prefix-db\nphp artisan irrdb:update-asn-db\n

From IXP Manager v5 onwards, and so long as your bgpq3 path is set as above and is executable, the task scheduler will take care of updating your local IRRDB a number of times a day. If you are using a version of IXP Manager before v5, then the above commands should be added to cron to run ~once per day (using the --quiet flag).

There are four levels of verbosity:

  1. --quiet: no output unless there's an error / issue.
  2. no option: simple stats on each customer's update results.
  3. -vv: include per customer and overall timings (database, processing and network).
  4. -vvv (debug): also show the prefixes / ASNs added and removed.

You can also specify a specific customer to update (rather than all) with an additional free form parameter. The database is searched for a matching customer in the order: customer ASN; customer ID (database primary key); and customer shortname. E.g.:

php artisan irrdb:update-prefix-db 64511\n
"},{"location":"features/irrdb/#internal-workings","title":"Internal Workings","text":"

Essentially, based on a customer's AS number / IPv4/6 peering macro, IXP Manager uses bgpq3 to query IRRDBs as follows:

bgpq3 -S $sources -l pl -j [-6] $asn/macro\n

where $sources come from the IRRDB sources entries.

Or, a real example:

bgpq3 -S RIPE -l pl -j AS-BTIRE\nbgpq3 -S RIPE -l pl -j -6 AS-BTIRE\n
"},{"location":"features/irrdb/#details","title":"Details","text":"

The IRRDB update commands will:

We use transactions to update the database so, even in the middle of a refresh, a full set of prefixes for all customers will still be available. It also means the update process can be safely interrupted.
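
The pattern is the standard Laravel database transaction, sketched here for illustration - the table and column names are assumptions and not IXP Manager's actual schema access code:

<?php\n\nuse Illuminate\\Support\\Facades\\DB;\n\n// Illustrative sketch: replace a customer's prefixes atomically so that readers always\n// see either the old or the new full set, never a partially-updated one.\nDB::transaction( function() use ( $custId, $protocol, $prefixes ) {\n    DB::table( 'irrdb_prefix' )\n        ->where( 'customer_id', $custId )\n        ->where( 'protocol',    $protocol )\n        ->delete();\n\n    foreach( $prefixes as $prefix ) {\n        DB::table( 'irrdb_prefix' )->insert( [\n            'customer_id' => $custId,\n            'protocol'    => $protocol,\n            'prefix'      => $prefix,\n        ] );\n    }\n} );\n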

Note that our current implementation only queries RADB as BGPQ3 does not support the RIPE whois protocol. Our version will however set the RADB source database according to the member's stated IRRDB database as set on the customer add / edit page - so, for customers registered with the RIPE IRRDB, the RIPE database of RADB is queried.

"},{"location":"features/ixf-export/","title":"IX-F Member List Export","text":"

The IX-F Member Export is an agreed and standardized JSON schema which allows IXPs to make their member lists available for consumption by tools such as PeeringDB, networks with automated peering managers, prospective members and the many other tools appearing in the peering eco-system.

Historical reference: INEX created and hosted a proof of concept directory for the IX-F Export Schema until Euro-IX/IX-F took it in house in 2018.

The key element of the IX-F Member Export is it makes you, the individual IXP, the canonical trusted source for data about your own IXP. Data that has the best chance of being correct and up to date. Particularly, PeeringDB has the option of allowing network data to be updated from IX records - see our documentation on this here.

To find out more about the JSON schema and see examples, you can read more here, explore many of the public IXP end points available here or see the GitHub euro-ix/json-schemas repository.

IXP Manager supports the IX-F Member List Export out of the box. It previously supported all versions from 0.3 to 0.5 but we now only support 0.6, 0.7 and 1.0 (for >=v5.1). We plan to deprecate support for 0.6 during 2019.

Sometimes you may need something more customized than the IX-F Member Export. For that, see the other member export feature of IXP Manager.

"},{"location":"features/ixf-export/#preparing-the-ix-f-member-export","title":"Preparing the IX-F Member Export","text":"

There are a small number of things you should do to ensure your IX-F export is correct.

Correctly set the PeeringDB ID and IX-F ID

The first is to ensure you have correctly set the PeeringDB ID and IX-F ID in your infrastructure (see Infrastructures under the left hand side IXP ADMIN ACTIONS menu).

The IX-F ID is mandatory. You will find yours by searching the IX-F providers database here. If you are a new IXP that is not registered here, please email your IXP's full name, short name, city / region / country, GPS coordinates and website URL to ixpdb-admin (at) euro-ix (dot) net so they can register it in the IXPDB.

Create Network Info

From IXP Manager v4.9 and above, click VLANs on the left-hand-side menu and then choose Network Information. Once there, add the network address and network mask length for IPv4 and IPv6 for your peering LAN(s).

Prior to v4.9, this was a little hacky: there is a database table called networkinfo that requires you to manually insert some detail on your peering LAN.

The first thing you need is the peering VLAN DB ID. [Clarification: this has nothing to do with PeeringDB - it is the VLAN created within IXP Manager.] For this, select VLANs under the left hand side IXP ADMIN ACTIONS menu in IXP Manager. Locate your peering VLAN DB ID and note it.

For our example, we will use the following sample data:

You need to add this data to networkinfo with the following sample SQL commands:

INSERT INTO `networkinfo`\n    ( `vlanid`, `protocol`, `network`, `masklen`, `rs1address`, `rs2address` )\nVALUES\n    ( 66, 4, '192.0.2.0', '25', '192.0.2.8', '192.0.2.9' );\n\nINSERT INTO `networkinfo`\n    ( `vlanid`, `protocol`, `network`, `masklen`, `rs1address`, `rs2address` )\nVALUES\n    ( 66, 6, '2001:db8:1000::', '64', '2001:db8:1000::8', '2001:db8:1000::9' );\n

Set Your IXP's Name / Country / etc

The third task is to ensure your IXP's details are correct in the IX-F export.

You will most likely have nothing to do here as it would have been done on installation but this reference may prove useful if there are any issues.

These are mostly set in the .env file (as well as some other places) and the following table shows how they get mapped to the IX-F Export:

| IX-F Export IXP Element | How to Set in IXP Manager |
|---|---|
| shortname | In IXP Manager, from the Infrastructure object name field |
| name | IDENTITY_LEGALNAME from .env |
| country | IDENTITY_COUNTRY from .env in 2-letter ISO2 format |
| url | IDENTITY_CORPORATE_URL from .env |
| peeringdb_id | In IXP Manager, from the Infrastructure object |
| ixf_id | In IXP Manager, from the Infrastructure object |
| support_email | IDENTITY_SUPPORT_EMAIL from .env |
| support_phone | IDENTITY_SUPPORT_PHONE from .env |
| support_contact_hours | IDENTITY_SUPPORT_HOURS from .env |
| emergency_email | IDENTITY_SUPPORT_EMAIL from .env |
| emergency_phone | IDENTITY_SUPPORT_PHONE from .env |
| emergency_contact_hours | IDENTITY_SUPPORT_HOURS from .env |
| billing_email | IDENTITY_BILLING_EMAIL |
| billing_phone | IDENTITY_BILLING_PHONE |
| billing_contact_hours | IDENTITY_BILLING_HOURS |

Where we say from the Infrastructure object above, we mean that when you are logged into IXP Manager as an admin, it's the Infrastructures menu option under IXP Admin Actions on the left hand side.

"},{"location":"features/ixf-export/#accessing-the-ix-f-member-list","title":"Accessing the IX-F Member List","text":"

If your version of IXP Manager is installed at, say, https://ixp.example.com/, then the IX-F Member List export can be accessed at:

https://ixp.example.com/api/v4/member-export/ixf/1.0\n

where 1.0 is a version parameter which allows for support of potential future versions.

Note that the publicly accessible version does not include individual member details such as name, max prefixes, contact email and phone, when the member joined, member's web address, peering policy, NOC website, NOC hours or member type. This information is available to any logged in users or users querying the API with an API key.
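
For example, a script could retrieve the full (detailed) export with an API key as follows. This is only a sketch - substitute your own hostname and key:

curl -H \"X-IXP-Manager-API-Key: my-api-key\" \\\n    \"https://ixp.example.com/api/v4/member-export/ixf/1.0\"\n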

"},{"location":"features/ixf-export/#access-without-ix-f-id-being-set","title":"Access Without IX-F ID Being Set","text":"

While the IX-F ID is officially required for >= v0.7 of the schema, it may be overlooked on new installations or some IXPs may be uninterested in working with the IX-F IXP database.

The schema requirement for a valid IX-F ID should not prevent the IX-F exporter from working if someone wishes to pull the information regardless of that being set. There are two ways, available from IXP Manager v5.7.0, to override this and query the API:

The first is to pass an ixfid_y parameter (where y is the database ID of the infrastructure) for every infrastructure that does not have one. Using this method, IXP Manager will set the IX-F ID in the generated JSON output, making it suitable for processing by automated scripts. A sample URL for an IXP with two infrastructures might look like this:

https://ixpmanager.example.com/api/v4/member-export/ixf/1.0?ixfid_1=30&ixfid_2=31\n

If you wish to just ignore the IX-F ID and have it set to zero in the JSON output, you can use the following flag:

https://ixpmanager.example.com/api/v4/member-export/ixf/1.0?ignore_missing_ixfid=1\n
"},{"location":"features/ixf-export/#registering-your-api-endpoint-with-ixpdb","title":"Registering Your API Endpoint With IXPDB","text":"

IXPDB requires two pieces of information to fully integrate your IXP with the IXPDB. You can provide this information to ixpdb-admin (at) euro-ix (dot) net or - if you have a login to the Euro-IX website - you can log in and edit your own IXP directly on IXPDB.

The first element needed is the API endpoint as described above in Accessing the IX-F Member List.

The second is the API endpoint to export your statistics. This is:

https://ixp.example.com/grapher/infrastructure?id=1&type=log&period=day\n

where id=1 is the infrastructure DB ID (see Infrastructures under the left hand side IXP ADMIN ACTIONS menu).

"},{"location":"features/ixf-export/#configuration-options","title":"Configuration Options","text":"

To disable public access to the restricted member export, set the following in your .env file:

IXP_API_JSONEXPORTSCHEMA_PUBLIC=false\n

We strongly advise you not to disable public access if you are a standard IXP. Remember, the public version is essentially the same list as you would provide on your standard website's list of members.

In addition, membership of an IXP is easily discernible from a number of other sources including:

Leave public access available, own your own data, ensure its validity and advertise it!

If you must disable public access but would still like to provide IX-F (or others) with access, you can set a static access key in your .env file such as:

IXP_API_JSONEXPORTSCHEMA_ACCESS_KEY=\"super-secret-access-key\"\n

and then provide the URL in the format:

https://ixp.example.com/api/v4/member-export/ixf/1.0?access_key=super-secret-access-key\n

If you wish to control access to the infrastructure statistics, see the Grapher API documentation. The statistics data is a JSON object representing each line of the 'rest of the file' section of a standard MRTG log file; an illustrative sample is shown after this list. This means the per-line array elements are:

  1. The Unix timestamp for the point in time the data on this line is relevant.
  2. The average incoming transfer rate in bytes per second. This is valid for the time between the A value of the current line and the A value of the previous line.
  3. The average outgoing transfer rate in bytes per second since the previous measurement.
  4. The maximum incoming transfer rate in bytes per second for the current interval. This is calculated from all the updates which have occurred in the current interval. If the current interval is 1 hour, and updates have occurred every 5 minutes, it will be the biggest 5 minute transfer rate seen during the hour.
  5. The maximum outgoing transfer rate in bytes per second for the current interval.
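
As an illustration of that five-element per-line format (fabricated sample values only - the exact top-level JSON wrapping may differ slightly), each entry would look something like:

[\n    [ 1518455700, 123456789, 98765432, 234567890, 187654321 ],\n    [ 1518455400, 120000000, 95000000, 230000000, 180000000 ]\n]\n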
"},{"location":"features/ixf-export/#excluding-some-data","title":"Excluding Some Data","text":"

It is possible to exclude some data from v6.0.1 per GitHub issue #722:

While some exchanges are willing to share detailed information about their infrastructure via the IX-F Member Export Schema, others either do not want to or cannot due to regulation. Enabling exchanges to share a limited set of data about their infrastructure would help exchanges find others using the same platforms to learn from each other and shows the diversity of platforms in use across the market.

Please bear in mind that the more data you remove, the less useful the IX-F member export becomes. Most IXPs do not use this exclusion function and, ideally, you will only use it if there is no other choice.

For example, a switch object typically looks like:

{\n    \"id\": 50,\n    \"name\": \"swi1-kcp1-2\",\n    \"colo\": \"Equinix DB2 (Kilcarbery)\",\n    \"city\": \"Dublin\",\n    \"country\": \"IE\",\n    \"pdb_facility_id\": 178,\n    \"manufacturer\": \"Arista\",\n    \"model\": \"DCS-7280SR-48C6\",\n    \"software\": \"EOS 4.24.3M\"\n}\n

If, for example, you need to exclude the model and software version, you can add the following to your .env file:

IXP_API_JSONEXPORTSCHEMA_EXCLUDE_SWITCH=\"model|software\"\n

which will yield:

{\n    \"id\": 50,\n    \"name\": \"swi1-kcp1-2\",\n    \"colo\": \"Equinix DB2 (Kilcarbery)\",\n    \"city\": \"Dublin\",\n    \"country\": \"IE\",\n    \"pdb_facility_id\": 178,\n    \"manufacturer\": \"Arista\"\n}\n

As you can see, the configuration option is the set of identifiers you want to exclude (model and software) separated with the pipe symbol. Different combinations are possible - here are some examples:

IXP_API_JSONEXPORTSCHEMA_EXCLUDE_SWITCH=\"software\"\nIXP_API_JSONEXPORTSCHEMA_EXCLUDE_SWITCH=\"model|software\"\nIXP_API_JSONEXPORTSCHEMA_EXCLUDE_SWITCH=\"city|model|software\"\n

You should not exclude the id as it is referred to in the member interface list.

You can exclude detail for the IXP object:

{\n    \"shortname\": \"INEX LAN1\",\n    \"name\": \"Internet Neutral Exchange Association Limited by Guarantee\",\n    \"country\": \"IE\",\n    \"url\": \"https:\\/\\/www.inex.ie\\/\",\n    \"peeringdb_id\": 48,\n    \"ixf_id\": 20,\n    \"ixp_id\": 1,\n    \"support_email\": \"operations@example.com\",\n    \"support_contact_hours\": \"24x7\",\n    \"emergency_email\": \"operations@example.com\",\n    \"emergency_contact_hours\": \"24x7\",\n    \"billing_contact_hours\": \"8x5\",\n    \"billing_email\": \"accounts@example.com\",\n    ...\n}\n

with the option:

IXP_API_JSONEXPORTSCHEMA_EXCLUDE_IXP=\"name|url\"\n

You should not exclude any of the IDs (peeringdb_id, ixf_id and ixp_id) as these are referred to elsewhere in the document and are required externally when using the data.

You can exclude member detail:

{\n    \"asnum\": 42,\n    \"member_since\": \"2009-01-13T00:00:00Z\",\n    \"url\": \"http:\\/\\/www.pch.net\\/\",\n    \"name\": \"Packet Clearing House DNS\",\n    \"peering_policy\": \"open\",\n    \"member_type\": \"peering\",\n    ...\n}\n

with the option:

IXP_API_JSONEXPORTSCHEMA_EXCLUDE_MEMBER=\"peering_policy|member_type\"\n

And finally, you can exclude member VLAN/protocol detail:

\"ipv4\": {\n    \"address\": \"185.6.36.60\",\n    \"as_macro\": \"AS-PCH\",\n    \"routeserver\": true,\n    \"mac_addresses\": [\n        \"00:xx:yy:11:22:33\"\n    ],\n    \"max_prefix\": 2000\n},\n\"ipv6\": {\n    \"address\": \"2001:7f8:18::60\",\n    \"as_macro\": \"AS-PCH\",\n    \"routeserver\": true,\n    \"mac_addresses\": [\n        \"00:xx:yy:11:22:33\"\n    ],\n    \"max_prefix\": 2000\n}\n

with the option:

IXP_API_JSONEXPORTSCHEMA_EXCLUDE_INTINFO=\"mac_addresses|routeserver\"\n

Please note that the IXP_API_JSONEXPORTSCHEMA_EXCLUDE_INTINFO affects both the ipv4 and ipv6 clauses.

"},{"location":"features/ixf-export/#excluding-members","title":"Excluding Members","text":"

You can exclude members by ASN or by tag by setting the following .env option:

# Exclude members with certain AS numbers\n# IXP_API_JSONEXPORTSCHEMA_EXCLUDE_ASNUM=\"65001|65002|65003\"\n\n# Exclude members with certain tags\n# IXP_API_JSONEXPORTSCHEMA_EXCLUDE_TAGS=\"tag1|tag2\"\n

The following are enabled by default to prevent exporting test customers:

# Exclude documentation ASNs (64496 - 64511, 65536 - 65551)\n# IXP_API_JSONEXPORTSCHEMA_EXCLUDE_RFC5398=true\n\n# Exclude private ASNs (64512 - 65534, 4200000000 - 4294967294)\n# IXP_API_JSONEXPORTSCHEMA_EXCLUDE_RFC6996=true\n
"},{"location":"features/ixf-export/#including-ixp-manager-specific-data","title":"Including IXP Manager Specific Data","text":"

If you pass withtags=1 as a parameter to the URL endpoint, then you will get an extra section in each member section:

\"ixp_manager\": {\n    \"tags\": {\n        \"exampletag1\": \"Example Tag #1\",\n        \"exampletag2\": \"Example Tag #2\"\n    },\n    \"in_manrs\": false,\n    \"is_reseller\": false,\n    \"is_resold\": true,\n    \"resold_via_asn\": 65501\n},\n

As you can see:
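
Such a request, using the versioned endpoint described earlier, might look like this (illustrative URL):

https://ixp.example.com/api/v4/member-export/ixf/1.0?withtags=1\n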

"},{"location":"features/ixf-export/#example-member-lists","title":"Example: Member Lists","text":"

A common requirement of IXPs is to create a public member list on their official website. This can be done with the IX-F Member Export quite easily. The below HTML and JavaScript is a way to do it with INEX's endpoint. There's a live JSFiddle which demonstrates this also - https://jsfiddle.net/barryo/2tzuypf9/.

The HTML requires just a table with a placeholder and an id on the body:

<table class=\"table table-bordered\" style=\"margin: 10px\">\n <thead>\n   <tr>\n     <th>Company</th>\n     <th>ASN</th>\n     <th>Connections</th>\n   </tr>\n </thead>\n <tbody id=\"list-members\">\n     <tr>\n         <td colspan=\"3\">Please wait, loading...</td>\n     </tr>\n </tbody>\n</table>\n

The JavaScript loads the member list via the IX-F Export and processes it into the table above:

// Sample IX-F Member Export to Member List script\n//\n// License: MIT (https://en.wikipedia.org/wiki/MIT_License)\n// By @yannrobin and @barryo\n// 2018-03-06\n\nfunction prettySpeeds( s ) {\n        switch( s ) {\n            case 10:     return \"10Mb\";\n            case 100:    return \"100Mb\";\n            case 1000:   return \"1Gb\";\n            case 10000:  return \"10Gb\";\n            case 40000:  return \"40Gb\";\n            case 100000: return \"100Gb\";\n        default:     return s;\n    }\n}\n\n$.getJSON( \"https://www.inex.ie/ixp/api/v4/member-export/ixf/0.7\", function( json ) {\n\n      // sort by name\n    json[ 'member_list' ].sort( function(a, b) {\n        var nameA = a.name.toUpperCase(); // ignore upper and lowercase\n        var nameB = b.name.toUpperCase(); // ignore upper and lowercase\n        if (nameA < nameB) {\n          return -1;\n        }\n        if (nameA > nameB) {\n          return 1;\n        }\n        // names must be equal\n        return 0;\n    });\n\n    let html = '';\n\n    $.each( json[ 'member_list' ], function(i, member) {\n        html += `<tr>\n                     <td>\n                         <a target=\"_blank\" href=\"${member.url}\">${member.name}</a>\n                     </td>\n                     <td>\n                         <a target=\"_blank\"\n                             href=\"http://www.ripe.net/perl/whois?searchtext=${member.asnum}&form_type=simple\">\n                             ${member.asnum}\n                         </a>\n                     </td>`;\n\n        let connection = '';\n        $.each( member[ 'connection_list' ], function(i, conn ) {\n            if( conn[ 'if_list' ].length > 1 ){\n                  connection += conn[ 'if_list' ].length+ '*'\n            }\n            connection += prettySpeeds( conn[ 'if_list' ][0].if_speed );\n\n            if(i < (member[ 'connection_list' ].length - 1 )){\n              connection += \" + \";\n            }\n        });\n\n        html += `<td>${connection}</td></tr>\\n`;\n    });\n\n    $( \"#list-members\" ).html(html);\n});\n

The end result is a table that looks like:

| Company | ASN | Connections |
|---|---|---|
| 3 Ireland | 34218 | 2*10Gb + 2*10Gb |
| Afilias | 12041 | 1Gb |
| ... | ... | ... |

"},{"location":"features/layer2-addresses/","title":"MAC Addresses","text":"

IXP Manager has support for layer2 / MAC addresses in two ways:

  1. Discovered Addresses: a read-only table via an admin menu option called MAC Addresses -> Discovered Addresses which lists entries from a database of MAC addresses which are sourced via a script from the IXP's switches directly. (Available since version 3.x).
  2. Configured Addresses: a managed table of layer2/MAC addresses, viewed by the admin menu option MAC Addresses -> Configured Addresses. These are assigned by IXP administrators on a per VLAN interface basis. (Available since version 4.4).
"},{"location":"features/layer2-addresses/#configured-addresses","title":"Configured Addresses","text":"

In early 2017, INEX migrated its primary peering LAN from a flat layer2 with spanning tree design to a VxLAN set-up with automation via Salt and Napalm (we will insert references to presentations here once we complete all required functionality).

Part of the requirements for this automation (and this was an existing feature request from other IXPs) was the management of MAC addresses within IXP Manager and, rather than assigning them to a virtual interface, assign them to specific VLAN interfaces.

Outside of our automation and VXLAN, other uses included:

  1. to potentially allow members to add a MAC address during maintenance and thus have the system update a layer2 acl on the switch(es);
  2. a static maintained database of MAC addresses for EVPN;
  3. a static maintained database for lookups.

The features of this system are listed below.

"},{"location":"features/layer2-addresses/#listing-and-searching-existing-configured-mac-addresses","title":"Listing and Searching Existing Configured MAC Addresses","text":"

There is a new menu option (left hand side menu) under MAC Addresses called Configured Addresses. This lists all configured MAC addresses including the OUI manufacturer (see below), associated switch / switch port(s), customer name, IPv4 and v6 addresses. You can also:

"},{"location":"features/layer2-addresses/#adding-removing-layer2-addresses-tofrom-a-vlan-interface","title":"Adding / Removing Layer2 Addresses to/from a VLAN Interface","text":"

When editing a customer's interface in the usual manner (customer overview -> Ports -> edit button), you will now see MAC address(es) under VLAN Interfaces:

If there are zero or multiple MAC addresses, the MAC address shown above will be replaced with an appropriate note to indicate this.

Clicking on the MAC address (or note when none / multiple) will bring you to the configured MAC address management page for this VLAN interface. Addresses can be added / removed on this page. MAC addresses can be entered in either upper or lower cases and can optionally include characters such as ., :, -. These are all stripped before validation and insertion.
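
As a rough illustration of that normalisation (this is not the actual implementation, just equivalent shell logic), the separators are stripped and the case folded, e.g. 00:1A-2B.3c4d5e becomes 001a2b3c4d5e:

# illustrative only: strip separator characters and fold hex letters to lower case\necho \"00:1A-2B.3c4d5e\" | tr -d ':.-' | tr 'A-F' 'a-f'\n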

"},{"location":"features/layer2-addresses/#extracting-addresses","title":"Extracting Addresses","text":"

As automation features are still a work in progress, not all methods are listed here. Please open an issue on GitHub or start a discussion on the mailing list for whatever methods you would like.

Currently implemented (see the API page for access details):

  1. An API to be used by the sflow / peer to peer graphing tool:
    • Virtual Interface ID to MAC address - GET request to: https://ixp.example.com/api/v4/vlan-interface/sflow-mac-table
    • Virtual Interface ID, VLAN interface ID, customer name and VLAN tag - GET request to: https://ixp.example.com/api/v4/vlan-interface/sflow-matrix
  2. YAML export for automated provisioning. As yet undocumented and not suitable for general use.
  3. Querying the database directly. Not usually recommended as the schema may change.
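
A sample request to the first endpoint listed above, authenticating with an API key as described on the API page, might look like this (a sketch - substitute your own hostname and key):

curl -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    \"https://ixp.example.com/api/v4/vlan-interface/sflow-mac-table\"\n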
"},{"location":"features/layer2-addresses/#migrating-discovered-macs-to-configured-macs","title":"Migrating Discovered MACs to Configured MACs","text":"

INEX's use case was to switch from the discovered MAC addresses table to the configured MAC addresses table without having to manually re-enter all ~200 pre-existing MACs. As such, we have created an Artisan migration script which can be run with:

php $IXPROOT/artisan l2addresses:populate\n

You will be prompted as follows:

Are you sure you wish to proceed? This command will CLEAR the layer2address table and then copy addresses from the read-only macaddress table. Generally, this command should only ever be run once when initially populating the new table.

One thing to note: as the discovered MAC Addresses table is per virtual interface and the new configured MAC address functionality is per VLAN interface, any MAC from discovered MAC Addresses that is populated into configured MAC Addresses will be populated for every VLAN interface associated with the virtual interface.

The script prints notices for these such as:

Created >1 layer2address for [member name] with virtual interface: https://www.example.com/ixp/virtual-interface/edit/id/235

The inclusion of the URL makes it easy to double check the result.

For obvious reasons, we only allow a single / unique layer2 address per VLAN. In the event that the script tries to add the same MAC more than once, it will print:

Could not add additional instance of 001122334455 for [Customer] with virtual interface: https://www.example.com/ixp/virtual-interface/edit/id/265 as it already exists in this Vlan [VLAN name]

These should all be checked manually.

A useful SQL command to double check the results for me was:

SELECT mac, COUNT(mac) AS c FROM l2address GROUP BY mac HAVING COUNT(mac) > 1;\n
"},{"location":"features/layer2-addresses/#discovered-mac-addresses","title":"Discovered MAC Addresses","text":"

This was the original functionality - a read-only table via an admin menu option called MAC Addresses -> Discovered Addresses which lists entries from a database of MAC addresses which are sourced via a script from the IXP's switches directly.

At an IXP, it can be extremely useful to have a quick look up table to see what member owns what MAC address - especially when they start injecting illegal packets into the exchange fabric.

We have a script, update-l2database.pl, for this. To set it up (using Ubuntu as an example), proceed as below. We are in the process of trying to reduce the reliance on the perl library and direct database access. But for now, this script still requires it.

# If you haven't already, install the Perl library for IXP Manager:\napt-get install libnet-snmp-perl libconfig-general-perl libnetaddr-ip-perl\ncd $IXPROOT/tools/perl-lib/IXPManager\nperl Makefile.PL\nmake install\n\n# Then copy and edit the configuration file to set the database connection settings:\ncp $IXPROOT/tools/perl-lib/IXPManager/ixpmanager.conf.dist /usr/local/etc/ixpmanager.conf\njoe /usr/local/etc/ixpmanager.conf #and set database settings\n\n# Now copy the script:\ncp $IXPROOT/tools/runtime/l2database/update-l2database.pl /usr/local/bin\n\n# and then add it to your periodic cron job with:\n/usr/local/bin/update-l2database.pl\n
"},{"location":"features/layer2-addresses/#oui-database","title":"OUI Database","text":"

IXP Manager can store the IEEE OUI database and reference it to show the manufacturer behind a MAC address.

"},{"location":"features/layer2-addresses/#populating-and-updating-the-oui-database","title":"Populating and Updating the OUI Database","text":"

The OUI database is updated weekly by the task scheduler. You can force an update with the following Artisan command:

php $IXPROOT/artisan utils:oui-update\n

which will populate / update the OUI database directly from the latest IEEE file from their website.

A specific file can be passed via the file parameter. You can also force a database reset (drop all OUI entries and re-populate) via the --refresh option.
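
For example, to drop all OUI entries and fully re-populate the table using the --refresh option mentioned above:

php $IXPROOT/artisan utils:oui-update --refresh\n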

Neither of these options is typically necessary.

"},{"location":"features/layer2-addresses/#end-user-access","title":"End User Access","text":"

In v4.7.3 we introduced the ability for logged-in users to manage their own configured MAC addresses.

This is disabled by default but can be enabled with the following .env settings:

# Set this to allow customers to change their own configured MAC addresses:\nIXP_FE_LAYER2_ADDRESSES_CUST_CAN_EDIT=true\n\n# The following defaults are configured for min/max MAC addresses\nIXP_FE_LAYER2_ADDRESSES_CUST_PARAMS_MIN_ADDRESSES=1\nIXP_FE_LAYER2_ADDRESSES_CUST_PARAMS_MAX_ADDRESSES=2\n

When a MAC is added, an IXP\Events\Layer2Address\Added event is triggered and, similarly, when a MAC is deleted an IXP\Events\Layer2Address\Deleted event is triggered. We have created an event listener for these to fire off an email in both cases. To enable this listener, set the following .env settings:

# Trigger an email when a superuser adds/deletes a MAC:\nIXP_FE_LAYER2_ADDRESSES_EMAIL_ON_SUPERUSER_CHANGE=true\n\n# Trigger an email when a customer user adds/deletes a MAC:\nIXP_FE_LAYER2_ADDRESSES_EMAIL_ON_CUSTOMER_CHANGE=true\n\n# Destination address of the email:\nIXP_FE_LAYER2_ADDRESSES_EMAIL_ON_CHANGE_DEST=ops@ixp.example.net\n

There are two files you can consider skinning with this functionality:

  1. resources/views/layer2-address/emails/changed.blade.php - the email which is sent when a MAC is added / removed.
  2. resources/views/layer2-address/customer-edit-msg.foil.php - an informational alert box that is shown to the customer on the MAC add/delete page to set their expectations on time to complete on the IXP end.
"},{"location":"features/looking-glass/","title":"Looking Glass","text":"

IXP Manager supports full looking glass features when using the Bird BGP daemon and Bird's Eye (a simple secure micro service for querying Bird).

A fully working example of this can be seen here on INEX's IXP Manager.

Enabling the looking glass just requires:

  1. properly configured router(s).
  2. for routers that run Bird and that you want to be available via a looking glass, install Bird's Eye on the same server(s) running Bird.
  3. the API endpoint must be accessible from the server running IXP Manager and this endpoint must be set correctly in the router's configuration (see router(s) page) (along with an appropriate setting for LG Access Privileges). Note that the Birdseye API end points do not need to be publicly accessible - just from the IXP Manager server.
  4. set the .env option: IXP_FE_FRONTEND_DISABLED_LOOKING_GLASS=false (in IXP Manager's .env and add it if it's missing as it defaults to true).
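
That is, the following should be present in IXP Manager's .env file:

IXP_FE_FRONTEND_DISABLED_LOOKING_GLASS=false\n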
"},{"location":"features/looking-glass/#choose-the-correct-version-of-birds-eye-to-use","title":"Choose the Correct Version of Bird's Eye to Use","text":"

The looking glass will not work if the versions are not matched correctly as above.

"},{"location":"features/looking-glass/#example-router-configuration","title":"Example Router Configuration","text":"

See this screenshot for an appropriately configured INEX router with Bird's Eye:

"},{"location":"features/looking-glass/#looking-glass-pass-thru-api-calls","title":"Looking Glass 'Pass Thru' API Calls","text":"

Depending on the configured LG Access Privileges for a given router, IXP Manager will pass through the following API calls to the router API.

and return the JSON result.

The rationale for this is that we expect most IXPs to keep direct access to looking glass implementations on internal / private networks.

Here are two live examples from INEX:

  1. https://www.inex.ie/ixp/api/v4/lg/rc1-cork-ipv4/status
  2. https://www.inex.ie/ixp/api/v4/lg/rc1-cork-ipv4/bgp-summary

You can see all of INEX's looking glasses at https://www.inex.ie/ixp/lg. GRNET have also a public IXP Manager integration at: https://portal.gr-ix.gr/lg.

"},{"location":"features/looking-glass/#debugging","title":"Debugging","text":"

Generally speaking, if you carefully read the above and the Bird's Eye README file, you should be able to get IXP Manager / Bird's Eye integration working.

If you do not, try all of the following and solve any elements that fail. If you still have issues, email the IXP Manager mailing list with the output of all of the following commands from both sections.

For the following examples, we will use a real INEX example router with these settings:

mysql> SELECT * FROM routers WHERE id = 17\\G\n*************************** 1. row ***************************\n          id: 17\n     vlan_id: 2\n      handle: as112-lan1-ipv4\n    protocol: 4\n        type: 3\n        name: INEX LAN1 - AS112 - IPv4\n   shortname: AS112 - LAN1 - IPv4\n   router_id: 185.6.36.6\n  peering_ip: 185.6.36.6\n         asn: 112\n    software: 1\n   mgmt_host: 10.39.5.6\n         api: http://as112-lan1-ipv4.mgmt.inex.ie/api\n    api_type: 1\n   lg_access: 0\n  quarantine: 0\n      bgp_lc: 0\n    template: api/v4/router/as112/bird/standard\n    skip_md5: 1\nlast_updated: 2018-02-03 14:26:15\n

From the server running IXP Manager:

###############################################################################\n# Does the API hostname resolve?\n$ dig +short as112-lan1-ipv4.mgmt.inex.ie\n\n10.39.5.6\n\n\n###############################################################################\n# Is there network access?\n$ ping as112-lan1-ipv4.mgmt.inex.ie -c 1\n\nPING as112.mgmt.inex.ie (10.39.5.6) 56(84) bytes of data.\n64 bytes from as112.mgmt.inex.ie (10.39.5.6): icmp_seq=1 ttl=64 time=0.103 ms\n\n--- as112.mgmt.inex.ie ping statistics ---\n1 packets transmitted, 1 received, 0% packet loss, time 0ms\nrtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms\n\n\n###############################################################################\n# Is the Bird's Eye service available?\n$ curl -v http://as112-lan1-ipv4.mgmt.inex.ie/api/status\n\n*   Trying 10.39.5.6...\n* Connected to as112-lan1-ipv4.mgmt.inex.ie (10.39.5.6) port 80 (#0)\n> GET /api/status HTTP/1.1\n> Host: as112-lan1-ipv4.mgmt.inex.ie\n> User-Agent: curl/7.47.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Cache-Control: no-cache\n< Content-Type: application/json\n< Date: Mon, 12 Feb 2018 16:29:52 GMT\n< Transfer-Encoding: chunked\n< Server: lighttpd/1.4.35\n<\n* Connection #0 to host as112-lan1-ipv4.mgmt.inex.ie left intact\n\n{\n    \"api\": {\n        \"from_cache\":true,\n        \"ttl_mins\":1,\n        \"version\":\"1.1.0\",\n        \"max_routes\":1000\n    },\n    \"status\": {\n        \"version\":\"1.6.3\",\n        \"router_id\":\"185.6.36.6\",\n        \"server_time\":\"2018-02-12T16:29:48+00:00\",\n        \"last_reboot\":\"2017-11-09T00:23:24+00:00\",\n        \"last_reconfig\":\"2018-02-12T16:26:14+00:00\",\n        \"message\":\"Daemon is up and running\"\n    }\n}\n

If all of the above checks out, watch the log file while you try and access the looking glass:

cd $IXPROOT\ntail -f storage/log/laravel.log\n

If there are error messages in the above log as you try and access the looking glass, include them when emailing the mailing list for help.

Then on the router (the server running Bird's Eye), you need to provide the following answers when seeking help:

###############################################################################\n# Are you running the correct version of Bird's Eye for IXP Manager?\n\ncat /srv/birdseye/version.php\n\n# see the documentation above for the correct versions to match to IXP Manager.\n\n###############################################################################\n# Is Bird actually running and what are the names of its sockets:\n$ ls -la /var/run/bird\n\ntotal 0\ndrwxrwxr-x  2 bird bird 120 Nov  9 00:26 .\ndrwxr-xr-x 25 root root 900 Feb 12 19:40 ..\nsrw-rw----  1 root root   0 Nov  9 00:23 bird-as112-lan1-ipv4.ctl\nsrw-rw----  1 root root   0 Nov  9 00:23 bird-as112-lan1-ipv6.ctl\nsrw-rw----  1 root root   0 Nov  9 00:23 bird-as112-lan2-ipv4.ctl\nsrw-rw----  1 root root   0 Nov  9 00:23 bird-as112-lan2-ipv6.ctl\n\n###############################################################################\n# What configuration file(s) exist:\n$ ls -la /srv/birdseye/*env\n\n-rw-r--r-- 1 root root 2833 Dec  5  2016 /srv/birdseye/birdseye-as112-lan1-ipv4.env\n-rw-r--r-- 1 root root 2833 Dec  5  2016 /srv/birdseye/birdseye-as112-lan1-ipv6.env\n-rw-r--r-- 1 root root 2833 Dec  5  2016 /srv/birdseye/birdseye-as112-lan2-ipv4.env\n-rw-r--r-- 1 root root 2833 Dec  5  2016 /srv/birdseye/birdseye-as112-lan2-ipv6.env\n\n###############################################################################\n# Let's see the contents of these:\n#\n# NB: when specifying the BIRDC parameter below, for Bird v1.x.y, use the -4/-6\n# switch when querying the ipv4/6 daemon respectively. For Bird v2.x.y, use\n# the -2 switch.\n#\n$ cat /srv/birdseye/*env | egrep -v '(^#)|(^\\s*$)'\n\nBIRDC=\"/usr/bin/sudo /srv/birdseye/bin/birdc -4 -s /var/run/bird/bird-as112-lan1-ipv4.ctl\"\nCACHE_DRIVER=file\nLOOKING_GLASS_ENABLED=true\nBIRDC=\"/usr/bin/sudo /srv/birdseye/bin/birdc -6 -s /var/run/bird/bird-as112-lan1-ipv6.ctl\"\nCACHE_DRIVER=file\nLOOKING_GLASS_ENABLED=true\nBIRDC=\"/usr/bin/sudo /srv/birdseye/bin/birdc -4 -s /var/run/bird/bird-as112-lan2-ipv4.ctl\"\nCACHE_DRIVER=file\nLOOKING_GLASS_ENABLED=true\nBIRDC=\"/usr/bin/sudo /srv/birdseye/bin/birdc -6 -s /var/run/bird/bird-as112-lan2-ipv6.ctl\"\nCACHE_DRIVER=file\nLOOKING_GLASS_ENABLED=true\n\n###############################################################################\n# Test birdc access to the daemon - run for each socket found above:\n# (only one shown here for brevity - include all in your request for help!)\n$ /usr/sbin/birdc -s /var/run/bird/bird-as112-lan1-ipv4.ctl show status\n\nBIRD 1.6.3 ready.\nBIRD 1.6.3\nRouter ID is 185.6.36.6\nCurrent server time is 2018-02-12 19:42:42\nLast reboot on 2017-11-09 00:23:25\nLast reconfiguration on 2018-02-12 19:26:15\nDaemon is up and running\n\n###############################################################################\n# Have you created the sudo file for www-data to be able to access Birdc?\n$ cat /etc/sudoers /etc/sudoers.d/* | grep birdseye\n\nwww-data        ALL=(ALL)       NOPASSWD: /srv/birdseye/bin/birdc\n\n###############################################################################\n# Does the Bird's Eye client work?\n#\u00a0Run for each socket found above with the appropriate protocol (-4/-6):\n# (only one shown here for brevity - include all in your request for help!)\n\n$ /srv/birdseye/bin/birdc -4 -s /var/run/bird/bird-as112-lan1-ipv4.ctl show status\n\nBIRD 1.6.3 ready.\nAccess restricted\nBIRD 1.6.3\nRouter ID is 185.6.36.6\nCurrent server time is 2018-02-12 19:44:31\nLast reboot on 2017-11-09 00:23:25\nLast reconfiguration on 
2018-02-12 19:26:15\nDaemon is up and running\n\n\n###############################################################################\n# Is the web server running:\n$ netstat -lpn | grep lighttpd\n\ntcp        0      0 10.39.5.6:80            0.0.0.0:*               LISTEN      1165/lighttpd\ntcp6       0      0 2001:7f8:18:5::6:80     :::*                    LISTEN      1165/lighttpd\nunix  2      [ ACC ]     STREAM     LISTENING     13970    635/php-cgi         /var/run/lighttpd/php.socket-0\n\n###############################################################################\n# Is PHP running:\n$ netstat -lpn | grep php\n\nunix  2      [ ACC ]     STREAM     LISTENING     13970    635/php-cgi         /var/run/lighttpd/php.socket-0\n\n###############################################################################\n# what's the web server configuration\n# NB: make sure you have compared it to:\n#   https://github.com/inex/birdseye/blob/master/data/configs/lighttpd.conf\n$ cat /etc/lighttpd/lighttpd.conf\n\n<not included but linked two lines up>\n\n###############################################################################\n# provide the IXP Manager configuration of your router(s):\nmysql> SELECT * FROM routers\\G\n\n<not included - see example at start of this section>\n
"},{"location":"features/mailing-lists/","title":"Mailing List Management","text":"

IXP Manager has the ability to allow users to subscribe / unsubscribe from Mailman mailing lists (it should be relatively easy to expand this to other mailing list managers as the functionality is based on Mailman but not Mailman specific).

The following sections explain the steps in how this is set up.

NB: This facility does not perform a 100% synchronisation. Any mailing list members that are added separately without a matching user in IXP Manager are not interfered with.

"},{"location":"features/mailing-lists/#configuring-available-mailing-lists","title":"Configuring Available Mailing Lists","text":"

There is a sample configuration file which you need to copy as follows:

cd $IXPROOT\ncp config/mailinglists.php.dist config/mailinglists.php\n

You then need to edit this file as follows:

  1. Enable the mailing list functionality by setting this to true:

    // Set the following to 'true' to enable mailing list functionality:\n'enabled' => true,\n

    If this is not set to true, the user will not be offered subscription options and the CLI/API commands will not execute.

  2. Configure the available mailing list(s) in the lists array. Here is an example:

    'lists' => [\n    'members' => [\n        'name'    => \"Members' Mailing List\",\n        'desc'    => \"A longer description as presented in IXP Manager.\",\n        'email'   => \"members@example.com\",\n        'archive' => \"https://www.example.com/mailman/private/members/\",\n    ],\n    'tech' => [\n        'name'    => \"Tech/Operations Mailing List\",\n        'desc'    => \"A longer description as presented in IXP Manager.\",\n        'email'   => \"tech@example.com\",\n        'archive' => \"https://www.example.com/mailman/private/tech/\",\n    ],\n],\n

    Note that the members and tech array keys above are the list handles that will be used by the API interfaces later. It is also important that they match the Mailman list key.

    Historically, mailing list passwords were also sync'd from the IXP Manager user database unless syncpws is both defined and false for the given list. As we are now enforcing bcrypt as the standard password hashing mechanism, we no longer support this and suggest allowing Mailman to manage its own passwords.

  3. Paths to Mailman commands. These will be used in the API/CLI elements later:

    'mailman' => [\n    'cmds' => [\n        'list_members'   => \"/usr/local/mailman/bin/list_members\",\n        'add_members'    => \"/usr/local/mailman/bin/add_members -r - -w n -a n\",\n        'remove_members' => \"/usr/local/mailman/bin/remove_members -f - -n -N\",\n        'changepw'       => \"/usr/local/mailman/bin/withlist -q -l -r changepw\"\n    ]\n]\n
"},{"location":"features/mailing-lists/#explanation-of-usage","title":"Explanation of Usage","text":"

This mailing list synchronisation / integration code was written for existing Mailman lists we have at INEX where some lists are public with subscribers that will never have an account on INEX's IXP Manager. As such, these scripts are written so that email addresses in common between IXP Manager and Mailman can manage their subscriptions in IXP Manager but those other subscribers will be unaffected.

Users in IXP Manager will either be marked as being subscribed to a list, not subscribed to a list or neither (i.e. a new user). Subscriptions are managed by user preferences (in the database) of the format:

mailinglist.listname1.subscribed = 0/1\n
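
For example, a user subscribed to the members list configured earlier would have the preference:

mailinglist.members.subscribed = 1\n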

There are three steps to performing the synchronisation for each list which are done by either using the IXP Manager CLI script artisan mailing-list:... or the API interface.

"},{"location":"features/mailing-lists/#cli-interface-overview","title":"CLI Interface Overview","text":"

NB: these relate to the CLI as implemented from IXP Manager >= v4.7.

  1. The execution of the artisan mailing-list:init script which is really for new IXP Manager users (or initial set up of the mailing list feature). This script is piped the full subscribers list from Mailman (via list_members). This function will iterate through all users and, if they have no preference set for subscription to this list, will either add a \"not subscribed\" preference if their email address is not in the provided list of subscribers or a \"subscribed\" preference if it is.

  2. The execution of the artisan mailing-list:get-subscribers action which lists all users who are subscribed to the given mailing list based on their user preferences. This is piped to the add_members Mailman script.

  3. The execution of the artisan mailing-list:get-subscribers --unsubscribed action which lists all users who are unsubscribed to the given mailing list based on their user preferences. This is piped to the remove_members Mailman script.
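
Putting those three CLI steps together, a manual run might look something like the following sketch. Note that the exact artisan arguments may differ between versions (the canonical script is generated by mailing-list:sync-script as described below); the members list handle and the Mailman paths/flags are taken from the configuration example earlier:

# illustrative sketch only - generate the canonical version with: artisan mailing-list:sync-script --sh\n/usr/local/mailman/bin/list_members members | php artisan mailing-list:init members\nphp artisan mailing-list:get-subscribers members | /usr/local/mailman/bin/add_members -r - -w n -a n members\nphp artisan mailing-list:get-subscribers --unsubscribed members | /usr/local/mailman/bin/remove_members -f - -n -N members\n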

"},{"location":"features/mailing-lists/#api-v4-interface-overview","title":"API V4 Interface Overview","text":"

The API v4 implementation was added in IXP Manager v4.7. See the end of this document for the API v1 implementation in previous versions of IXP Manager.

If you wish to use the API version, proceed as follows where:

Use the initialisation function for new IXP Manager users (or initial set up of the mailing list feature) which updates IXP Manager with all currently subscribed mailing list members:

/path/to/mailman/bin/list_members members >/tmp/ml-members.txt\ncurl -f --data-urlencode addresses@/tmp/ml-members.txt \\\n    -H \"X-IXP-Manager-API-Key: $KEY\" -X POST \\\n    \"https://ixp.example.co/api/v4/mailing-list/init/members\"\nrm /tmp/ml-members.txt\n

Pipe all subscribed users to the add_members Mailman script:

curl -f -H \"X-IXP-Manager-API-Key: $KEY\" -X GET \\\n    \"https://ixp.example.co/api/v4/mailing-list/subscribers/members\" | \\\n    /path/to/mailman/bin/add_members -r - -w n -a n members >/dev/null\n

Pipe all users who are unsubscribed to the remove_members Mailman script:

curl -f -H \"X-IXP-Manager-API-Key: $KEY\" -X GET \\\n    \"https://ixp.example.co/api/v4/mailing-list/unsubscribed/members\" | \\\n    /path/to/mailman/bin/remove_members -f - -n -N members >/dev/null\n
"},{"location":"features/mailing-lists/#how-to-implement","title":"How to Implement","text":"

You can implement mailing list management by configuring IXP Manager as above.

IXP Manager will generate shell scripts to manage all of the above.

Execute the following command for the CLI version (and make sure to update the assignments at the top of the script):

artisan mailing-list:sync-script --sh\n

Or the following for the API V4 version (and make sure to update the assignments at the top of the script):

artisan mailing-list:sync-script\n

This generates a script which performs each of the above four steps for each configured mailing list. If your mailing list configuration does not change, you will not need to rerun this.

You should now put this script into crontab on the appropriate server (same server for CLI!) and run it as often as you feel is necessary. The current success message shown to a user updating their subscriptions says changes take effect within 12 hours, so we'd recommend running it at least twice a day.
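
For example, a crontab entry along these lines would run it twice a day (the script path here is a placeholder for wherever you saved the generated script):

# /etc/cron.d/ixp-mailing-list-sync - illustrative only; adjust the path to your generated script\n15 */12 * * *   root   /usr/local/bin/ixp-mailing-list-sync.sh\n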

"},{"location":"features/mailing-lists/#todo","title":"Todo","text":""},{"location":"features/mailing-lists/#api-v1-interface-overview","title":"API V1 Interface Overview","text":"

DEPRECATED and only available in IXP Manager <v4.7.

The CLI version of mailing list management was presented above. If you wish to use the API version, proceed as follows where:

Use the initialisation function for new IXP Manager users (or initial set up of the mailing list feature) which updates IXP Manager with all currently subscribed mailing list members:

/path/to/mailman/bin/list_members members >/tmp/ml-listname1.txt\ncurl -f --data-urlencode addresses@/tmp/ml-listname1.txt \\\n    \"https://www.example.com/ixp/apiv1/mailing-list/init/key/$MyKey/list/members\"\nrm /tmp/ml-listname1.txt\n

Pipe all subscribed users to the add_members Mailman script:

curl -f \"https://www.example.com/ixp/apiv1/mailing-list/get-subscribed/key/$MyKey/list/members\" | \\\n    /path/to/mailman/bin/add_members -r - -w n -a n members >/dev/null\n

Pipe all users who are unsubscribed to the remove_members Mailman script:

curl -f \"https://www.example.com/ixp/apiv1/mailing-list/get-unsubscribed/key/$MyKey/list/members\" | \\\n    /path/to/mailman/bin/remove_members -f - -n -N members >/dev/null\n
"},{"location":"features/manrs/","title":"MANRS","text":"

MANRS - Mutually Agreed Norms for Routing Security - is a global initiative, supported by the Internet Society, that provides crucial fixes to reduce the most common routing threats.

IXP Manager >=v5.0 rewards networks that have joined the MANRS program by highlighting this on the customer's public and internal pages - for example:

This information is updated daily via the task scheduler. If you want to run it manually, run this Artisan command:

$ php artisan ixp-manager:update-in-manrs -vv\nMANRS membership updated - before/after/missing: 5/5/104\n

As you can see from the sample output, the command reports the before / after / missing membership counts. We will provide more tooling within IXP Manager to show this information in time.

"},{"location":"features/member-export/","title":"Member Export","text":"

The recommended means of exporting member details from IXP Manager is to use the IX-F Member Export tool. We even provide examples of how to use this to create example tables.

However, you may sometimes require additional flexibility which necessitates rolling your own export templates. This Member Export feature will allow you to do this but it does require some PHP programming ability.

This Member Export feature is modeled after the static content tool and you are advised to read that page also.

This feature first appears in v4.8.0 and replaces the deprecated older way of handling this.

"},{"location":"features/member-export/#overview","title":"Overview","text":"

In IXP Manager, there are four types of users as described in the users page. Member export templates can be added which requires a minimum user privilege to access (e.g. priv == 0 would be publicly accessible through to priv == 3 which would require a superadmin).

To create your own member export template, you should first set up skinning for your installation. Let's assume you called your skin example in the following.

To create a publicly accessible member export page called lonap, you would first create a directory structure in your skin as follows:

cd ${IXPROOT}\nmkdir -p resources/skins/example/content/members/{0,1,2,3}\n

where the directories 0, 1, 2, 3 represent the minimum required user privilege to access the template. You can now create your export template page by creating a file:

resources/skins/example/content/members/0/lonap.foil.php\n

and then edit that page. In fact, we have bundled three examples in the following locations:

  1. resources/skins/example/content/members/3/lonap.foil.php: a table that replicates how LONAP have traditionally listed their members (see here). It would be accessed via: https://ixp.example.com/content/members/3/lonap
  2. resources/skins/example/content/members/3/json-example.foil.php: a JSON example of the above. The HTTP response content type is set to JSON when .json is added to the URL. However, you have to ensure your template outputs JSON also. This would be accessed via: https://ixp.example.com/content/members/3/json-example.json
  3. resources/skins/inex/content/members/0/list.foil.php: what we at INEX use to generate this members list. You can access the real data via: https://www.inex.ie/ixp/content/members/0/list.json (note that this is publicly accessible).

The format of the URL to access these member export templates is:

https://ixp.example.com/content/members/{priv}/{page}[.json]\n
"},{"location":"features/nagios/","title":"Nagios Monitoring","text":"

At INEX we use Nagios to monitor a number of production services including:

IXP Manager can generate configuration to monitor the above for you.

NB: IXP Manager will not install and configure Nagios from scratch. You need a working Nagios installation first and then IXP Manager will automate the above areas of the configuration.

"},{"location":"features/nagios/#historical-notes","title":"Historical Notes","text":"

If you have used Nagios on IXP Manager <4.5, then how the configuration is generated has changed. The older documentation may be available here. In previous versions of IXP Manager, we generated entire / monolithic Nagios configuration files. We have found in practice that this does not scale well and creates a number of limitations.

IXP Manager >= v4.5 now simply creates the targets on a per VLAN and protocol basis.

"},{"location":"features/nagios/#sample-scripts","title":"Sample Scripts","text":"

You will find sample scripts for pulling Nagios configuration from IXP Manager and reloading Nagios at:

https://github.com/inex/IXP-Manager/tree/master/tools/runtime/nagios

"},{"location":"features/nagios/#monitoring-member-reachability","title":"Monitoring Member Reachability","text":"

We monitor all member router interfaces (unless asked not to) via ICMP[v6] pings with Nagios. This is all controlled by the Nagios configuration created with this feature.

To enable / disable these checks, edit the VLAN interface configuration and set IPvX Can Ping appropriately. Note that when IPvX Can Ping is disabled, the host definition is created anyway as this would be used for other Nagios checks such as route collector sessions.

There is an additional option when editing a member's VLAN interface called Busy Host. This changes the Nagios ping fidelity from 250.0,20%!500.0,60% to 1000.0,80%!2000.0,90% (using the default object definitions which are configurable). This is useful for routers with slow / rate limited control planes.

Members are added to a number of hostgroups also:

These hostgroups are very useful to single out issues and for post-maintenance checks.

You can use the IXP Manager API to get the Nagios configuration for a given VLAN and protocol using the following endpoint format (both GET and POST requests work):

https://ixp.example.com/api/v4/nagios/customers/{vlanid}/{protocol}\n

where:

If either of these are invalid, the API will return with a HTTP 404 response.
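
For example, to fetch the member reachability configuration for IPv4 on the VLAN with database ID 2 (a sketch - substitute your own hostname, API key and IDs):

curl -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    \"https://ixp.example.com/api/v4/nagios/customers/2/4\"\n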

An example of a target in the response is:

###############################################################################################\n###\n### Packet Clearing House DNS\n###\n### Equinix DB2 (Kilcarbery) / Packet Clearing House DNS / swi1-kcp1-1.\n###\n\n### Host: 185.6.36.60 / inex.woodynet.net / Peering VLAN #1.\n\ndefine host {\n    use                     ixp-manager-member-host\n    host_name               packet-clearing-house-dns-as42-ipv4-vlanid2-vliid109\n    alias                   Packet Clearing House DNS / swi1-kcp1-1 / Peering VLAN #1.\n    address                 185.6.36.60\n}\n\n### Service: 185.6.36.60 / inex.woodynet.net / Peering VLAN #1.\n\ndefine service {\n    use                     ixp-manager-member-ping-service\n    host_name               packet-clearing-house-dns-as42-ipv4-vlanid2-vliid109\n}\n
"},{"location":"features/nagios/#configuring-nagios-for-member-reachability","title":"Configuring Nagios for Member Reachability","text":"

You will notice that the above configuration example is very light and is missing an awful lot of Nagios required configuration directives. This is intentional so that IXP Manager is not too prescriptive and allows you to define your own Nagios objects without having to resort to skinning IXP Manager.

Two of the most important elements of Nagios configuration which you need to understand are object definitions and object inheritance.

You can pass four optional parameters via GET/POST and these are:

  1. host_definition; defaults to: ixp-manager-member-host.
  2. service_definition; defaults to ixp-manager-member-service.
  3. ping_service_definition; defaults to: ixp-manager-member-ping-service.
  4. ping_busy_service_definition; defaults to: ixp-manager-member-ping-busy-service.

An example of changing two of these parameters is:

curl --data \"host_definition=my-host-def&service_definition=my-service-def\" -X POST \\\n    -H \"Content-Type: application/x-www-form-urlencoded\" \\\n    -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixpexample.com/api/v4/nagios/customers/2/4\n

An example of the three objects that INEX use for this are:

define host {\n    name                    ixp-manager-member-host\n    check_command           check-host-alive\n    check_period            24x7\n    max_check_attempts      10\n    notification_interval   120\n    notification_period     24x7\n    notification_options    d,u,r\n    contact_groups          admins\n    register                0\n}\n\ndefine service {\n    name                    ixp-manager-member-service\n    check_period            24x7\n    max_check_attempts      10\n    check_interval          5\n    retry_check_interval    1\n    contact_groups          admins\n    notification_interval   120\n    notification_period     24x7\n    notification_options    w,u,c,r\n    register                0\n}\n\ndefine service {\n    name                    ixp-manager-member-ping-service\n    use                     ixp-manager-member-service\n    service_description     PING\n    check_command           check_ping!250.0,20%!500.0,60%\n    register                0\n}\n\ndefine service {\n    name                    ixp-manager-member-ping-busy-service\n    use                     ixp-manager-member-service\n    service_description     PING-Busy\n    check_command           check_ping!1000.0,80%!2000.0,90%\n    register                0\n}\n
"},{"location":"features/nagios/#templates-skinning","title":"Templates / Skinning","text":"

You can use skinning to make changes to the bundled default template or, preferably, add your own.

Let's say you wanted to add your own template called mytemplate1 and your skin is named myskin. The best way to proceed is to copy the bundled example:

cd $IXPROOT\nmkdir -p resources/skins/myskin/api/v4/nagios/customers\ncp resources/views/api/v4/nagios/customers/default.foil.php resources/skins/myskin/api/v4/nagios/customers/mytemplate1.foil.php\n

You can now edit this template as required. The only constraint on the template name is it can only contain characters from the classes a-z, 0-9, -. NB: do not use uppercase characters.

You can then elect to use this template by tacking the name onto the API request:

https://ixp.example.com/api/v4/nagios/customers/{vlanid}/{protocol}/{template}\n

where, in this example, {template} would be: mytemplate1.

As a policy, INEX tends to use the bundled templates and as such they should be fit for general purpose.

"},{"location":"features/nagios/#switch-monitoring","title":"Switch Monitoring","text":"

We monitor all production peering LAN switches for a number of different services (see below).

IXP Manager produces a host configuration for each production switch such as:

#\n# swi2-dc1-1 - DUB01.XX.YY.ZZ, Data Centre DUB1.\n#\n\ndefine host {\n    use                     ixp-manager-production-switch\n    host_name               swi2-dc1-1.mgmt.inex.ie\n    alias                   swi2-dc1-1\n    address                 192.0.2.4\n    _DBID                   74\n}\n

Switches are added to a number of hostgroups also:

These hostgroups are very useful when defining service checks.

You can use the IXP Manager API to get the Nagios configuration for a given infrastructure using the following endpoint format (both GET and POST requests work):

https://ixp.example.com/api/v4/nagios/switches/{infraid}\n

where:

You can use skinning to make changes to the bundled default template or, preferably, add your own.

Let's say you wanted to add your own template called myswtemplate1 and your skin is named myskin. The best way to proceed is to copy the bundled example:

cd $IXPROOT\nmkdir -p resources/skins/myskin/api/v4/nagios/switches\ncp resources/views/api/v4/nagios/switches/default.foil.php resources/skins/myskin/api/v4/nagios/switches/myswtemplate1.foil.php\n

You can then elect to use this template by tacking the name onto the API request:

https://ixp.example.com/api/v4/nagios/switches/{infraid}/{template}\n

where, in this example, {template} would be: myswtemplate1.

You can pass one optional parameter to Nagios via GET/POST, which is the host definition to inherit from (see customer reachability testing above for full details and examples):

curl --data \"host_definition=my-sw-host-def\" -X POST \\\n    -H \"Content-Type: application/x-www-form-urlencoded\" \\\n    -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixpexample.com/api/v4/nagios/switches/2\n
"},{"location":"features/nagios/#service-checking","title":"Service Checking","text":"

The recommended way to check various services on your production switches is to use the hostgroups created by the above switch API call. Examples of the hostgroups produced include:

  1. ixp-production-switches-infraid-2: all switches on an infrastructure with DB ID 2;
  2. ixp-switches-infraid-2-dc-dub1: all switches in location dc-dub1;
  3. ixp-switches-infraid-2-extreme: all Extreme switches on an infrastructure with DB ID 2; and
  4. ixp-switches-infraid-2-extreme-x670g2-48x-4q: all Extreme switches of model X670G2-48x-4q on an infrastructure with DB ID 2.

Using these, you can create generic service definitions to apply to all hosts such as:

define service{\n    use                             my-ixp-production-switch-service\n    hostgroup_name                  ixp-production-switches-infraid-1, ixp-production-switches-infraid-2\n    service_description             ping - IPv4\n    check_command                   check_ping_ipv4!10!100.0,10%!200.0,20%\n}\n\ndefine service  {\n    use                             my-ixp-production-switch-service\n    hostgroup_name                  ixp-production-switches-infraid-1, ixp-production-switches-infraid-2\n    service_description             SSH\n    check_command                   check_ssh\n}\n

You can target vendor / model specific checks as appropriate:

define service{\n    use                             my-ixp-production-switch-service\n    hostgroup_name                  ixp-switches-infraid-1-extreme, ixp-switches-infraid-2-extreme\n    service_description             Chassis\n    check_command                   check_extreme_chassis\n}\n

The one thing you'll need to keep an eye on is adding hostgroups to service checks as you create new infrastructures / add new switch vendors / models.

Hint: over the years, we at INEX have written a number of switch chassis check scripts and these can be found on Github at barryo/nagios-plugins.

For example, the Extreme chassis check returns something like:

OK - CPU: 5sec - 10%. Uptime: 62.8 days. PSUs: 1 - presentOK: 2 - presentOK:. Overall system power state: redundant power available. Fans: [101 - OK (4311 RPM)]: [102 - OK (9273 RPM)]: [103 - OK (4468 RPM)]: [104 - OK (9637 RPM)]: [105 - OK (4165 RPM)]: [106 - OK (9273 RPM)]:. Temp: 34'C. Memory (slot:usage%): 1:29%.

"},{"location":"features/nagios/#birdseye-daemon-monitoring","title":"Birdseye Daemon Monitoring","text":"

We monitor our Bird instances at INEX directly through Birdseye, the software we use for our looking glass. This means it is currently tightly coupled to Bird and Birdseye until such time as we look at supporting other routing software.

IXP Manager produces a host and service configuration for each router such as:

define host     {\n        use                     ixp-manager-host-birdseye-daemon\n        host_name               bird-rc1q-cork-ipv4\n        alias                   INEX Cork - Quarantine Route Collector - IPv4\n        address                 10.40.5.134\n        _api_url                 http://rc1q-ipv4.cork.inex.ie/api\n}\n\ndefine service     {\n    use                     ixp-manager-service-birdseye-daemon\n    host_name               bird-rc1q-cork-ipv4\n}\n

You can use the IXP Manager API to get the Nagios configuration for all or a given VLAN using the following endpoint format (both GET and POST requests work):

https://ixp.example.com/api/v4/nagios/birdseye-daemons\nhttps://ixp.example.com/api/v4/nagios/birdseye-daemons/{template}\nhttps://ixp.example.com/api/v4/nagios/birdseye-daemons/default/{vlanid}\nhttps://ixp.example.com/api/v4/nagios/birdseye-daemons/{template}/{vlanid}\n

where:

You can use skinning to make changes to the bundled default template or, preferably, add your own.

Let's say you wanted to add your own template called mybetemplate1 and your skin is named myskin. The best way to proceed is to copy the bundled example:

cd $IXPROOT\nmkdir -p resources/skins/myskin/api/v4/nagios/birdseye-daemons\ncp resources/views/api/v4/nagios/birdseye-daemons/default.foil.php resources/skins/myskin/api/v4/nagios/birdseye-daemons/mybetemplate1.foil.php\n

You can then elect to use this template by tacking the name onto the API request:

https://ixp.example.com/api/v4/nagios/birdseye-daemons/{template}\n

where, in this example, {template} would be: mybetemplate1.

You can pass two optional parameters to Nagios via GET/POST, which are the host and service definitions to inherit from (see customer reachability testing above for full details and examples):

curl --data \"host_definition=my-be-host-def&service_definition=my-be-srv-def\" -X POST \\\n    -H \"Content-Type: application/x-www-form-urlencoded\" \\\n    -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixpexample.com/api/v4/nagios/birdseye-daemons\n

The default values for the host and service definitions are ixp-manager-host-birdseye-daemon and ixp-manager-service-birdseye-daemon respectively.

"},{"location":"features/nagios/#service-checking_1","title":"Service Checking","text":"

You will need to create a parent host and service definition for the generated configuration such as:

define host {\n    name                    ixp-manager-host-birdseye-daemon\n    check_command           check-host-alive\n    check_period            24x7\n    max_check_attempts      10\n    notification_interval   120\n    notification_period     24x7\n    notification_options    d,u,r\n    contact_groups          admins\n    register                0\n}\n\ndefine service {\n    name                    ixp-manager-service-birdseye-daemon\n    service_description     Bird BGP Service\n    check_command           check_birdseye_daemon!$_HOSTAPIURL$\n    check_period            24x7\n    max_check_attempts      10\n    check_interval          5\n    retry_check_interval    1\n    contact_groups          admins\n    notification_interval   120\n    notification_period     24x7\n    notification_options    w,u,c,r\n    register                0\n}\n\ndefine command{\n        command_name    check_birdseye_daemon\n        command_line    /usr/local/nagios-plugins-other/nagios-check-birdseye.php -a $ARG1$\n}\n

The Nagios script we use is bundled with inex/birdseye and can be found here.

Typical Nagios state output:

OK: Bird 1.6.2. Bird's Eye 1.0.4. Router ID 192.0.2.126. Uptime: 235 days. Last Reconfigure: 2017-07-17 16:00:04. 26 BGP sessions up of 28.

"},{"location":"features/nagios/#birdseye-bgp-session-monitoring","title":"Birdseye BGP Session Monitoring","text":"

We monitor our Bird route collector, route server and AS112 Bird BGP sessions at INEX directly through Birdseye, the software we use for our looking glass. This means it is currently tightly coupled to Bird and Birdseye until such time as we look at supporting other routing software.

IXP Manager produces a host and service configuration for each router type such as:

### Router: INEX LAN1 - Route Collector - IPv4 / 192.0.2.126.\n\ndefine service     {\n    use                     ixp-manager-member-bgp-session-service\n    host_name               as112-reverse-dns-as112-ipv4-vlanid2-vliid99\n    service_description     BGP session to rc1-lan1-ipv4 (INEX LAN1 - Route Collector - IPv4)\n    _api_url                http://www.example.com/api\n    _protocol               pb_0099_as112\n}\n

The configuration also includes hostgroups for the given VLAN, protocol and type for:

You can use the IXP Manager API to get the Nagios configuration for a given protocol, VLAN and router type using the following templates:

https://ixp.example.com/api/v4/nagios/birdseye-bgp-sessions/{vlanid}/{protocol}/{type}\nhttps://ixp.example.com/api/v4/nagios/birdseye-bgp-sessions/{vlanid}/{protocol}/{type}/{template}\n

where:

You can use skinning to make changes to the bundled default template or, preferably, add your own.

Let's say you wanted to add your own template called myrstemplate1 and your skin is named myskin. The best way to proceed is to copy the bundled example:

cd $IXPROOT\nmkdir -p resources/skins/myskin/api/v4/nagios/birdseye-bgp-sessions\ncp resources/views/api/v4/nagios/birdseye-bgp-sessions/default.foil.php resources/skins/myskin/api/v4/nagios/birdseye-bgp-sessions/myrstemplate1.foil.php\n

You can then elect to use this template by tacking the name onto the API request:

https://ixp.example.com/api/v4/nagios/birdseye-bgp-sessions/{vlanid}/{protocol}/{type}/{template}\n

where, in this example, {template} would be: myrstemplate1.

You can pass one optional parameter to Nagios via GET/POST, which is the service definition to inherit from (see customer reachability testing above for full details and examples):

curl --data \"service_definition=my-rs-srv-def\" -X POST \\\n    -H \"Content-Type: application/x-www-form-urlencoded\" \\\n    -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixpexample.com/api/v4/nagios/birdseye-bgpsessions/2/4/1\n

The default value for the service definition is ixp-manager-member-bgp-session-service.

"},{"location":"features/nagios/#service-checking_2","title":"Service Checking","text":"

You will need to create a parent service definition and a check command for the generated configuration such as:

define service {\n    name                    ixp-manager-member-bgp-session-service\n    service_description     Member Bird BGP Sessions\n    check_period            24x7\n    max_check_attempts      10\n    check_interval          5\n    retry_check_interval    1\n    contact_groups          admins\n    notification_interval   120\n    notification_period     24x7\n    notification_options    w,u,c,r\n    register                0\n    check_command           check_birdseye_bgp_session!$_SERVICEAPI_URL$!$_SERVICEPROTOCOL$\n}\n\ndefine command{\n    command_name    check_birdseye_bgp_session\n    command_line    /path/to/nagios-check-birdseye-bgp-sessions.php -a $ARG1$ -p $ARG2$ -n\n}\n

The Nagios script we use is bundled with inex/birdseye and can be found here.

"},{"location":"features/patch-panels/","title":"Patch Panels / Cross Connects","text":"

One of the more difficult things for an IXP to track is the sheer volume of patch panels / cross connects they need to manage. We have approached the issue a number of times in IXP Manager but abandoned the attempt every time. Typically, our original solutions were over-engineered when all we really needed was something that could replace a spreadsheet per panel.

We believe we have now developed a feature complete and useful means of managing patch panels and associated cross connects.

"},{"location":"features/patch-panels/#features","title":"Features","text":""},{"location":"features/patch-panels/#duplex-fibre-ports","title":"Duplex Fibre Ports","text":"

We suggest adding fibre ports as two ports per duplex port, i.e. if your patch panel has 12 duplex ports, enter this as 24 ports. When allocating ports later, you can set it as a duplex port and select its partner / slave port. This will future-proof your patch panels for the growing use of bidi optics and other simplex-based xWDM fibre solutions.

From our experience, duplex fibre ports are often identified as their individual strands. For example, duplex port 5 would be referenced as F9/F10.

Your mileage may vary on this but we need to allow this flexibility to cover both use-cases. You always have options however:

"},{"location":"features/patch-panels/#adding-a-patch-panel","title":"Adding a Patch Panel","text":"

The following image shows a typical add a new patch panel form (as of v4.3). You'll note that clicking Help provides detailed context aware help messages.

Most of this is self-explanatory but:

In addition to that, we will also use a cabinet's U's are counted from top/bottom setting, and a patch panel's U position and mounted at front/rear, to also create a location description.

  • When setting the Number of Ports, ensure you count duplex fibre ports as two ports. So 12 duplex fibre ports would be entered as 24. When editing a patch panel, this input field represents the number of additional ports you want to add to the patch panel and thus defaults to 0 in that situation.
  • Port Name Prefix: This is an optional field intended for use on fibre patch panels. As an example, you may wish to prefix individual fibre strands in a duplex port with F, which would mean the name of a duplex port would be displayed as F1/F2.
  • Chargeable: Usually IXPs request their members to come to them and bear the costs of that. However, sometimes a co-location facility may charge the IXP for a half circuit, or the IXP may need to order and pay for the connection. Setting this only sets the default option when allocating ports to members later. The options are Yes / No / Half / Other.

"},{"location":"features/patch-panels/#filtering-patch-panel-ports","title":"Filtering Patch Panel Ports","text":"

Quite often, all you are looking for is a free port in a particular location (data centre) or cabinet, of a particular cable type (UTP / SMF / etc.).

IXP Manager makes this easy from the Patch Panels page via the top right button labeled Filter Options. When clicked, this yields an advanced search as follows:

Cabinets auto-fills when you choose (or change) a location.

"},{"location":"features/patch-panels/#patch-panel-port-states","title":"Patch Panel Port States","text":"

A patch panel port can have the following states:

"},{"location":"features/patch-panels/#free-available-ports","title":"Free / Available Ports","text":"

IXP Manager will colour ports in the Available, Prewired and Awaiting Cease states green, allowing an easy visual indication of available ports.

"},{"location":"features/patch-panels/#patch-panel-port-lifecycle","title":"Patch Panel Port Lifecycle","text":"

Patch panel ports start as either available or prewired. The context menu in these states is:

The main lifecycle option here is Allocate:

Once a port is allocated, it enters the Awaiting XConnect / Connected state and there are additional context menu options available:

The three lifecycle actions are:

  1. Set Connected: updates the status (and allows you to add public/private notes). Your next action after this should be Email - Connect.
  2. Set Awaiting Cease: marks the port as pending disconnection. Your next action after this should be Email - Cease.
  3. Set Ceased: this is a transient state in that it doesn't stick. When you mark a patch panel port as ceased, the existing details (including files and notes) are archived as part of the port's history and the port is then cleared and made available again.
"},{"location":"features/patch-panels/#file-attachments","title":"File Attachments","text":"

Over the lifetime of a cross connect - and particularly when it is being ordered - there may be file exchanges between you and your customers or the co-location provider. These can be added to the patch panel port via the Attach File... option.

The dialog will dynamically determine the maximum file upload size based on your PHP settings. To alter this, change the following in your server's appropriate php.ini:

; Maximum allowed size for uploaded files.\nupload_max_filesize = 40M\n\n; Must be greater than or equal to upload_max_filesize\npost_max_size = 40M\n
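
If you are unsure what values are currently in effect, you can, for example, query them from PHP itself (note that the CLI may read a different php.ini to your web server's PHP, so treat this as indicative only):

php -r 'echo ini_get(\"upload_max_filesize\"), \" / \", ini_get(\"post_max_size\"), PHP_EOL;'\n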

Please search the internet for further help as this is outside the scope of IXP Manager documentation.

The following features apply to file attachments:

"},{"location":"features/patch-panels/#loa-generation","title":"LoA Generation","text":"

Many co-location providers will not accept a cross connect order without a LoA (Letter of Agency/Authority) from the party to whom the cross connect is to be delivered (typically the IXP).

IXP Manager will generate a PDF LoA to download or include in the emails it generates for you. Here is an example:

You will need to skin this yourself to change the legalese, address and contact details and potentially add a logo. See the version INEX uses here (which includes an embedded PNG logo).

Note that Dompdf is used to turn the HTML template into a PDF and it has some restrictions.

You have two options for generating LoAs (without having IXP Manager email them):

"},{"location":"features/patch-panels/#loa-verification","title":"LoA Verification","text":"

The stock LoA template (and INEX's version) includes a link that co-location providers can use to verify its veracity.

On successful verification, the end user sees:

"},{"location":"features/patch-panels/#email-generation","title":"Email Generation","text":"

IXP Manager allows you to send four emails related to patch panels / cross connects (port status dependent). These are listed below. The Email editor looks as follows:

Note that:

The four email templates available are:

All email templates can be skinned for your own needs. See INEX's example skins here.

"},{"location":"features/patch-panels/#viewing-patch-panel-details-and-archives","title":"Viewing Patch Panel Details and Archives","text":"

Each patch panel port row has the following button:

The number in the badge indicates how many historical records are available.

Clicking on the button yields the following view screen:

"},{"location":"features/patch-panels/#development-history","title":"Development History","text":"

The Patch Panels functionality was developed during Q1 2017 and added in March 2017. This was made possible from sponsorship which enabled us to hire a new full time developer. We are especially grateful to our sponsors - please see them here.

"},{"location":"features/peering-manager/","title":"Peering Manager","text":"

The Peering Manager is a fantastic tool that allows your members to view and track their peerings with other IXP members. The display is broken into four tabs for each member:

The mechanism for detecting bilateral peers is by observing established TCP sessions between member peering IP addresses on port 179 using sflow. See the peering matrix documentation as setting up the peering matrix will provide all the data needed for the peering manager.

NB: You must check the Peering Manager option when editing VLANs for that VLAN to be included in the peering manager.

The features of the peering manager include:

"},{"location":"features/peering-manager/#required-configuration-settings","title":"Required Configuration Settings","text":"

This feature requires some settings in your .env which you may have already set:

;; the various identity settings\nIDENTITY_...\n\n;; the default VLAN's database ID (get this from the DB ID column in VLANs)\nIDENTITY_DEFAULT_VLAN=1\n

The default peering request email template can be found at resources/views/peering-manager/peering-message.foil.php. You can skin this if you wish but it is generic enough to use as is.

"},{"location":"features/peering-manager/#disabling-the-peering-manager","title":"Disabling the Peering Manager","text":"

You can disable the peering manager by setting the following in .env:

IXP_FE_FRONTEND_DISABLED_PEERING_MANAGER=true\n
"},{"location":"features/peering-manager/#peering-manager-test-mode","title":"Peering Manager Test Mode","text":"

For testing / experimentation purposes, you can enable a test mode which, when enabled, will send all peering requests to the defined test email.

To enable test mode, just set the following in .env:

PEERING_MANAGER_TESTMODE=true\nPEERING_MANAGER_TESTEMAIL=user@example.com\n

When test mode is enabled, there will be a clear alert displayed at the top of the peering manager page.

Normally, the peering manager adds a note to the peer's notes and sets a request last sent date when a peering request is sent. In test mode, this will not happen. If you want this to happen in test mode, set these to true:

PEERING_MANAGER_TESTNOTE=true\nPEERING_MANAGER_TESTDATE=true\n
"},{"location":"features/peering-matrix/","title":"Peering Matrix","text":""},{"location":"features/peering-matrix/#overview","title":"Overview","text":"

The peering matrix system builds up a list of who is peering with whom over your IXP.

There are two primary data sources: route server clients and sflow. Currently, it is assumed that all IXP participants who connect to the route server have an open peering policy and do not filter prefixes.

NB: You must check the Peering Matrix option when editing VLANs for that VLAN to be included in the peering matrix on the frontend.

"},{"location":"features/peering-matrix/#data-source-route-server-clients","title":"Data Source: Route Server Clients","text":"

Route server clients are automatically shown as peering with each other on the peering matrix. No operator input is required for this.

"},{"location":"features/peering-matrix/#data-source-sflow-bgp-session-detection","title":"Data Source: sflow BGP session detection","text":"

IXP Manager can pick out active BGP sessions from an sflow data feed. This is handled using the sflow-detect-ixp-bgp-sessions script. As this is a perl script, it is necessary to install all the perl modules listed in the check-perl-dependencies.pl script.

Sflow is a packet sampling mechanism, which means that it will take some time before the peering database is populated. After 24 hours of operation, the peering database should be relatively complete.

sflow-detect-ixp-bgp-sessions needs its own dedicated sflow data feed, so it is necessary to set up sflow data fan-out using the sflowtool as described in the sflow fan-out section here. INEX normally uses udp port 5501 for its bgp detection sflow feed.
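
As a rough illustration of that fan-out (the listening port 6343 below is simply the conventional sflow port and an assumption here - follow the sflow fan-out documentation for the recommended setup), a single sflowtool instance can replicate the incoming feed to the two local listeners used in the sample configuration further down:

sflowtool -4 -p 6343 -f 127.0.0.1/5500 -f 127.0.0.1/5501\n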

For more information, see the sflow documentation.

Note that the peering matrix functionality depends on SQL triggers which are maintained in the tools/sql/views.sql file. This can be refreshed using the following command:

mysql -u ixp -p ixp < $IXPROOT/tools/sql/views.sql\n
"},{"location":"features/peering-matrix/#configuring-ixpmanagerconf","title":"Configuring ixpmanager.conf","text":"

In addition to the correct SQL configuration for the <sql> section, sflow-detect-ixp-bgp-sessions needs the following options set in the <ixp> section of ixpmanager.conf:

"},{"location":"features/peering-matrix/#sample-ixpmanagerconf","title":"Sample ixpmanager.conf","text":"
<ixp>\n  # location of sflow executable\n  sflowtool = /usr/local/bin/sflowtool\n\n  # sflow listener to p2p rrd exporter, listening on udp port 5500\n  sflowtool_opts = -4 -p 5500 -l\n\n  # sflow listener for BGP peering matrix, listening on udp port 5501\n  sflowtool_bgp_opts = -4 -p 5501 -l\n</ixp>\n
"},{"location":"features/peering-matrix/#testing-the-daemon","title":"Testing the daemon","text":"

The system can be tested using sflow-detect-ixp-bgp-sessions --debug. If it starts up correctly, the script should occasionally print out peering sessions like this:

DEBUG: [2001:db8::ff]:64979 - [2001:db8::7]:179 tcpflags 000010000: ack. database updated.\nDEBUG: [192.0.2.126]:30502 - [192.0.2.44]:179 tcpflags 000010000: ack. database updated.\nDEBUG: [2001:db8::5:0:1]:179 - [2001:db8::4:0:2]:32952 tcpflags 000011000: ack psh. database updated.\n
"},{"location":"features/peering-matrix/#running-the-daemon-in-production","title":"Running the daemon in production","text":"

The script control-sflow-detect-ixp-bgp-sessions should be copied (and edited if necessary) to the operating system startup directory so that sflow-detect-ixp-bgp-sessions is started as a normal daemon.

"},{"location":"features/peering-matrix/#controlling-access-to-the-peering-matrix","title":"Controlling Access to the Peering Matrix","text":"

The peering matrix is publicly available by default. However you can limit access to a minimum user privilege by setting PEERING_MATRIX_MIN_AUTH to an integer from 0 to 3 in your .env. See here for what these integers mean. For example, to limit access to any logged in user, set the following:

PEERING_MATRIX_MIN_AUTH=1\n

You can disable the peering matrix by setting the following in .env:

IXP_FE_FRONTEND_DISABLED_PEERING_MATRIX=true\n
"},{"location":"features/peering-matrix/#troubleshooting","title":"Troubleshooting","text":"

This probably means that it's not getting an sflow feed. Check to ensure that sflowtool is feeding the script correctly by running sflow-detect-ixp-bgp-sessions --insanedebug. This should print out what the script is reading from sflowtool. Under normal circumstances, this will be very noisy.

If the IP addresses match those on the IXP's peering LAN, then the IP address database is not populated correctly. This can be fixed by entering the IXP's addresses in the IP Addressing menu of the web UI.

Check that the tools/sql/views.sql file has been imported into the SQL database.

"},{"location":"features/peeringdb-oauth/","title":"PeeringDB - OAuth","text":"

IXP Manager can authenticate users via their PeeringDB account and affiliations (from V5.2.0). This is hugely beneficial for your customers who are members of multiple IXPs which use IXP Manager - it means they only need their PeeringDB account to access their portal at each of those IXs.

NB: this feature is not set up by default as it requires some configuration on both PeeringDB and your IXP Manager installation.

"},{"location":"features/peeringdb-oauth/#security","title":"Security","text":"

Is there a security risk?

Well, we at INEX do not think so and we have developed and enabled this feature for our members.

By enabling PeeringDB OAuth, you are creating a path that delegates authentication and authorization of users on your platform to PeeringDB. This is not a decision that was rushed into. It is particularly notable that PeeringDB is the industry-standard database for network operators and the PeeringDB team take the job of assessing whether someone should be affiliated with a network seriously.

We at INEX also discussed this functionality with a wide variety of people in our industry and the opinion was overwhelmingly in favor with no known dissenters.

Lastly, we have developed this in a security-conscious way. New users get read-only access by default. All information from PeeringDB is validated and a number of other confirmatory steps are taken. You can read all about this in the OAuth User Creation section below.

"},{"location":"features/peeringdb-oauth/#configuring-peeringdb-for-oauth","title":"Configuring PeeringDB for OAuth","text":"

There are two steps to configuring OAuth for PeeringDB - first set it up on PeeringDB and then, using the tokens generated there, configure IXP Manager.

"},{"location":"features/peeringdb-oauth/#configuring-peeringdb","title":"Configuring PeeringDB","text":"

The first step is to create your IXP Manager OAuth application through your PeeringDB account:

  1. Log into your PeeringDB account at https://www.peeringdb.com/login
  2. Access the OAuth applications page by either:
    • browsing directly to: https://www.peeringdb.com/oauth2/applications/ after logging in; or
    • access your profile by clicking on your username on the top right and then click on Manage OAuth Applications on the bottom left.
  3. Click [New Application].
  4. Complete the form as follows:
    • Set the name (e.g. IXP Manager).
    • Record the client ID (needed for IXP Manager's configuration).
    • Record the client secret (needed for IXP Manager's configuration).
    • Set Client type to Public.
    • Set Authorization grant type to Authorization code.
    • For Redirect urls, you need to provide the fully qualified path to the /auth/login/peeringdb/callback action on your IXP Manager installation. For example, if the base URL of your IXP Manager installation is https://www.example.com then set redirect URL to https://www.example.com/auth/login/peeringdb/callback. Note that for OAuth, it is mandatory to use https:// (encryption).
  5. Click [Save].

Here is a sample form on PeeringDB:

"},{"location":"features/peeringdb-oauth/#configuring-ixp-manager","title":"Configuring IXP Manager","text":"

To enable OAuth with PeeringDB on IXP Manager, set the following options in your .env file:

AUTH_PEERINGDB_ENABLED=true\n\nPEERINGDB_OAUTH_CLIENT_ID=\"xxx\"\nPEERINGDB_OAUTH_CLIENT_SECRET=\"xxx\"\nPEERINGDB_OAUTH_REDIRECT=\"https://www.example.com/auth/login/peeringdb/callback\"\n

while replacing the configuration values with those from the PeeringDB set-up above.

Once this is complete, you'll find a new option on IXP Manager's login form:

By default, new users are created on IXP Manager as read-only customer users. You can change this to read-write customer admin users by additionally setting the following option:

AUTH_PEERINGDB_PRIVS=2\n
"},{"location":"features/peeringdb-oauth/#disabling-on-a-per-customer-basis","title":"Disabling On a Per-Customer Basis","text":"

If PeeringDB OAuth is configured and enabled (AUTH_PEERINGDB_ENABLED=true) then it is enabled for all customers. However, you may encounter a customer who does not want OAuth access enabled on their account. In this situation, IXP Manager allows you to disable OAuth on a per-customer basis when adding or editing customers.

Just uncheck the following option on the add / edit customer page:

"},{"location":"features/peeringdb-oauth/#oauth-user-creation","title":"OAuth User Creation","text":"

When IXP Manager receives an OAuth login request from PeeringDB, it goes through a number of validation, creation and deletion steps:

  1. Ensure there is a valid and properly formatted user data object from PeeringDB.
  2. Ensure that both the PeeringDB user account and the PeeringDB email address are verified.
  3. Validate the set of affiliated ASNs from PeeringDB and ensure at least one matches a network configured on IXP Manager.
  4. Load or create a user on IXP Manager with a matching PeeringDB user ID. Whether the user already existed or needs to be created, the name and email are updated to match PeeringDB. If the user is to be created then:
    • username is set from the PeeringDB provided name (or unknownpdbuser) using s/[^a-z0-9\._\-]/./ with an incrementing integer concatenated as necessary for uniqueness (see the illustrative one-liner after this list).
    • database column user.peeringdb_id set to PeeringDB's user ID.
    • cryptographically secure random password set (user not provided with this - they will need to do a password reset to set their own password if so desired).
    • database column user.creator set to OAuth-PeeringDB.
  5. Iterate through the user's current affiliated customers on IXP Manager and remove any that were previously added by the PeeringDB OAuth process but are no longer in PeeringDB's affiliated networks list.
  6. Iterate through PeeringDB's affiliated networks list and identify those that are not already linked in IXP Manager - the potential new networks list.
  7. For each network in the potential new networks list, affiliate it with the user if:
    • the network exists on IXP Manager;
    • the network is a peering network (customer type full or pro-bono);
    • the network state in IXP Manager is Normal; and
    • the network is active (not cancelled).
  8. If at the end of this process, the user is left with no affiliated networks, the user is deleted.
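
Purely to illustrate the character substitution part of that username rule (this is not code from IXP Manager - just the same s/[^a-z0-9\._\-]/./ rule expressed as a shell one-liner with a made-up example name; the incrementing-integer uniqueness step is not shown):

echo \"Joe Bloggs\" | tr 'A-Z' 'a-z' | sed -e 's/[^a-z0-9._-]/./g'\n# joe.bloggs\n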
"},{"location":"features/peeringdb-oauth/#identifying-oauth-users","title":"Identifying OAuth Users","text":"

If AUTH_PEERINGDB_ENABLED is enabled in your .env, you will see a column called OAuth in the Users list table (accessed via the left hand side menu). This will indicate if the user was created by OAuth (Y) or not (N).

When viewing a user's details (eye button on the users list), it will show how the user was created and also how the user was affiliated with a particular customer. The same is also shown when editing users.

"},{"location":"features/peeringdb-oauth/#historical-notes","title":"Historical Notes","text":"

PeeringDB OAuth with IXP Manager as an idea dates from early 2017 when Job Snijders proposed it in GitHub issue peeringdb/peeringdb#131. We recognized the benefits immediately and opened a parallel ticket at inex/IXP-Manager#322. The background discussions at this point were that PeeringDB would be prepared to invest developer time if IXP Manager committed to implementing it. We both did.

PeeringDB's OAuth documentation can be found here.

As part of the development process, we wrote a provider for the Laravel Socialite package which was merged into that package via the SocialiteProviders/Providers#310 pull request.

"},{"location":"features/peeringdb-oauth/#development-notes","title":"Development Notes","text":"

Testing in development needs to be set up following the instructions above. While PeeringDB has a beta site, the actual OAuth URL is hard-coded into Socialite. You can test against production or edit the two URLs in this file: data/SocialiteProviders/src/PeeringDB/Provider.php.

For local testing, you'll need both SSL and a way for PeeringDB to redirect back to you. valet share from Laravel Valet is perfect for this.

"},{"location":"features/peeringdb/","title":"PeeringDB","text":"

PeeringDB is a freely available, user-maintained, database of networks and interconnection data. The database facilitates the global interconnection of networks at Internet Exchange Points (IXPs), data centers, and other interconnection facilities. The database is a non-profit, community-driven initiative run and promoted by volunteers. It is a public tool for the growth and good of the internet.

IXP Manager uses PeeringDB in a number of current (and planned) ways.

"},{"location":"features/peeringdb/#oauth-user-authentication","title":"OAuth - User Authentication","text":"

IXP Manager can authenticate users via their PeeringDB account and affiliations. Please see this page for full details and instructions.

"},{"location":"features/peeringdb/#population-of-data-when-adding-customers","title":"Population of Data When Adding Customers","text":"

Much of the information required to add new customers to IXP Manager can be prepopulated from PeeringDB by entering the customer's ASN into the box provided on the add customer page:

For maximum benefit, you should configure a user on PeeringDB for your IXP Manager installation and set these details in the .env file:

#######################################################################################\n# PeeringDB Authentication\n#\n# PeeringDb's API is used, for example, to pre-populate new customer details. If you\n# provide a working PeeringDb username/password then these will be used to get more\n# complete information.\n#\nIXP_API_PEERING_DB_USERNAME=username\nIXP_API_PEERING_DB_PASSWORD=password\n
"},{"location":"features/peeringdb/#syncing-ixp-owned-data-to-peeringdb-customer-records","title":"Syncing IXP Owned Data to PeeringDB Customer Records","text":"

PeeringDB can take data from your IXP Manager installation via the IX-F Export schema that is part of IXP Manager. PeeringDB requires two things for this to work:

First, you need to add your IX-F Export URL to your PeeringDB IX LAN entry:

Secondly, and unfortunately, each individual network must opt in to having their data updated by this mechanism (data meaning connection to the exchange, peering IPs and port speeds). They do this via this option on their network page:

"},{"location":"features/peeringdb/#asn-detail","title":"ASN Detail","text":"

In most places in IXP Manager, you can click on an ASN number. This uses PeeringDB's whois service to provide a quick view of that network's PeeringDB record.

Here's a squashed screen shot just to illustrate the feature:

"},{"location":"features/peeringdb/#existence-of-peeringdb-records","title":"Existence of PeeringDB Records","text":"

On the customer overview page from IXP Manager v5.0, we provide an indication (yes/no) as to whether a customer has a PeeringDB record. Generally, it is important for IXPs to encourage their members to create PeeringDB entries to ensure the IXP is properly represented in the PeeringDB database.

Whether a customer has a PeeringDB entry is updated daily via the task scheduler. If you want to run it manually, run this Artisan command:

$ php artisan ixp-manager:update-in-peeringdb -vv\nPeeringDB membership updated - before/after/missing: 92/92/17\n

As you will see from the output, the command reports the before / after / missing counts. We will provide more tooling within IXP Manager to show this information in time.

"},{"location":"features/peeringdb/#facilities","title":"Facilities","text":"

When you add facilities (locations / points of presence / data centers) to IXP Manager, it pulls a list of known facilities from PeeringDB. You should select the correct one from the list (or add a missing one to PeeringDB) when you add/edit your facilities.

Note that this list is cached for two hours.

"},{"location":"features/peeringdb/#infrastructures-and-peeringdb-ixp-entry","title":"Infrastructures and PeeringDB IXP Entry","text":"

Similarly to adding facilities above, when you add an infrastructure (IXP) to IXP Manager, it pulls a list of known IXPs from PeeringDB. You should select the correct one from the list (or add a missing one to PeeringDB) when you add/edit your infrastructures.

Note that this list is cached for two hours.

"},{"location":"features/provisioning/","title":"Automated Provisioning","text":""},{"location":"features/provisioning/#introduction","title":"Introduction","text":"

At INEX, we have been using IXP Manager for automated provisioning of our peering platform since 2017. We have published all the provisioning templates we use in production here.

Info

You are welcome to have a look at what's here and contribute feedback via the issues page or the ixpmanager mailing list. Having said that, provisioning is complicated and very specific to individual IXPs. Even if your IXP is running with the same two network operating systems that are in this repo, it is unlikely that people have the resources freely available to be able to make this work for you.

This page has two sections:

  1. An overview of INEX's own templates.
  2. A description of the API endpoints that INEX uses, which should allow automated provisioning using any other system besides the one INEX uses.
"},{"location":"features/provisioning/#overview-on-inexs-templates","title":"Overview on INEX's Templates","text":"

The templates INEX has published at the above link provide configuration support for Arista EOS and Cumulus Linux 3.4+/4.0+ devices using SaltStack.

The Arista EOS implementation uses NAPALM and can be easily modified for any other operating system which supports either NAPALM or netmiko.

The Cumulus Linux template implementation uses native SaltStack support, and treats the Cumulus Linux switch like any other Linux device. For an IXP, you need CL >= 3.4.2.

At the bare minimum, in order to make these work, you will need to be completely fluent with NAPALM and advanced use of SaltStack, including how to configure and maintain salt proxies. If you have multiple IXP configurations (e.g. live / test environments), you will also need to be fluent with the idea of multiple salt environments.

A good starting point would be Mircea Ulinic's guides for integrating SaltStack and NAPALM. For a bigger-picture overview about how these templates hang together, we've done some presentations - see the 2017 and 2018 talks on automation on the IXP Manager website.

Note that there is no information in these presentations about the nitty gritty of getting all this stuff to work. The Apricot 2018 presentation involves lots of cheery handwaving and high level overview stuff, but very little detail other than some sample command-lines that we use.

In 2023/4 we hope to design a workshop / tutorial videos on this topic.

"},{"location":"features/provisioning/#api-endpoints","title":"API Endpoints","text":"

The API endpoints documented below should provide everything you need to provision all aspects of an IXP fabric.
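
All of the provisioning endpoints referenced in the subsections below are consumed with a plain authenticated HTTP GET/POST. Assuming they are secured with the same X-IXP-Manager-API-Key header as the Nagios endpoints above (an assumption here - check each subsection for the exact endpoint URL), the general pattern for pulling one of them into your provisioning system is simply:

curl -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixp.example.com/api/v4/{provisioning-endpoint-as-documented-below} > output.yaml\n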

The INEX sample templates we reference below will get their dynamic information from two sources:

  1. A static file of variables - see this SaltStack example: variables.j2; and
  2. IXP Manager API endpoints as documented.
"},{"location":"features/provisioning/#base-switch-configuration","title":"Base Switch Configuration","text":"

Sample output:

switch:\n  name: swi1-exp1-1\n  asn: 65000\n  hostname: swi1-exp1-1.mgmt.example.com\n  loopback_ip: 192.0.2.1\n  loopback_name: Loopback0\n  ipv4addr: 192.0.2.100\n  model: DCS-7280SR-48C6\n  active: true\n  os: EOS\n  id: 72\n  macaddress: 11:22:33:44:55:66\n  lastpolled: \"2023-04-21T09:31:11+01:00\"\n  osversion: 4.25.4M\n  snmpcommunity: supersecret\n

All of this data comes from the switch settings in IXP Manager. The ipv4addr is the management address.

As well as the base configuration shown in the template above, this information could also be used to provision:

"},{"location":"features/provisioning/#layer3-interfaces","title":"Layer3 Interfaces","text":"

Sample output:

layer3interfaces:\n- ipv4: 192.0.2.21/31\n  description: 'LAN1: swi1-exp1-3 - swi1-exp1-1'\n  bfd: true\n  speed: 100000\n  name: Ethernet51/1\n  autoneg: true\n  shutdown: false\n- ipv4: 192.0.2.33/31\n  description: 'LAN1: swi1-exp2-3 - swi1-exp1-1'\n  bfd: true\n  speed: 100000\n  name: Ethernet53/1\n  autoneg: true\n  shutdown: false\n- description: Loopback interface\n  loopback: true\n  ipv4: 192.0.2.1/32\n  name: Loopback0\n  shutdown: false\n

This API is used to set up the basic layer3 interface elements that are required in future stages to create a VXLAN overlay. The data comes from two sources on IXP Manager:

"},{"location":"features/provisioning/#vlans","title":"VLANs","text":"

Sample output:

vlans:\n- name: IXP LAN1\n  tag: 10\n  private: false\n  config_name: vl_peeringlan1\n- name: Quarantine LAN1\n  tag: 11\n  private: false\n  config_name: vl_quarantinelan1\n- name: VoIP Peering LAN1\n  tag: 12\n  private: false\n  config_name: VOIPPeeringLAN1\n

This information comes from the VLAN configuration on IXP Manager. The INEX sample template also configures VXLAN with this information.

"},{"location":"features/provisioning/#layer2-interfaces","title":"Layer2 Interfaces","text":"Info

Despite the template being called cust interfaces, this API endpoint is for both customer interfaces and layer2 core interfaces.

Sample output:

layer2interfaces:\n- type: edge\n  description: Sample Member - No LAG\n  dot1q: false\n  virtualinterfaceid: 26\n  lagframing: false\n  vlans:\n  - number: 10\n    macaddresses:\n    - 22:33:44:55:66:77\n    ipaddresses:\n      ipv4: 198.51.100.23\n      ipv6: 2001:db8::23\n  shutdown: false\n  status: connected\n  name: \"1:3\"\n  speed: 10000\n  autoneg: true\n- type: edge\n  description: Sample Member - LAG\n  dot1q: false\n  virtualinterfaceid: 251\n  lagframing: true\n  lagindex: 3\n  vlans:\n  - number: 10\n    macaddresses:\n    - 33:44:55:66:77:88\n    ipaddresses:\n      ipv4: 198.51.100.108\n      ipv6: 2001:db8::108\n  name: Port-Channel3\n  lagmaster: true\n  fastlacp: false\n  lagmembers:\n  - Ethernet5\n  - Ethernet6\n  shutdown: false\n  status: connected\n- type: edge\n  description: Sample Member - LAG\n  dot1q: false\n  virtualinterfaceid: 251\n  lagframing: true\n  lagindex: 3\n  vlans:\n  - number: 10\n    macaddresses:\n    - 33:44:55:66:77:88\n    ipaddresses:\n      ipv4: 198.51.100.108\n      ipv6: 2001:db8::108\n  name: Ethernet5\n  lagmaster: false\n  fastlacp: false\n  shutdown: false\n  status: connected\n  autoneg: true\n  speed: 10000\n  rate_limit: ~\n- type: edge\n  description: Sample Member - LAG\n  dot1q: false\n  virtualinterfaceid: 251\n  lagframing: true\n  lagindex: 3\n  vlans:\n  - number: 10\n    macaddresses:\n    - 33:44:55:66:77:88\n    ipaddresses:\n      ipv4: 198.51.100.108\n      ipv6: 2001:db8::108\n  name: Ethernet6\n  lagmaster: false\n  fastlacp: false\n  shutdown: false\n  status: connected\n  autoneg: true\n  speed: 10000\n  rate_limit: ~\n- type: core\n  description: 'LAN1: swi1-exp2-3 to swi1-exp1-1 - Sample Core L2 Link'\n  dot1q: true\n  stp: false\n  cost: ~\n  preference: ~\n  virtualinterfaceid: 439\n  corebundleid: 30\n  lagframing: true\n  lagindex: 1000\n  vlans:\n  - number: 10\n    macaddresses: []\n  - number: 11\n    macaddresses: []\n  - number: 12\n    macaddresses: []\n  name: Port-Channel1000\n  lagmaster: true\n  fastlacp: true\n  lagmembers:\n  - \"Ethernet48\"\n  shutdown: false\n- type: core\n  description: 'LAN1: swi1-exp2-3 to swi1-exp1-1 - Sample Core L2 Link'\n  dot1q: true\n  stp: false\n  cost: ~\n  preference: ~\n  virtualinterfaceid: 439\n  corebundleid: 30\n  lagframing: true\n  lagindex: 1000\n  vlans:\n  - number: 10\n    macaddresses: []\n  - number: 11\n    macaddresses: []\n  - number: 12\n    macaddresses: []\n  name: \"Ethernet48\"\n  lagmaster: false\n  fastlacp: true\n  shutdown: false\n  autoneg: true\n  speed: 40000\n

The data comes from two sources on IXP Manager:

"},{"location":"features/provisioning/#bgp","title":"BGP","text":"

Sample output:

bgp:\n  floodlist:\n  - 192.0.2.2\n  - 192.0.2.12\n  - 192.0.2.10\n  - 192.0.2.11\n  - 192.0.2.40\n  - 192.0.2.20\n  - 192.0.2.0\n  - 192.0.2.60\n  - 192.0.2.22\n  - 192.0.2.82\n  - 192.0.2.23\n  - 192.0.2.42\n  - 192.0.2.15\n  - 192.0.2.16\n  - 192.0.2.17\n  - 192.0.2.18\n  adjacentasns:\n    65082:\n      description: swi1-exp1-3\n      asn: 65082\n      cost: 100\n      preference: ~\n    65002:\n      description: swi1-exp2-3\n      asn: 65002\n      cost: 850\n      preference: ~\n  routerid: 192.0.2.1\n  local_as: 65000\n  out:\n    pg-ebgp-ipv4-ixp:\n      neighbors:\n        192.0.2.120:\n          description: swi1-exp1-3\n          remote_as: 65082\n          cost: 100\n          preference: ~\n        192.0.2.132:\n          description: swi1-exp2-3\n          remote_as: 65002\n          cost: 850\n          preference: ~\n

This completes the layer2 underlay for VXLAN. The sources of information for this are the switches and core bundles in IXP Manager.

"},{"location":"features/reseller/","title":"Reseller Functionality","text":"

Reseller mode must be explicitly enabled with a .env option:

IXP_RESELLER_ENABLED=true\n
"},{"location":"features/reseller/#introduction","title":"Introduction","text":"

In our model, a resold member is still a fully fledged member, they just happen to reach the exchange via someone else's network. You / we would still have a relationship with the member independent of the reseller and would still be required to carry out the standard turn up (for us, this includes IP assignment, quarantine procedures, route collector session, route server sessions if appropriate, etc.).

IXP Manager's functionality is simply to:

"},{"location":"features/reseller/#features","title":"Features","text":""},{"location":"features/reseller/#reseller-and-fanout-ports","title":"Reseller and Fanout Ports.","text":"

For resellers, we need to enforce the one port - one mac - one address rule on the peering LAN.

Depending on switch technology, this can be done using

Currently the schema cannot adequately handle a virtual ethernet port.

Typically, we'd assign a dedicated switch (or bunch of switch ports) as a fanout switch with a reseller uplink port (or LAG). The reseller delivers their customer traffic in dedicated VLANs over this uplink port. We then break each individual customer's traffic into dedicated fanout ports. These physical fanout ports have a one to one relationship with peering ports for that customer (these can be single physical ports or LAGs).

The reseller functionality includes:

"},{"location":"features/reseller/#options","title":"Options","text":"

The following are set in .env:

To enable reseller functionality, set the following (shown here with its default value) to true:

IXP_RESELLER_ENABLED=false\n

If your resold customers are billed directly by the reseller and not the IXP, set this to true to remove billing details from their admin and member areas.

IXP_RESELLER_RESOLD_BILLING=false\n
"},{"location":"features/reseller/#coding-hints","title":"Coding Hints","text":"

In the (older Zend Framework) controllers, you can execute reseller code via:

if( $this->resellerMode() ) {\n    // your reseller specific code here\n}\n

And in (the older Zend Framework) Smarty templates, you can add reseller only content via:

{if $resellerMode}\n    <!-- Your reseller content -->\n{/if}\n

If you have a $customer entity, you can see if it is a reseller via:

if( $customer->isReseller() ) {}\n

To see if a customer is a resold customer or get the reseller customer entity:

if( $customer->getReseller() ) {} // returns false if not a resold customer\n

Finally, to get all resold customer entities of a reseller:

$customer->getResoldCustomers()\n

Reseller functionality was added jointly by INEX and LONAP in June 2013.

"},{"location":"features/rir-objects/","title":"RIR Objects","text":"

IXP Manager can generate (and email) your RIR objects - for example your AS-SETs, AS object, etc - to your RIR for automatic updates / maintenance.

As a concrete example of this, see how INEX do this with our RIPE objects as follows:

Some RIRs, such as RIPE, have a facility to update these objects by email.

"},{"location":"features/rir-objects/#configuration","title":"Configuration","text":"

The general form of the Artisan command is:

$ php artisan rir:generate-object --send-email      \\\n    --to=test-dbm@ripe.net                        \\\n    --from me@example.com  autnum\n

You can see the options by using the standard -h help switch with Artisan:

$ php artisan rir:generate-object -h\nUsage:\n  rir:generate-object [options] [--] <object>\n\nArguments:\n  object                The RIR object template to use\n\nOptions:\n      --send-email      Rather than printing to screen, sends and email for updating a RIR automatically\n      --force           Send email even if it matches the cached version\n      --to[=TO]         The email address to send the object to (if not specified then uses IXP_API_RIR_EMAIL_TO)\n      --from[=FROM]     The email address from which the email is sent (if not specified, tries IXP_API_RIR_EMAIL_FROM and then defaults to IDENTITY_EMAIL)\n  -h, --help            Display this help message\n  -q, --quiet           Do not output any message\n\nHelp:\n  This command will generate and display a RIR object (and optionally send by email)\n

You will note that without the --send-email switch, the command will print to standard output, allowing you to consume the object and use it in another way.

NB: the generated object is stored in the cache when it is generated with --send-email for the first time. Future runs with --send-email will only resend the email if the generated object differs from the cached version. You can force an email to be sent with --force. Secondly, the cache used is a file system based cache irrespective of the CACHE_DRIVER .env settings. To wipe it, run: artisan cache:clear file.

The following options are available for use in the .env file:

#######################################################################################\n# Options for updating RIR Objects - see https://docs.ixpmanager.org/features/rir-objects/\n\n# Your RIR password to allow the updating of a RIR object by email:\nIXP_API_RIR_PASSWORD=soopersecret\n\n# Rather than specifying the destination address on the command line, you can set it here\n# (useful for cronjobs and required for use with artisan schedule:run in >=v5.0)\nIXP_API_RIR_EMAIL_TO=test-dbm@ripe.net\n\n# Rather than specifying the from address on the command line, you can set it here\n# (useful for cronjobs and required for use with artisan schedule:run in >=v5.0)\nIXP_API_RIR_EMAIL_FROM=ixp@example.com\n
"},{"location":"features/rir-objects/#objects-and-templates","title":"Objects and Templates","text":"

There are a number of predefined objects available under resources/views/api/v4/rir and skinning is the recommended way to add / edit these objects.

You can copy an existing template or create a new one. For example, if you wanted a template called my-as-set, you would create it under resources/skins/example/api/v4/rir/my-as-set.foil.php and then specify it to the Artisan command as:

$ php artisan rir:generate-object my-as-set\n

The template name must be lowercase, and contain only the characters: 0-9 a-z _ -.
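
To create that skinned template in the first place, you can follow the same pattern as the Nagios skinning examples above. For instance, copying one of the bundled objects (the source filename below assumes the bundled as-set-ixp-connected template mentioned later on this page - adjust the skin name and filenames to suit):

cd $IXPROOT\nmkdir -p resources/skins/example/api/v4/rir\ncp resources/views/api/v4/rir/as-set-ixp-connected.foil.php resources/skins/example/api/v4/rir/my-as-set.foil.php\n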

"},{"location":"features/rir-objects/#available-template-variables","title":"Available Template Variables","text":""},{"location":"features/rir-objects/#predefined-templates-objects","title":"Predefined Templates / Objects","text":""},{"location":"features/rir-objects/#autnum","title":"autnum:","text":"

You'll find a standard template for an autnum: object at resources/views/api/v4/rir/autnum.foil.php; as well as INEX's own versions under resources/skins/inex/api/v4/rir/autnum-as2128.foil.php and autnum-as43760.foil.php for the IXP route collector and route servers respectively.

Just copy one of these to your own skin directory and edit as appropriate.

"},{"location":"features/rir-objects/#as-set-connected-asns","title":"as-set: - Connected ASNs","text":"

You can create an AS-SET of connected ASNs / AS macros (see INEX's AS-SET-INEX-CONNECTED as an example) via the example template as-set-ixp-connected.

"},{"location":"features/rir-objects/#as-set-route-server-asns","title":"as-set: - Route Server ASNs","text":"

You can create an AS-SET of ASNs / AS macros connected to the route servers (see AS-SET-INEX-RS as an example) via the example template as-set-ixp-rs.

There are also templates for v4-only and v6-only versions: as-set-ixp-rs-v4 and as-set-ixp-rs-v6.

"},{"location":"features/route-collectors/","title":"Route Collectors","text":"

Prerequisite Reading: Ensure you first familiarize yourself with the generic documentation on managing and generating router configurations here.

Route collectors are an important member setup, diagnostic and metric tool for IXPs. Route collectors accept all routes and advertise none. IXP Manager will generate route collector configuration for you. You can see an example of this generated configuration here.

At INEX we use the Bird BGP daemon as our collector using the stock configuration as generated by IXP Manager. We also use this same setup for the quarantine LAN collectors. You can see our live looking glasses here for more information.

"},{"location":"features/route-collectors/#setting-up","title":"Setting Up","text":"

You first need to add your route collector(s) to the IXP Manager routers database. See this page on how to do that.

Typically an IXP's route collector service will share the ASN of the IXP's own management network (but be different to the route server entry). You should also add the route collectors to the initial internal customer representing your IXP in IXP Manager. Here's INEX's example from our peering LAN1 in Dublin:

"},{"location":"features/route-collectors/#other-information","title":"Other Information","text":""},{"location":"features/route-collectors/#quarantine","title":"Quarantine","text":"

We also use a quarantine route collector when provisioning new member connections. This is a Bird BGP daemon running on a virtual machine on our quarantine LAN. For us, this quarantine LAN is:

When adding routers to IXP Manager, setting the quarantine flag means that the configuration will only contain interfaces that are on the quarantine VLAN.

"},{"location":"features/route-servers/","title":"Route Servers","text":"

Prerequisite Reading: Ensure you first familiarize yourself with the generic documentation on managing and generating router configurations here.

Normally on a peering exchange, all connected parties establish bilateral peering relationships with each of the other customers connected to the exchange. As the number of connected parties increases, it becomes increasingly difficult to manage peering relationships with customers of the exchange. A typical peering exchange full-mesh eBGP configuration might look something similar to the diagram on the left hand side.

The full-mesh BGP session relationship scenario requires that each BGP speaker configure and manage BGP sessions to every other BGP speaker on the exchange. In this example, a full-mesh setup requires 7 BGP sessions per member router, and this increases every time a new member connects to the exchange.

However, by using route servers for peering relationships, the number of BGP sessions per router stays at two: one for each route server (assuming a resilient setup). Clearly, this is a more sustainable way of maintaining IXP peering relationships with a large number of participants.

"},{"location":"features/route-servers/#configuration-generation-features","title":"Configuration Generation Features","text":"

Please review the generic router documentation to learn how to automatically generate route server configurations. This section goes into a bit more specific detail on INEX's route server configuration (as shipped with IXP Manager) and why it's safe to use.

You should also look at the following resources:

The features of the route server configurations that IXP Manager generates include:

With Bird v2 support in IXP Manager >= v5, we provide better looking glass integration and other tooling to show members which prefixes are filtered and why.

"},{"location":"features/route-servers/#filtering-algorithm","title":"Filtering Algorithm","text":"

The Bird v2 filtering algorithm is as follows:

  1. Filter small prefixes (default is > /24 / /48 for ipv4 / ipv6).
  2. Filter martians / bogons prefixes (see this template).
  3. Sanity check - filter prefixes with no AS path or > 64 ASNs in AS path.
  4. Sanity check to ensure peer AS is the same as first AS in the prefix\u2019s AS path.
  5. Prevent next-hop hijacking. This occurs when a participant advertises a prefix with a next hop IP other than their own. An exception exists to allow participants with multiple connections to advertise their other router (next-hop within the same AS).
  6. Filter known transit networks - see this section
  7. IRRDB filtering: ensure origin AS is in set of ASNs from member AS-SET.
  8. RPKI:
    • Valid -> accept
    • Invalid -> drop
  9. RPKI Unknown -> revert to standard IRRDB prefix filtering.

If a route fails at any point it is tagged (for looking glass) and rejected.

"},{"location":"features/route-servers/#setting-up","title":"Setting Up","text":"

You first need to add your route servers to the IXP Manager routers database. See this page on how to do that.

Typically an IXP's route server service will have a dedicated ASN that is different to the IXP's own management / route collector ASN. As such, you need to add a new internal customer to IXP Manager.

Warning

You are strongly advised to use / request a dedicated 16-bit ASN from your RIR for route server use and, in our experience, all RIRs understand this and accommodate it. The route server configurations will support an asn32 but, to our knowledge, this has never been used in production. Also, without an asn16, you will be unable to offer your members standard community based filtering.

Here's an example from INEX for our route server #1:

You then need to create an interface for this route server on each peering LAN where the service will be offered. Here again is INEX's example from our peering LAN1 in Dublin:

There are a couple of things to note in the above:

  1. AS112 Client is checked which means (so long as Route Server Client is checked on the AS112 service) the AS112 service will peer with the route servers.
  2. Apply IRRDB Filtering has no meaning here as this is the route server rather than the route server client.
"},{"location":"features/route-servers/#per-asn-import-export-filters","title":"Per ASN Import / Export Filters","text":"

There are occasions where you may need to override the default filtering mechanism for some members. IXP Manager allows you to create custom Bird2 checks at the start of the standard import / export filters when using Bird2 (not supported on the older Bird v1 configuration).

To do this, you must create skinned files named after the ASN. For example, let's assume your skin name is example and the ASN of the member you want to apply custom filtering to is 64511; then you would create an export and/or import filter in files named:

You'll see real examples from INEX here. These are placed at the beginning of the standard filters, allowing you to explicitly accept or reject a prefix. Note, however, that INEX always accepts prefixes on import but tags those destined for filtering with the large community routerasn:1101:x - please see the resources referenced above for details on this.
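
As an illustration only - this is not one of INEX's filters and the prefix is hypothetical - a skinned per-ASN import filter fragment in Bird2 syntax could explicitly reject a single prefix before the standard filter logic is evaluated:

# illustrative only: reject a single (hypothetical) prefix from this member\n# before the standard import filter checks run\nif net ~ [ 198.51.100.0/24 ] then {\n    reject;\n}\n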

"},{"location":"features/route-servers/#displaying-filtered-prefixes","title":"Displaying Filtered Prefixes","text":"

Using Bird v2 and internal large communities, we have completely overhauled how we show end users what prefixes are filtered by the route servers.

If you are running route servers using the Bird v2 configuration and if you have installed the looking glass then you should set the following in your .env file:

IXP_FE_FRONTEND_DISABLED_FILTERED_PREFIXES=false\n

This is a live view gathered from each Bird v2 route server with a looking glass.

Please see our presentations from 2019 for more information on this; in particular, the UKNOF presentation from September 2019 is the most up to date.

For a route server to be polled for a customer by this tool, the following conditions must be met:

  1. the customer must be a route server client and the vlan cannot be a private vlan;
  2. only enabled IP protocols are queried for a vlan interface;
  3. the route server must be allocated to the same vlan and have an instance for the IP protocol;
  4. the route server cannot be marked as quarantine;
  5. the route server must have an API configured;
  6. the route server must be a route server (remember you can provision collectors and AS112 routers via IXP Manager also);
  7. the route server must have large communities enabled;

It is also critical that the looking glass for the route server works.

Caching: for large members with tens of thousands of routes, gathering filtered prefixes can be an expensive task for IXP Manager and the route server (expensive in terms of time and CPU cycles). As such, this feature of IXP Manager requires the use of a persistent cache. We recommend memcached for this which is installed and enabled by default with the automated installer.
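
A minimal sketch of the relevant .env settings - assuming the standard Laravel cache configuration and a memcached instance on the local host's default port:

CACHE_DRIVER=memcached\nMEMCACHED_HOST=127.0.0.1\nMEMCACHED_PORT=11211\n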

"},{"location":"features/route-servers/#well-known-filtering-communities","title":"Well-Known Filtering Communities","text":"

The route server configuration that IXP Manager generates by default provides well known communities to allow members to control the distribution of their prefixes.

NB: in the following, rs-asn is the AS number of the IXP's route server.

The standard communities are defined as follows:

Description | Community
Prevent announcement of a prefix to a peer | 0:peer-as
Announce a route to a certain peer | rs-asn:peer-as
Prevent announcement of a prefix to all peers | 0:rs-asn
Announce a route to all peers | rs-asn:rs-asn

The community for announcing a route to all peers (rs-asn:rs-asn) is the default behaviour and so there is no need to tag routes with this.

Example #1: if a member wishes to instruct the IXP route server (with AS64500) to distribute a particular prefix only to AS64496 and AS64503, the prefix should be tagged with communities: 0:64500 64500:64496 64500:64503 (i.e. announce to no one except...).

Example #2: for a member to announce a prefix to all IXP route server participants, excluding AS64497, the prefix should be tagged with only community 0:64497.

If you have enabled support for BGP large communities, then the following large communities can be used:

Description | Community
Prevent announcement of a prefix to a peer | rs-asn:0:peer-as
Announce a route to a certain peer | rs-asn:1:peer-as
Prevent announcement of a prefix to all peers | rs-asn:0:0
Announce a route to all peers | rs-asn:1:0
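
For example, the large-community equivalent of Example #1 above would be to tag the prefix with: 64500:0:0 64500:1:64496 64500:1:64503.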

If your route server is configured to support large communities, then you should advise your members to use these over standard 16-bit communities as a large number of networks now have a 32-bit ASN. You should also advise them not to mix standard 16-bit communities and large communities \u2013 please choose one or the other.

Lastly, with BGP large communities, AS path prepending control is also available by default using the following large BGP communities:

Description | Community
Prepend to peer AS once | rs-asn:101:peer-as
Prepend to peer AS twice | rs-asn:102:peer-as
Prepend to peer AS three times | rs-asn:103:peer-as
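
For example, tagging a prefix with 64500:102:64496 instructs a route server with AS64500 to prepend twice when announcing that prefix to AS64496.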

"},{"location":"features/route-servers/#rfc1997-passthru","title":"RFC1997 Passthru","text":"

RFC1997 defines some well-known communities including NO_EXPORT (0xFFFFFF01 / 65535:65281) and NO_ADVERTISE and states that they have global significance and their operations shall be implemented in any community-attribute-aware BGP speaker.

According to RFC7947, it is a matter of local policy whether these well-known communities are interpreted or passed through. Historically, some IXP route servers interpret them and some pass them through. As such the behaviour of these well-known communities is not well-understood when it comes to route servers and this topic has been the subject of a good deal of debate in the IXP community over the years.

In 2017, INEX and LONAP published draft-hilliard-grow-no-export-via-rs-00 to try and create some consensus on this. While the draft was not accepted as a standard, the discussion drew a conclusion that these well-known communities should not be interpreted by the route server but passed through.

When creating a route server in IXP Manager, there is a checkbox option to control this behavior: Pass through RFC1997 well-known communities (recommended).

It is recommended that this be enabled on route servers.

"},{"location":"features/route-servers/#legacy-prefix-analysis-tool","title":"Legacy Prefix Analysis Tool","text":"

The older but deprecated means of viewing filtered prefixes was the Route Server Prefix Analysis tool which allows your members to examine what routes they are advertising to the route servers, which are being accepted and which are being rejected.

"},{"location":"features/route-servers/#limitations-and-caveats","title":"Limitations and Caveats","text":"

The tool is implemented as a Perl script which accesses the database directly. The script can also only be used on one LAN and one route server; thus, pick your most popular LAN and route server.

"},{"location":"features/route-servers/#setting-up_1","title":"Setting Up","text":"
IXP_FE_FRONTEND_DISABLED_RS_PREFIXES=false\n

Once you make the last change above, the prefix analysis tool will be available to administrators and members on IXP Manager.

"},{"location":"features/routers/","title":"Routers","text":"

IXP Manager can generate router configuration for typical IXP services such as:

See the above pages for specific information on each of those use cases and below for instructions on how to generate configuration.

Tip

For larger router configurations - especially when you have members with large prefix lists - you will need to increase PHP's memory_limit as the default of 128M will not be sufficient. Start with 512M and watch the log (storage/logs/...) which reports the memory usage and time for configuration generation.

"},{"location":"features/routers/#managing-routers","title":"Managing Routers","text":"

The basic elements of a router are configured in IXP Manager under the IXP Admin Actions - Routers option on the left hand menu.

When you go to add / edit a router, the green help button will provide explanatory details on each field of information required:

From the router management page, you can:

"},{"location":"features/routers/#configuration-generation-overview","title":"Configuration Generation Overview","text":"

The simplest configuration to generate is the route collector configuration. A route collector is an IXP router which serves only to accept all routes and export no routes. It is used for problem diagnosis, to aid customer monitoring and for looking glasses (see INEX's here).

The original Bird v1 configuration simply pulls in a fairly standard header (sets up router ID, listening address and some filters) and creates a session for all customer routers on the given VLAN. The new Bird v2 configuration has more features and replicates the route server filtering mechanism but tags and accepts all routes for diagnosis.

When adding a router, you give it a handle. For example: rc1-lan1-ipv4 which, for INEX, would mean a route collector on peering LAN1 using IPv4. Then - for the given router handle - the configuration can be generated and pulled using the API as follows:

#! /bin/sh\n\n# The API Key.\n# This is generated in IXP Manager via the top right menu: *My Account -> API Keys*\nKEY=\"your-admin-ixp-manager-api-key\"\n\n# The base URL of your IXP Manager install plus: 'api/v4/router/gen-config'\nURL=\"https://ixp.example.com/api/v4/router/gen-config\"\n\n# The handle is as described above:\nHANDLE=\"rc1-lan1-ipv4\"\n\n# Then the configuration can be pulled as follows:\ncurl --fail -s -H \"X-IXP-Manager-API-Key: ${KEY}\" ${URL}/${HANDLE} >${HANDLE}.conf\n

Configurations for the route server and AS112 templates can be configured just as easily.

The stock templates for both are secure and well tested and can be used by setting the template element of the router to one of the following. NB: from May 2019, we recommend you use IXP Manager v5 and Bird2 templates.

We also provide sample scripts for automating the re-configuration of these services by cron:

All of these scripts have been written defensively such that if there is any issue getting or validating the configuration then the running router instance should be unaffected. This has worked in practice at INEX when IXP Manager was under maintenance, when there were management connectivity issues and when there were database issues. They also use the updated API (see below) to mark when the router configuration update script ran successfully.

"},{"location":"features/routers/#router-pairing-and-locking","title":"Router Pairing and Locking","text":"Info

Pairing, locking and the more advanced update scripts were introduced with IXP Manager v6.4.0, and there is also a tutorial video linked from the release notes.

For IXPs, route servers are considered a critical production service and most IXPs deploy them in redundant pairs. This is usually implemented with dedicated hardware (servers with dual PSU, hardware RAID, and out-of-band management access) deployed in different points of presence.

When it comes to updating the configuration of these, the older scripts provided by IXP Manager suggested that this be done about four times per day, with the cronjob timings offset so that the servers in a pair would not update at the same time. The hope was that if there was an issue, only one server of the resilient pair would be affected, and engineers would have time to react and prevent updates on the other working server. Some IXPs added additional logic to the scripts to check if the other server was functional before performing a reconfiguration, but this was often limited to pings and a simple check to see if Bird was running.

The v6.4.0 release introduced a significant new resilience mechanism by pairing servers. In the IXP Manager router UI, you can now select another router to pair with the one you are editing:

You would select pairs as follows:

Once your pairs are set up, you need to deploy the new router update scripts as follows:

There is no need to use different scripts for route collectors and servers. Traditionally, at INEX, these scripts were developed slightly differently from each other (e.g., the collector script updates both IPv4 and IPv6 versions and provides more informative output, whereas the route server script takes a specific route server handle to update). We may merge these in the future.

You can use these scripts exactly as they are on an Ubuntu server changing only the configuration lines at the top:

APIKEY=\"your-api-key\"\nURLROOT=\"https://ixp.example.com\"\nBIRDBIN=\"/usr/sbin/bird\"\n

The collector script takes an additional configuration option for the handles of the servers to update - e.g.:

HANDLES=\"rc1-ipv4 rc1-ipv6\"\n

These new scripts now work as follows:

  1. NEW: Obtain a local script lock preventing more than one update script from executing at a time on the server (e.g., if the update is long-running, cron cannot start additional updates).
  2. NEW: Obtain a configuration lock from IXP Manager.
    • This involves making an API call to /api/v4/router/get-update-lock/$handle, which IXP Manager then processes and returns HTTP code 200 if the lock is acquired and the update can proceed.
    • A lock is not granted if the router is paused for updates within IXP Manager (new per-router option in the router's dropdown menu on the router list page).
    • A lock is not granted if another process has already acquired a configuration lock for this router.
    • A lock is also not granted if the router's partner is locked. This major new resiliency addition prevents two paired route servers from being updated in parallel.
    • The update script will abort if IXP Manager is unavailable or in maintenance mode. It must get an HTTP 200 to proceed.
  3. If a lock is acquired, the script will then download the latest configuration from IXP Manager.
  4. The script will do some basic sanity checks on the downloaded configuration:
    • First, check that the HTTP request to pull the new configuration succeeded.
    • Second, check that the downloaded file exists and is non-zero in size.
    • Third, ensure at least two BGP protocol definitions are in the configuration file.
    • Lastly, the script has Bird parse the downloaded file to ensure validity.
  5. NEW: The update script will now compare the newly downloaded configuration to the running configuration.
    • If there are differences, the old configuration is backed up, and the Bird daemon will be reloaded.
    • If no differences exist, the Bird daemon will not be reloaded.
  6. A check is performed to ensure the Bird daemon is actually running and, if not, it is started.
  7. IMPROVED: A final API call is made to IXP Manager via /api/v4/router/updated/$handle to release the lock and update the last updated timestamp.
    • A significant improvement here is the use of an 'until api-succeeds, sleep 60, retry' construct to ensure the lock is released even when there are transient network issues / IXP Manager maintenance modes / server maintenance, etc.

Adding step (5) above (only reload on changes) now allows the update script to be safely run as frequently as every few minutes, which is necessary for the UI-based community filtering to be effective.

You should still offset the updates between router pairs, as the script will give up if a lock cannot be obtained. Future improvements could allow for some retries.
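
As a sketch of what that offsetting might look like in /etc/crontab - the script names and paths here are assumptions, so substitute whatever you called your deployed update scripts:

# rs1 updates on the hour and half-hour; rs2 is offset by 15 minutes\n0,30  * * * *   root    /usr/local/bin/update-rs1-bird2.sh  > /dev/null 2>&1\n15,45 * * * *   root    /usr/local/bin/update-rs2-bird2.sh  > /dev/null 2>&1\n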

For additional information with UI images, see slides 25-30 in this presentation PDF.

"},{"location":"features/routers/#updated-api","title":"Updated API","text":"

It can be useful to know that the scripts for updating the router configuration for AS112, route collector and route server BGP daemons run successfully. At INEX for example, we have three LANs and so 10 individual servers running a total of 30 Bird instances which is unwieldy to check and monitor manually.

When viewing routers in IXP Manager, you may have noticed the Last Updated column which will initially show (unknown). All our update scripts (see above) trigger the updated API call when a router configuration run has completed successfully. Note that this does not mean that a configuration has necessarily changed but rather that the update script ran and executed correctly. In other words: the configuration was successfully pulled from IXP Manager, compared to the running configuration and, if changed, successfully applied.

The API call to update the last updated field to now is a POST as follows:

curl -s -X POST -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixp.example.com/api/v4/router/updated/{handle}\n

where {handle} should be replaced with the router handle as described above.

The result is a JSON object with the datetime as set and is equivalent to the result of the following API call which fetches the last updated field without setting it:

curl -s -X GET -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixp.example.com/api/v4/router/updated/{handle}\n\n{\"last_updated\":\"2017-05-21T19:14:43+00:00\",\"last_updated_unix\":1495394083}\n

There are two useful additional API endpoints. To get the last updated time of all routers, use:

curl -s -X GET -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixp.example.com/api/v4/router/updated\n\n{\"handle1\":{\"last_updated\":\"2017-05-21T19:14:43+00:00\",\"last_updated_unix\":1495394083},\n \"handle2\":{\"last_updated\":null,\"last_updated_unix\":null},\n ...}\n

The above output shows the format of the reply as well as the fact that routers without a last updated value set will be included as null values.

Lastly, you can request the last updated time of routers where that time exceeds a given number of seconds. In this call, routers without a last updated time will not be returned. This is useful for monitoring applications such as Nagios where you would want a warning / alert on any routers that have not updated in the last day for example:

curl -s -X GET -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixp.example.com/api/v4/router/updated-before/86400\n\n[]\n

This example also shows that an empty JSON array is returned for an empty result. Otherwise the format of the reply is the same as for the call above for all routers:

{\"handle1\":{\"last_updated\":\"2017-05-21T19:14:43+00:00\",\"last_updated_unix\":1495394083},...}\n
"},{"location":"features/routers/#examples","title":"Examples","text":"

We use Travis CI to test IXP Manager before pushing new releases. The primary purpose of this is to ensure that the generated router configurations match known good configurations from the same sample database.

These known good configurations also serve as useful examples of what the standard IXP Manager configuration generates.

See these known good configurations here and:

"},{"location":"features/routers/#live-status","title":"Live Status","text":"

The live status of any configured routers that have API access can be seen in IXP Manager via the Live Status sub-menu option of Routers on the left hand side menu.

Each router is queried twice via AJAX requests to provide:

"},{"location":"features/routers/#filtering-known-transit-networks","title":"Filtering Known Transit Networks","text":"

We filter known transit networks as discussed here: https://bgpfilterguide.nlnog.net/guides/no_transit_leaks/.

There are three configuration options available (>v6.1.0) to allow you to change the default behaviour. These options exist to provide an easier path than skinning the template files directly.

Exclude one or more AS numbers from the default list (see this file on your own deployment of IXP Manager).

(1) Exclude Specific ASNs:

If you just want to exclude one or more ASNs from the default list, then using comma separation, set the following in your .env file:

IXP_NO_TRANSIT_ASNS_EXCLUDE=65501,65502\n

(2) Disable This Feature Entirely:

Set an empty configuration option as follows in your .env file:

IXP_NO_TRANSIT_ASNS_OVERRIDE=\n

(3) Use Your Own Custom List of ASNs:

Set the following configuration option with a comma separated list as follows in your .env file:

IXP_NO_TRANSIT_ASNS_OVERRIDE=65501,65502,65503\n
"},{"location":"features/rpki/","title":"RPKI","text":"

IXP Manager supports RPKI validation on the router configuration generated for Bird v2. The best way to fully understand RPKI with IXP Manager is to watch our presentation from APRICOT 2019 or read this article on INEX's website.

"},{"location":"features/rpki/#rpki-validator-local-cache","title":"RPKI Validator / Local Cache","text":"

IXP Manager uses the RPKI-RTR protocol to feed ROAs to the Bird router instances. We recommend you install two of these validators/local caches from different vendors.

Let IXP Manager know where they are by setting the following .env settings:

# IP address and port of the first RPKI local cache:\nIXP_RPKI_RTR1_HOST=192.0.2.11\nIXP_RPKI_RTR1_PORT=3323\n\n# While not required, we recommend you also install a second validator:\n# IXP_RPKI_RTR2_HOST=192.0.2.12\n# IXP_RPKI_RTR2_PORT=3323\n

See our installation notes for these:

  1. Routinator 3000.
  2. rpki-client.
  3. Cloudflare's RPKI Toolkit - this has now been deprecated and should not be used.
  4. RIPE NCC RPKI Validator 3 - this has now been deprecated and should not be used.
"},{"location":"features/rpki/#revalidation","title":"Revalidation","text":"

As it stands, Bird v2.0.4 does not support revalidation of prefixes following ROA updates (i.e. a prefix that was accepted as ROA valid that subsequently becomes ROA unknown / invalid will remain learnt as ROA valid). The Bird developers are working on fixing this. In the interim, you need to schedule a revalidation via cron using a /etc/crontab entry such as:

20 11,23 * * *   root    /usr/sbin/birdc -s /var/run/bird/bird-rs1-ipv4.ctl reload in all >/dev/null\n
"},{"location":"features/rpki/#enabling-rpki","title":"Enabling RPKI","text":"

The outline procedure to enable RPKI is below. These notes are written from the perspective that you have existing IXP Manager Bird v1 route servers. If this is a green field site, these notes will work just as well by ignoring the upgrade bits. In either case, it's vital you already understand how to configure routers in IXP Manager.

At INEX we started with our route collector which is a non-service affecting administrative tool. Once we were happy with the stability and results of that, we upgraded our two route servers one week apart in planned announced maintenance windows. We also took the opportunity to perform a distribution upgrade from Ubuntu 16.04 to 18.04.

Start by installing two local caches / validator services as linked above. INEX uses Cloudflare's and Routinator 3000. You should also add these to your production monitoring service.

Once your maintenance window starts, stop the target route server you plan to upgrade. You'll then need to remove the Bird v1 package (dpkg -r bird on Ubuntu). Once the Bird package is removed, you can perform a distribution upgrade if you wish.

Bird v2 is available as a prebuilt package with Ubuntu 20.04 LTS and can be installed with apt install bird2.

There are no Bird v2 packages for Ubuntu 18.04 LTS. As such, you need to install from source if using that older platform. Rather than installing a build environment and compiling on each server, you can do this on a single server (a dedicated build box / admin server / etc) and then distribute the package across your route servers / collector:

# Install Ubuntu build packages and libraries Bird requires:\napt install -y build-essential libssh-dev libreadline-dev \\\n    libncurses-dev flex bison checkinstall\n\n# At time of writing, the latest release was v2.0.7.\n# Check for newer versions!\ncd /usr/src\nwget ftp://bird.network.cz/pub/bird/bird-2.0.7.tar.gz\ntar zxf  bird-2.0.7.tar.gz\ncd bird-2.0.7/\n./configure  --prefix=/usr --sysconfdir=/etc\nmake -j2\ncheckinstall -y\n

The checkinstall tool creates a deb package file: /usr/local/src/bird-2.0.7/bird_2.0.7-1_amd64.deb

NB: for this method to work, you must be running the same operating system and version on the target servers as the build box. For us, it was Ubuntu 18.04 LTS on all systems.

To install on a target machine:

# from build machine\nscp bird_2.0.7-1_amd64.deb target-machine:/tmp\n\n# on target machine\napt install -y libssh-dev libreadline-dev libncurses-dev\ndpkg -i /tmp/bird_2.0.7-1_amd64.deb\n

You now need to update your route server record in IXP Manager:

Note that the Bird v2 template uses large BGP communities extensively internally. The option Enable Large BGP Communities / RFC8092 only controls whether your members can use large communities for filtering. It's 2020 - you should really enable this.

As mentioned above, you need to let IXP Manager know where your local caching / validators are by setting the following .env settings:

# IP address and port of the first RPKI local cache:\nIXP_RPKI_RTR1_HOST=192.0.2.11\nIXP_RPKI_RTR1_PORT=3323\n\n# While not required, we recommend you also install a second validator:\nIXP_RPKI_RTR2_HOST=192.0.2.12\nIXP_RPKI_RTR2_PORT=3323\n

Take a look at the generated configuration within IXP Manager now and sanity check it.

If you have been using our scripts to reload route server configurations, you will need to download the new one (and edit the lines at the top) or update your existing one. The main element that needs to be changed is that the daemon is no longer named differently for IPv6 (Bird v1 had bird/birdc and bird6/bird6c whereas Bird v2 only has bird/birdc).

You should now be able to run this script to pull a new configuration and start an instance of the route server. We would start with one and compare route numbers (just eyeball them) against the route server you have not upgraded.

You're nearly there! If you are using our Bird's Eye looking glass, you will need to upgrade this to >= v1.2.1 for Bird v2 support. At INEX, we tend to clone the repository and so a simple git pull is all that's required. If you're installing from release packages, get the latest one and copy over your configurations.

"},{"location":"features/rpki/#bird-operational-notes","title":"Bird Operational Notes","text":"

These notes are valid when using IXP Manager's Bird v2 with RPKI route server configuration.

You can see the status of the RPKI-RTR protocol with:

bird> show protocols \"rpki*\"\nName       Proto      Table      State  Since         Info\nrpki1      RPKI       ---        up     2019-05-11 14:51:40  Established\nrpki2      RPKI       ---        up     2019-05-11 12:44:25  Established\n

And you can see detailed information with:

bird> show protocols all rpki1\nName       Proto      Table      State  Since         Info\nrpki1      RPKI       ---        up     2019-05-11 14:51:40  Established\n  Cache server:     10.39.5.123:3323\n  Status:           Established\n  Transport:        Unprotected over TCP\n  Protocol version: 1\n  Session ID:       54059\n  Serial number:    122\n  Last update:      before 459.194 s\n  Refresh timer   : 440.805/900\n  Retry timer     : ---\n  Expire timer    : 172340.805/172800\n  Channel roa4\n    State:          UP\n    Table:          t_roa\n    Preference:     100\n    Input filter:   ACCEPT\n    Output filter:  REJECT\n    Routes:         72161 imported, 0 exported\n    Route change stats:     received   rejected   filtered    ignored   accepted\n      Import updates:         141834          0          0          0     141834\n      Import withdraws:         2519          0        ---          0       3367\n      Export updates:              0          0          0        ---          0\n      Export withdraws:            0        ---        ---        ---          0\n  No roa6 channel\n

You can examine the ROA table with:

bird> show route table t_roa\nTable t_roa:\n58.69.253.0/24-24 AS36776  [rpki1 2019-05-11 14:51:40] * (100)\n                           [rpki2 2019-05-11 12:45:45] (100)\n

Now, using INEX's route collector ASN (2128) as an example here - change for your own collector/server ASN - you can find RPKI invalid and filtered routes via:

bird> show route  where bgp_large_community ~ [(2128,1101,13)]\nTable master4:\n136.146.52.0/22      unicast [pb_as15169_vli99_ipv4 2019-05-11 01:00:17] * (100) [AS396982e]\n        via 185.6.36.57 on eth1\n...\n

At time of writing, the filtered reason communities are:

define IXP_LC_FILTERED_PREFIX_LEN_TOO_LONG      = ( routeserverasn, 1101, 1  );\ndefine IXP_LC_FILTERED_PREFIX_LEN_TOO_SHORT     = ( routeserverasn, 1101, 2  );\ndefine IXP_LC_FILTERED_BOGON                    = ( routeserverasn, 1101, 3  );\ndefine IXP_LC_FILTERED_BOGON_ASN                = ( routeserverasn, 1101, 4  );\ndefine IXP_LC_FILTERED_AS_PATH_TOO_LONG         = ( routeserverasn, 1101, 5  );\ndefine IXP_LC_FILTERED_AS_PATH_TOO_SHORT        = ( routeserverasn, 1101, 6  );\ndefine IXP_LC_FILTERED_FIRST_AS_NOT_PEER_AS     = ( routeserverasn, 1101, 7  );\ndefine IXP_LC_FILTERED_NEXT_HOP_NOT_PEER_IP     = ( routeserverasn, 1101, 8  );\ndefine IXP_LC_FILTERED_IRRDB_PREFIX_FILTERED    = ( routeserverasn, 1101, 9  );\ndefine IXP_LC_FILTERED_IRRDB_ORIGIN_AS_FILTERED = ( routeserverasn, 1101, 10 );\ndefine IXP_LC_FILTERED_PREFIX_NOT_IN_ORIGIN_AS  = ( routeserverasn, 1101, 11 );\ndefine IXP_LC_FILTERED_RPKI_UNKNOWN             = ( routeserverasn, 1101, 12 );\ndefine IXP_LC_FILTERED_RPKI_INVALID             = ( routeserverasn, 1101, 13 );\ndefine IXP_LC_FILTERED_TRANSIT_FREE_ASN         = ( routeserverasn, 1101, 14 );\ndefine IXP_LC_FILTERED_TOO_MANY_COMMUNITIES     = ( routeserverasn, 1101, 15 );\n

Check the route server configuration as generated by IXP Manager for the current list if you are reading this on a version later than v5.0.

If you want to see if a specific IP is covered by a ROA, use:

bird> show route table t_roa where 45.114.234.0 ~ net\nTable t_roa:\n45.114.234.0/24-24 AS59347  [rpki1 2019-05-11 14:51:40] * (100)\n                            [rpki2 2019-05-11 12:45:45] (100)\n45.114.232.0/22-24 AS59347  [rpki1 2019-05-11 14:51:40] * (100)\n                            [rpki2 2019-05-11 12:45:45] (100)\n45.114.232.0/22-22 AS59347  [rpki1 2019-05-11 14:51:41] * (100)\n                            [rpki2 2019-05-11 12:45:45] (100)\n
"},{"location":"features/sflow-p2p/","title":"Configuring peer-to-peer statistics","text":"

The IXP Manager sflow peer-to-peer graphing system depends on the MAC address database system so that point to point traffic flows can be identified. Before proceeding further, this should be configured so that when you click on either the MAC Addresses | Discovered Addresses or MAC Addresses | Configured Addresses links from the admin portal, you should see a MAC address associated with each port. If you cannot see any MAC address in either database, then the sflow peer-to-peer graphing mechanism will not work. This needs to be working properly before any attempt is made to configure sflow peer-to-peer graphing. The sflow p2p graphing system can use either discovered MAC addresses or configured MAC addresses, but not both.

"},{"location":"features/sflow-p2p/#server-overview","title":"Server Overview","text":"

As sflow can put a reasonably high load on a server due to disk I/O for RRD file updates, it is recommended practice to use a separate server (or virtual server) to handle the IXP's sflow system. The sflow server will need:

"},{"location":"features/sflow-p2p/#configuration","title":"Configuration","text":""},{"location":"features/sflow-p2p/#freebsd","title":"FreeBSD","text":"
pkg install apache24 sflowtool git databases/rrdtool mrtg\n
"},{"location":"features/sflow-p2p/#ubuntu","title":"Ubuntu","text":"
apt-get install apache2 git rrdtool rrdcached mrtg\n

sflowtool is not part of the Ubuntu / Debian package archive and must be compiled from source if running on these systems. The source code can be found on Github: https://github.com/sflow/sflowtool.

Once the required packages are installed, the IXP Manager peer-to-peer graphing system can be configured as follows:

On FreeBSD it is advisable to set net.inet.udp.blackhole=1 in /etc/sysctl.conf, to stop the kernel from replying to unknown sflow packets with an ICMP unreachable reply.

"},{"location":"features/sflow-p2p/#ixpmanagerconf","title":"ixpmanager.conf","text":"

The following sflow parameters must be set in the <ixp> section:

Note that the <sql> section of ixpmanager.conf will need to be configured either if you are running update-l2database.pl or the sflow BGP peering matrix system. The sflow-to-rrd-handler script uses API calls and does not need SQL access.

An example ixpmanager.conf might look like this:

<sql>\n        dbase_type      = mysql\n        dbase_database  = ixpmanager\n        dbase_username  = ixpmanager_user\n        dbase_password  = blahblah\n        dbase_hostname  = sql.example.com\n</sql>\n\n<ixp>\n        sflowtool = /usr/bin/sflowtool\n        sflowtool_opts = -4 -p 6343 -l\n        sflow_rrdcached = 1\n        sflow_rrddir = /data/ixpmatrix\n\n        apikey = APIKeyFromIXPManager\n        apibaseurl = http://www.example.com/ixp/api/v4\n        macdbtype = configured\n</ixp>\n

This file should be placed in /usr/local/etc/ixpmanager.conf

"},{"location":"features/sflow-p2p/#starting-sflow-to-rrd-handler","title":"Starting sflow-to-rrd-handler","text":"

The tools/runtime/sflow/sflow-to-rrd-handler command processes the output from sflowtool and injects it into the RRD archive. This command should be configured to start on system bootup.

If you are running on FreeBSD, this command can be started on system bootup by copying the tools/runtime/sflow/sflow_rrd_handler script into /usr/local/etc/rc.d and modifying /etc/rc.conf to include:

sflow_bgp_handler_enable=\"YES\"\n
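
On Ubuntu, there is no bundled startup script; as a sketch only - the installation path, service user and unit layout are assumptions for your deployment - a systemd unit such as /etc/systemd/system/sflow-to-rrd-handler.service could be used instead, enabled with systemctl enable --now sflow-to-rrd-handler:

[Unit]\nDescription=IXP Manager sflow-to-rrd-handler\nAfter=network.target\n\n[Service]\nExecStart=/srv/ixpmanager/tools/runtime/sflow/sflow-to-rrd-handler\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n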
"},{"location":"features/sflow-p2p/#displaying-the-graphs","title":"Displaying the Graphs","text":"

The IXP Manager web GUI requires access to the sflow p2p .rrd files over http or https. This means that the sflow server must run a web server (e.g. Apache), and the IXP Manager GUI must be configured with the URL of the RRD archive on the sflow server.

Assuming that ixpmanager.conf is configured to use /data/ixpmatrix for the RRD directory, these files can be served over HTTP using the following Apache configuration (please consider appropriate access security - this example assumes an internal host on an internal network):

Alias /grapher-sflow /data/ixpmatrix\n<Directory \"/data/ixpmatrix\">\n        Options None\n        Require all granted \n</Directory>\n

The IXP Manager .env file must be configured with parameters both to enable sflow and to provide the front-end with the HTTP URL of the back-end server. Assuming that the sflow p2p server has IP address 10.0.0.1, then the following lines should be added to .env:

GRAPHER_BACKENDS=\"mrtg|sflow|smokeping\"\nGRAPHER_BACKEND_SFLOW_ENABLED=true\nGRAPHER_BACKEND_SFLOW_ROOT=\"http://10.0.0.1/grapher-sflow\"\n
"},{"location":"features/sflow-p2p/#rrd-requirements","title":"RRD Requirements","text":"

Each IXP edge port will have 4 separate RRD files for recording traffic to each other participant on the same VLAN on the IXP fabric: ipv4 bytes, ipv6 bytes, ipv4 packets and ipv6 packets. This means that the number of RRD files grows very quickly as the number of IXP participants increases. Roughly speaking, for every N participants at the IXP, there will be about 4*N^2 RRD files. As this number can create extremely high I/O requirements on even medium sized exchanges, IXP Manager requires that rrdcached is used.
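
As a sketch only - the flags shown are standard rrdcached options but the paths are assumptions for your deployment - rrdcached might be started along the lines of:

rrdcached -b /data/ixpmatrix -B -p /var/run/rrdcached.pid \\\n    -l unix:/var/run/rrdcached.sock -j /var/db/rrdcached-journal -w 300 -z 120\n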

"},{"location":"features/sflow-p2p/#troubleshooting","title":"Troubleshooting","text":"

There are plenty of things which could go wrong in a way which would stop the sflow mechanism from working properly.

"},{"location":"features/sflow-p2p/#freebsd-really","title":"FreeBSD, really?","text":"

No. All of this runs perfectly well on Ubuntu (or your favourite Linux distribution).

INEX runs its sflow back-end on FreeBSD because we found that the UFS filesystem performs better than the Linux ext3 filesystem when handling large RRD archives. If you run rrdcached, it's unlikely that you will run into performance problems. If you do, you can engineer around them by running the RRD archive on a PCIe SSD.

"},{"location":"features/sflow-p2p/#api-endpoints","title":"API Endpoints","text":"

The tools/runtime/sflow/sflow-to-rrd-handler script from IXP Manager referenced above uses an IXP Manager API endpoint to associate sflow samples (based on source and destination MAC addresses) with VLAN interfaces.

As IXP Manager supports layer2 / MAC addresses in two ways (learned versus configured), there are two endpoints (using https://ixp.example.com as your IXP Manager installation):

  1. Learned: https://ixp.example.com/api/v4/sflow-db-mapper/learned-macs
  2. Configured: https://ixp.example.com/api/v4/sflow-db-mapper/configured-macs

The JSON output is structured as per the following example:

{\n    \"infrastructure id\": {\n        \"vlan tag\": {\n            \"mac address\": \"vlan interface id\",\n            ...\n        },\n        ...\n    },\n    ...\n}\n

where:

"},{"location":"features/sflow/","title":"Introduction","text":"

IXP Manager can use sflow data to:

The peer-to-peer traffic graphs show traffic aggregate analysis of bytes/packets, split by VLAN and protocol (IPv4 / IPv6), both for individual IXP peering ports and entire VLANs.

The peering matrix guesses who interconnects with whom on the basis of analysing bgp session flows.

"},{"location":"features/sflow/#helicopter-view","title":"Helicopter View","text":"

Sflow needs to be configured with an \"accounting perimeter\". This means that ingress sflow accounting should be enabled on all edge ports, but should not be enabled on any of the core ports. This approach ensures that all packets entering or leaving the IXP are counted exactly once, when they enter the IXP fabric.

All the switches at the IXP should be configured to send sflow packets to the IP address of your sflow collector. This will probably be the same server that you use for your IXP Manager sflow peer-to-peer graphing.

If sflow is enabled on any of the core ports or sflow is enabled in both directions (ingress + egress), traffic will be double-counted and this will lead to incorrect graphs.

Each switch on the network sends sampled sflow packets to an sflow collector. These packets are processed by the \"sflowtool\" command, which converts them into an easily-parseable ASCII format. IXP Manager provides a script to read the output of the sflowtool command, correlate this against the IXP database and use this to build up a matrix of traffic flows which are then exported to an RRD database.

The RRD files are stored on disk and can be accessed by using the sflow graphing system included in IXP Manager.

"},{"location":"features/sflow/#sflow-on-switches","title":"Sflow on Switches","text":"

Many vendors support sflow, but some do not. There is a partial list on the sflow web site.

Most switches which support sflow will support ingress accounting, because this is what's required in RFC 3176. Some switches (e.g. Dell Force10 running older software) only support egress sflow. If you use these on your IXP alongside other switches which only support ingress sflow, then the sflow graphs will show twice the traffic in one direction for the p2p graphs and zero traffic in the other direction. There is no way for IXP Manager to work around this problem.

If not all of the IXP edge ports are sflow capable, then sflow traffic data will be lost for these ports. This means that some point-to-point traffic graphs will show up with zero traffic, and that the sflow aggregate graphs will be wrong.

Sflow uses data sampling. This means that the results it produces are estimated projections, but on large data sets, these projections tend to be statistically accurate.

Each switch or switch port needs to be configured with an sflow sampling rate. The exact rate chosen will depend on the traffic levels on the switch, how powerful the switch management plane CPU is, and how much bandwidth is available for the switch management.

On a small setup with low levels of traffic (e.g. 100kpps), it would be sensible to leave the sampling rate low (e.g. 1:1024). Alternatively, a busy 100G port may need a sampling rate of 1:32768, and even this may turn out to be too low if the port is seeing large numbers of packets. If the switch or the entire network is handling very large quantities of traffic, this figure should be high enough that IXP ports with low quantities of traffic will still get good quality graphs, but low enough that the switch management CPU isn't trashed, and that packets are not dropped on the management ethernet port.
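
To put rough numbers on this: at 100 kpps, a 1:1024 sampling rate yields on the order of 100 samples per second, while a 100G port carrying roughly 10 Mpps sampled at 1:32768 still yields around 300 samples per second.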

Some switches have automatic rate-limiting built in for sflow data export. The sampling rate needs to be chosen so that sflow data export rate limiting doesn't kick in. If it does, samples will be lost and this will cause graph inaccuracies.

"},{"location":"features/sflow/#switch-implementation-limitations","title":"Switch Implementation Limitations","text":""},{"location":"features/sflow/#netflow","title":"Netflow","text":"

IXP Manager does not support netflow and support is not on the roadmap. This is because most netflow implementations do not export mac address information, which means that they cannot provide workable mac layer peer-to-peer statistics.

"},{"location":"features/sflow/#cisco-switches","title":"Cisco Switches","text":"

Of Cisco's entire product range, only the Nexus 3000 and Nexus 9000 product ranges support sflow. Also, the sflow support on the Cisco Nexus 3k range is crippled due to the NX-OS software implementation, which forces ingress+egress sflow to be configured on specified ports rather than ingress-only. Functional accounting requires ingress-only or egress-only sflow to be configured on a per-port basis: ingress + egress causes double-counting of packets. It may be possible to work around this limitation via the broadcom shell with something like the following untested configuration:

n3k# conf t\nn3k(config)# feature sflow\nn3k(config)# sflow data-source interface Ethernet1/1\nn3k(config)# ^Z\nn3k# test hardware internal bcm-usd bcm-diag-shell\nAvailable Unit Numbers: 0\nbcm-shell.0> PortSampRate xe0 4096 0\nbcm-shell.0> PortSampRate xe0\n xe0:   ingress: 1 out of 4096 packets, egress: not sampling,\nbcm-shell.0> quit\nn3k#\n

Note that this command is not reboot persistent, and any time the switch is rebooted, the command needs to be re-entered manually. Note also that this configuration is untested.

"},{"location":"features/sflow/#brocade-turboiron-24x","title":"Brocade TurboIron 24X","text":"

By default a TIX24X will export 100 sflow records per second. This can be changed using the following command:

SSH@Switch# dm device-command 2762233\nSSH@Switch# tor modreg CPUPKTMAXBUCKETCONFIG(3) PKT_MAX_REFRESH=0xHHHH\n

... where HHHH is the hex representation of the number of sflow records per second. INEX has done some very primitive usage profiling which suggests that going above ~3000 sflow records per second will trash the management CPU too hard, so we use PKT_MAX_REFRESH=0x0BB8. Note that this command is not reboot persistent, and any time a TIX24X is rebooted, the command needs to be re-entered manually.

"},{"location":"features/sflow/#dell-force10","title":"Dell Force10","text":"

Earlier versions of FTOS only support egress sflow, but support for ingress sflow was added in 2014. If you intend to deploy IXP Manager sflow accounting on a Dell F10 switch, then you should upgrade to a software release which supports ingress sflow.

"},{"location":"features/sflow/#cumulus-linux","title":"Cumulus Linux","text":"

Cumulus Linux uses hsflowd, which does not allow the operator to enable or disable sflow on a per-port basis, nor does it permit the operator to configure ports to use ingress-only sflow. This configuration needs to be handled using the /usr/lib/cumulus/portsamp command, which is not reboot persistent. It is strongly recommended to handle this configuration using orchestration, as it is not feasible to manually maintain this configuration.

"},{"location":"features/sflow/#fanout","title":"Fanout","text":""},{"location":"features/sflow/#configuring-sflowtool-fan-out","title":"Configuring sflowtool fan-out","text":"

The sflow data from all the IXP switches will normally be directed at a single sflow collector. Often it is useful to have multiple copies of this sflow data stream so that the sflow data can be processed in different ways.

IXP Manager uses sflow data for two separate components:

  1. point-to-point ixp traffic graphs
  2. detecting BGP live sessions on the exchange and using the info to update the peering matrix

This means that IXP Manager needs two separate sflow feeds. This can be achieved by using the sflowtool fanout facility, which sends an exact copy of all incoming sflow records to a list of destinations. For example, the following command listens for incoming sflow data on port 6343 and sends three copies out. Two copies are directed to different ports on the same server, on ports 5500 and 5501. The third copy is sent to 192.0.2.20, port 6343.

# sflowtool -4 -p 6343 -f 127.0.0.1/5500 -f 127.0.0.1/5501 -f 192.0.2.20/6343\n

This example could be used for handling P2P traffic graphs and BGP session detection on one machine, while sending a third sflow data feed to a separate server for IXP development or debugging. The two local sflow feeds can be read using sflowtool:

# sflowtool -4 -p 5500 -l\n# sflowtool -4 -p 5501 -l\n

The sflowtool fanout daemon should be started by the normal operating system daemon startup mechanism, e.g. script in a rc.d or init.d directory, or by a manual entry in /etc/rc.local.

If running sflowtool version 3.23 or greater, it is important to use the -4 command-line parameter in sflowtool because otherwise it will listen on both ipv4 and ipv6 sockets. If you have an sflowtool process attempting to listen on a wildcard socket, it will stop other sflowtool processes from starting.

"},{"location":"features/skinning/","title":"Templates & Skinning","text":"

Remember that v4 is a transition version of IXP Manager from Zend Framework / Smarty to Laravel and so much of the frontend / templating still uses v3 templates and code. As such, how to skin a page will depend on whether the template is found in resources/views (v4) or application/[modules/xxx/]views (v3). Both are covered here.

IXP Manager supports template/view skinning allowing users to substitute any of their own templates in place of the default ones shipped with IXP Manager.

"},{"location":"features/skinning/#skinning-in-version-v4","title":"Skinning in Version >=v4","text":"

First, set the following parameter in .env:

VIEW_SKIN=\"example\"\n

Skins should then be placed in the resources/skins/example directory (example should be substituted for whatever you want to call your own skin). The default templates can be found in resources/views directory. INEX bundles its own skinned templates in resources/skins/inex as an example.

Once a skin is enabled from .env, then any templates found in the skin directory (using the same directory structure as found under resources/views) will take precedence over the default template file. This means you do not need to recreate / copy all the default files - just replace the ones you want.

In previous versions of IXP Manager, we used Smarty as the templating engine. This meant that if someone wanted to help improve IXP Manager then they would need to become familiar with PHP and Smarty. In v4 we dropped Smarty and, rather than using another compiled templating engine, we have decided to go with native PHP templates.

For this, we are using Foil - Foil brings all the flexibility and power of modern template engines to native PHP templates. Write simple, clean and concise templates with nothing more than PHP. Laravel's own Blade templates are also simultaneously supported, and we sometimes use these for simple pages.

"},{"location":"features/skinning/#example","title":"Example","text":"

The graphing MRTG configuration generator allows for custom configuration content at the top and bottom of the file. In order to have your custom configuration enabled, you need to skin two files.

Here's an example:

# position ourselves in the IXP Manager root directory\ncd ${IXPROOT}\n\n# make the skin directory\nmkdir resources/skins/example\n\n# create the full path required for the MRTG configuration files:\nmkdir -p resources/skins/example/services/grapher/mrtg\n\n# copy over the customisation files:\ncp resources/views/services/grapher/mrtg/custom-header.foil.php resources/skins/example/services/grapher/mrtg\ncp resources/views/services/grapher/mrtg/custom-footer.foil.php resources/skins/example/services/grapher/mrtg\n\n# edit the above files as required\nvi resources/skins/example/services/grapher/mrtg/custom-header.foil.php\nvi resources/skins/example/services/grapher/mrtg/custom-footer.foil.php\n

Then, finally, edit .env and set the skin to use:

VIEW_SKIN=\"example\"\n

You can of course skin any file including the non-custom MRTG files as suits your needs.

"},{"location":"features/skinning/#custom-variables-configuration-options","title":"Custom Variables / Configuration Options","text":"

When you are skinning your own templates, you may find you need to create custom configuration options for values you do not want to store directly in your own templates. For this, we have a configuration file which is excluded from Git. Initiate it via:

cp config/custom.php.dist config/custom.php\n

This is Laravel's standard configuration file format (which is an associative PHP array). You can also use Laravel's dotenv variables here too.

As an example, if you were to create a configuration option:

<?php\n'example' => [\n    'key' => 'my own config value',\n],\n

then in code this would be accessible as:

<?php\nconfig( \"custom.example.key\", \"default value if not set|null\" )\n

where the second parameter is a default value if the requested configuration setting has not been defined (which defaults to null). In templates, this can be accessed the same way or rendered in the template with:

<?= config( \"custom.example.key\", \"default\" ) ?>\n
"},{"location":"features/skinning/#skinning-old-templates-v49","title":"Skinning Old Templates (<v4.9)","text":"

This is still important as IXP Manager v4 still uses most of the previous templates.

To skin files found under application/[modules/xxx/]views, proceed as follows:

  1. set a skin name in .env:

    VIEW_SMARTY_SKIN=\"myskin\"\n

  2. create a directory with a matching name: application/views/_skins/myskin.

Once the above .env option is set, any templates found in the skin directory (using the same directory structure as application/views) will take precedence over the default template files. This means you do not need to recreate / copy all the default files - just replace the ones you want.

"},{"location":"features/skinning/#finding-templates","title":"Finding Templates","text":"

Usually there is one of two places to find a template:

If you're skinning, then there's an extra two places:

The indicated variables above mean:

Typically, following the URL path in the views directory will yield the template file you need.

To help identify if the page you are looking at is from the >=v4 code base, we have added an HTML comment to the templates which appears just after the <head> tag as follows:

"},{"location":"features/static-content/","title":"Static Content","text":"

IXP Manager can serve some static pages for you if you wish. The typical use cases for this are:

  1. support details / contact page;
  2. other static content relevant to your members.
"},{"location":"features/static-content/#overview","title":"Overview","text":"

In IXP Manager, there are four types of users as described in the users page. Static content can be added which requires a minimum user privilege to access (e.g. priv == 0 would be publicly accessible through to priv == 3 which would require a superadmin).

To create static content, you should first set up skinning for your installation. Let's assume you called your skin example.

To create a publicly accessible static content page called misc-benefits, you would first create a content directory in your skin as follows:

cd $IXPROOT\nmkdir -p resources/skins/example/content/{0,1,2,3}\n

where the directories 0, 1, 2, 3 represent the minimum required user privilege to access the content. You can now create your content page as follows:

cp resources/views/content/0/example.foil.php resources/skins/example/content/0/misc-benefits.foil.php\n

and then edit that page.

It can be accessed using a URL such as: https://ixp.example.com/content/0/misc-benefits where the route template is: content/{priv}/{page}.

The example.foil.php template copied above should provide the necessary structure for you. Essentially just replace the title and the content.

For publicly accessible documents, there is an alias route:

/public-content/{page}  -> treated as: /content/0/{page}\n
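
As a quick sanity check (a minimal sketch using the misc-benefits example above - adjust the hostname to your own installation), both URLs should return an HTTP 200 response for the same page:

curl -sI https://ixp.example.com/content/0/misc-benefits | head -1\ncurl -sI https://ixp.example.com/public-content/misc-benefits | head -1\n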
"},{"location":"features/static-content/#support-contact-template","title":"Support / Contact Template","text":"

IXP Manager ships with a link to Support in the main title menu. You should copy and adjust this as necessary via skinning:

cp resources/views/content/0/support.foil.php resources/skins/example/content/0/support.foil.php\n
"},{"location":"features/static-content/#documentation-menu","title":"Documentation Menu","text":"

You can link to your own static content pages using the Documentation menu by skinning this file:

cp resources/views/layouts/header-documentation.foil.php resources/skins/example/layouts/header-documentation.foil.php\n

The stock version includes a link to the example page and an external link to the IXP Manager website (we would be much obliged if you left this in place!).

INEX's own version of this can be found in the shipped resources/skins/inex/header-documentation.foil.php file which shows how we use it.

"},{"location":"features/tacacs/","title":"TACACS (User Formatting)","text":"

IXP Manager can generate formatted lists of user information. The best example of this is for TACACS.

TACACS is used in most IXPs to manage access to switching and routing devices:

IXP Manager comes with a flexible template for generating the user section of a TACACS file.

"},{"location":"features/tacacs/#generating-tacacs-configuration","title":"Generating TACACS Configuration","text":"

You can use the IXP Manager API to get the user section of a TACACS file using the following endpoint formats (both GET and POST requests work):

https://ixp.example.com/api/v4/user/formatted\nhttps://ixp.example.com/api/v4/user/formatted/{priv}\nhttps://ixp.example.com/api/v4/user/formatted/{priv}/{template}\n

where:

An example of a user in the response is:

user=joebloggs {\n    member=admin\n    login = des \"$2y$10$pHln5b4DrPj3uuhgfg45HeWEQLK/3ngRxYgYppbnYzleJ.9EpLAN.\"\n}\n
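
For example, to fetch the user section for superadmin users (priv 3) using the default template - a minimal sketch, replacing the API key and hostname with your own:

curl -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixp.example.com/api/v4/user/formatted/3\n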
"},{"location":"features/tacacs/#optional-parameters","title":"Optional Parameters","text":"

You can optionally POST any of the following to change elements of the default template:

An example of changing these parameters is:

curl --data \"users=bob,alice&group=god&bcrypt=2a\" -X POST \\\n    -H \"Content-Type: application/x-www-form-urlencoded\" \\\n    -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixpexample.com/api/v4/user/formatted\n
"},{"location":"features/tacacs/#templates-skinning","title":"Templates / Skinning","text":"

You can use skinning to make changes to the bundled default template or, preferably, add your own.

Let's say you wanted to add your own template called mytemplate1 and your skin is named myskin. The best way to proceed is to copy the bundled example:

cd $IXPROOT\nmkdir -p resources/skins/myskin/api/v4/user/formatted\ncp resources/views/api/v4/user/formatted/default.foil.php resources/skins/myskin/api/v4/user/formatted/mytemplate1.foil.php\n

You can now edit this template as required. The only constraint on the template name is that it may only contain characters from the classes a-z, 0-9 and -. NB: do not use uppercase characters.

All variables available in the template can be seen in the default template.

"},{"location":"features/tacacs/#setting-up-tacacs","title":"Setting Up TACACS","text":"

This section explains how to set up TACACS with IXP Manager. We assume you already have an understanding of TACACS.

"},{"location":"features/tacacs/#generating-updating-tacacs","title":"Generating / Updating TACACS","text":"

At INEX, we use a script that:

You can find that script in this directory. Alter it to suit your own purposes.
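
The general shape of such a script is: fetch the formatted user section via the API, splice it into the tacacs+ configuration, and reload the daemon if anything changed. The following is a hedged sketch only - the stub file, configuration paths and tac_plus service name are assumptions for illustration and will differ in your environment:

#! /usr/bin/env bash\n\n# Fetch the generated user section via the IXP Manager API\n# (replace the API key and hostname with your own):\nTMPUSERS=/tmp/tacacs-users.$$\ncurl --fail -s -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" \\\n    https://ixp.example.com/api/v4/user/formatted >${TMPUSERS} || exit 1\n\n# Assemble a candidate configuration from a local static stub plus the\n# generated user section (both paths are assumptions - adjust as required):\ncat /etc/tacacs/tac_plus.conf.stub ${TMPUSERS} >/etc/tacacs/tac_plus.conf.candidate\n\n# Only deploy and reload if the configuration has actually changed:\nif ! diff -q /etc/tacacs/tac_plus.conf /etc/tacacs/tac_plus.conf.candidate >/dev/null 2>&1; then\n    mv /etc/tacacs/tac_plus.conf.candidate /etc/tacacs/tac_plus.conf\n    # service name is an assumption - adjust for your tacacs+ daemon:\n    systemctl reload tac_plus\nfi\n\nrm -f ${TMPUSERS} /etc/tacacs/tac_plus.conf.candidate\n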

"},{"location":"features/rpki/cloudflare/","title":"Cloudflare's RPKI Toolkit","text":"Danger

Cloudflare's RPKI toolkit has been deprecated and should not be used.

Cloudflare created their own RPKI toolkit which, similar to RIPE's, is split into two elements:

  1. GoRTR is the daemon that implements the RPKI-RTR protocol to distribute validated ROAs to your routers.
  2. OctoRPKI is the validator which pulls the signed ROAs from the trust anchors, validates them, and makes them available to GoRTR.

NB: Before you proceed further, you should read Cloudflare's own introduction to this toolkit.

We use a standard Ubuntu 20.04 installation (selecting the minimal virtual server option), 2 vCPUs, 2GB RAM, 20GB LVM hard drive.

Cloudflare provide pre-built packages for installation - visit the following URLs and download the appropriate packages for your operating system:

As of late November 2020, the following packages are available to install:

wget https://github.com/cloudflare/cfrpki/releases/download/v1.2.2/octorpki_1.2.2_amd64.deb\nwget https://github.com/cloudflare/gortr/releases/download/v0.14.7/gortr_0.14.7_amd64.deb\ndpkg -i octorpki_1.2.2_amd64.deb gortr_0.14.7_amd64.deb\n
"},{"location":"features/rpki/cloudflare/#octorpki","title":"OctoRPKI","text":"

You now need to install the ARIN TAL manually:

  1. Visit https://www.arin.net/resources/rpki/tal.html
  2. Download the TAL in RFC 7730 format
  3. Place it in /usr/share/octorpki/tals/arin.tal

You can now run the validator via the following command:

# start the service:\nsystemctl start octorpki\n\n# see and tail the logs\njournalctl -fu octorpki\n\n# enable to start on server boot:\nsystemctl enable octorpki.service\n

NB: OctoRPKI listens as a web service by default on port 8081. It's possible to change this port by adding OCTORPKI_ARGS=-http.addr :8080 to /etc/default/octorpki if required.

As it starts up, there is some info available as JSON under http://[hostname/ip address]:8081/infos and the ROAs can be seen as JSON via http://[hostname/ip address]:8081/output.json after ~5mins.
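
For example, a quick check from the validator host itself (assuming the default port of 8081 and that jq is installed - plain curl works too):

# general service information:\ncurl -s http://localhost:8081/infos | jq .\n\n# the validated ROAs (allow ~5 minutes after first start for this to populate):\ncurl -s http://localhost:8081/output.json | jq . | head\n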

"},{"location":"features/rpki/cloudflare/#gortr","title":"GoRTR","text":"

To start GoRTR (once OctoRPKI is configured and running), we first edit /etc/default/gortr:

GORTR_ARGS=-bind :3323 -verify=false -cache http://localhost:8081/output.json -metrics.addr :8082\n

You can now run the GoRTR daemon via the following command:

# start the service:\nsystemctl start gortr\n\n# see and tail the logs\njournalctl -fu gortr\n\n# enable to start on server boot:\nsystemctl enable gortr.service\n

Once GoRTR starts up, metrics are available from http://[hostname/ip address]:8082/metrics.

"},{"location":"features/rpki/cloudflare/#monitoring","title":"Monitoring","text":"

We add Nagios http checks for ports 8081 (OctoRPKI) and 8082 (GoRTR) to our monitoring platform. We also add a check_tcp test for GoRTR port 3323.

"},{"location":"features/rpki/ripe/","title":"Ripe","text":""},{"location":"features/rpki/ripe/#ripe-ncc-rpki-validator-3","title":"RIPE NCC RPKI Validator 3","text":"Danger

The RIPE NCC RPKI Validator 3 has been deprecated and should not be used.

The RIPE NCC RPKI Validator 3 is RPKI relying party software (aka an RPKI Validator). While RIPE's RPKI Validator 3 is an RPKI-RTR implementation we have tested and support, we found it buggy in production (as of April 2019 it consumed increasing amounts of disk space and crashed regularly). These instructions reflect INEX's production installation from early 2019.

RIPE provides CentOS7 RPMs for production builds but as we tend to use Ubuntu LTS for our servers, we will describe an installation using the generic builds here. You can read RIPE's CentOS7 installation details here and their own generic install details here (which are the ones we worked from for these Ubuntu 18.04 LTS instructions).

We use a standard Ubuntu 18.04 installation (selecting the minimal virtual server option), 2 vCPUs, 2GB RAM, 10GB LVM hard drive.

We will use a non-root user to run the daemons:

useradd -c 'RIPE NCC RPKI Validator' -d /srv/ripe-rpki-validator \\\n    -m -s /bin/bash -u 1100 ripe\n

Download and extract the latest production releases from here:

cd /srv/ripe-rpki-validator\nwget https://ftp.ripe.net/tools/rpki/validator3/prod/generic/rpki-rtr-server-latest-dist.tar.gz\ntar zxf rpki-rtr-server-latest-dist.tar.gz\nwget https://ftp.ripe.net/tools/rpki/validator3/prod/generic/rpki-validator-3-latest-dist.tar.gz\ntar zxf rpki-validator-3-latest-dist.tar.gz\n

When you extract these, you'll find they create directories named by their version. As we will reference these in various scripts, we will alias these directories so we do not need to update the scripts on an upgrade of the software. In our example case, the version was 3.0-355 so we do the following (and also ensure the permissions are correct):

ln -s rpki-rtr-server-3.0-355 rpki-rtr-server-3\nln -s rpki-validator-3.0-355 rpki-validator-3\nchown -R ripe: /srv/ripe-rpki-validator\n

The requirements for RPKI Validator 3 are OpenJDK and rsync. For Ubuntu 18.04 that means:

apt install -y openjdk-8-jre rsync curl\n

We will want to keep configuration changes and the database across upgrades. For this we:

# move the config and replace it with a link:\ncd /srv/ripe-rpki-validator\nmv rpki-validator-3/conf/application.properties rpki-validator-3.conf\nln -s /srv/ripe-rpki-validator/rpki-validator-3.conf \\\n    /srv/ripe-rpki-validator/rpki-validator-3/conf/application.properties\n\n# And do the same for the database:\nmv rpki-validator-3/db .\nln -s /srv/ripe-rpki-validator/db /srv/ripe-rpki-validator/rpki-validator-3/db\n\n# And do the same for rpki-rtr-server-3:\nmv rpki-rtr-server-3/conf/application.properties rpki-rtr-server-3.conf\nln -s /srv/ripe-rpki-validator/rpki-rtr-server-3.conf \\\n    /srv/ripe-rpki-validator/rpki-rtr-server-3/conf/application.properties\n\n# again, ensure file ownership is okay\nchown -R ripe: /srv/ripe-rpki-validator\n

We then edit /srv/ripe-rpki-validator/rpki-validator-3.conf and change the following configuration options:

  1. server.port and server.address if you want to access the web interface directly. Commenting server.address out makes it listen on all interfaces.
  2. spring.datasource.url to /srv/ripe-rpki-validator/db/rpki-validator.h2.

And we edit /srv/ripe-rpki-validator/rpki-rtr-server-3.conf and:

  1. set server.port and server.address as required (note this is for the API, not the RTR protocol). server.address= listens on all interfaces.
  2. set rtr.server.address and rtr.server.port as required (this is the RTR protocol). rtr.server.address=:: listens on all interfaces.

You should now be able to start the Validator and RTR daemons:

# as the RIPE user\nsu - ripe\n\ncd /srv/ripe-rpki-validator/rpki-validator-3\n./rpki-validator-3.sh\n\ncd /srv/ripe-rpki-validator/rpki-rtr-server-3\n./rpki-rtr-server.sh\n

We need to manually install the ARIN TAL by:

  1. Visiting https://www.arin.net/resources/rpki/tal.html
  2. Downloading the TAL in RIPE NCC RPKI Validator format
  3. Installing it using the command:
    /srv/ripe-rpki-validator/rpki-validator-3/upload-tal.sh arin-ripevalidator.tal http://localhost:8080/\n

We use systemd to ensure both daemons start automatically:

cat <<ENDL >/etc/systemd/system/rpki-validator-3.service\n[Unit]\nDescription=RPKI Validator\nAfter=network.target\n\n[Service]\nEnvironment=JAVA_CMD=/usr/bin/java\nExecStart=/srv/ripe-rpki-validator/rpki-validator-3/rpki-validator-3.sh\n\n# prevent restart in case there's a problem\n# with the database or binding to socket\nRestartPreventExitStatus=7\n\nUser=ripe\n\n[Install]\nWantedBy=multi-user.target\nENDL\n\nsystemctl enable rpki-validator-3.service\nsystemctl start rpki-validator-3.service\n\n\ncat <<ENDL >/etc/systemd/system/rpki-rtr-server-3.service\n[Unit]\nDescription=RPKI RTR\nAfter=rpki-validator-3.service\n\n[Service]\nEnvironment=JAVA_CMD=/usr/bin/java\nExecStart=/srv/ripe-rpki-validator/rpki-rtr-server-3/rpki-rtr-server.sh\n\n# prevent restart in case there's a problem\n# with the database or binding to socket\nRestartPreventExitStatus=7\n\nUser=ripe\n\n[Install]\nWantedBy=multi-user.target\nENDL\n\nsystemctl enable rpki-rtr-server-3.service\nsystemctl start rpki-rtr-server-3.service\n

You can see log messages using:

cat /var/log/syslog | grep rpki-validator\ncat /var/log/syslog | grep rpki-rtr\n

We separately add the server and the RIPE daemons to our standard monitoring and alerting tools.

"},{"location":"features/rpki/routinator/","title":"Routinator 3000","text":"

Routinator 3000 is RPKI relying party software (aka an RPKI Validator) written in Rust by the good folks at NLnet Labs. These instructions reflect Routinator 0.8.2 (on Ubuntu 20.04). This mostly follows their own GitHub instructions and documentation.

We use a standard Ubuntu 20.04 installation (selecting the minimal virtual server option), 2 vCPUs, 2GB RAM, 20GB LVM hard drive.

Add the apt repo to the system by creating a file called /etc/apt/sources.list.d/routinator.list with the following contents:

deb [arch=amd64] https://packages.nlnetlabs.nl/linux/debian/ stretch main\ndeb [arch=amd64] https://packages.nlnetlabs.nl/linux/debian/ buster main\ndeb [arch=amd64] https://packages.nlnetlabs.nl/linux/ubuntu/ xenial main\ndeb [arch=amd64] https://packages.nlnetlabs.nl/linux/ubuntu/ bionic main\ndeb [arch=amd64] https://packages.nlnetlabs.nl/linux/ubuntu/ focal main\n

Then add the NLNetLabs package key to the system:

sudo apt update && sudo apt-get install -y gnupg2\nwget -qO- https://packages.nlnetlabs.nl/aptkey.asc | sudo apt-key add -\nsudo apt update\n

Note that the first apt update will return a bunch of errors. The second update should run without errors, once the key has been added.
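
If you would like to confirm that the routinator package is now visible from the NLnet Labs repository before installing, an optional check is:

apt-cache policy routinator\n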

We then install the required software:

sudo apt install routinator\nsudo routinator-init\n

Alternatively, if you agree to the ARIN RPA, run:

sudo routinator-init --accept-arin-rpa\n

By default, Routinator listens only on TCP sockets on 127.0.0.1. If you want other devices to be able to access the service, it needs to listen to the wildcard socket.

If you're running Linux, you can configure Routinator to listen to both ipv4 and ipv6 wildcard sockets using the following configuration lines in /etc/routinator/routinator.conf:

rtr-listen = [ \"[::]:3323\" ]\nhttp-listen = [ \"[::]:8080\" ]\n

If you're running an operating system other than Linux, you'll need separate entries for ipv4 and ipv6:

rtr-listen = [ \"127.0.0.1:3323\", \"[::]:3323\" ]\nhttp-listen = [ \"127.0.0.1:8080\", \"[::]:8080\" ]\n

You can then test by running the following command, which prints the validated ROA payloads and increases the log level to show the process in detail:

/usr/bin/routinator --config /etc/routinator/routinator.conf -v vrps\n
"},{"location":"features/rpki/routinator/#starting-on-boot","title":"Starting on Boot","text":"

To have this service start at boot:

systemctl enable routinator\nsystemctl start routinator\n
"},{"location":"features/rpki/routinator/#monitoring","title":"Monitoring","text":"

We add Nagios http checks for port 8080 (HTTP) to our monitoring platform. We also add a check_tcp test for the RPKI-RTR port 3323.
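
Run by hand, the equivalent checks look something like the following sketch (the plugin path varies by distribution and rpki.example.com is a placeholder for your validator's hostname):

/usr/lib/nagios/plugins/check_http -H rpki.example.com -p 8080 -u /status\n/usr/lib/nagios/plugins/check_tcp -H rpki.example.com -p 3323\n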

"},{"location":"features/rpki/routinator/#http-interface","title":"HTTP Interface","text":"

The following is copied from Routinator's man page. As a future fixme, this should be used for better monitoring than just the check_tcp test above.

HTTP SERVICE\n       Routinator  can provide an HTTP service allowing to fetch the Validated\n       ROA Payload in various formats. The service does not support HTTPS  and\n       should only be used within the local network.\n\n       The service only supports GET requests with the following paths:\n\n\n       /metrics\n              Returns  a  set  of  monitoring  metrics  in  the format used by\n              Prometheus.\n\n       /status\n              Returns the current status of the Routinator instance.  This  is\n              similar  to  the  output  of the /metrics endpoint but in a more\n              human friendly format.\n\n       /version\n              Returns the version of the Routinator instance.\n\n       /api/v1/validity/as-number/prefix\n              Returns a JSON object describing whether the route  announcement\n              given  by its origin AS number and address prefix is RPKI valid,\n              invalid, or not found.  The returned object is  compatible  with\n              that  provided by the RIPE NCC RPKI Validator. For more informa-\n              tion, see  https://www.ripe.net/support/documentation/developer-\n              documentation/rpki-validator-api\n\n       /validity?asn=as-number&prefix=prefix\n              Same as above but with a more form-friendly calling convention.\n\n\n       In  addition, the current set of VRPs is available for each output for-\n       mat at a path with the same name as the output format.  E.g.,  the  CSV\n       output is available at /csv.\n\n       These paths accept filter expressions to limit the VRPs returned in the\n       form of a query string. The field filter-asn can be used to filter  for\n       ASNs  and  the  field filter-prefix can be used to filter for prefixes.\n       The fields can be repeated multiple times.\n\n       This works in the same way as the options of the same name to the  vrps\n       command.\n
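
For example, the validity endpoint can be used to spot-check a single announcement against the HTTP port configured above (the AS number and prefix below are documentation examples only - substitute real values):

curl -s http://localhost:8080/api/v1/validity/AS64511/192.0.2.0/24 | jq .\n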
"},{"location":"features/rpki/rpkiclient/","title":"OpenBSD's RPKI Validator rpki-client","text":"

The OpenBSD project created a free and easy-to-use RPKI validator named rpki-client.

Deployment is split into two elements:

  1. rpki-client is the validator which pulls the Signed Objects from the RPKI repositories and validates them and then makes them available to StayRTR.
  2. StayRTR is the daemon that implements the RPKI-RTR protocol to distribute Validated ROA Payloads to your routers.

We use a standard Debian Sid (unstable) installation, 2 vCPUs, 2GB RAM, 20GB LVM hard drive. Debian provides pre-built packages for installation.

As of early March 2024, the following packages can easily be installed:

$ sudo apt install rpki-client stayrtr\n
"},{"location":"features/rpki/rpkiclient/#rpki-trust-anchors","title":"rpki-trust-anchors","text":"

You'll need to confirm whether you'd like to install the ARIN TAL.

You can now run the validator via the following command:

"},{"location":"features/rpki/rpkiclient/#rpki-client","title":"rpki-client","text":"
# start the service:\nsystemctl start rpki-client &\n\n# see and tail the logs\njournalctl -fu rpki-client\n

Running rpki-client for the first time might take a few minutes.

"},{"location":"features/rpki/rpkiclient/#stayrtr","title":"StayRTR","text":"

To start StayRTR (once rpki-client is configured and running), we first edit /etc/default/stayrtr:

STAYRTR_ARGS=-bind :3323 -cache /var/lib/rpki-client/json -metrics.addr :8082\n

You can now run the StayRTR daemon via the following command:

# start the service:\nsystemctl restart stayrtr\n\n# see and tail the logs\njournalctl -fu stayrtr\n

Once rpki-client has completed its initial run and StayRTR has started up, metrics are available from http://[hostname/ip address]:8082/metrics.
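
A quick way to confirm the metrics endpoint is responding (the exact metric names are StayRTR's own, so inspect the output rather than relying on specific names):

curl -s http://localhost:8082/metrics | head\n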

"},{"location":"features/rpki/rpkiclient/#monitoring","title":"Monitoring","text":"

We add Nagios http checks for port 8082 (StayRTR) to our monitoring platform. We also add a check_tcp test for StayRTR port 3323.

Rpki-client produces a statistics file in OpenMetrics format in /var/lib/rpki-client/metrics for use with Grafana.

"},{"location":"grapher/api/","title":"API & Permissions","text":"

This page discusses the default permissions required for accessing certain graphs, as well as how to change them.

"},{"location":"grapher/api/#accessibility-of-aggregate-graphs","title":"Accessibility of Aggregate Graphs","text":"

By default, the following graphs are publicly accessible in IXP Manager and available through the top menu under Statistics:

  1. aggregate bits/sec and packets/sec graphs for the IXP;
  2. aggregate bits/sec and packets/sec graphs for the infrastructures;
  3. aggregate bits/sec and packets/sec graphs for locations / facilities;
  4. aggregate bits/sec graphs on a per-protocol and per-VLAN basis (requires sflow);
  5. aggregate graphs for the switches; and
  6. aggregate graphs for the core bundles / trunk connections.

If you wish to restrict access to these graphs to users at or above a given privilege level, set the following in .env appropriately:

  1. GRAPHER_ACCESS_IXP
  2. GRAPHER_ACCESS_INFRASTRUCTURE
  3. GRAPHER_ACCESS_LOCATION
  4. GRAPHER_ACCESS_VLAN
  5. GRAPHER_ACCESS_SWITCH
  6. GRAPHER_ACCESS_TRUNK (this also applies to core bundles)

For example to limit access to trunks / core bundles to logged in users, set:

GRAPHER_ACCESS_TRUNK=1\n

If you would like to make the aggregate graphs available to logged in users only, set the following .env options:

GRAPHER_ACCESS_IXP=1\nGRAPHER_ACCESS_INFRASTRUCTURE=1\nGRAPHER_ACCESS_VLAN=1\nGRAPHER_ACCESS_SWITCH=1\nGRAPHER_ACCESS_LOCATION=1\nGRAPHER_ACCESS_TRUNK=1\n

If you would prefer to restrict access to these to superusers / admins only, replace =1 above with =3.

"},{"location":"grapher/api/#api-access","title":"API Access","text":"

Grapher allows API access to graphs via a base URL of the form:

https://ixp.example.com/grapher/{graph}[?id=x][&period=x][&type=x][&category=x] \\\n    [&protocol=x][&backend=x]\n

Here are two quick examples from INEX's production system:

  1. Aggregate exchange traffic options: https://www.inex.ie/ixp/grapher/ixp?id=1&type=json
  2. Aggregate exchange traffic PNG: https://www.inex.ie/ixp/grapher/ixp (as you'll learn below, the defaults are id=1&type=png).

A sample of the JSON output is:

{\n    \"class\": \"ixp\",\n    \"urls\": {\n        \"png\": \"https:\\/\\/www.inex.ie\\/ixp\\/grapher\\/ixp?period=day&type=png&category=bits&protocol=all&id=1\",\n        \"log\": \"https:\\/\\/www.inex.ie\\/ixp\\/grapher\\/ixp?period=day&type=log&category=bits&protocol=all&id=1\",\n        \"json\": \"https:\\/\\/www.inex.ie\\/ixp\\/grapher\\/ixp?period=day&type=json&category=bits&protocol=all&id=1\"\n    },\n    \"base_url\": \"https:\\/\\/www.inex.ie\\/ixp\\/grapher\\/ixp\",\n    \"statistics\": {\n        \"totalin\": 15013439801606864,\n        \"totalout\": 15013959560329200,\n        \"curin\": 158715231920,\n        \"curout\": 158713872624,\n        \"averagein\": 125566129180.59367,\n        \"averageout\": 125570476225.09074,\n        \"maxin\": 222438012592,\n        \"maxout\": 222348641336\n    },\n    \"params\": {\n        \"type\": \"json\",\n        \"category\": \"bits\",\n        \"period\": \"day\",\n        \"protocol\": \"all\",\n        \"id\": 1\n    },\n    \"supports\": {\n        \"protocols\": {\n            \"all\": \"all\"\n        },\n        \"categories\": {\n            \"bits\": \"bits\",\n            \"pkts\": \"pkts\"\n        },\n        \"periods\": {\n            \"day\": \"day\",\n            \"week\": \"week\",\n            \"month\": \"month\",\n            \"year\": \"year\"\n        },\n        \"types\": {\n            \"png\": \"png\",\n            \"log\": \"log\",\n            \"json\": \"json\"\n        }\n    },\n    \"backends\": {\n        \"mrtg\": \"mrtg\"\n    },\n    \"backend\": \"mrtg\"\n}\n

You can see from the above what params were used to create the statistics (and would be used for the image if type=png), what parameters are supported (supports), what backends are available for the given graph type and mix of parameters, etc.
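
For example, to programmatically pull out just the supported parameters from the JSON shown above (using the public INEX endpoint from the earlier example and jq):

curl -s \"https://www.inex.ie/ixp/grapher/ixp?id=1&type=json\" | jq '.supports'\n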

Notes:

  1. not all backends support all options or graphs; use the json type to see what's supported but remember that IXP Manager will, when configured correctly, choose the appropriate backend;
  2. the primary key IDs mentioned below are mostly available in the UI when viewing lists of the relevant objects under a column DB ID;
  3. an understanding of how IXP Manager represents interfaces is required to grasp the below - see here.

Let's first look at supported graphs:

For additional options, it's always best to manually or programmatically examine the output for type=json to see what is supported. The following is a general list.

"},{"location":"grapher/api/#access-control","title":"Access Control","text":"

The grapher API can be accessed using the standard API access mechanisms.
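
For non-public graphs this typically means passing an API key, for example (a sketch only - the customer graph and id=1 are illustrative, so use a graph and DB ID your key is actually authorised for):

curl -H \"X-IXP-Manager-API-Key: my-ixp-manager-api-key\" -o customer.png \\\n    \"https://ixp.example.com/grapher/customer?id=1&type=png\"\n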

Each graph (ixp, infrastructure, etc.) has an authorise() method which determines who is allowed to view a graph. For example, see IXP\\Services\\Grapher\\Graph\\VlanInterface::authorise(). The general logic is:

For the supported graph types, default access control is:

Graph / Default Access Control:

  - ixp: public but respects GRAPHER_ACCESS_IXP (see above)
  - infrastructure: public but respects GRAPHER_ACCESS_INFRASTRUCTURE (see above)
  - vlan: public but respects GRAPHER_ACCESS_VLAN (see above), unless it's a private VLAN (in which case only superuser is supported currently)
  - location: public but respects GRAPHER_ACCESS_LOCATION (see above)
  - switch: public but respects GRAPHER_ACCESS_SWITCH (see above)
  - core-bundle: public but respects GRAPHER_ACCESS_TRUNK (see above)
  - trunk: public but respects GRAPHER_ACCESS_TRUNK (see above)
  - physicalinterface: superuser or user of the owning customer but respects GRAPHER_ACCESS_CUSTOMER (see Access to Member Graphs below)
  - vlaninterface: superuser or user of the owning customer but respects GRAPHER_ACCESS_CUSTOMER (see Access to Member Graphs below)
  - virtualinterface: superuser or user of the owning customer but respects GRAPHER_ACCESS_CUSTOMER (see Access to Member Graphs below)
  - customer: superuser or user of the owning customer but respects GRAPHER_ACCESS_CUSTOMER (see Access to Member Graphs below)
  - latency: superuser or user of the owning customer but respects GRAPHER_ACCESS_LATENCY (see Access to Member Graphs below)
  - p2p: superuser or user of the source (svli) owning customer but respects GRAPHER_ACCESS_P2P (see Access to Member Graphs below)
"},{"location":"grapher/api/#access-to-member-graphs","title":"Access to Member Graphs","text":"

NB: before you read this section, please first read and be familiar with the Accessibility of Aggregate Graphs section above.

A number of IXPs have requested a feature to allow public access to member / customer graphs. To support this we have added the following .env options (beginning in v4.8) with the default value as shown:

GRAPHER_ACCESS_CUSTOMER=\"own_graphs_only\"\nGRAPHER_ACCESS_P2P=\"own_graphs_only\"\nGRAPHER_ACCESS_LATENCY=\"own_graphs_only\"\n

The own_graphs_only setting simply means that the default access checks are performed: access is granted to a superuser or to a user who belongs to the customer which owns the respective graph. I.e. no one but that customer or a superadmin can access the respective graph.

If you wish to open access to these up to users at or above a given privilege level, set the above in .env appropriately.

For example:

then set the following in .env:

GRAPHER_ACCESS_CUSTOMER=0\nGRAPHER_ACCESS_P2P=1\n

Note that GRAPHER_ACCESS_LATENCY is omitted as we are not changing the default.

Please note the following:

"},{"location":"grapher/introduction/","title":"Grapher - Introduction","text":"

IXP Manager generates all of its graphs using its own graphing system called Grapher. This was introduced in v4.

Grapher is a complete rewrite of all previous graphing code and includes:

To date, we have developed the following reference backend implementations:

  1. dummy - a dummy grapher that just provides a placeholder graph for all possible graph types;
  2. mrtg - MRTG graphing using either the log or rrd backend. Use cases for MRTG are L2 interface statistics for bits / packets / errors / discards / broadcasts per second. Aggregate graphs for customer LAGs, overall customer traffic, all traffic over a switch / infrastructure / the entire IXP are all supported;
  3. sflow - while the MRTG backend looks at layer 2 statistics, sflow is used to provide layer 3 statistics such as per protocol (IPv4/6) graphs and peer to peer graphs;
  4. smokeping - (available from v4.8.0) which creates latency graphs, replacing the previous way we accessed Smokeping graphs.

In a typical production environment, you would implement MRTG, Smokeping and sflow to provide the complete set of features.

"},{"location":"grapher/introduction/#configuration","title":"Configuration","text":"

There are only a handful of configuration options required and a typical and complete $IXPROOT/.env would look like the following:

GRAPHER_BACKENDS=\"mrtg|sflow|smokeping\"\nGRAPHER_CACHE_ENABLED=true\n\nGRAPHER_BACKEND_MRTG_DBTYPE=\"rrd\"\nGRAPHER_BACKEND_MRTG_WORKDIR=\"/srv/mrtg\"\nGRAPHER_BACKEND_MRTG_LOGDIR=\"/srv/mrtg\"\n\nGRAPHER_BACKEND_SFLOW_ENABLED=true\nGRAPHER_BACKEND_SFLOW_ROOT=\"http://sflow-server.example.com/grapher-sflow\"\n\nGRAPHER_BACKEND_SMOKEPING_ENABLED=true\nGRAPHER_BACKEND_SMOKEPING_URL=\"http://smokeping-server.example.com/smokeping\"\n

For those interested, the complete Grapher configuration file can be seen in $IXPROOT/config/grapher.php. Remember: put your own local changes in .env rather than editing this file directly.

The global (non-backend specific) options are:

Backend specific configuration and set-up instructions can be found in their own sections below.

"},{"location":"grapher/mrtg/","title":"Backend: MRTG","text":"

MRTG is used to generate interface graphs. MRTG is a particularly efficient SNMP poller as, irrespective of how many times an interface is referenced for different graphs, it is only polled once per run. If you want to understand MRTG related options in this section, please refer to MRTG's own documentation.

Per-second graphs are generated for bits, packets, errors, discards and broadcasts at 5min intervals. IXP Manager's Grapher system can use MRTG to poll switches and create traffic graphs for:

"},{"location":"grapher/mrtg/#mrtg-setup-and-configuration","title":"MRTG Setup and Configuration","text":"

You need to install some basic packages for MRTG to work - on Ubuntu for example, install:

apt install rrdtool mrtg\n

You also need a folder to store all MRTG files. For example:

mkdir -p /srv/mrtg\n

In your .env, you need to set the following options:

# The MRTG database type to use - either log or rrd:\nGRAPHER_BACKEND_MRTG_DBTYPE=\"rrd\"\n\n# Where to store log/rrd/png files. This is from the perspective\n# of the mrtg daemon and it is only used when generating the mrtg configuration\n# file so this should be a local path on whatever server mrtg will run:\nGRAPHER_BACKEND_MRTG_WORKDIR=\"/srv/mrtg\"\n\n# Where IXP Manager can find the GRAPHER_BACKEND_MRTG_WORKDIR above. If mrtg is\n# running on the same server as IXP Manager, then this would just be the same:\nGRAPHER_BACKEND_MRTG_LOGDIR=\"/srv/mrtg\"\n# Note that if you wish to run MRTG on another server, you can expose the\n# WORKDIR on an HTTP server and provide a URL to this option:\n# GRAPHER_BACKEND_MRTG_LOGDIR=\"http://collector.example.com/mrtg\"\n
"},{"location":"grapher/mrtg/#generating-mrtg-configuration","title":"Generating MRTG Configuration","text":"

You can now generate a MRTG configuration by executing a command such as:

# Move to the directory where you have installed IXP Manager (typically: /srv/ixpmanager)\ncd $IXPROOT\n\n# Generate MRTG configuration and output to stdout:\nphp artisan grapher:generate-configuration -B mrtg\n\n# Generate MRTG configuration and output to a named file:\nphp artisan grapher:generate-configuration -B mrtg -O /tmp/mrtg.cfg.candidate\n

You could also run a syntax check before putting the resultant file live. Here's a complete example that could be run via cron:

#! /usr/bin/env bash\n\n# Set this to the directory where you have installed IXP Manager (typically: /srv/ixpmanager)\nIXPROOT=/srv/ixpmanager\n\n# Temporary configuration file:\nTMPCONF=/tmp/mrtg.cfg.$$\n\n# Synchronize configuration files\n${IXPROOT}/artisan grapher:generate-configuration -B mrtg -O $TMPCONF\n\n# Remove comments and date/time stamps for before comparing for differences\ncat /etc/mrtg.cfg    | egrep -v '^#.*$' | \\\n    egrep -v '^[ ]+Based on configuration last generated by.*$' >/tmp/mrtg.cfg.filtered\ncat $TMPCONF         | egrep -v '^#.*$' | \\\n    egrep -v '^[ ]+Based on configuration last generated by.*$' >${TMPCONF}.filtered\ndiff /tmp/mrtg.cfg.filtered ${TMPCONF}.filtered >/dev/null\nDIFF=$?\n\nrm /tmp/mrtg.cfg.filtered\nrm ${TMPCONF}.filtered\n\nif [[ $DIFF -eq 0 ]]; then\n    rm ${TMPCONF}\n    exit 0\nfi\n\n/usr/bin/mrtg --check ${TMPCONF}                 \\\n    && /bin/mv ${TMPCONF} /etc/mrtg.cfg          \\\n    && /etc/rc.d/mrtg_daemon restart > /dev/null\n

If your MRTG collector is on a different server, you could use a script such as the following to safely update MRTG via IXP Manager's API.

#! /usr/bin/env bash\n\n# Temporary configuration file:\nTMPCONF=/etc/mrtg.cfg.$$\n\n# Download the configuration via the API. Be sure to replace 'your_api_key'\n# with your actual API key (see API documentation).\ncurl --fail -s -H \"X-IXP-Manager-API-Key: your_api_key\" \\\n    https://ixp.example.com/api/v4/grapher/mrtg-config >${TMPCONF}\n\nif [[ $? -ne 0 ]]; then\n    echo \"WARNING: COULD NOT FETCH UP TO DATE MRTG CONFIGURATION!\"\n    exit 1\nfi\n\ncd /etc\n\n# Remove comments and date/time stamps before comparing for differences\ncat mrtg.cfg    | egrep -v '^#.*$' | \\\n    egrep -v '^[ ]+Based on configuration last generated by.*$' >mrtg.cfg.filtered\ncat ${TMPCONF}  | egrep -v '^#.*$' | \\\n    egrep -v '^[ ]+Based on configuration last generated by.*$' >${TMPCONF}.filtered\ndiff mrtg.cfg.filtered ${TMPCONF}.filtered >/dev/null\nDIFF=$?\n\nrm mrtg.cfg.filtered\nrm ${TMPCONF}.filtered\n\nif [[ $DIFF -eq 0 ]]; then\n    rm ${TMPCONF}\n    exit 0\nfi\n\n/usr/bin/mrtg --check ${TMPCONF}                 \\\n    && /bin/mv ${TMPCONF} /etc/mrtg.cfg          \\\n    && /etc/rc.d/mrtg_daemon restart > /dev/null\n

Note that the MRTG configuration that IXP Manager generates instructs MRTG to run as a daemon. On FreeBSD, MRTG comes with an initd script by default and you can kick it off on boot with something like the following in /etc/rc.conf:

mrtg_daemon_enable=\"YES\"\nmrtg_daemon_config=\"/etc/mrtg.cfg\"\n

On Ubuntu it does not, but it comes with a /etc/cron.d/mrtg file which kicks it off every five minutes (it will daemonize on the first run and subsequent cron runs will have no effect).

Marco d'Itri provided Ubuntu / Debian compatible systemd configurations for mrtg which you can find detailed in this Github issue.

To start and stop it via the older initd scripts on Ubuntu, use an initd script such as this: ubuntu-mrtg-initd (source):

cp ${IXPROOT}/tools/runtime/mrtg/ubuntu-mrtg-initd /etc/init.d/mrtg\nchmod +x /etc/init.d/mrtg\nupdate-rc.d mrtg defaults\n/etc/init.d/mrtg start\n

And disable the default cron job for MRTG on Ubuntu (/etc/cron.d/mrtg).

"},{"location":"grapher/mrtg/#customising-the-configuration","title":"Customising the Configuration","text":"

Generally speaking, you should not customize the way IXP Manager generates MRTG configuration as the naming conventions are tightly coupled to how IXP Manager fetches the graphs. However, if there are bits of the MRTG configuration you need to alter, you can do it via skinning. The skinning documentation actually uses MRTG as an example.

"},{"location":"grapher/mrtg/#inserting-traffic-data-into-the-database-reporting-emails","title":"Inserting Traffic Data Into the Database / Reporting Emails","text":"

The MRTG backend inserts daily summaries into MySQL for reporting. See the traffic_daily and traffic_daily_phys_ints database tables for this. Essentially, there is a row per day per customer in the first and a row per day per physical interface in the second, for the traffic types bits, discards, errors, broadcasts and packets. Each row has a daily, weekly, monthly and yearly value for average, max and total.

From IXP Manager >= v5.0, the task scheduler handles collecting and storing yesterday's data. If you are using an older version, create a cron job such as:

0 2   * * *   www-data        /srv/ixpmanager/artisan grapher:upload-stats-to-db\n5 2   * * *   www-data        /srv/ixpmanager/artisan grapher:upload-pi-stats-to-db\n

In the IXP Manager application, the traffic_daily data powers the League Table function and the traffic_daily_phys_int data powers the Utilisation (since v5.5.0) function - both on the left hand menu.

This data is also used to send email reports / notifications of various traffic events. A sample crontab for this would look like the following:

0 4   * * *   www-data        /srv/ixpmanager/artisan grapher:email-traffic-deltas    \\\n                                --stddev=1.5 -v user1@example.com,user2@example.com\n\n30 10 * * tue www-data        /srv/ixpmanager/artisan grapher:email-port-utilisations \\\n                                --threshold=80 user1@example.com,user2@example.com\n\n31 10 * * *   www-data        /srv/ixpmanager/artisan grapher:email-ports-with-counts \\\n                                --discards user1@example.com,user2@example.com\n\n32 10 * * *   www-data        /srv/ixpmanager/artisan grapher:email-ports-with-counts \\\n                                --errors user1@example.com,user2@example.com\n

Which, in the order above, do:

  1. Email a report of members whose average traffic has changed by more than 1.5 times their standard deviation.
  2. Email a report of all ports with >=80% utilisation yesterday (this uses the MRTG files as it predates the traffic_daily_phys_ints table).
  3. Email a report of all ports with a non-zero discard count yesterday.
  4. Email a report of all ports with a non-zero error count yesterday.

These generated emails are HTML formatted with embedded graph images.

"},{"location":"grapher/mrtg/#port-utilisation","title":"Port Utilisation","text":"

In IXP Manager v5.5.0, we introduced a port utilisation reporting function into IXP Manager's frontend UI. You will find it in the IXP STATISTICS section of the left hand side menu.

The purpose of this tool is to easily identify ports that are nearing or exceeding 80% utilisation. In its default configuration, IXP Manager will iterate over all the physical interface (switch ports) MRTG log files for every member and insert that information into the database at 02:10 (AM).

In the UI, when you select a specific date and period (day/week/month/year), you are shown the maximum port utilisation (in and out) for the given period up to 02:10 on that day.

This feature was introduced in March 2020 during the Coronavirus outbreak. After observing as much as 50% routine traffic increases across IXPs in areas under lock down, we needed a tool that would allow us to rapidly and easily view port utilisations across all members rather than looking at member graphs individually.

"},{"location":"grapher/mrtg/#troubleshooting","title":"Troubleshooting","text":""},{"location":"grapher/mrtg/#general-notes","title":"General Notes","text":""},{"location":"grapher/mrtg/#missing-graphs","title":"Missing Graphs","text":"

A common issue raised on the mailing list is missing customer graphs. The code which generates MRTG configuration is in the MRTG backend file (see the function getPeeringPorts()).

The conditions for a physical interface allocated to a customer to make the configuration file are:

If you are not sure about how ports are configured in IXP Manager, please see the interfaces document.

You can check the physical interface state by:

  1. go to the customer's overview page (select the customer from the dropdown menu on the top right).
  2. select the Ports tab.
  3. edit the port via the Pencil icon next to the connection you are interested in.
  4. find the physical interface under Physical Interfaces and edit it via the Pencil icon on the right hand side of the row.
  5. Status should be either Connected or Quarantine.

You can ensure a switch is active by:

  1. Select Switches from the left hand side IXP ADMIN ACTIONS menu.
  2. Click Include Inactive on the top right heading.
  3. Find the switch where the physical interface is and ensure Active is checked.
"},{"location":"grapher/sflow/","title":"Backend: sflow","text":"

Instructions for configuring the IXP Manager sflow integration can be found on the Sflow Introduction and Sflow peer-to-peer pages.

"},{"location":"grapher/smokeping/","title":"Backend: Smokeping","text":"

Latency graphs are a tool for monitoring latency / packet loss to the routers of IXP participants and they can be an invaluable asset when diagnosing many IXP issues.

While they should never be used as a tool for monitoring IXP latency or packet loss (as routers de-prioritise ICMP requests and/or may not have a suitably powerful management plane), they can act as an extremely useful tool for identifying and diagnosing customer / member issues. What we really look for here is recent changes over time.

Presuming Smokeping is installed, IXP Manager can configure it to monitor member routers and display those graphs on member statistics pages.

"},{"location":"grapher/smokeping/#historical-notes","title":"Historical Notes","text":"

If you have used Smokeping on IXP Manager <v4.5 or between v4.5 and v4.7.3, then how the configuration is generated has changed.