This repository contains all the files needed to create a Dockerised container implementation of the AMWA Networked Media Open Specifications. For more information about AMWA, NMOS and the Networked Media Incubator, please refer to http://amwa.tv/.
This work is principally based on the open-sourced implementation from Sony. Please see: https://github.com/sony/nmos-cpp
The resulting Docker container is specifically optimised to operate on a Mellanox switch, but can also function independently on many other platforms. Please see the overview presentation from the IP Showcase @ IBC 2019:
Specifically the implementation supports the following specifications:
- AMWA IS-04 NMOS Discovery and Registration Specification (supporting v1.0-v1.3)
- AMWA IS-05 NMOS Connection Management Specification (supporting v1.0-v1.1)
- AMWA IS-07 NMOS Event & Tally Specification (supporting v1.1)
- AMWA IS-08 NMOS Audio Channel Mapping Specification (supporting v1.0)
- AMWA IS-09 NMOS System Specification (originally defined in JT-NM TR-1001-1:2018 Annex A) (supporting v1.0)
- AMWA BCP-002-01 NMOS Grouping Recommendations - Natural Grouping
- AMWA BCP-003-01 NMOS API Security Recommendations - Securing Communications
Additionally, it supports the following components:
- Supports auto identification of the switch Boundary Clock PTP Domain which is published via the AMWA IS-09 System Resource when run on a Mellanox switch
- Supports an embedded NMOS Browser Client/Controller which supports NMOS Control using AMWA IS-05. This implementation does not currently support AMWA IS-08
- Supports an embedded MQTT Broker (mosquitto) to allow simplified use of the NMOS MQTT Transport type for AMWA IS-05 and IS-07
- Supports a DNS-SD Bridge to HTML implementation that supports both mDNS and DNS-SD
The nmos-cpp container includes implementations of the NMOS Node, Registration and Query APIs, and the NMOS Connection API. It also includes an NMOS Browser Client/Controller written in JavaScript, an MQTT Broker and a DNS-SD API, which are not part of the specifications.
The NVIDIA NMOS docker container has now passed the stringent testing required by JT-NM for both Registries and Controllers. The container was tested whilst running on a Mellanox Spectrum/Spectrum-2 switch using the Onyx Docker subsystem. You can access the JT-NM testing matrix here.
In addition, the container has been successfully tested in AMWA Networked Media Incubator workshops.
The Dockerfile in this repository is designed so that, if needed, it can be built under the Docker experimental BuildX CLI feature set. The container is published for the following CPU architectures:
- Intel and AMD x86_64 64-bit architectures
- ARMv8 AArch64 (64-bit ARM architecture)
- ARMv7 AArch32 (32-bit ARM architecture)
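A multi-architecture build of this kind can be sketched with the BuildX CLI as follows (the image tag is illustrative; the platform list matches the architectures above):

```shell
# Illustrative only: multi-arch build/push with the experimental BuildX feature set
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --tag example/nmos-cpp:latest \
  --push .
```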
The container has been tested on the following platforms for compatibility:
- Mellanox SN2000, SN3000 and SN4000 Series switches
- Mellanox Bluefield family of SmartNICs (operating natively on the SmartNIC ARM cores)
- NVIDIA Jetson AGX Xavier Developer Kit (although not explicitly tested, the container should function on all NVIDIA AGX platforms)
- Raspberry Pi RPi 3 Model B and RPi 4 Model B (both Raspbian's standard 32-bit and the new experimental 64-bit kernels have been tested)
- Standard Intel and AMD servers running the container under Ubuntu Linux and Windows - both bare-metal and virtualised environments have been tested
The NVIDIA NMOS container, like the NMOS Specifications, is intended to be always ready, but continually developing. To ease development overheads and to continually validate the status of the container it now undergoes CI Testing via GitHub Actions. This CI testing is meant as a sanity check around the container functionality rather than extensive testing of nmos-cpp functionality itself. Please see wider Sony CI Testing for deeper testing on nmos-cpp.
The following configuration, defined by the ci-build-test-publish job, is built and unit tested automatically via continuous integration. If the tests complete successfully the container is published directly to Docker Hub and also saved as an artifact against the GitHub Action Job. Additional configurations may be added in the future.
Platform | Version | Configuration Options |
---|---|---|
Linux | Ubuntu 18.04 (GCC 7.5.0) | Avahi |
The AMWA NMOS API Testing Tool is automatically run against the built NMOS container operating in both "nmos-node" and "nmos-registry" configurations.
Test Suite Result/Status:
Prerequisites:
- Run Onyx version 3.8.2000 or later
- Set an accurate date and time on the switch - Use PTP, NTP or set manually using the "clock set" command
- Create "interface vlans" for all VLANs on which you want the container to be exposed
Execute the following switch commands to download and run the container on the switch:
- Login as administrator to the switch CLI
- "docker" - Enables the Docker subsystem on the switch (Make sure you exit the docker menu tree using "exit")
- "docker no shutdown" - Activates Docker on the switch
- "docker pull rhastie/nmos-cpp:latest" - Pulls the latest version of the Docker container from Docker Hub
- "docker start rhastie/nmos-cpp latest nmos now privileged network" - Start Docker container immediately
- "docker no start nmos" - Stops the Docker container
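Putting the steps above together, a switch CLI session might look roughly like the following (the prompts and configuration modes shown are illustrative; only the commands themselves come from the list above):

```
switch > enable
switch # configure terminal
switch (config) # docker
switch (config docker) # exit
switch (config) # docker no shutdown
switch (config) # docker pull rhastie/nmos-cpp:latest
switch (config) # docker start rhastie/nmos-cpp latest nmos now privileged network
```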
Additional/optional steps:
On a Mellanox switch, the DNS configuration used by the container is inherited from the switch configuration:
- If you want to configure a DNS server for use by the container you can use the "ip name-server" switch command to specify a DNS server. By default, the container will use any DNS servers provided by DHCP
- If you want to configure a DNS search domain for the container you can use the "ip domain-list" switch command to specify DNS search domains. By default, the container will use any DNS search domains provided by DHCP. In the absence of any being configured, it will default to ".local", i.e. mDNS
- If you want to understand the current DNS configuration use the switch command "show hosts"
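For example, assuming a DNS server at 192.0.2.53 and a search domain of nmos.example.com (both placeholders):

```
switch (config) # ip name-server 192.0.2.53
switch (config) # ip domain-list nmos.example.com
switch (config) # show hosts
```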
Prerequisites:
- It's generally recommended to use the Ubuntu 18.04+ based BFB (Bluefield bootstream) image, as this contains all necessary drivers and the OS as a single bundle. See the download page
- Have an accurate date and time
- Make sure external connectivity and name resolution are available from the SmartNIC Ubuntu OS - There are several ways that this can be done. Please review the Bluefield documentation
- Docker is generally provided as part of the Mellanox BFB image, but if not available, install a full Docker CE environment using the official instructions
- Set docker permission for your host user
Execute the following Linux commands to download and run the container on the host:

```shell
docker pull rhastie/nmos-cpp:latest
docker run -it --net=host --privileged --rm rhastie/nmos-cpp:latest
```
Prerequisites:
- It's generally recommended to run the very latest JetPack from NVIDIA (JetPack 4.3 at the time of testing)
- Have an accurate date and time
- Docker is generally provided as part of the NVIDIA JetPack image, but if not available, install a full Docker CE environment using the official instructions
- Set docker permission for your host user
Execute the following Linux commands to download and run the container on the host:

```shell
docker pull rhastie/nmos-cpp:latest
docker run -it --net=host --privileged --rm rhastie/nmos-cpp:latest
```
Prerequisites:
- It's generally recommended to run the latest version of Raspbian (Buster at the time of testing)
- Have an accurate date and time
- If using Raspbian Buster, you can install Docker using "sudo apt-get install docker.io". If using older versions of Raspbian, install a full Docker CE environment using the official instructions
- Set docker permission for your host user
Execute the following Linux commands to download and run the container on the host:

```shell
docker pull rhastie/nmos-cpp:latest
docker run -it --net=host --privileged --rm rhastie/nmos-cpp:latest
```
Prerequisites:
- It's generally recommended to run using Ubuntu 18.04+
- Have an accurate date and time
- Install a full Docker CE environment using the official instructions
- Set docker permission for your host user
Execute the following Linux commands to download and run the container on the host:

```shell
docker pull rhastie/nmos-cpp:latest
docker run -it --net=host --privileged --rm rhastie/nmos-cpp:latest
```
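The prerequisite lists above all ask you to set Docker permissions for your host user. A minimal sketch of checking this on a typical Linux host (it assumes the default "docker" group created by a Docker CE install):

```shell
# Check whether the current user is already in the "docker" group;
# if not, print the usual fix (which requires logging out and back in).
if id -nG | grep -qw docker; then
    echo "docker group: ok"
else
    echo 'run: sudo usermod -aG docker "$USER", then log out and back in'
fi
```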
The container publishes on all available IP addresses using port 8010:
- Browse to http://[Switch or Host IP Address]:8010 to get to the Web GUI interface.
- The NMOS Registry is published on the "x-nmos" URL
- The NMOS Browser Client/Controller is published on the "admin" URL
The container also contains an implementation of an NMOS Virtual Node, which can simulate a node attaching to the registry/controller. Importantly, a single instance of the container can run either the registry/controller or the node, but not both at the same time. If you need both operating, simply start a second instance of the container.
By design, the container is configured not to run the node implementation by default; however, you can override this default using two different approaches:
There is a Docker environment variable available that will override the default execution of the container and start the NMOS Virtual Node. Use the following command to start the container using this variable:
```shell
docker run -it --net=host --name nmos-registry --rm -e "RUN_NODE=TRUE" rhastie/nmos-cpp:latest
```
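Because a single instance runs either the registry/controller or the virtual node, running both on one host means starting two containers. A sketch (the container names are illustrative):

```shell
# Instance 1: registry/controller (the container's default behaviour)
docker run -d --net=host --privileged --rm --name nmos-registry rhastie/nmos-cpp:latest

# Instance 2: NMOS Virtual Node, enabled via the RUN_NODE override
docker run -d --net=host --privileged --rm --name nmos-node -e "RUN_NODE=TRUE" rhastie/nmos-cpp:latest
```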
You can use the process below to build the container so that the default execution is changed and the container executes the NMOS Virtual Node at runtime, without needing an environment variable to be set.
Below are some brief instructions on how to build the container. There are several additional commands available, and it's suggested you review the Makefile in the repository.
- Make sure you have a fully functioning Docker CE environment. It is recommended you follow the instructions for Ubuntu
- Clone this repository to your host
- Run:
```shell
make build
```
- Make sure you have a fully functioning Docker CE environment. It is recommended you follow the instructions for Ubuntu
- Clone this repository to your host
- Run:
```shell
make buildnode
```
Please note the container will be built with a "-node" suffix applied to avoid any confusion.