Network adaptation service

NAS provides a CPS API for managing the NPU functionality. NAS holds the core functionality of a network switch/router for Layer 2/Layer 3 forwarding and switching.

NOTE: NAS should not be application or protocol aware; this allows applications to change without requiring a change to NAS.

NAS goals

  • Provide a northbound CPS API for third-party development, offering a series of building blocks on which applications can be constructed
  • Support the packet I/O functionality of the NPU
  • Integrate the Linux OS with the NPU functionality
  • Hold the core functionality for programming the various NPU functions, including Layer 2, Layer 3 routing, ACL, and QoS
  • Provide a single interface to multiple types of NPUs. NAS contains the NDI functionality, which interfaces with SAI. NDI aggregates and abstracts access to all networking device information, as well as topology details for programming each NPU.
  • Interface (at southbound) with the lower-level HAL (SAI), which holds the NPU abstraction as a thin shim layer with minimal intelligence.

NAS integrates with Linux in such a way that open source applications like Quagga and a standard DHCP server can be used without modification.

Linux adaptation

The NAS daemon integrates standard Linux network APIs with NPU hardware functionality, and registers and listens to networking (netlink) events.
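For illustration only, a minimal netlink listener of the kind described here might look like the sketch below. This is not the NAS implementation; it simply opens a NETLINK_ROUTE socket subscribed to link and IPv4 route groups, and the comments mark where a NAS-style daemon would translate each event into NPU programming calls.

```c
/* Minimal sketch of a netlink event listener; not the actual NAS code. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd < 0) { perror("socket"); return 1; }

    /* Subscribe to link (interface up/down) and IPv4 route change groups. */
    struct sockaddr_nl sa = { .nl_family = AF_NETLINK,
                              .nl_groups = RTMGRP_LINK | RTMGRP_IPV4_ROUTE };
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) { perror("bind"); return 1; }

    char buf[8192];
    for (;;) {
        ssize_t len = recv(fd, buf, sizeof(buf), 0);
        if (len <= 0) break;
        for (struct nlmsghdr *nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
             nh = NLMSG_NEXT(nh, len)) {
            switch (nh->nlmsg_type) {
            case RTM_NEWLINK:
            case RTM_DELLINK:
                /* A NAS-style daemon would program interface admin state here. */
                printf("link event\n");
                break;
            case RTM_NEWROUTE:
            case RTM_DELROUTE:
                /* ...and add/remove the corresponding NPU route entry here. */
                printf("route event\n");
                break;
            }
        }
    }
    close(fd);
    return 0;
}
```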

NAS Linux adaptation functionality

  • When the NAS daemon receives a netlink event, it processes the event contents and programs the NPU with the relevant information, such as enabling/disabling an interface, adding/removing a route, or adding/deleting a VLAN.
  • Internal tap devices are used to associate physical ports on the NPU with Linux interfaces. When the NAS detects a change in physical port status (up/down), the daemon propagates the new port status to the associated tap device.
  • Packet I/O describes control-plane packet forwarding between physical ports and associated Linux interfaces. Packet I/O is implemented as a standalone thread of the NAS daemon. Packets received by the NPU are forwarded to the CPU.
  • The packet I/O thread receives the packet through the SAI API callback. Each received packet contains the identity of the source physical port. The packet I/O module injects the packet into the tap device associated with the source physical port (see the sketch after this list).
  • Applications receive packets from the Linux IP stack using standard sockets.
  • Applications transmit packets using tap devices. The packet I/O thread receives these packets from the Linux IP stack and, based on the source tap device, forwards each packet to the associated physical port.
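The sketch below illustrates the tap-device side of packet I/O using the standard /dev/net/tun interface. The interface name is hypothetical, and the read/write calls only mark where the real packet I/O thread would move frames between the NPU (via SAI callbacks) and the Linux IP stack.

```c
/* Sketch of the tap-based packet I/O idea; interface name is illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

/* Attach to (or create) a tap device for one front-panel port. */
static int tap_open(const char *name)
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) return -1;

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;        /* raw Ethernet frames */
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { close(fd); return -1; }
    return fd;
}

int main(void)
{
    int fd = tap_open("e101-001-0");            /* hypothetical port name */
    if (fd < 0) { perror("tap_open"); return 1; }

    /* Rx path: a frame punted by the NPU for this port would be written
     * into the tap so the Linux IP stack (and applications) receive it. */
    unsigned char frame[64] = {0};
    (void)write(fd, frame, sizeof(frame));

    /* Tx path: frames applications send on this interface are read back
     * here and would be forwarded to the associated physical port. */
    unsigned char out[2048];
    ssize_t n = read(fd, out, sizeof(out));
    printf("read %zd bytes for transmission\n", n);

    close(fd);
    return 0;
}
```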

Network configuration

The NAS handles the networking functionality. The NAS daemon listens to netlink events for Layer 2 and Layer 3 configurations, and programs the NPU. See CPS API network configuration for more information.

Layer 2

NAS-Layer 2 provides the interface to enable core L2 functionality and related protocols. See NAS L2 and VLAN design for more information.

Link aggregation

Link aggregation (LAG), also known as interface bonding, joins multiple physical interfaces into a single virtual interface known as a bond interface. See Link aggregation for more information.

Link layer discovery protocol

Ethernet switches use LLDP to learn and distribute device information on network links. The information enables the switch to quickly identify a variety of devices, resulting in a LAN that interoperates smoothly and efficiently. See LLDP for more information.

Media access control

The NAS Layer 2 MAC module handles L2 FDB-related functionality. Its primary responsibilities are to provide FDB-related services and the port security features requested by applications. In turn, the NAS L2 MAC module programs the necessary information into the Linux kernel and the underlying NPU through the SAI module.

One of the important functions of this module is to interface with netlink messages. It listens to netlink messages for FDB/interface-related information, takes the necessary actions, and sends netlink/ioctl requests to configure the kernel as required. See MAC for more information.
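A minimal illustration of that FDB netlink interface is sketched below. It is not NAS code; it subscribes to neighbor events and filters for AF_BRIDGE entries, which is where a MAC module would pick up learned/aged MAC addresses before programming the NPU through SAI.

```c
/* Sketch: watching bridge FDB (MAC) updates over netlink; not NAS source code. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/neighbour.h>

int main(void)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd < 0) { perror("socket"); return 1; }

    /* Neighbor group carries both ARP/ND and bridge FDB notifications. */
    struct sockaddr_nl sa = { .nl_family = AF_NETLINK, .nl_groups = RTMGRP_NEIGH };
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) { perror("bind"); return 1; }

    char buf[8192];
    for (;;) {
        ssize_t len = recv(fd, buf, sizeof(buf), 0);
        if (len <= 0) break;
        for (struct nlmsghdr *nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
             nh = NLMSG_NEXT(nh, len)) {
            if (nh->nlmsg_type != RTM_NEWNEIGH && nh->nlmsg_type != RTM_DELNEIGH)
                continue;
            struct ndmsg *ndm = NLMSG_DATA(nh);
            if (ndm->ndm_family != AF_BRIDGE)
                continue;                      /* bridge FDB entries only */
            /* A NAS-style MAC module would extract the MAC (NDA_LLADDR) here
             * and program or flush the corresponding NPU FDB entry. */
            printf("FDB %s on ifindex %d\n",
                   nh->nlmsg_type == RTM_NEWNEIGH ? "add" : "del",
                   ndm->ndm_ifindex);
        }
    }
    close(fd);
    return 0;
}
```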

Layer 2 security

The NAS Layer 2 security module provides the L2 802.1x-related services requested by applications. This module facilitates the following services in the NAS layer via NAS L2 APIs:

  • Port-based authentication (multi-host): promiscuous MAC mode is set to trap packets for all clients.
  • MAC-based authentication (multi-auth): a specific client MAC address is set to trap packets for the port. Both of the above are achieved using hardware flags set for the specific port.
  • MAC authentication bypass: packets from unknown MAC addresses are punted to the CPU, and on to the 802.1x module in the CPS layer; source discard is applied for specific MAC addresses.

NAS STG

The NAS L2 STG module provides an interface for the CPS layer and third-party application developers to access NPU capabilities and run xSTP (RSTP/MSTP/PVST) protocols on top of it. The NAS STG module does not handle any protocol-level details, nor does it have any protocol knowledge.

NAS L2 STG is also responsible for receiving netlink events from the kernel to support the kernel STP protocol in the NPU, and it programs the STP state of kernel interfaces using netlink events on behalf of the CPS layer or a third-party STP protocol stack. Underneath, the NAS L2 STG module uses the SAI STP module to provide the NPU capabilities related to STG. See STG for more information.

Layer 3

The NAS-routing subsystem holds the core Layer 3 routing and forwarding information base (FIB) functionality. It maintains a consolidated view of the best routes and their adjacency database, received either from the Linux kernel routing table (KFIB) or from an external RTM via the CPS interface.

NAS-routing programs the correct L3 routes, their resolved L3 next-hops, and the associated adjacency information into the relevant L3 routing tables in the NPUs for Layer 3 forwarding. See NAS L3 for more information.
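As a purely illustrative data-structure sketch (all names and fields are hypothetical, not taken from the NAS sources), a consolidated FIB entry of the kind described above could combine the best route, its resolved next-hop adjacency, and the source of the route (kernel KFIB or external RTM):

```c
/* Hypothetical, simplified view of a consolidated FIB entry. */
#include <stdint.h>
#include <stdio.h>

struct nas_demo_next_hop {           /* resolved adjacency for a route */
    uint32_t ifindex;                /* egress interface */
    uint8_t  mac[6];                 /* resolved neighbor MAC */
};

struct nas_demo_route {              /* best route as seen by NAS-routing */
    uint32_t prefix;                 /* IPv4 prefix (host byte order) */
    uint8_t  prefix_len;
    uint8_t  nh_count;               /* ECMP next-hop count */
    struct nas_demo_next_hop nh[4];
    uint8_t  source;                 /* 0 = kernel (KFIB), 1 = external RTM via CPS */
};

/* Once the next-hops are resolved, the entry would be pushed to each NPU's
 * L3 route table (via NDI/SAI) for hardware forwarding. */
static void nas_demo_program_route(const struct nas_demo_route *r)
{
    printf("program %u.%u.%u.%u/%u with %u next-hop(s)\n",
           (r->prefix >> 24) & 0xff, (r->prefix >> 16) & 0xff,
           (r->prefix >> 8) & 0xff, r->prefix & 0xff,
           r->prefix_len, r->nh_count);
}

int main(void)
{
    struct nas_demo_route r = {
        .prefix = (10u << 24) | (1u << 8),        /* 10.0.1.0 */
        .prefix_len = 24, .nh_count = 1,
        .nh = { { .ifindex = 3, .mac = {0x00,0x11,0x22,0x33,0x44,0x55} } },
        .source = 0,
    };
    nas_demo_program_route(&r);
    return 0;
}
```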

Interface management

Linux interfaces are created during bootup and represent the physical ports on the NPU in a one-to-one mapping. The internal Linux interfaces allow applications to configure physical port parameters including MTU, port state, and link state. Linux interfaces also provide packet I/O functionality and support applications in control plane transmission (sending and receiving). See Interface management for more information.
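The sketch below shows how such physical port parameters map onto standard Linux interface configuration calls (SIOCSIFMTU and SIOCSIFFLAGS). The interface name is illustrative only; in NAS the same change would also be propagated to the corresponding NPU port.

```c
/* Sketch: configuring a Linux interface's MTU and admin state with standard
 * ioctls; the interface name "e101-001-0" is illustrative only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);      /* any socket works for these ioctls */
    if (fd < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "e101-001-0", IFNAMSIZ - 1);

    /* Set the MTU on the Linux interface; NAS would mirror this to the NPU port. */
    ifr.ifr_mtu = 9000;
    if (ioctl(fd, SIOCSIFMTU, &ifr) < 0) perror("SIOCSIFMTU");

    /* Bring the interface administratively up (read, modify, write flags). */
    if (ioctl(fd, SIOCGIFFLAGS, &ifr) == 0) {
        ifr.ifr_flags |= IFF_UP;
        if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) perror("SIOCSIFFLAGS");
    }

    close(fd);
    return 0;
}
```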

Network device interface (NDI)

The NDI abstracts NPU access for NAS components and provides a uniform interface to each NPU, irrespective of whether it is local or remote. It also maintains platform-specific details. The NDI layer is the part of NAS responsible for instantiating and providing the interface to the NPU via SAI.

Main functionality provided by NDI layer

  • Initializes SAI instances. NDI maintains an NDI NPU database for each NPU present in the system or line card running the base package. A SAI instance is created for each NPU and linked with the NDI NPU database.
  • Provides the mapping from NPU ID/port ID to SAI instance. Each NAS feature component implements the corresponding southbound ndi_xxx() APIs, which map to the SAI APIs stored in the NDI NPU database. A one-to-one mapping between each ndi_xxx() call and a SAI API is recommended, but an ndi_xxx() call may invoke multiple SAI APIs to improve readability and reduce overhead (a structural sketch follows this list).
  • Maintains default platform- and vendor-specific settings, such as port configuration and properties.
  • Interfaces with other platform components, such as PAL and physical media drivers, for optics-related functionality and any other miscellaneous platform-based functionality. It also provides an interface to PAL and any other modules that need access to NPU environmental data, such as the NPU temperature sensor reading.
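A structural sketch of the NDI NPU database idea follows. All names and fields here are hypothetical; the sketch only illustrates the npu_id-to-SAI-instance lookup that a southbound ndi_xxx() call would perform first.

```c
/* Hypothetical sketch: one SAI switch instance per NPU, looked up by npu_id. */
#include <stddef.h>
#include <stdint.h>

#define DEMO_MAX_NPU 4

struct demo_ndi_npu_entry {
    uint32_t npu_id;           /* logical NPU identifier used by NAS */
    uint64_t sai_switch_oid;   /* SAI switch object created for this NPU */
    void    *sai_apis;         /* per-NPU SAI API method tables */
    int      in_use;
};

static struct demo_ndi_npu_entry g_npu_db[DEMO_MAX_NPU];

/* ndi_xxx()-style calls would start by resolving npu_id to its SAI instance. */
static struct demo_ndi_npu_entry *demo_ndi_npu_lookup(uint32_t npu_id)
{
    return (npu_id < DEMO_MAX_NPU && g_npu_db[npu_id].in_use)
               ? &g_npu_db[npu_id] : NULL;
}

int main(void)
{
    /* Register a fake NPU 0 and resolve it, as an ndi_xxx() call would. */
    g_npu_db[0] = (struct demo_ndi_npu_entry){ .npu_id = 0, .sai_switch_oid = 0x1,
                                               .sai_apis = NULL, .in_use = 1 };
    return demo_ndi_npu_lookup(0) ? 0 : 1;
}
```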

See NDI design for more information.

NDI vs. SAI

  • NDI is a higher level of abstraction, and uses the SAI API
  • NDI aggregates and abstracts access to all networking devices (media/optics, FPGA, BFD functionality), while the SAI API implements only an SDK abstraction. For instance, ndi_set_port_admin_state(npu_id, port_id, enable) performs two functions (see the sketch at the end of this list):
    • Enable tx laser, enable rx on QSFP corresponding to NPU port
    • Enable NPU port
  • NDI implements per-platform initialization functions related to networking devices (such as CPLD, FPGA, and NPU pre-emphasis); SAI provides NPU-specific initialization, and only from the perspective of networking services
  • NDI supports multiple NPUs as part of the API; SAI supports a single NPU per SAI instance
  • NDI can create multiple SAI instances to support multiple NPUs
  • NDI supports NPU virtualization (where a remote NPU can be addressed by software running on a card to which the NPU is not physically connected); SAI supports only a locally connected NPU (for example, over a local PCI bus)
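As a rough sketch of the NPU-port half of such a call, the fragment below uses the public SAI port API. The function name, and the assumption that the SAI port object ID has already been resolved from the NDI NPU database, are illustrative; the optics half (QSFP tx laser/rx enable) would go through a platform media driver rather than SAI.

```c
/* Sketch only: the NPU-port half of an ndi_set_port_admin_state()-style call.
 * Assumes SAI has been initialized and the SAI port object ID for
 * (npu_id, port_id) has been resolved from the NDI NPU database. */
#include <sai.h>
#include <stdbool.h>

int demo_npu_port_admin_set(sai_object_id_t sai_port_oid, bool enable)
{
    sai_port_api_t *port_api = NULL;

    /* Fetch the SAI port API method table. */
    if (sai_api_query(SAI_API_PORT, (void **)&port_api) != SAI_STATUS_SUCCESS)
        return -1;

    /* Program the NPU port admin state. */
    sai_attribute_t attr;
    attr.id = SAI_PORT_ATTR_ADMIN_STATE;
    attr.value.booldata = enable;
    if (port_api->set_port_attribute(sai_port_oid, &attr) != SAI_STATUS_SUCCESS)
        return -1;

    /* NDI would additionally enable/disable the QSFP tx laser and rx path
     * through the platform media driver, which is outside the SAI API. */
    return 0;
}
```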