Is this a BUG REPORT or FEATURE REQUEST?:
feature
The current SR-IOV ecosystem works great if the workloads use it with kernel networking, or with Mellanox ConnectX cards.
But if the workload is a low-latency one, possibly using DPDK, there is currently a dependency on network configuration steps done by other infrastructure components, outside CNI and DANM.
This feature aims to provide first-class, native API support for this kind of special workload.
Basically two improvements are needed.
1. Kernel driver selection
For both conventional and "smart" NICs, it is required to bind the Virtual Function to a specific, configurable kernel driver other than the default.
SR-IOV CNI used to provide interfaces for such operations; however, this capability was discontinued for some reason. As the application really should not have the privileges to execute such a network management operation, it is DANM's burden to make this change in a configurable manner.
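For reference, the operation DANM would have to perform on the application's behalf is the standard sysfs rebind sequence (the same thing DPDK's `dpdk-devbind.py` does). A minimal Python sketch, with the sysfs root parameterized only so the logic can be exercised without root privileges:

```python
import os

def rebind_vf(pci_addr, driver, sysfs="/sys"):
    """Rebind a PCI Virtual Function to another kernel driver.

    Sketch of the sysfs sequence DANM would execute; `sysfs` is
    parameterized so the function can be tested against a fake tree.
    """
    dev = os.path.join(sysfs, "bus/pci/devices", pci_addr)
    # Tell the kernel which driver may claim this device next.
    with open(os.path.join(dev, "driver_override"), "w") as f:
        f.write(driver)
    # Detach the device from its current driver, if it is bound to one.
    cur = os.path.join(dev, "driver")
    if os.path.exists(cur):
        with open(os.path.join(cur, "unbind"), "w") as f:
            f.write(pci_addr)
    # Ask the target driver (e.g. vfio-pci) to claim the device.
    with open(os.path.join(sysfs, "bus/pci/drivers", driver, "bind"), "w") as f:
        f.write(pci_addr)
```

On a real node this requires root and an existing target driver module (e.g. `modprobe vfio-pci` beforehand).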
A new Pod-level parameter called "driver" shall be introduced. Simply put, when this parameter is filled, DANM will blindly bind the device to the configured driver.
We need Pod-level configuration because whether the workload uses DPDK or not depends on the process hosted inside it, not on the network.
You can connect DPDK and non-DPDK workloads to the same network!
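As an illustration, the proposed parameter could be set per interface in the Pod's existing `danm.k8s.io/interfaces` annotation. The schema below, including the "driver" key, is a sketch of the proposal, not a final API; network names and the image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-workload
  annotations:
    # "driver" is the proposed new per-interface attribute; the rest
    # follows the existing DANM interface annotation format.
    danm.k8s.io/interfaces: |
      [
        {"network": "sriov-a", "ip": "dynamic", "driver": "vfio-pci"},
        {"network": "management", "ip": "dynamic"}
      ]
spec:
  containers:
    - name: app
      image: dpdk-app:latest
```

This keeps driver selection per Pod (even per interface), so DPDK and non-DPDK consumers of the same network need no network-level changes.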
2. Conveying IPAM information to the process inside the container
Rebinding NICs to another kernel driver makes them "invisible" to "traditional" tools such as "ip", "ifconfig", etc.
However, IPAM information is generally the responsibility of the CNI ecosystem within Kubernetes. DPDK apps will "provision" their own IPs, but these should still be the IPs chosen by the CaaS.
In the absence of the traditional interfaces, IPAM information needs to be conveyed by untraditional means - and we can only hope the application decides to pick it up.
Sometimes one can only hope :)
Whichever way we choose to push the information into the container, it will introduce a CNI dependency. So we might as well choose the easiest, most frequently used approach: environment variables.
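For instance, the workload could read its CaaS-assigned address from the environment at startup. A minimal sketch; the `DANM_IP_<IFACE>` naming convention and the `ip/prefix` value format are purely illustrative, not a defined interface:

```python
import os

def ipam_from_env(iface, environ=os.environ):
    """Return the (ip, prefix_len) the CaaS assigned to `iface`.

    Assumes a hypothetical convention where the injected variable is
    named DANM_IP_<IFACE> and holds "address/prefix". Returns None if
    no address was conveyed for the interface.
    """
    raw = environ.get("DANM_IP_%s" % iface.upper())
    if raw is None:
        return None
    ip, _, prefix = raw.partition("/")
    return ip, int(prefix)
```

A DPDK app would call this once during initialization and configure the address on its own data path, instead of discovering it via netlink.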