
First class DPDK support for conventional Intel Virtual Functions and Smart NICs #148

Closed
Levovar opened this issue Sep 17, 2019 · 1 comment
Labels
enhancement New feature or request

Comments


Levovar commented Sep 17, 2019

Is this a BUG REPORT or FEATURE REQUEST?:
feature

The current SR-IOV ecosystem works great if the workloads use it with kernel networking, or with Mellanox ConnectX cards.
But if the workload is a low-latency workload, possibly using DPDK, there is currently a dependency on network configuration steps performed by other infrastructure components outside CNI and DANM.
This feature aims to provide first-class native API support for these kinds of special workloads.

Basically, two improvements are needed.
1. Kernel driver selection
Both for conventional and "smart" NICs, the Virtual Function must be bound to a specific, configurable kernel driver other than the default.
SR-IOV CNI used to provide interfaces for such operations, but this capability was discontinued for some reason. As the application really should not have the privileges to execute this network management operation, it is DANM's burden to perform this change in a configurable manner.
A new Pod-level parameter called "driver" shall be introduced. Simply put, when this parameter is set, DANM will blindly bind the device to the configured driver.
We need Pod-level configuration because whether the workload uses DPDK or not depends on the process hosted inside it, not on the network.
You can connect DPDK, and non-DPDK workloads to the same network!
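For reference, driver rebinding for a PCI device is typically done through sysfs using the `driver_override` mechanism. The sketch below is a minimal illustration of the sequence of sysfs writes such a "driver" parameter would translate into; the function names are hypothetical and this is not DANM's actual implementation.

```go
package main

import "fmt"

// sysfsWrite describes a single write to a sysfs attribute.
type sysfsWrite struct {
	Path  string
	Value string
}

// rebindPlan returns the ordered sysfs writes needed to move a PCI
// Virtual Function (identified by its bus address, e.g. "0000:03:02.1")
// from its current kernel driver to the requested one, using the
// driver_override mechanism available since Linux 3.16.
func rebindPlan(pciAddr, driver string) []sysfsWrite {
	dev := "/sys/bus/pci/devices/" + pciAddr
	return []sysfsWrite{
		// 1. Detach the VF from whatever driver currently owns it.
		{Path: dev + "/driver/unbind", Value: pciAddr},
		// 2. Pin the desired driver so the next probe picks it.
		{Path: dev + "/driver_override", Value: driver},
		// 3. Ask the PCI core to re-probe the device.
		{Path: "/sys/bus/pci/drivers_probe", Value: pciAddr},
	}
}

func main() {
	// Example: bind a VF to vfio-pci, the driver DPDK apps commonly use.
	for _, w := range rebindPlan("0000:03:02.1", "vfio-pci") {
		fmt.Printf("echo %s > %s\n", w.Value, w.Path)
	}
}
```

Since the writes themselves require root access to sysfs, this is exactly the privileged step that should live in the CNI layer rather than in the application container.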

2. Conveying IPAM information to the process inside the container
Rebinding NICs to another kernel driver makes them "invisible" to "traditional" tools such as "ip", "ifconfig" etc.
However, IPAM is generally the responsibility of the CNI ecosystem within Kubernetes. DPDK apps will "provision" their own IPs, but the IP should still be the one chosen by the CaaS.
In the absence of the traditional interfaces, IPAM information needs to be conveyed by untraditional means - hoping the application decides to pick it up.
Sometimes one can only hope :)
Whichever way we choose to push the info into the container, it will introduce a CNI dependency. So we might as well choose the easiest, most frequently used approach: environment variables.


Levovar commented Oct 11, 2019

Point 1 unfortunately cannot be implemented with the current SR-IOV Device Plugin architecture.
The second requirement is implemented as per #155

@Levovar Levovar closed this as completed Oct 11, 2019