diff --git a/kube-loxilb/index.html b/kube-loxilb/index.html index f5695765..fb8cbd9a 100644 --- a/kube-loxilb/index.html +++ b/kube-loxilb/index.html @@ -2372,15 +2372,11 @@
kubectl apply -f bgp-peer.yaml
kubectl apply -f bgp-peer.yaml
+
You can verify it in two ways: the first through loxicmd (inside the loxilb container), and the second through kubectl.
# loxicmd
kubectl exec -it {loxilb} -n kube-system -- loxicmd get bgpneigh
diff --git a/search/search_index.json b/search/search_index.json
index e8ea59c3..c3a05d4c 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":""},{"location":"#welcome-to-loxilb-documentation","title":"Welcome to loxilb documentation","text":""},{"location":"#background","title":"Background","text":"loxilb started as a project to ease deployments of cloud-native/kubernetes workloads for the edge. When we deploy services in public clouds like AWS/GCP, the services becomes easily accessible or exported to the outside world. The public cloud providers, usually by default, associate load-balancer instances for incoming requests to these services to ensure everything is quite smooth.
However, for on-prem and edge deployments, there is no service type - external load balancer provider available by default. For a long time, MetalLB was the only real choice for those who needed it. But edge services are a different ball game altogether, because there are so many exotic protocols in play, like GTP, SCTP, SRv6 etc, and integrating everything into a seamlessly working solution has been quite difficult.
The loxilb dev team was approached by many people who wanted to solve this problem. As a first step, it became apparent that the networking stack provided by the Linux kernel, although very solid, really lacked the development-process agility to quickly provide support for a wide variety of permutations and combinations of protocols and stateful load-balancing on them. Our search led us to the awesome tech developed by the Linux community - eBPF. The flexibility to introduce new functionality into the OS kernel as a safe sandbox program was a complete fit for our design philosophy. It also does not need any dedicated CPU cores, which makes it perfect for designing energy-efficient edge architectures.
"},{"location":"#what-is-loxilb","title":"What is loxilb","text":"loxilb is an open source cloud-native load-balancer based on GoLang/eBPF with the goal of achieving cross-compatibity across a wide range of on-prem, public-cloud or hybrid K8s environments. loxilb is being developed to support the adoption of cloud-native tech in telco, mobility, and edge computing.
"},{"location":"#kubernetes-with-loxilb","title":"Kubernetes with loxilb","text":"Kubernetes defines many service constructs like cluster-ip, node-port, load-balancer, ingress etc for pod to pod, pod to service and outside-world to service communication.
All these services are provided by load-balancers/proxies operating at Layer4/Layer7. Since Kubernetes is highly modular, these services can be provided by different software modules. For example, kube-proxy is used by default to provide cluster-ip and node-port services.
Service type load-balancer is usually provided by public cloud-provider(s) as a managed entity. But for on-prem and self-managed clusters, there are only a few good options available. Even for provider-managed K8s like EKS, there are many who would want to bring their own LB to clusters running anywhere. loxilb provides service type load-balancer as its main use-case. loxilb can be run in-cluster or ext-to-cluster as per user need.
loxilb works as a L4 load-balancer/service-proxy by default. Although L4 load-balancing provides great performance and functionality, at times, an equally performant L7 load-balancer is also necessary in K8s for various use-cases. loxilb also supports L7 load-balancing in the form of a Kubernetes Ingress implementation. This also benefits users who need L4 and L7 load-balancing under the same hood.
Additionally, loxilb also supports: - [x] kube-proxy replacement with eBPF(full cluster-mesh implementation for Kubernetes) - [x] Ingress Support - [x] Kubernetes Gateway API - [ ] Kubernetes Network Policies (in-progress)
"},{"location":"#telco-cloud-with-loxilb","title":"Telco-Cloud with loxilb","text":"For deploying telco-cloud with cloud-native functions, loxilb can be used as a SCP(service communication proxy). SCP is a communication proxy defined by 3GPP and aimed at optimizing telco micro-services running in cloud-native environment. Read more about it here.
Telco-cloud requires load-balancing and communication across various interfaces/standards like N2, N4, E2(ORAN), S6x, 5GLAN, GTP etc. Each of these presents its own unique challenges, which loxilb aims to solve e.g.: - N4 requires PFCP level session-intelligence - N2 requires NGAP parsing capability(Related Blogs - Blog-1, Blog-2, Blog-3) - S6x requires Diameter/SCTP multi-homing LB support(Related Blog) - MEC use-cases might require UL-CL understanding(Related Blog) - Hitless failover support might be essential for mission-critical applications - E2 might require SCTP-LB with OpenVPN bundled together - SIP support is needed to enable cloud-native VOIP
"},{"location":"#why-choose-loxilb","title":"Why choose loxilb?","text":" Performs
much better compared to its competitors across various architectures - Single-Node Performance
- Multi-Node Performance
- Performance on ARM
- Short Demo on Performance
- Utilizes eBPF which makes it
flexible
as well as customizable
- Advanced
quality of service
for workloads (per LB, per end-point or per client) - Works with
any
Kubernetes distribution/CNI - k8s/k3s/k0s/kind/OpenShift + Calico/Flannel/Cilium/Weave/Multus etc - Extensive support for
SCTP workloads
(with multi-homing) on k8s - Dual stack with
NAT66, NAT64
support for k8s - k8s
multi-cluster
support (planned \ud83d\udea7) - Runs in
any
cloud (public cloud/on-prem) or standalone
environments
"},{"location":"#overall-features-of-loxilb","title":"Overall features of loxilb","text":" - L4/NAT stateful loadbalancer
- NAT44, NAT66, NAT64 with One-ARM, FullNAT, DSR etc
- Support for TCP, UDP, SCTP (w/ multi-homing), QUIC, FTP, TFTP etc
- High-availability support with hitless/maglev/cgnat clustering
- Extensive and scalable end-point liveness probes for cloud-native environments
- Stateful firewalling and IPSEC/Wireguard support
- Optimized implementation for features like Conntrack, QoS etc
- Full compatibility for ipvs (ipvs policies can be auto inherited)
- Policy oriented L7 proxy support - HTTP1.0, 1.1, 2.0 etc (planned \ud83d\udea7)
"},{"location":"#components-of-loxilb","title":"Components of loxilb","text":" - GoLang based control plane components
- A scalable/efficient eBPF based data-path implementation
- Integrated goBGP based routing stack
- A kubernetes agent kube-loxilb written in Go
"},{"location":"#architectural-considerations","title":"Architectural Considerations","text":" - Understanding loxilb modes and deployment in K8s with kube-loxilb
- Understanding High-availability with loxilb
"},{"location":"#getting-started-with-different-k8s-distributions-tools","title":"Getting started with different K8s distributions & tools","text":""},{"location":"#loxilb-as-ext-cluster-pod","title":"loxilb as ext-cluster pod","text":" - K3s : loxilb with default flannel
- K3s : loxilb with calico
- K3s : loxilb with cilium
- K0s : loxilb with default kube-router networking
- EKS : loxilb ext-mode
"},{"location":"#loxilb-as-in-cluster-pod","title":"loxilb as in-cluster pod","text":" - K3s : loxilb in-cluster mode
- K0s : loxilb in-cluster mode
- MicroK8s : loxilb in-cluster mode
- EKS : loxilb in-cluster mode
"},{"location":"#loxilb-as-service-proxy","title":"loxilb as service-proxy","text":" - K3s : loxilb service-proxy with flannel
- K3s : loxilb service-proxy with calico
"},{"location":"#loxilb-as-kubernetes-ingress","title":"loxilb as Kubernetes Ingress","text":" - K3s: How to run loxilb-ingress
"},{"location":"#loxilb-in-standalone-mode","title":"loxilb in standalone mode","text":" - Run loxilb standalone
"},{"location":"#advanced-guides","title":"Advanced Guides","text":" - How-To : Service-group zones with loxilb
- How-To : Access end-points outside K8s
- How-To : Deploy multi-server K3s HA with loxilb
- How-To : Deploy loxilb with multi-AZ HA support in AWS
- How-To : Deploy loxilb with multi-cloud HA support
- How-To : Deploy loxilb with ingress-nginx
"},{"location":"#knowledge-base","title":"Knowledge-Base","text":" - What is eBPF
- What is k8s service - load-balancer
- Architecture in brief
- Code organization
- eBPF internals of loxilb
- What are loxilb NAT Modes
- loxilb load-balancer algorithms
- Manual steps to build/run
- Debugging loxilb
- loxicmd command-line tool usage
- Developer's guide to loxicmd
- Developer's guide to loxilb API
- API Reference - loxilb web-Api
- Performance Reports
- Development Roadmap
- Contribute
- System Requirements
- Frequently Asked Questions - FAQs
"},{"location":"#blogs","title":"Blogs","text":" - K8s - Elevating cluster networking
- eBPF - Map sync using Go
- K8s in-cluster service LB with LoxiLB
- K8s - Introducing SCTP Multihoming with LoxiLB
- Load-balancer performance comparison on Amazon Graviton2
- Hyperscale anycast load balancing with HA
- Getting started with loxilb on Amazon EKS
- K8s - Deploying \"hitless\" Load-balancing
- Oracle Cloud - Hitless HA load balancing
- Ipv6 migration in Kubernetes made easy
"},{"location":"#community-posts","title":"Community Posts","text":" - 5G SCTP LoadBalancer Using loxilb
- 5G Uplink Classifier Using loxilb
- K3s - Using loxilb as external service lb
- K8s - Bringing load-balancing to multus workloads with loxilb
- 5G SCTP LoadBalancer Using LoxiLB on free5GC
- Kubernetes Services: Achieving optimal performance is elusive
"},{"location":"#research-papers-featuring-loxilb","title":"Research Papers (featuring loxilb)","text":" - Mitigating Spectre-PHT using Speculation Barriers in Linux BPF
"},{"location":"#latest-cicd-status","title":"Latest CICD Status","text":" -
Features(Ubuntu20.04)
-
Features(Ubuntu22.04)
-
K3s Tests
-
K8s Cluster Tests
-
EKS Test
"},{"location":"api-dev/","title":"loxilb API Development Guide","text":"For building and extending LoxiLB API server.
"},{"location":"api-dev/#api-source-architecture","title":"API source Architecture","text":".\n\u251c\u2500\u2500 certification\n\u2502 \u251c\u2500\u2500 serverca.crt\n\u2502 \u2514\u2500\u2500 serverkey.pem\n\u251c\u2500\u2500 cmd\n\u2502 \u2514\u2500\u2500 loxilb_rest_api-server\n\u2502 \u2514\u2500\u2500 main.go\n\u251c\u2500\u2500 \u2026.\n\u251c\u2500\u2500 models\n\u2502 \u251c\u2500\u2500 error.go\n\u2502 \u251c\u2500\u2500 \u2026..\n\u251c\u2500\u2500 restapi\n\u2502 \u251c\u2500\u2500 configure_loxilb_rest_api.go\n\u2502 \u251c\u2500\u2500 \u2026..\n\u2502 \u251c\u2500\u2500 handler\n\u2502 \u2502 \u251c\u2500\u2500 common.go\n\u2502 \u2502 \u2514\u2500\u2500\u2026..\n\u2502 \u251c\u2500\u2500 operations\n\u2502 \u2502 \u251c\u2500\u2500 get_config_conntrack_all.go\n\u2502 \u2502 \u2514\u2500\u2500 \u2026.\n\u2502 \u2514\u2500\u2500 server.go\n\u2514\u2500\u2500 swagger.yml\n
* Except for the ./api/restapi/handler and ./api/certification directories, the rest of the contents are automatically created. * Add the logic for the function to the handler directory. * Add logic to file ./api/restapi/configure_loxilb_rest_api.go - Swagger.yml file update
paths:\n '/additional/url/{param}':\n get:\n summary: Test Swagger API Server.\n description: Check Swagger API server. This basic information or architecture is for the later applications.\n parameters:\n - name: param\n in: path\n required: true\n type: string\n format: string\n description: Description of the additional url\n responses:\n '204':\n description: OK\n '400':\n description: Malformed arguments for API call\n schema:\n $ref: '#/definitions/Error'\n '401':\n description: Invalid authentication credentials\n
- path.{Set path and parameter URL}.{get,post,etc RESTful setting}.{Description}
- {Set path and parameter URL} Set the path used in the RESTful API. It begins with \"config/\" and is defined as a sub-category of a larger category. Define the parameters using the symbol {param}. The parameters are defined in the description section.
- {get,post,etc RESTful setting} Use get, post, delete, and patch to define queries, registrations, deletions, and modifications.
-
{Description} Summary description of API Detailed description of API Parameters Set the name, path, etc. Define the content of the response
-
Creating Additional Parts with Swagger
# alias swagger='docker run --rm -it --user $(id -u):$(id -g) -e GOPATH=$(go env GOPATH):/go -v $HOME:$HOME -w $(pwd) quay.io/goswagger/swagger'\n# swagger generate server\n
-
Development of Additional Partial Handlers
package handler\n\nimport (\n \"fmt\"\n\n \"github.com/go-openapi/runtime/middleware\"\n\n \"testswagger.com/restapi/operations\"\n)\n\nfunc ConfigAdditionalUrl(params operations.GetAdditionalUrlParams) middleware.Responder {\n /////////////////////////////////////////////\n // Add logic Here //\n ////////////////////////////////////////////.\n return &ResultResponse{Result: fmt.Sprintf(\"params.param : %s\", params.param)}\n}\n
-
Write the logic required for the ConfigAdditionalUrl handler in the handler directory. The required parameters come from operations.GetAdditionalUrlParams.
-
Additional Partial Handler Registration
func configureAPI(api *operations.LoxilbRestAPIAPI) http.Handler {\n ...... \n // Change it by putting a function here\n api.GetAdditionalUrlHandler = operations.GetAdditionalUrlHandlerFunc(handler.ConfigAdditionalUrl)\n \u2026.\n}\n
- Stubs of the form api.{REST}...Handler are generated automatically; remove the generated \"if nil\" placeholder and register your handler through the operation function.
- In some cases, the stub is not generated automatically. In that case, you can add the function manually; its name is a combination of the method, URL, and parameters.
"},{"location":"api/","title":"loxilb api-reference","text":""},{"location":"api/#loxilb-web-apis","title":"loxilb Web-APIs","text":"Loxilb can be fully configured using extensive list of RestAPI. Refer to the API Documentation.
"},{"location":"arch/","title":"loxilb architecture and modules","text":"loxilb consists of the following modules :
-
loxilb CCM plugin
It fully implements the K8s CCM load-balancer interface and talks to the goLang based loxilb process using Restful APIs. Although loxilb CCM is logically shown as part of the loxilb cluster nodes, it will usually run on the worker/master nodes of the K8s cluster. LoxiCCM can easily be used as part of any CCM operator implementation.
-
loxicmd
loxicmd is a command-line tool to configure and dump loxilb information; it is based on the same foundation as the wildly popular kubectl tool.
- loxilb
loxilb is a modern goLang based framework (process) which maintains information coming in from various sources e.g. the apiserver, and populates the eBPF maps used by the loxilb eBPF kernel. It is also responsible for loading eBPF programs to the interfaces. It also acts as a client to goBGP to exchange routes based on information from loxilb CCM. Last but not least, it is responsible for maintaining HA state sync with its remote peers. Almost all serious lb implementations need to be deployed as an HA cluster.
- loxilb eBPF kernel
eBPF kernel module implements the data-plane of loxilb which provides complete kernel bypass. It is a fully self contained and feature-rich stack able to process packets from rx to tx without invoking linux native kernel networking.
- goBGP
Although goBGP is a separate project, loxilb has adopted and integrated with goBGP as its routing stack of choice. We also hope to develop features for this awesome project in the future.
- DashBoards
Grafana based dashboards to provide highly dynamic insight into loxilb state.
The following is a typical loxilb deployment topology (Currently HA implementation is in development) :
"},{"location":"aws-multi-az/","title":"How-To - Deploy loxilb with multi-AZ HA support in AWS","text":""},{"location":"aws-multi-az/#deploy-loxilb-with-multi-az-ha-support-in-aws","title":"Deploy LoxiLB with multi-AZ HA support in AWS","text":"LoxiLB supports stateful HA configuration in various cloud environments such as AWS. Especially for AWS, one can configure HA using the Floating IP pattern, together with LoxiLB.
The HA configuration described in the above document has certain limitations. It could only be configured within a single Availability-Zone(AZ). The HA instances need to share the VIP of the same subnet in order to provide a single access point to users, but this configuration was so far not possible in a multi-AZ environment. This blog explains how to deploy LoxiLB in a multi-AZ environment and configure HA.
"},{"location":"aws-multi-az/#overall-scenario","title":"Overall Scenario","text":"Two LoxiLB instances - loxilb1 and loxilb2 will be deployed in different AZs. These two loxilbs form a HA pair and operate in active-backup roles.
The active loxilb1 instance is additionally assigned a secondary network interface called loxi-eni. The loxi-eni network interface has a private IP (192.168.248.254 in this setup) which is used as a secondary IP.
loxilb1 associates this 192.168.248.254 secondary IP with a user-specified public ElasticIP address. When a user accesses the EKS service externally using an ElasticIP address, this traffic is NATed to the 192.168.248.254 IP and delivered to the active loxilb instance. The active loxilb instance can then load balance the traffic to the appropriate endpoint in EKS.
If loxilb1 goes down due to any reason, the status of loxilb2, which was backup previously, changes to active.
During this transition, the loxilb2 instance is assigned a new loxi-eni secondary network interface, and the 192.168.248.254 IP used by the original master \"loxilb1\" is set on the secondary network interface of loxilb2.
The ElasticIP used by the user is also (re)associated to the 192.168.248.254 private IP address of the \"new\" active instance. This makes it possible to maintain active sessions even during failover or situations where there is a need to upgrade the original master instance etc.
To summarize, when a failover occurs the public ElasticIP address is always associated to the active LoxiLB instance, so users who were previously accessing EKS using the same ElasticIP address can continue to do so without being affected by any node failure or other issues.
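To illustrate the mechanism only (a hedged sketch, not what loxilb literally executes; loxilb drives the equivalent operation through the EC2 API, and the IDs below are placeholders), the re-association performed at failover is conceptually: aws ec2 associate-address --allocation-id <eipalloc-id> --network-interface-id <new-loxi-eni-id> --private-ip-address 192.168.248.254 --allow-reassociation\n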
"},{"location":"aws-multi-az/#an-example-scenario","title":"An example scenario","text":"We will use eksctl to create an EKS cluster. To use eksctl, we need to register authentication information through AWS CLI. Instructions for installing aws CLI & eksctl etc are omitted in this document and can be found in AWS.
Using eksctl, let's create an EKS cluster with the following command. For this test, we are using AWS's Osaka region, so using ap-northeast-3
in the --region option.
eksctl create cluster \\\n --version 1.24 \\\n --name multi-az-eks \\\n --vpc-nat-mode Single \\\n --region ap-northeast-3 \\\n --node-type t3.medium \\\n --nodes 2 \\\n --with-oidc \\\n --managed\n
After running the above, we will have an EKS cluster with two nodes named \"multi-az-eks\"."},{"location":"aws-multi-az/#configuring-loxilb-ec2-instances","title":"Configuring LoxiLB EC2 Instances","text":""},{"location":"aws-multi-az/#create-loxilb-subnet","title":"Create LoxiLB subnet","text":"After configuring EKS, it's time to configure the LoxiLB instances. Let's create the subnets that each of the LoxiLB instances will use.
The LoxiLB instances will each be created in a different AZ. Therefore, the subnets to be used by the instances will also be created in different AZs: AZ-a and AZ-b.
First, create a subnet loxilb-subnet-a in ap-northeast-3a with the subnet 192.168.218.0/24.
Similarly, create a subnet loxilb-subnet-b in ap-northeast-3b with the subnet 192.168.228.0/24.
After creating the subnets, we can double-check the \"Enable auto-assign public IPv4 address\" setting so that interfaces connected to each subnet are automatically assigned a public IP.
"},{"location":"aws-multi-az/#aws-route-table","title":"AWS Route table","text":"Newly created subnets automatically use the default route table. We will connect the default route table to the internet gateway so that users can access the LoxiLB instance from outside.
"},{"location":"aws-multi-az/#loxilb-iam-settings","title":"LoxiLB IAM Settings","text":"LoxiLB instances require permission to access the AWS EC2 API to associate ElasticIPs and create secondary interfaces and subnets.
We will create a role with the following IAM policy for LoxiLB EC2 instances.
{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": \"*\",\n \"Resource\": \"*\"\n }\n ]\n}\n
"},{"location":"aws-multi-az/#loxilb-ec2-instance-creation","title":"LoxiLB EC2 instance creation","text":"We will create two LoxiLB instances for this example and connect the instances wits subnets A and B created above.
Specify the IAM role created above in the IAM instance profile under the Advanced details settings.
After the instance is created, go to the Action \u2192 networking \u2192 Change Source/destination check menu in the instance menu and disable this check. Since LoxiLB is a load balancer, this configuration must be disabled for LoxiLB to operate properly. The same change can also be made from the CLI, as shown below.
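A minimal sketch of the equivalent AWS CLI call (the instance ID is a placeholder; this is an assumption, not part of the original guide): aws ec2 modify-instance-attribute --instance-id <loxilb-instance-id> --no-source-dest-check\n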
"},{"location":"aws-multi-az/#create-elastic-ip","title":"Create Elastic IP","text":"Next we will create an Elastic IP to use to access the service from outside.
For this example, the IP 13.208.x.x
was assigned. The Elastic IP is used when deploying kube-loxilb, and is automatically associated to the LoxiLB master instance when configuring LoxiLB HA without any user intervention.
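If you prefer the CLI over the console, a hedged equivalent for allocating the Elastic IP is (region/profile flags omitted): aws ec2 allocate-address --domain vpc\n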
"},{"location":"aws-multi-az/#kube-loxilb-deployment","title":"kube-loxilb deployment","text":"kube-loxilb is a K8s operator for LoxiLB. Download the manifest file required for your deployment in EKS.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
Change the args inside this yaml (as applicable)
spec:\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --loxiURL=http://192.168.228.108:11111,http://192.168.218.60:11111\n - --externalCIDR=13.208.X.X/32\n - --privateCIDR=192.168.248.254/32\n - --setRoles=0.0.0.0\n - --setLBMode=2 \n
- Modify loxiURL with the IPs of the LoxiLB EC2 instances created above.
- For externalCIDR, specify the Elastic IP created above.
- PrivateCIDR specifies the VIP that will be associated with the Elastic IP. As described in the scenario above, we will use 192.168.248.254 as the VIP in this article. The IP must be set within the range of the VPC CIDR and must not currently be part of any other subnet. After editing the args, apply the manifest as shown below.
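A minimal sketch of applying the edited manifest (assuming kubectl is pointed at the EKS cluster created earlier): kubectl apply -f kube-loxilb.yaml\n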
"},{"location":"aws-multi-az/#run-loxilb-pods","title":"Run LoxiLB Pods","text":""},{"location":"aws-multi-az/#install-docker-on-loxilb-instances","title":"Install docker on LoxiLB instance(s)","text":"LoxiLB is deployed as a container on each instance. To use containers, docker must first be installed on the instance. Docker installation guide can be found here
"},{"location":"aws-multi-az/#running-loxilb-container","title":"Running LoxiLB container","text":"The following command is for a LoxiLB instance (loxilb1) using subnet-a.
sudo docker run -u root --cap-add SYS_ADMIN \\\n --restart unless-stopped \\\n --net=host \\\n --privileged \\\n -dit \\\n -v /dev/log:/dev/log -e AWS_REGION=ap-northeast-3 \\\n --name loxilb \\\n ghcr.io/loxilb-io/loxilb:aws-support \\\n --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.228.108 --self=0\n
- In the cloudcidrblock option, specify the IP range that includes the VIP set in kube-loxilb's privateCIDR. The master LoxiLB uses the value set here to create a new subnet in the AZ where it is located and uses it for HA operation.
- The cluster option specifies the IP of the partner instance (LoxiLB instance using subnet-b) for which HA is configured.
- The self option is set to 0. It is just an identifier used internally to distinguish each instance.
Similarly, we can run the loxilb2 instance on the second EC2 instance using subnet-b:
sudo docker run -u root --cap-add SYS_ADMIN \\\n --restart unless-stopped \\\n --net=host \\\n --privileged \\\n -dit \\\n -v /dev/log:/dev/log -e AWS_REGION=ap-northeast-3 \\\n --name loxilb \\\n ghcr.io/loxilb-io/loxilb:aws-support \\\n --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.218.60 --self=1\n
Once the containers are running, the HA status of each instance can be checked as follows:
ubuntu@ip-192-168-218-60:~$ sudo docker exec -ti loxilb bash\nroot@ip-192-168-228-108:/# loxicmd get ha\n| INSTANCE | HASTATE |\n|----------|---------|\n| default | MASTER |\nroot@ip-192-168-228-108:/#\n
"},{"location":"aws-multi-az/#creating-a-service","title":"Creating a service","text":"Let's create a test service to test HA functionality. Below are the manifest files for the nginx pod and service that we will use for testing.
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
After creating an nginx service with the above, we can see that the ElasticIP has been designated as the externalIP of the service. LEIS6N3:~/workspace/aws-demo$ kubectl apply -f nginx.yaml\nservice/nginx-lb1 created\npod/nginx-test created\nLEIS6N3:~/workspace/aws-demo$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.100.0.1 <none> 443/TCP 22h\nnginx-lb1 LoadBalancer 10.100.178.3 llb-13.208.X.X 55002:32403/TCP 15s\n
We can now access the service from a host client : "},{"location":"aws-multi-az/#testing-ha-functionality","title":"Testing HA functionality","text":"Once LoxiLB HA is configured, we can check in the AWS console that a secondary interface has been added to the master. To test HA operation, simply stop the LoxiLB pod in master state.
ubuntu@ip-192-168-228-108:~$ sudo docker stop loxilb\nloxilb\nubuntu@ip-192-168-228-108:~$\n
Even after stopping the master LoxiLB, the service can still be accessed without interruption (a quick external check is sketched below):
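A minimal check from an external client (a sketch; substitute the Elastic IP assigned earlier): curl http://<ElasticIP>:55002\n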
During failover, a secondary interface is created on the new master instance, and you can see that the ElasticIP is also associated to the new interface.
"},{"location":"ccm/","title":"Howto - ccm plugin","text":"loxi-ccm is a cloud-manager that provides kubernetes with loxilb load balancer. kubernetes provides the cloud-provider interface for the implementation of external cloud provider-specific logic, and loxi-ccm is an implementation of the cloud-provider interface.
"},{"location":"ccm/#typical-loxi-ccm-deployment-topology","title":"Typical loxi-ccm deployment topology","text":"As seen in the loxilb architecture documentation, loxi-ccm is logically shown as part of the loxilb cluster. But it's actually running on the k8s master/control-plane node.
loxi-ccm implements the k8s load balancer service function using the RESTful API of loxilb. When a user creates a k8s load balancer type service, loxi-ccm allocates an IP from the registered External IP subnet pool. loxi-ccm sets rules in loxilb to allow external access to the service via the assigned IP. In other words, loxi-ccm needs two pieces of information.
- loxilb API server address
- External IP Subnet
This information is managed through a k8s ConfigMap. loxi-ccm users should modify it to suit their environment.
"},{"location":"ccm/#deploy-loxi-ccm-on-kubernetes","title":"Deploy loxi-ccm on kubernetes","text":"The guide below has been tested in environment on Ubuntu 20.04, kubernetes v1.24 (calico CNI)
"},{"location":"ccm/#1-modify-k8s-configmap","title":"1. Modify k8s ConfigMap","text":"In the manifests/loxi-ccm.yaml manifests file, the ConfigMap is defined as follows
---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: loxilb-config\n namespace: kube-system\ndata:\n apiServerURL: \"http://192.168.20.54:11111\"\n externalIPcidr: 123.123.123.0/24\n---\n
The ConfigMap has two values: apiServerURL and externalIPcidr. - apiServerURL : API Server address of loxilb.
- externalIPcidr : Subnet range to be allocated by loxilb as the External IP of the load balancer.
apiServerURL and externalIPcidr must be modified according to the environment of the user using loxi-ccm.
"},{"location":"ccm/#2-deploy-loxi-ccm","title":"2. Deploy loxi-ccm","text":"Once you have modified ConfigMap, you can deploy loxi-ccm using the loxi-ccm.yaml manifest file. Run the following command on the kubernetes you want to deploy.
kubectl apply -f https://github.com/loxilb-io/loxi-ccm/raw/master/manifests/loxi-ccm.yaml\n
After entering the command, check whether the loxi-cloud-controller-manager DaemonSet has been created in the kube-system namespace."},{"location":"ccm/#manual-build","title":"Manual build","text":"If you want to build loxi-ccm manually, do the following:
"},{"location":"ccm/#1-build","title":"1. build","text":"./build.sh\n
"},{"location":"ccm/#2-build-upload-container-image","title":"2. Build & upload container image","text":"Below is an example. This case use docker to build container images, and images is uploaded to docker hub.
TAG=\"0.1\"\nDOCKER_ID=YOUR_DOCKER_ID\nsudo docker build -t $DOCKER_ID/loxi-ccm:$TAG -f ./Dockerfile .\nsudo docker push $DOCKER_ID/loxi-ccm:$TAG\n
"},{"location":"ccm/#3-create-loxi-ccm-daemonset-using-custom-image","title":"3. create loxi-ccm daemonset using custom image","text":"In the DaemonSet section of the ./manifests/loxi-ccm.yaml file, change the image name to a custom image. (spec.template.spec.containers.image)
---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: loxi-cloud-controller-manager\n name: loxi-cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: loxi-cloud-controller-manager\n template:\n metadata:\n labels:\n k8s-app: loxi-cloud-controller-manager\n spec:\n serviceAccountName: loxi-cloud-controller-manager\n containers:\n - name: loxi-cloud-controller-manager\n imagePullPolicy: Always\n # for in-tree providers we use k8s.gcr.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: {DOCKER_ID}/loxi-ccm:{TAG}\n command:\n - /bin/loxi-cloud-controller-manager\n
"},{"location":"cmd-dev/","title":"Developer's guide to loxicmd","text":""},{"location":"cmd-dev/#loxicmd-development-guide","title":"loxicmd development guide","text":"This guide should help developers extend and enhance loxicmd. The guide is divided into three main stages: design, development, and testing. Start with cloning the loxicmd git:
git clone git@github.com:loxilb-io/loxicmd.git\n
"},{"location":"cmd-dev/#api-check-and-command-design","title":"API check and command design","text":"Before developing Command, we need to check if the API of the necessary functions is provided. Check the official API document of LoxiLB to see if the required API is provided. Afterwards, the GET, POST, and DELETE methods are designed with get, create, and delete commands according to the API provided.
loxicmd$ tree\n.\n\u251c\u2500\u2500 AUTHORS\n\u251c\u2500\u2500 cmd\n\u2502 \u251c\u2500\u2500 create\n\u2502 \u2502 \u251c\u2500\u2500 create.go\n\u2502 \u2502 \u2514\u2500\u2500 create_loadbalancer.go\n\u2502 \u251c\u2500\u2500 delete\n\u2502 \u2502 \u251c\u2500\u2500 delete.go\n\u2502 \u2502 \u2514\u2500\u2500 delete_loadbalancer.go\n\u2502 \u251c\u2500\u2500 get\n\u2502 \u2502 \u251c\u2500\u2500 get.go\n\u2502 \u2502 \u251c\u2500\u2500 get_loadbalancer.go\n\u2502 \u2514\u2500\u2500 root.go\n\u251c\u2500\u2500 go.mod\n\u251c\u2500\u2500 go.sum\n\u251c\u2500\u2500 LICENSE\n\u251c\u2500\u2500 main.go\n\u251c\u2500\u2500 Makefile\n\u251c\u2500\u2500 pkg\n\u2502 \u2514\u2500\u2500 api\n\u2502 \u251c\u2500\u2500 client.go\n\u2502 \u251c\u2500\u2500 common.go\n\u2502 \u251c\u2500\u2500 loadBalancer.go\n\u2502 \u2514\u2500\u2500 rest.go\n\u2514\u2500\u2500 README.md\n
Add the code in the ./cmd/get, ./cmd/delete, ./cmd/create, and ./pkg/api directories to add functionality."},{"location":"cmd-dev/#add-structure-in-pkgapi-and-register-method-example-of-connection-track-api","title":"Add structure in pkg/api and register method (example of connection track API)","text":" -
CommonAPI embedding: embed CommonAPI so that its methods and variables can be used in the Conntrack structure.
type Conntrack struct {\n CommonAPI\n}\n
-
Add Structure Configuration and JSON Structure: define the structure for JSON Unmarshal.
type CtInformationGet struct {\n CtInfo []ConntrackInformation `json:\"ctAttr\"`\n}\n\ntype ConntrackInformation struct {\n Dip string `json:\"destinationIP\"`\n Sip string `json:\"sourceIP\"`\n Dport uint16 `json:\"destinationPort\"`\n Sport uint16 `json:\"sourcePort\"`\n Proto string `json:\"protocol\"`\n CState string `json:\"conntrackState\"`\n CAct string `json:\"conntrackAct\"`\n}\n
- Define Method Functions in pkg/api/client.go Define the URL in the Resource constant. Defines the function to be used in the command.
const (\n \u2026\n loxiConntrackResource = \"config/conntrack/all\"\n)\n\n\nfunc (l *LoxiClient) Conntrack() *Conntrack {\n return &Conntrack{\n CommonAPI: CommonAPI{\n restClient: &l.restClient,\n requestInfo: RequestInfo{\n provider: loxiProvider,\n apiVersion: loxiApiVersion,\n resource: loxiConntrackResource,\n },\n },\n }\n}\n
"},{"location":"cmd-dev/#add-get-create-delete-functions-within-cmd","title":"Add get, create, delete functions within cmd","text":"Use the Cobra library to define commands, Alise, descriptions, options, and callback functions, and then create a function that returns. Create a function such as PrintGetCTReturn and add logic when the status code is 200.
func NewGetConntrackCmd(restOptions *api.RESTOptions) *cobra.Command {\n var GetctCmd = &cobra.Command{\n Use: \"conntrack\",\n Aliases: []string{\"ct\", \"conntracks\", \"cts\"},\n Short: \"Get a Conntrack\",\n Long: `It shows connection track Information`,\n Run: func(cmd *cobra.Command, args []string) {\n client := api.NewLoxiClient(restOptions)\n ctx := context.TODO()\n var cancel context.CancelFunc\n if restOptions.Timeout > 0 {\n ctx, cancel = context.WithTimeout(context.TODO(), time.Duration(restOptions.Timeout)*time.Second)\n defer cancel()\n }\n resp, err := client.Conntrack().Get(ctx)\n if err != nil {\n fmt.Printf(\"Error: %s\\n\", err.Error())\n return\n }\n if resp.StatusCode == http.StatusOK {\n PrintGetCTResult(resp, *restOptions)\n return\n }\n\n },\n }\n\n return GetctCmd\n}\n
"},{"location":"cmd-dev/#register-command-in-cmd","title":"Register command in cmd","text":"Register Cobra as defined in 3.
func GetCmd(restOptions *api.RESTOptions) *cobra.Command {\n var GetCmd = &cobra.Command{\n Use: \"get\",\n Short: \"A brief description of your command\",\n Long: `A longer description that spans multiple lines and likely contains examples\n and usage of using your command. For example:\n\n Cobra is a CLI library for Go that empowers applications.\n This application is a tool to generate the needed files\n to quickly Get a Cobra application.`,\n Run: func(cmd *cobra.Command, args []string) {\n fmt.Println(\"Get called\")\n },\n }\n GetCmd.AddCommand(NewGetLoadBalancerCmd(restOptions))\n GetCmd.AddCommand(NewGetConntrackCmd(restOptions))\n return GetCmd\n}\n
"},{"location":"cmd-dev/#build-test","title":"Build & Test","text":"make\n
"},{"location":"cmd/","title":"Table of Contents","text":" - What is loxicmd
- How to build
- How to run and configure loxilb
- Load balancer
- Endpoint
- BFD
- Session
- SessionUlCl
- IPaddress
- FDB
- Route
- Neighbor
- Vlan
- Vxlan
- Firewall
- Mirror
- Policy
- Session Recorder
"},{"location":"cmd/#what-is-loxicmd","title":"What is loxicmd","text":"loxicmd is command tool for loxilb's configuration. loxicmd aims to provide all configuation related to loxilb and is based on kubectl's look and feel. When running k8s, kube-loxilb usually takes care of loxilb configuration but nonetheless loxicmd can be used for enhanced config, debugging and observability.
"},{"location":"cmd/#how-to-build","title":"How to build","text":"Note - loxilb docker has this built-in and there is no need to build it when using loxilb docker
Install package dependencies
go get .\n
Make loxicmd
make\n
Install loxicmd
sudo cp -f ./loxicmd /usr/local/sbin\n
"},{"location":"cmd/#how-to-run-and-configure-loxilb","title":"How to run and configure loxilb","text":""},{"location":"cmd/#load-balancer","title":"Load Balancer","text":""},{"location":"cmd/#get-load-balancer-rules","title":"Get load-balancer rules","text":"Get basic information
loxicmd get lb\n
Get detailed information
loxicmd get lb -o wide\n
Get info in json loxicmd get lb -o json\n
"},{"location":"cmd/#configure-load-balancer-rule","title":"Configure load-balancer rule","text":"Simple NAT44 tcp (round-robin) load-balancer
loxicmd create lb 1.1.1.1 --tcp=1828:1920 --endpoints=2.2.3.4:1\n
Note: - Round-robin is default mode in loxilb - End-point format is specified as <CIDR:weight>. For round-robin, weight(1) has no significance. NAT66 (round-robin) load-balancer
loxicmd create lb 2001::1 --tcp=2020:8080 --endpoints=4ffe::1:1,5ffe::1:1,6ffe::1:1\n
NAT64 (round-robin) load-balancer
loxicmd create lb 2001::1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
WRR (Weighted round-robin) load-balancer (Divide traffic in 40%, 40% and 20% ratio among end-points)
loxicmd create lb 20.20.20.1 --select=priority --tcp=2020:8080 --endpoints=31.31.31.1:40,32.32.32.1:40,33.33.33.1:20\n
Sticky end-point selection load-balancer (select end-points based on traffic hash)
loxicmd create lb 20.20.20.1 --select=hash --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
Load-balancer with forceful tcp-reset session timeout after inactivity of 30s
loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1 --inatimeout=30\n
Load-balancer with one-arm mode
loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=100.100.100.2:1,100.100.100.3:1,100.100.100.4:1 --mode=onearm\n
Load-balancer with fullnat mode
loxicmd create lb 88.88.88.1 --sctp=38412:38412 --endpoints=192.168.70.3:1 --mode=fullnat\n
- For more information on one-arm and full-nat mode, please check this post Load-balancer config in DSR(direct-server return) mode
loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1 --mode=dsr\n
Load-balancer config with active endpoint monitoring
loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1 --monitor\n
Note: - By default loxilb does not do active endpoint monitoring i.e it will continue to select end-points which might be inactive - This is due to the fact kubernetes also has its own service monitoring mechanism and it can notify loxilb of any such endpoint health state - Based on user's requirements, one can specify active endpoint checks using \"--monitor\" flag - loxilb has extensive endpoint monitoring methods. Further details can be found in endpoint section "},{"location":"cmd/#load-balancer-yaml-example","title":"Load-balancer yaml example","text":"apiVersion: netlox/v1\nkind: Loadbalancer\nmetadata:\n name: test\nspec:\n serviceArguments:\n externalIP: 1.2.3.1\n port: 80\n protocol: tcp\n sel: 0\n endpoints:\n - endpointIP: 4.3.2.1\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.2\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.3\n weight: 1\n targetPort: 8080\n
"},{"location":"cmd/#delete-a-load-balancer-rule","title":"Delete a load-balancer rule","text":""},{"location":"cmd/#loxicmd-delete-lb-1111-tcp1828","title":"loxicmd delete lb 1.1.1.1 --tcp=1828 \n
","text":""},{"location":"cmd/#endpoint","title":"Endpoint","text":""},{"location":"cmd/#get-load-balancer-end-point-health-information","title":"Get load-balancer end-point health information","text":"loxicmd get ep\n
"},{"location":"cmd/#create-end-point-for-health-probing","title":"Create end-point for health probing","text":"# loxicmd create endpoint IP [--name=<id>] [--probetype=<probetype>] [--probereq=<probereq>] [--proberesp=<proberesp>] [--probeport=<port>] [--period=<period>] [--retries=<retries>]\nloxicmd create endpoint 32.32.32.1 --probetype=http --probeport=8080 --period=60 --retries=2\n
IP(string) : Endpoint target IP address name(string) : Endpoint Identifier probetype(string): Probe-type:ping,http,https,udp,tcp,sctp,none probereq(string): If probe is http/https, one can specify additional uri path proberesp(string): If probe is http/https, one can specify custom response string probeport(int): If probe is http,https,tcp,udp,sctp one can specify custom l4port to use period(int): Period of probing retries(int): Number of retries before marking endPoint inactive Notes: - \"name\" is not required when endpoint is created initially. loxilb will allocate the name which can be checked with \"loxicmd get ep\". \"name\" can be given as an Identifier when the user wants to modify endpoint probe parameters - Initial state of endpoint will be decided within 15 seconds of rule addition (We can't be sure if the service is immediately up so this is the init liveness check timeout. It is not configurable at this time) - After init liveness check, probes will be done as per default (60s) or whatever value is set by the user - When endpoint is inactive we have internal logic and timeouts to minimize blocking calls and maintain stability. Only when endpoint is active, we use the probe timeout given by the user - For UDP end-points and probe-type, there are two ways to check end-point health currently: - If the service can respond to probe requests with pre-defined responses sent over UDP, we can use the following :
loxicmd create endpoint 172.1.217.133 --name=\"udpep1\" --probetype=udp --probeport=32031 --period=60 --retries=2 --probereq=\"probe\" --proberesp=\"hello\"\n
- If the services cannot support the above mechanism, loxilb will try to check for \"ICMP Port unreachable\" after sending UDP probes. If an \"ICMP Port unreachable\" is received, it means the endpoint is not up. "},{"location":"cmd/#examples","title":"Examples :","text":"loxicmd create endpoint 32.32.32.1 --probetype=http --probeport=8080 --period=60 --retries=2\n\nloxicmd get ep\n| HOST | NAME | PTYPE | PORT | DURATION | RETRIES | MINDELAY | AVGDELAY | MAXDELAY | STATE |\n|------------|----------------------|-------|------|----------|---------|----------|----------|----------|-------|\n| 32.32.32.1 | 32.32.32.1_http_8080 | http: | 8080 | 60 | 2 | 0s | 0s | 0s | ok |\n\n# Modify duration and retry count using name\n\nloxicmd create endpoint 32.32.32.1 --name=32.32.32.1_http_8080 --probetype=http --probeport=8080 --period=40 --retries=4\n\nloxicmd get ep\n| HOST | NAME | PTYPE | PORT | DURATION | RETRIES | MINDELAY | AVGDELAY | MAXDELAY | STATE |\n|------------|----------------------|-------|------|----------|---------|----------|-----------|-----------|-------|\n| 32.32.32.1 | 32.32.32.1_http_8080 | http: | 8080 | 40 | 4 | 0s | 0s | 0s | ok |\n
"},{"location":"cmd/#create-end-point-with-https-probing-information","title":"Create end-point with https probing information","text":"# loxicmd create endpoint IP [--name=<id>] [--probetype=<probetype>] [--probereq=<probereq>] [--proberesp=<proberesp>] [--probeport=<port>] [--period=<period>] [--retries=<retries>]\nloxicmd create endpoint 32.32.32.1 --probetype=https --probeport=8080 --probereq=\"health\" --proberesp=\"OK\" --period=60 --retries=2\n
Note: loxilb requires CA certificate for TLS connection and private certificate and private key for mTLS connection. Admin can keep a common(default) CA certificate for all the endpoints at \"/opt/loxilb/cert/rootCA.crt\" or per-endpoint certificates can be kept as \"/opt/loxilb/cert/\\<IP>/rootCA.crt\", private key must be kept at \"/opt/loxilb/cert/server.key\" and private certificate at \"/opt/loxilb/cert/server.crt\". Please see Minica or Certstrap or this CICD test case to know how to generate certificates."},{"location":"cmd/#endpoint-yaml-example","title":"Endpoint yaml example","text":"apiVersion: netlox/v1\nkind: Endpoint\nmetadata:\n name: test\nspec:\n hostName: \"Test\"\n description: string\n inactiveReTries: 0\n probeType: string\n probeReqUrl: string\n probeDuration: 0\n probePort: 0\n
"},{"location":"cmd/#delete-end-point-informtion","title":"Delete end-point informtion","text":""},{"location":"cmd/#loxicmd-delete-endpoint-31313131","title":"loxicmd delete endpoint 31.31.31.31\n
","text":""},{"location":"cmd/#bfd","title":"BFD","text":""},{"location":"cmd/#get-bfd-session-information","title":"Get BFD Session information","text":"loxicmd get bfd\n
"},{"location":"cmd/#create-bfd-session","title":"Create BFD Session","text":"#loxicmd create bfd <remoteIP> --sourceIP=<sourceIP> --interval=<time in usecs> --retryCount=<count>\nloxicmd create bfd 192.168.80.253 --sourceIP=192.168.80.252 --interval=500000 --retryCount=3\n
remoteIP(string): Remote IP address sourceIP(string): Source IP address for binding interval(int): BFD packet Tx Interval Time in microseconds retryCount(int): Number of retry counts to detect failure. "},{"location":"cmd/#set-bfd-session","title":"Set BFD Session","text":"#loxicmd set bfd <remoteIP> --interval=<time in usecs> --retryCount=<count>\nloxicmd set bfd 192.168.80.253 --interval=400000 --retryCount=5\n
interval(int): BFD packet Tx Interval Time in microseconds retryCount(int): Number of retry counts to detect failure. "},{"location":"cmd/#delete-bfd-session","title":"Delete BFD Session","text":"#loxicmd delete bfd <remoteIP>\nloxicmd delete bfd 192.168.80.253\n
remoteIP(string): Remote IP address sourceIP(string): Source IP address for binding interval(int): BFD packet Tx Interval Time in microseconds retryCount(int): Number of retry counts to detect failure."},{"location":"cmd/#bfd-yaml-example","title":"BFD yaml example","text":"apiVersion: netlox/v1\nkind: BFD\nmetadata:\n name: test\nspec:\n instance: \"default\"\n remoteIp: \"192.168.80.253\"\n sourceIp: \"192.168.80.252\"\n interval: 300000\n retryCount: 4\n
"},{"location":"cmd/#session","title":"Session","text":""},{"location":"cmd/#get-session-information","title":"Get Session information","text":"loxicmd get session\n
"},{"location":"cmd/#create-session-information","title":"Create Session information","text":"#loxicmd create session <userID> <sessionIP> --accessNetworkTunnel=<TeID>:<TunnelIP> --coreNetworkTunnel=<TeID>:<TunnelIP>\nloxicmd create session user1 192.168.20.1 --accessNetworkTunnel=1:1.232.16.1 coreNetworkTunnel=1:1.233.16.1\n
userID(string): User Identifier sessionIP(string): Session IP address accessNetworkTunnel(string): accessNetworkTunnel has pairs that can be specified as 'TeID:IP' coreNetworkTunnel(string): coreNetworkTunnel has pairs that can be specified as 'TeID:IP' "},{"location":"cmd/#session-yaml-example","title":"Session yaml example","text":"apiVersion: netlox/v1\nkind: Session\nmetadata:\n name: test\nspec:\n ident: user1\n sessionIP: 88.88.88.88\n accessNetworkTunnel:\n TeID: 1\n tunnelIP: 11.11.11.11\n coreNetworkTunnel:\n TeID: 1\n tunnelIP: 22.22.22.22\n
"},{"location":"cmd/#delete-session-information","title":"Delete Session information","text":""},{"location":"cmd/#loxicmd-delete-session-user1","title":"loxicmd delete session user1\n
","text":""},{"location":"cmd/#sessionulcl","title":"SessionUlCl","text":""},{"location":"cmd/#get-sessionulcl-information","title":"Get SessionUlCl information","text":"loxicmd get sessionulcl\n
"},{"location":"cmd/#create-sessionulcl-information","title":"Create SessionUlCl information","text":"#loxicmd create sessionulcl <userID> --ulclArgs=<QFI>:<ulclIP>,...\nloxicmd create sessionulcl user1 --ulclArgs=16:192.33.125.1\n
userID(string): User Identifier ulclArgs(string): Port pairs can be specified as 'QFI:UlClIP' "},{"location":"cmd/#sessionulcl-yaml-example","title":"SessionUlCl yaml example","text":"apiVersion: netlox/v1\nkind: SessionULCL\nmetadata:\n name: test\nspec:\n ulclIdent: user1\n ulclArgument:\n qfi: 11\n ulclIP: 8.8.8.8\n
"},{"location":"cmd/#delete-sessionulcl-information","title":"Delete SessionUlCl information","text":"loxicmd delete sessionulcl --ulclArgs=192.33.125.1\n
ulclArgs(string): UlCl IP address can be specified as 'UlClIP'. It don't need QFI."},{"location":"cmd/#ipaddress","title":"IPaddress","text":""},{"location":"cmd/#get-ipaddress-information","title":"Get IPaddress information","text":"loxicmd get ip\n
"},{"location":"cmd/#create-ipaddress-information","title":"Create IPaddress information","text":"#loxicmd create ip <DeviceIPNet> <device>\nloxicmd create ip 192.168.0.1/24 eno7\n
DeviceIPNet(string): Actual IP address with mask device(string): name of the related device "},{"location":"cmd/#ipaddress-yaml-example","title":"IPaddress yaml example","text":"apiVersion: netlox/v1\nkind: IPaddress\nmetadata:\n name: test\nspec:\n dev: eno8\n ipAddress: 192.168.23.1/32\n
"},{"location":"cmd/#delete-ipaddress-information","title":"Delete IPaddress information","text":""},{"location":"cmd/#loxicmd-delete-ip-deviceipnet-device-loxicmd-delete-ip-1921680124-eno7","title":"#loxicmd delete ip <DeviceIPNet> <device>\nloxicmd delete ip 192.168.0.1/24 eno7\n
","text":""},{"location":"cmd/#fdb","title":"FDB","text":""},{"location":"cmd/#get-fdb-information","title":"Get FDB information","text":"loxicmd get fdb\n
"},{"location":"cmd/#create-fdb-information","title":"Create FDB information","text":"#loxicmd create fdb <MacAddress> <DeviceName>\nloxicmd create fdb aa:aa:aa:aa:bb:bb eno7\n
MacAddress(string): mac address DeviceName(string): name of the related device "},{"location":"cmd/#fdb-yaml-example","title":"FDB yaml example","text":"apiVersion: netlox/v1\nkind: FDB\nmetadata:\n name: test\nspec:\n dev: eno8\n macAddress: aa:aa:aa:aa:aa:aa\n
"},{"location":"cmd/#delete-fdb-information","title":"Delete FDB information","text":"#loxicmd delete fdb <MacAddress> <DeviceName>\nloxicmd delete fdb aa:aa:aa:aa:bb:bb eno7\n
"},{"location":"cmd/#route","title":"Route","text":""},{"location":"cmd/#get-route-information","title":"Get Route information","text":"loxicmd get route\n
"},{"location":"cmd/#create-route-information","title":"Create Route information","text":"#loxicmd create route <DestinationIPNet> <gateway>\nloxicmd create route 192.168.212.0/24 172.17.0.254\n
DestinationIPNet(string): Actual IP address route with mask gateway(string): gateway information if any "},{"location":"cmd/#route-yaml-example","title":"Route yaml example","text":"apiVersion: netlox/v1\nkind: Route\nmetadata:\n name: test\nspec:\n destinationIPNet: 192.168.30.0/24\n gateway: 172.17.0.1\n
"},{"location":"cmd/#delete-route-information","title":"Delete Route information","text":""},{"location":"cmd/#loxicmd-delete-route-destinationipnet-loxicmd-delete-route-192168212024","title":"#loxicmd delete route <DestinationIPNet>\nloxicmd delete route 192.168.212.0/24 \n
","text":""},{"location":"cmd/#neighbor","title":"Neighbor","text":""},{"location":"cmd/#get-neighbor-information","title":"Get Neighbor information","text":"loxicmd get neighbor\n
"},{"location":"cmd/#create-neighbor-information","title":"Create Neighbor information","text":"#loxicmd create neighbor <DeviceIP> <DeviceName> [--macAddress=aa:aa:aa:aa:aa:aa]\nloxicmd create neighbor 192.168.0.1 eno7 --macAddress=aa:aa:aa:aa:aa:aa\n
DeviceIP(string): The IP address DeviceName(string): name of the related device macAddress(string): resolved hardware address if any "},{"location":"cmd/#neighbor-yaml-example","title":"Neighbor yaml example","text":"apiVersion: netlox/v1\nkind: Neighbor\nmetadata:\n name: test\nspec:\n dev: eno8\n macAddress: aa:aa:aa:aa:aa:aa\n ipAddress: 192.168.23.21\n
"},{"location":"cmd/#delete-neighbor-information","title":"Delete Neighbor information","text":"#loxicmd delete neighbor <DeviceIP> <device>\nloxicmd delete neighbor 192.168.0.1 eno7\n
"},{"location":"cmd/#vlan","title":"Vlan","text":""},{"location":"cmd/#get-vlan-and-vlan-member-information","title":"Get Vlan and Vlan Member information","text":"loxicmd get vlan\n
loxicmd get vlanmember\n
"},{"location":"cmd/#create-vlan-and-vlan-member-information","title":"Create Vlan and Vlan Member information","text":"#loxicmd create vlan <Vid>\nloxicmd create vlan 100\n
Vid(int): vlan identifier #loxicmd create vlanmember <Vid> <DeviceName> --tagged=<Tagged>\nloxicmd create vlanmember 100 eno7 --tagged=true\nloxicmd create vlanmember 100 eno7\n
Vid(int): vlan identifier DeviceName(string): name of the related device tagged(boolean): tagged or not (default is false) "},{"location":"cmd/#vlan-yaml-example","title":"Vlan yaml example","text":"apiVersion: netlox/v1\nkind: Vlan\nmetadata:\n name: test\nspec:\n vid: 100\n
"},{"location":"cmd/#vlan-member-yaml-example","title":"Vlan Member yaml example","text":"apiVersion: netlox/v1\nkind: VlanMember\nmetadata:\n name: test\n vid: 100\nspec:\n dev: eno8\n Tagged: true\n
"},{"location":"cmd/#delete-vlan-and-vlan-member-information","title":"Delete Vlan and Vlan Member information","text":"#loxicmd delete vlan <Vid>\nloxicmd delete vlan 100\n
#loxicmd delete vlanmember <Vid> <DeviceName> --tagged=<Tagged>\nloxicmd delete vlanmember 100 eno7 --tagged=true\nloxicmd delete vlanmember 100 eno7\n
"},{"location":"cmd/#vxlan","title":"Vxlan","text":""},{"location":"cmd/#get-vxlan-and-vxlan-peer-information","title":"Get Vxlan and Vxlan Peer information","text":"loxicmd get vxlan\n
loxicmd get vxlanpeer\n
"},{"location":"cmd/#create-vxlan-and-vxlan-peer-information","title":"Create Vxlan and Vxlan Peer information","text":"#loxicmd create vxlan <VxlanID> <EndpointDeviceName>\nloxicmd create vxlan 100 eno7\n
VxlanID(int): Vxlan Identifier EndpointDeviceName(string): VTEP Device name(It must have own IP address for peering) #loxicmd create vxlanpeer <VxlanID> <PeerIP>\nloxicmd create vxlan-peer 100 30.1.3.1\n
VxlanID(int): Vxlan Identifier PeerIP(string): Vxlan peer device IP address "},{"location":"cmd/#vxlan-yaml-example","title":"Vxlan yaml example","text":"apiVersion: netlox/v1\nkind: Vxlan\nmetadata:\n name: test\nspec:\n epIntf: eno8\n vxlanID: 100\n
"},{"location":"cmd/#vxlan-peer-yaml-example","title":"Vxlan Peer yaml example","text":"apiVersion: netlox/v1\nkind: VxlanPeer\nmetadata:\n name: test\n vxlanID: 100\nspec:\n peerIP: 21.21.21.1\n
"},{"location":"cmd/#delete-vxlan-and-vxlan-peer-information","title":"Delete Vxlan and Vxlan Peer information","text":"#loxicmd delete vxlan <VxlanID>\nloxicmd delete vxlan 100\n
#loxicmd delete vxlanpeer <VxlanID> <PeerIP>\nloxicmd delete vxlan-peer 100 30.1.3.1\n
"},{"location":"cmd/#firewall","title":"Firewall","text":""},{"location":"cmd/#get-firewall-information","title":"Get Firewall information","text":"loxicmd get firewall\n
"},{"location":"cmd/#create-firewall-information","title":"Create Firewall information","text":"#loxicmd create firewall --firewallRule=<ruleKey>:<ruleValue>, [--allow] [--drop] [--trap] [--redirect=<PortName>] [--setmark=<FwMark>\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --allow\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --allow --setmark=10\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --drop\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --trap\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --redirect=ensp0\n
firewallRule sourceIP(string) - Source IP in CIDR notation destinationIP(string) - Destination IP in CIDR notation minSourcePort(int) - Minimum source port range maxSourcePort(int) - Maximum source port range minDestinationPort(int) - Minimum destination port range maxDestinationPort(int) - Maximum source port range protocol(int) - the protocol portName(string) - the incoming port preference(int) - User preference for ordering "},{"location":"cmd/#firewall-yaml-example","title":"Firewall yaml example","text":"apiVersion: netlox/v1\nkind: Firewall\nmetadata:\n name: test\nspec:\n ruleArguments:\n sourceIP: 192.169.1.2/24\n destinationIP: 192.169.2.1/24\n preference: 200\n opts:\n allow: true\n
"},{"location":"cmd/#delete-firewall-information","title":"Delete Firewall information","text":""},{"location":"cmd/#loxicmd-delete-firewall-firewallrulerulekeyrulevalue-loxicmd-delete-firewall-firewallrulesourceip123232destinationip231232preference200","title":"#loxicmd delete firewall --firewallRule=<ruleKey>:<ruleValue>\nloxicmd delete firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\n
","text":""},{"location":"cmd/#mirror","title":"Mirror","text":""},{"location":"cmd/#get-mirror-information","title":"Get Mirror information","text":"loxicmd get mirror\n
"},{"location":"cmd/#create-mirror-information","title":"Create Mirror information","text":"#loxicmd create mirror <mirrorIdent> --mirrorInfo=<InfoOption>:<InfoValue>,... --targetObject=attachement:<port1,rule2>,mirrObjName:<ObjectName>\nloxicmd create mirror mirr-1 --mirrorInfo=\"type:0,port:ensp0\" --targetObject=\"attachement:1,mirrObjName:ensp1\n
mirrorIdent(string): Mirror identifier type(int) : Mirroring type as like 0 == SPAN, 1 == RSPAN, 2 == ERSPAN port(string) : The port where mirrored traffic needs to be sent vlan(int) : for RSPAN we may need to send tagged mirror traffic remoteIP(string) : For ERSPAN we may need to send tunnelled mirror traffic sourceIP(string): For ERSPAN we may need to send tunnelled mirror traffic tunnelID(int): For ERSPAN we may need to send tunnelled mirror traffic "},{"location":"cmd/#mirror-yaml-example","title":"Mirror yaml example","text":"apiVersion: netlox/v1\nkind: Mirror\nmetadata:\n name: test\nspec:\n mirrorIdent: mirr-1\n mirrorInfo:\n type: 0\n port: eno1\n targetObject:\n attachment: 1\n mirrObjName: eno2\n
"},{"location":"cmd/#delete-mirror-information","title":"Delete Mirror information","text":"#loxicmd delete mirror <mirrorIdent>\nloxicmd delete mirror mirr-1\n
"},{"location":"cmd/#policy","title":"Policy","text":""},{"location":"cmd/#get-policy-information","title":"Get Policy information","text":"loxicmd get policy\n
"},{"location":"cmd/#create-policy-information","title":"Create Policy information","text":"#loxicmd create policy IDENT --rate=<Peak>:<Commited> --target=<ObjectName>:<Attachment> [--block-size=<Excess>:<Committed>] [--color] [--pol-type=<policy type>]\nloxicmd create policy pol-0 --rate=100:100 --target=ensp0:1\nloxicmd create policy pol-1 --rate=100:100 --target=ensp0:1 --block-size=12000:6000\nloxicmd create policy pol-1 --rate=100:100 --target=ensp0:1 --color\nloxicmd create policy pol-1 --rate=100:100 --target=ensp0:1 --color --pol-type 0\n
rate(string): Rate pairs can be specified as 'Peak:Committed'. rate unit : Mbps block-size(string): Block Size pairs can be specified as 'Excess:Committed'. block-size unit : bps target(string): Target Interface pairs can be specified as 'ObjectName:Attachment' color(boolean): Policy color enable or not pol-type(int): Policy traffic control type. 0 : TrTCM, 1 : SrTCM "},{"location":"cmd/#policy-yaml-example","title":"Policy yaml example","text":"apiVersion: netlox/v1\nkind: Policy\nmetadata:\n name: test\nspec:\n policyIdent: pol-eno8\n policyInfo:\n type: 0\n colorAware: false\n committedInfoRate: 100\n peakInfoRate: 100\n targetObject:\n attachment: 1\n polObjName: eno8\n
"},{"location":"cmd/#delete-policy-information","title":"Delete Policy information","text":""},{"location":"cmd/#loxicmd-delete-policy-polident-loxicmd-delete-policy-pol-1","title":"#loxicmd delete policy <Polident>\nloxicmd delete policy pol-1\n
","text":""},{"location":"cmd/#session-recorder","title":"Session Recorder","text":""},{"location":"cmd/#set-n-tuple-policy-for-recording","title":"Set n-tuple policy for recording","text":"loxicmd create firewall --firewallRule=\"destinationIP:31.31.31.0/24,preference:200\" --allow --record\n
loxilb will record any connection track entry which matches this policy (even for reverse direction) as a way to provide extended visibility for debugging "},{"location":"cmd/#check-or-record-with-tcpdump","title":"Check or record with tcpdump","text":"tcpdump -i llb0 -n\n
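For example, to save the recorded traffic to a pcap file for later analysis (the interface name and file path below are only illustrative): tcpdump -i llb0 -n -w /tmp/loxilb-record.pcap\n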
Any valid tcpdump option can be used including saving to a pcap file"},{"location":"cmd/#get-live-connection-track-information","title":"Get live connection-track information","text":"loxicmd get conntrack\n
"},{"location":"cmd/#get-port-dump-information","title":"Get port-dump information","text":"loxicmd get port\n
"},{"location":"cmd/#save-all-loxilbs-operational-information-in-dbstore","title":"Save all loxilb's operational information in DBStore","text":"loxicmd save -a\n
** This will ensure that whenever loxilb restarts, it will start with the last saved state from DBStore"},{"location":"cmd/#configure-loxicmd-with-yamlbeta","title":"Configure loxicmd with yaml(Beta)","text":"loxicmd supports yaml based configuration. The format is the same as Kubernetes. This beta version supports only one configuration per file. That means \"Do not use ---
in a yaml file.\" Multi-document files will be supported in the next release.
"},{"location":"cmd/#command","title":"Command","text":"#loxicmd apply -f <file.yaml>\n#loxicmd delete -f <file.yaml>\nloxicmd apply -f lb.yaml\nloxicmd delete -f lb.yaml\n
"},{"location":"cmd/#file-examplelbyaml","title":"File example(lb.yaml)","text":"apiVersion: netlox/v1\nkind: Loadbalancer\nmetadata:\n name: load\nspec:\n serviceArguments:\n externalIP: 123.123.123.1\n port: 80\n protocol: tcp\n sel: 0\n endpoints:\n - endpointIP: 4.3.2.1\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.2\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.3\n weight: 1\n targetPort: 8080\n
It reuses the API's json body as a \"Spec\". If the API URL has no param, \"metadata\" is not needed. For example, the body of a load balancer rule is shown below. {\n \"serviceArguments\": {\n \"externalIP\": \"123.123.123.1\",\n \"port\": 80,\n \"protocol\": \"tcp\",\n \"sel\": 0\n },\n \"endpoints\": [\n {\n \"endpointIP\": \"4.3.2.1\",\n \"weight\": 1,\n \"targetPort\": 8080\n },\n {\n \"endpointIP\": \"4.3.2.2\",\n \"weight\": 1,\n \"targetPort\": 8080\n },\n {\n \"endpointIP\": \"4.3.2.3\",\n \"weight\": 1,\n \"targetPort\": 8080\n }\n ]\n}\n
This json format can be converted to yaml format as shown below. serviceArguments:\n externalIP: 123.123.123.1\n port: 80\n protocol: tcp\n sel: 0\n endpoints:\n - endpointIP: 4.3.2.1\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.2\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.3\n weight: 1\n targetPort: 8080\n
Finally, this content goes under the Spec of the entire configuration file, as in the File example(lb.yaml) above. If you want to add a Vlan bridge, IPaddress or something else, just change the Kind value from Loadbalancer to VlanBridge, IPaddress etc. as in the example below.
apiVersion: netlox/v1\nkind: IPaddress\nmetadata:\n name: test\nspec:\n dev: eno8\n ipAddress: 192.168.23.1/32\n
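As with the load-balancer example, such a file can then be applied or removed with loxicmd (the file name ipaddress.yaml below is only illustrative): loxicmd apply -f ipaddress.yaml\nloxicmd delete -f ipaddress.yaml\n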
If the URL has a param, such as when adding a vlan-member, the file must also have metadata.
apiVersion: netlox/v1\nkind: VlanMember\nmetadata:\n name: test\n vid: 100\nspec:\n dev: eno8\n Tagged: true\n
Examples of all the settings are given below, so please refer to them."},{"location":"cmd/#more-information","title":"More information","text":"There are tons of other commands; use the help option!
loxicmd help\n
"},{"location":"code/","title":"loxilb is organized as below:","text":"\u251c\u2500\u2500 api\n\u2502\u00a0 \u251c\u2500\u2500 certification\n\u2502\u00a0 \u251c\u2500\u2500 cmd\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 loxilb-rest-api-server\n\u2502\u00a0 \u251c\u2500\u2500 models\n\u2502\u00a0 \u251c\u2500\u2500 restapi\n\u2502\u00a0 \u251c\u2500\u2500 handler\n\u2502\u00a0 \u251c\u2500\u2500 operations\n\u251c\u2500\u2500 common\n\u251c\u2500\u2500 ebpf\n\u2502\u00a0 \u251c\u2500\u2500 common\n\u2502\u00a0 \u251c\u2500\u2500 headers\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 linux\n\u2502\u00a0 \u251c\u2500\u2500 kernel\n\u2502\u00a0 \u251c\u2500\u2500 libbpf\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 include\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 asm\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 linux\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 uapi\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 linux\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 scripts\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 src\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 build\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 usr\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 include\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 bpf\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 lib64\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 pkgconfig\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 sharedobjs\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 staticobjs\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 travis-ci\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 managers\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 vmtest\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 configs\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 blacklist\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 whitelist\n\u2502\u00a0 \u251c\u2500\u2500 utils\n\u251c\u2500\u2500 loxinet\n\u251c\u2500\u2500 options\n\u251c\u2500\u2500 loxilib\n
"},{"location":"code/#api","title":"api","text":"This directory contains source code to host api server to handle CCM configuration requests.
"},{"location":"code/#common","title":"common","text":"Common api to configure which are exposed by loxinet are defined in this directory.
"},{"location":"code/#loxinet","title":"loxinet","text":"This module implements the glue layer or the middle layer between eBPF datapath module and api modules. It defines functions for configuring networking and load balancing rules in the eBPF datapath.
"},{"location":"code/#ebpf","title":"ebpf","text":"This directory contains source code for loxilb eBPF datapath.
"},{"location":"code/#options","title":"options","text":"This directory contains files for managing the command line options.
"},{"location":"code/#loxilib","title":"loxilib","text":"This package contains common routines for logging, statistics and other utilities.
"},{"location":"contribute/","title":"Contributing","text":"When contributing to any of loxilb's repositories, please first discuss the change you wish to make via issue, email, or any other method with the owners of this repository before making a change.
Please note we have a code of conduct, please follow it in all your interactions with the project.
"},{"location":"contribute/#pull-request-process","title":"Pull Request Process","text":" - Ensure any install or build dependencies are removed before the end of the layer when doing a build.
- Update the README.md with details of changes to the interface, this includes new environment variables, exposed ports, useful file locations and container parameters.
- Increase the version numbers in any examples files and the README.md to the new version that this Pull Request would represent. The versioning scheme we use is SemVer.
- You may merge the Pull Request in once you have the sign-off of two other developers, or if you do not have permission to do that, you may request the second reviewer to merge it for you.
- For Pull Requests to be successfully merged to main branch :
- It has to be code-reviewed by the maintainer(s)
-
Integrated Travis-CI runs should pass without errors
-
Detailed instructions to help new developers setup the development/test environment can be found here
- Alternatively, they can email the developers at [loxilb-devel@netlox.io], check out existing issues in github, visit the loxilb forum or the loxilb slack channel
"},{"location":"contribute/#sign-your-commits","title":"Sign Your Commits","text":"Instructions
"},{"location":"contribute/#dco","title":"DCO","text":"Licensing is important to open source projects. It provides some assurances that the software will continue to be available based under the terms that the author(s) desired. We require that contributors sign off on commits submitted to our project's repositories. The Developer Certificate of Origin (DCO) is a way to certify that you wrote and have the right to contribute the code you are submitting to the project.
You sign off by adding the following to your commit messages. Your sign-off must match the git user and email associated with the commit.
This is my commit message\n\nSigned-off-by: Your Name <your.name@example.com>\n
Git has a -s
command line option to do this automatically:
git commit -s -m 'This is my commit message'\n
If you forgot to do this and have not yet pushed your changes to the remote repository, you can amend your commit with the sign-off by running
git commit --amend -s\n
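If the commit had already been pushed before amending, the amended commit needs to be pushed again; on your own branch this is typically done with a force push (use with care): git push --force-with-lease\n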
"},{"location":"contribute/#code-of-conduct","title":"Code of Conduct","text":""},{"location":"contribute/#our-pledge","title":"Our Pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
"},{"location":"contribute/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
"},{"location":"contribute/#our-responsibilities","title":"Our Responsibilities","text":"Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"contribute/#scope","title":"Scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
"},{"location":"contribute/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [loxilb-devel@netlox.io]. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
"},{"location":"contribute/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4
"},{"location":"debugging/","title":"loxilb - How to debug and troubleshoot","text":""},{"location":"debugging/#loxilb-docker-or-pod-not-coming-in-running-state","title":"* loxilb docker or pod not coming in Running state ?","text":" -
Solution:
-
Check the host machine kernel version. loxilb requires kernel version 5.8 or above.
-
Make sure you are running the correct image as per your environment.
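A quick way to verify the host kernel version mentioned above: uname -r\n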
"},{"location":"debugging/#externalip-pending-in-kubectl-get-svc","title":"* externalIP pending in \"kubectl get svc\" ?","text":"If this happens:
1. When running loxilb externally, there could be a connectivity issue.
-
Solution:
- Check if loxiURL annotation in kube-loxilb.yaml was set correctly.
- Check for kube-loxilb(master node) connectivity with loxilb node.
2. When running loxilb in-cluster mode
- Solution: Make sure loxilb pods were spawned.
3. Make sure the annotation \"node.kubernetes.io/exclude-from-external-load-balancers\" is NOT present in the node's configuration.
- Solution: If present, then the node will not be considered as an endpoint by loxilb. You can remove it by editing the node with \"kubectl edit node <node-name>\"
4. Make sure these annotations are present in your service.yaml
spec:\n loadBalancerClass: loxilb.io/loxilb\n type: LoadBalancer\n
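A few quick checks covering the points above (the service name and loxilb address below are placeholders; loxilb's REST API listens on port 11111 by default): kubectl get pods -A | grep loxilb\nkubectl describe svc <service-name>\ncurl -sv http://<loxilb-node-ip>:11111\n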
"},{"location":"debugging/#sctp-packets-dropping","title":"* SCTP packets dropping ?","text":"Usually, This happens due to SCTP checksum validation by host kernel and the possible scenarios are:
1. When workload and loxilb are scheduled in the same node.
2. Different CNIs create different types of interfaces, i.e. bridges/tunnels/veth pairs, and apply different network policies. These interfaces have different characteristics and implications for loxilb's checksum calculation logic.
-
Solution:
There are two ways to resolve this issue:
- Disable checksum calculation.
echo 1 > /sys/module/sctp/parameters/no_checksums\n echo 0 > /proc/sys/net/netfilter/nf_conntrack_checksum\n
- Or, let loxilb take care of the checksum calculation completely. For that, we need to install a utility (a kernel module) in all the nodes where loxilb is running. It will make sure the correct checksum is applied at the end.
curl -sfL https://github.com/loxilb-io/loxilb-ebpf/raw/main/kprobe/install.sh | sh -\n
"},{"location":"debugging/#abort-in-sctp","title":"* ABORT in SCTP ?","text":"SCTP ABORT can be seen in many scenarios:
1. When the Service IP is the same as the loxilb IP and SCTP packets do not match the rules.
-
Solution:
- Check if the rule is installed properly
loxicmd get lb\n
- Make sure the client is connecting to the same IP and port as per the configured service LB rule.
2. In one-arm/fullnat mode, loxilb sends SCTP ABORT after receiving SCTP INIT ACK packet.
- Solution: Check the underlying hypervisor interface driver. Some drivers do not provide enough metadata for eBPF processing, which makes the packet follow the fallback path to the kernel; the kernel, being unaware of the SCTP connection, sends SCTP ABORT. Emulated interfaces in bridge mode are preferred for smooth networking.
3. ABORT after a few seconds (Heartbeat re-transmissions)
When initiating the SCTP connection, if the application is not bound to a particular IP then the SCTP stack uses all the IPs in the SCTP INIT message. After the successful connection, both endpoints start health checks for each network path. As loxilb is in between and unaware of all the endpoint IPs, it drops those packets, which leads to the endpoint sending SCTP ABORT.
- Solution: In the SCTP uni-homing case, it is absolutely necessary to make sure the applications are bound to only one IP to avoid this issue.
4. ABORT after a few seconds (SCTP Multihoming)
- Solution: Currently, SCTP Multihoming service works only with fullnat mode and externalTrafficPolicy set to \"Local\"
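While debugging any of the SCTP cases above, it also helps to dump loxilb's installed rules and live connection-tracking state (the same commands are shown later in this guide): loxicmd get lb -o wide\nloxicmd get conntrack\n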
"},{"location":"debugging/#check-loxilb-logs","title":"* Check loxilb logs","text":"loxilb logs its various important events and logs in the file /var/log/loxilb.log. Users can check it by using tail -f or any other command of choice.
root@752531364e2c:/# tail -f /var/log/loxilb.log \nDBG: 2022/07/10 12:49:27 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:49:37 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:49:47 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:49:57 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:07 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:17 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:27 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:37 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:47 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:57 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \n
"},{"location":"debugging/#check-loxicmd-to-debug-loxilbs-internal-state","title":"* Check loxicmd to debug loxilb's internal state","text":"```
"},{"location":"debugging/#spawn-a-bash-shell-of-loxilb-docker","title":"Spawn a bash shell of loxilb docker","text":"docker exec -it loxilb bash
root@752531364e2c:/# loxicmd get lb\n| EXTERNALIP | PORT | PROTOCOL | SELECT | # OF ENDPOINTS |\n|------------|------|----------|--------|----------------|\n| 10.10.10.1 | 2020 | tcp | 0 | 3 |\n
root@752531364e2c:/# loxicmd get lb -o wide\n| EXTERNALIP | PORT | PROTOCOL | SELECT | ENDPOINTIP | TARGETPORT | WEIGHT |\n|------------|------|----------|--------|---------------|------------|--------|\n| 10.10.10.1 | 2020 | tcp | 0 | 31.31.31.1 | 5001 | 1 |\n| | | | | 32.32.32.1 | 5001 | 2 |\n| | | | | 100.100.100.1 | 5001 | 2 |\n
root@0c4f9175c983:/# loxicmd get conntrack\n| DESTINATIONIP | SOURCEIP | DESTINATIONPORT | SOURCEPORT | PROTOCOL | STATE | ACT |\n|---------------|------------|-----------------|------------|----------|-------------|-----|\n| 127.0.0.1 | 127.0.0.1 | 11111 | 47180 | tcp | closed-wait | |\n| 127.0.0.1 | 127.0.0.1 | 11111 | 47182 | tcp | est | |\n| 32.32.32.1 | 31.31.31.1 | 35068 | 35068 | icmp | bidir | |\n
root@65ad9b2f1b7f:/# loxicmd get port | INDEX | PORTNAME | MAC | LINK/STATE | L3INFO | L2INFO | |-------|----------|-------------------|-------------|---------------|---------------| | 1 | lo | 00:00:00:00:00:00 | true/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3801 | | | | | | IPv6 : [] | | | 2 | vlan3801 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3801 | | | | | | IPv6 : [] | | | 3 | llb0 | 42:6e:9b:7f:ff:36 | true/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3803 | | | | | | IPv6 : [] | | | 4 | vlan3803 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3803 | | | | | | IPv6 : [] | | | 5 | eth0 | 02:42:ac:1e:01:c1 | true/true | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3805 | | | | | | IPv6 : [] | | | 6 | vlan3805 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3805 | | | | | | IPv6 : [] | | | 7 | enp1 | fe:84:23:ac:41:31 | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3807 | | | | | | IPv6 : [] | | | 8 | vlan3807 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3807 | | | | | | IPv6 : [] | | | 9 | enp2 | d6:3c:7f:9e:58:5c | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3809 | | | | | | IPv6 : [] | | | 10 | vlan3809 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3809 | | | | | | IPv6 : [] | | | 11 | enp2v15 | 8a:9e:99:aa:f9:c3 | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3811 | | | | | | IPv6 : [] | | | 12 | vlan3811 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3811 | | | | | | IPv6 : [] | | | 13 | enp3 | f2:c7:4b:ac:fd:3e | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3813 | | | | | | IPv6 : [] | | | 14 | vlan3813 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3813 | | | | | | IPv6 : [] | | | 15 | enp4 | 12:d2:c3:79:f3:6a | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3815 | | | | | | IPv6 : [] | | | 16 | vlan3815 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3815 | | | | | | IPv6 : [] | | | 17 | vlan100 | 56:2e:76:b2:71:48 | false/false | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 100 | | | | | | IPv6 : [] | |
"},{"location":"ebpf/","title":"What is eBPF ??","text":"eBPF has been making quite some news lately. An elegant way to extend the linux kernel (or windows) has far reaching implications. Although initially, eBPF was used to enhance system observability beyond existing tools, we will explore in this post how eBPF can be used for enhancing Linux networking performance. There are a lot of additional resources about eBPF in the eBPF project page.
"},{"location":"ebpf/#a-quick-recap","title":"A quick recap","text":"The hooks that are of particular interest for this discussion are NIC hook (invoked just after packet is received at NIC) and TC hook (invoked just before Linux starts processing packet with its TCP/IP stack). Programs loaded to the former hook are also known as XDP programs and to the latter are called eBPF TC. Although both use eBPF restricted C syntax, there are significant differences between these types. (We will cover it in a separate blog later). For now, we just need to remember that when dealing with container-to-container or container-to-outside communication eBPF-TC makes much more sense since memory allocation (for skb) will happen either way in such scenarios.
"},{"location":"ebpf/#the-performance-bottlenecks","title":"The performance bottlenecks","text":"Coming back to the focus of our discussion which is of course performance, let us step back and take a look at why Linux sucks at networking performance (or rather why it could perform much faster). Linux networking evolved from the days of dial up modem networking when speed was not of utmost importance. Down the lane, code kept accumulating. Although it is extremely feature rich and RFC compliant, it hardly resembles a powerful data-path networking engine. The following figure shows a call-trace of Linux kernel networking stack:
The point is it has become incredibly complex over the years. Once features like NAT, VXLAN, conntrack etc come into play, Linux networking stops scaling due to cache degradation, lock contention etc.
"},{"location":"ebpf/#one-problem-leads-to-the-another","title":"One problem leads to the another","text":"To avoid performance penalties, many user-space frameworks like DPDK have been widely used, which completely skip the linux kernel networking and directly process packets in the user-space. As simple as that may sound, there are some serious drawbacks in using such frameworks e.g need to dedicate cores (can\u2019t multitask), applications written on a specific user-space driver (PMD) might not run on another as it is, apps are also rendered incompatible across different DPDK releases frequently. Finally, there is a need to redo various parts of the TCP/IP stack and the provisioning involved. In short, it leads to a massive and completely unnecessary need of reinventing the wheel. We will have a detailed post later to discuss these factors. But for now, in short, if we are looking to get more out of a box than doing only networking, DPDK is not the right choice. In the age of distributed edge computing and immersive metaverse, the need to do more out of less is of utmost importance.
"},{"location":"ebpf/#ebpf-comes-to-the-rescue","title":"eBPF comes to the rescue","text":"Now, eBPF changes all of this. eBPF is hosted inside the kernel so the biggest advantage of eBPF is it can co-exist with Linux/OS without the need of using dedicated cores, skipping the Kernel stack or breaking tools used for ages by the community. Handling of new protocols and functionality can be done in the fly without waiting for kernel development to catch up.
"},{"location":"eks-external/","title":"Eks external","text":""},{"location":"eks-external/#create-an-eks-cluster-with-ingress-access-enabled-by-loxilb-external-mode","title":"Create an EKS cluster with ingress access enabled by loxilb (external-mode)","text":"This document details the steps to create an EKS cluster and allow external ingress access using loxilb running in external mode. loxilb will run as EC2 instances in EKS cluster's VPC while loxilb's operator, kube-loxilb, will run as a replica-set inside EKS cluster.
"},{"location":"eks-external/#create-eks-cluster-with-4-worker-nodes-from-a-bastion-node-inside-your-vpc","title":"Create EKS cluster with 4 worker nodes from a bastion node inside your VPC","text":" - It is assumed that aws-cli, kubectl and eksctl are installed in a bastion node
$ eksctl create cluster --version 1.24 --name loxilb-demo --vpc-nat-mode Single --region ap-northeast-2 --node-type t3.small --nodes 4 --with-oidc --managed\n
- Create kube config for kubectl access
$ aws eks update-kubeconfig --region ap-northeast-2 --name loxilb-demo\n
- Double confirm the cluster created
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 14m\nkube-system aws-node-6vhlr 2/2 Running 0 14m\nkube-system aws-node-9kzb2 2/2 Running 0 14m\nkube-system aws-node-vvkq5 2/2 Running 0 14m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 21m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 21m\nkube-system kube-proxy-5j9gf 1/1 Running 0 14m\nkube-system kube-proxy-5tm8w 1/1 Running 0 14m\nkube-system kube-proxy-894k9 1/1 Running 0 14m\nkube-system kube-proxy-xgfb8 1/1 Running 0 14m\n
"},{"location":"eks-external/#deploy-loxilb-as-ec2-instances-in-ekss-vpc","title":"Deploy loxilb as EC2 instances in EKS's VPC","text":" - Create a file
launch-loxilb.sh
with the following contents (in bastion node) sudo apt-get update && apt-get install -y snapd\nsudo snap install docker\nsleep 30\nsudo docker run -u root --cap-add SYS_ADMIN --net=host --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n
- Deploy loxilb ec2 instance(s) using the above init-script
$ aws ec2 run-instances --image-id ami-01ed8ade75d4eee2f --count 1 --instance-type t3.medium --key-name aws-netlox --security-group-ids sg-0e2638db05b256476 --subnet-id subnet-0109b973f5f674f99 --associate-public-ip-address --user-data file://launch-loxilb.sh\n
"},{"location":"eks-external/#note-subnet-id-should-be-any-subnet-with-public-access-enabled-from-the-eks-cluster-rest-of-the-args-can-be-changed-as-applicable","title":"Note : subnet-id should be any subnet with public access enabled from the EKS cluster. Rest of the args can be changed as applicable","text":" - Double confirm loxilb EC2 instances are running properly in amazon aws console or using aws cli.
- Disable source/dest check of the loxilb EC2 instances
aws ec2 modify-network-interface-attribute --network-interface-id eni-02e1cbfa022eb0901 --no-source-dest-check\n
"},{"location":"eks-external/#deploy-loxilbs-operator-kube-loxilb","title":"Deploy loxilb's operator (kube-loxilb)","text":" - Create a file kube-loxilb.yml with the following contents
---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - services/status\n verbs:\n - get\n - watch\n - list\n - patch\n - update\n - apiGroups:\n - gateway.networking.k8s.io\n resources:\n - gatewayclasses\n - gatewayclasses/status\n - gateways\n - gateways/status\n - tcproutes\n - udproutes\n verbs: [\"get\", \"watch\", \"list\", \"patch\", \"update\"]\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - watch\n - list\n - apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n - apiGroups:\n - authorization.k8s.io\n resources:\n - subjectaccessreviews\n verbs:\n - create\n - apiGroups:\n - bgppeer.loxilb.io\n resources:\n - bgppeerservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kube-loxilb\nsubjects:\n - kind: ServiceAccount\n name: kube-loxilb\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --loxiURL=http://192.168.31.175:11111\n - --externalCIDR=0.0.0.0/32\n - --setLBMode=5\n #- --setRoles:0.0.0.0\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\n add: [\"NET_ADMIN\", \"NET_RAW\"]\n
"},{"location":"eks-external/#note1-externalcidr-args-can-be-set-to-any-public-ip-address-via-which-any-of-the-worker-nodes-can-be-accessed-it-can-be-also-set-to-simply-000032-which-means-lb-will-be-performed-on-any-of-the-nodes-where-loxilb-runs-the-decision-of-which-loxilb-nodeinstance-will-be-chosen-as-ingress-in-this-case-can-be-done-by-route53dns","title":"Note1: --externalCIDR args can be set to any Public IP address via which any of the worker nodes can be accessed. It can be also set to simply 0.0.0.0/32 which means LB will be performed on any of the nodes where loxilb runs. The decision of which loxilb node/instance will be chosen as ingress in this case can be done by Route53/DNS.","text":""},{"location":"eks-external/#note2-loxiurl-args-should-be-set-to-privateip-addresses-of-the-loxilb-ec2-instances-accessible-from-the-eks-cluster-currently-kube-loxilb-cant-autodetect-the-ec2-instances-running-loxilb-in-external-mode","title":"Note2: --loxiURL args should be set to privateIP address(es) of the loxilb ec2 instances accessible from the EKS cluster. Currently, kube-loxilb can't autodetect the EC2 instances running loxilb in external mode.","text":" - Deploy kube-loxilb to EKS cluster
$ kubectl apply -f kube-loxilb.yml\nserviceaccount/kube-loxilb created\nclusterrole.rbac.authorization.k8s.io/kube-loxilb created\ndeployment.apps/kube-loxilb created\n
- Check the state of the EKS cluster
$ kubectl get pods -A \nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 14m\nkube-system aws-node-6vhlr 2/2 Running 0 14m\nkube-system aws-node-9kzb2 2/2 Running 0 14m\nkube-system aws-node-vvkq5 2/2 Running 0 14m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 21m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 21m\nkube-system kube-proxy-5j9gf 1/1 Running 0 14m\nkube-system kube-proxy-5tm8w 1/1 Running 0 14m\nkube-system kube-proxy-894k9 1/1 Running 0 14m\nkube-system kube-proxy-xgfb8 1/1 Running 0 14m\nkube-system kube-loxilb-6477d6897f-vz74f 1/1 Running 0 5m\n
"},{"location":"eks-external/#install-a-test-service","title":"Install a test service","text":" - Create a file nginx.yml with the following contents:
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
- Deploy test nginx service to EKS
$ kubectl apply -f nginx.yml\nservice/nginx-lb1 created\n
-
Check the state of the EKS cluster
$ kubectl get pods -A \nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault nginx-test 1/1 Running 0 50s\nkube-system aws-node-2fpm4 2/2 Running 0 14m\nkube-system aws-node-6vhlr 2/2 Running 0 14m\nkube-system aws-node-9kzb2 2/2 Running 0 14m\nkube-system aws-node-vvkq5 2/2 Running 0 14m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 21m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 21m\nkube-system kube-proxy-5j9gf 1/1 Running 0 14m\nkube-system kube-proxy-5tm8w 1/1 Running 0 14m\nkube-system kube-proxy-894k9 1/1 Running 0 14m\nkube-system kube-proxy-xgfb8 1/1 Running 0 14m\nkube-system kube-loxilb-6477d6897f-vz74f 1/1 Running 0 5m\n
-
Check the external service for service ingress (via loxilb)
$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.100.0.1 <none> 443/TCP 10h\nnginx-lb1 LoadBalancer 10.100.244.105 llbanyextip 55005:30055/TCP 24s\n
"},{"location":"eks-external/#test-the-service","title":"Test the service","text":" - Try to access the service from outside (internet). We can use any public IP associated with any of the loxilb ec2 instances
$ curl http://3.37.191.xx:55005 \n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"eks-external/#note-we-would-need-to-make-sure-aws-security-groups-are-setup-properly-to-allow-access-for-ingress-traffic","title":"Note - We would need to make sure AWS security groups are setup properly to allow access for ingress traffic.","text":""},{"location":"eks-external/#restricting-loxilb-service-for-a-local-zone-node-group","title":"Restricting loxilb service for a local-zone node-group","text":"For limiting loxilb services to a specific node group of a local-zone, we can use kubenetes node-labels to limit the endpoints of that service to that node-group only. For example, if all the nodes in a local-zone node-groups have a label node.kubernetes.io/local-zone2=true
, then we can create a loxilb service with a following annotation :
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\n annotations:\n loxilb.io/nodelabel: \"node.kubernetes.io/local-zone2\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n
This will make sure that loxilb will pick only the endpoint nodes which belong to that node-group only."},{"location":"eks-incluster/","title":"Eks incluster","text":""},{"location":"eks-incluster/#create-an-eks-cluster-with-ingress-access-enabled-by-loxilb-incluster-mode","title":"Create an EKS cluster with ingress access enabled by loxilb (incluster-mode)","text":"This document details the steps to create an EKS cluster and allow external ingress access using loxilb running in incluster mode. loxilb will run as a daemon-set in all the worker nodes while loxilb's operator, kube-loxilb, will run as a replica-set.
Although loxilb has built-in support for associating (floating) AWS EIPs to private subnet addresses of EC2 instances, this is not considered in this particular scenario. But if it is needed, the functionality can be enabled by changing a few parameters in yaml config files.
"},{"location":"eks-incluster/#create-eks-cluster-with-4-worker-nodes-from-a-bastion-node-inside-your-vpc","title":"Create EKS cluster with 4 worker nodes from a bastion node inside your VPC","text":" - It is assumed that aws-cli, kubectl and eksctl are installed in a bastion node
$ eksctl create cluster --version 1.24 --name loxilb-demo --vpc-nat-mode Single --region ap-northeast-2 --node-type t3.small --nodes 4 --with-oidc --managed\n
- Create kube config for kubectl access
$ aws eks update-kubeconfig --region ap-northeast-2 --name loxilb-demo\n
- Double confirm the cluster created
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 14m\nkube-system aws-node-6vhlr 2/2 Running 0 14m\nkube-system aws-node-9kzb2 2/2 Running 0 14m\nkube-system aws-node-vvkq5 2/2 Running 0 14m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 21m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 21m\nkube-system kube-proxy-5j9gf 1/1 Running 0 14m\nkube-system kube-proxy-5tm8w 1/1 Running 0 14m\nkube-system kube-proxy-894k9 1/1 Running 0 14m\nkube-system kube-proxy-xgfb8 1/1 Running 0 14m\n
"},{"location":"eks-incluster/#deploy-loxilb-as-a-daemon-set","title":"Deploy loxilb as a daemon-set","text":" - Create a file loxilb.yml with the following contents
apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: loxilb-lb\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n app: loxilb-app\n template:\n metadata:\n name: loxilb-lb\n labels:\n app: loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.|eni.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: loxilb-lb-service\n namespace: kube-system\nspec:\n clusterIP: None\n selector:\n app: loxilb-app\n ports:\n - name: loxilb-app\n port: 11111\n targetPort: 11111\n protocol: TCP\n - name: loxilb-app-bgp\n port: 179\n targetPort: 179\n protocol: TCP\n
- Deploy loxilb
$ kubectl apply -f loxilb.yml\ndaemonset.apps/loxilb-lb created\nservice/loxilb-lb-service created\n
- Double confirm loxilb pods are running properly as a daemonset
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 19m\nkube-system aws-node-6vhlr 2/2 Running 0 19m\nkube-system aws-node-9kzb2 2/2 Running 0 19m\nkube-system aws-node-vvkq5 2/2 Running 0 19m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 26m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 26m\nkube-system kube-proxy-5j9gf 1/1 Running 0 19m\nkube-system kube-proxy-5tm8w 1/1 Running 0 19m\nkube-system kube-proxy-894k9 1/1 Running 0 19m\nkube-system kube-proxy-xgfb8 1/1 Running 0 19m\nkube-system loxilb-lb-7s45t 1/1 Running 0 18s\nkube-system loxilb-lb-fp6nv 1/1 Running 0 18s\nkube-system loxilb-lb-pbzql 1/1 Running 0 18s\nkube-system loxilb-lb-zzth8 1/1 Running 0 18s\n
"},{"location":"eks-incluster/#deploy-loxilbs-operator-kube-loxilb","title":"Deploy loxilb's operator (kube-loxilb)","text":" - Create a file kube-loxilb.yml with the following contents
---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - services/status\n verbs:\n - get\n - watch\n - list\n - patch\n - update\n - apiGroups:\n - gateway.networking.k8s.io\n resources:\n - gatewayclasses\n - gatewayclasses/status\n - gateways\n - gateways/status\n - tcproutes\n - udproutes\n verbs: [\"get\", \"watch\", \"list\", \"patch\", \"update\"]\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - watch\n - list\n - apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n - apiGroups:\n - authorization.k8s.io\n resources:\n - subjectaccessreviews\n verbs:\n - create\n - apiGroups:\n - bgppeer.loxilb.io\n resources:\n - bgppeerservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kube-loxilb\nsubjects:\n - kind: ServiceAccount\n name: kube-loxilb\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --externalCIDR=0.0.0.0/32\n - --setLBMode=5\n #- --setRoles:0.0.0.0\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\n add: [\"NET_ADMIN\", \"NET_RAW\"]\n
"},{"location":"eks-incluster/#note-externalcidr-args-can-be-set-to-any-public-ip-address-via-which-any-of-the-worker-nodes-can-be-accessed-it-can-be-also-set-to-simply-000032-which-means-lb-will-be-performed-on-any-of-the-nodes-where-loxilb-runs-the-decision-of-which-loxilb-nodeinstance-will-be-chosen-as-ingress-in-this-case-can-be-done-by-route53dns","title":"Note: --externalCIDR args can be set to any Public IP address via which any of the worker nodes can be accessed. It can be also set to simply 0.0.0.0/32 which means LB will be performed on any of the nodes where loxilb runs. The decision of which loxilb node/instance will be chosen as ingress in this case can be done by Route53/DNS.","text":" - Deploy kube-loxilb to EKS cluster
$ kubectl apply -f kube-loxilb.yml\nserviceaccount/kube-loxilb created\nclusterrole.rbac.authorization.k8s.io/kube-loxilb created\ndeployment.apps/kube-loxilb created\n
- Check the state of the EKS cluster
$ kubectl get pods -A \nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 35m\nkube-system aws-node-6vhlr 2/2 Running 0 35m\nkube-system aws-node-9kzb2 2/2 Running 0 35m\nkube-system aws-node-vvkq5 2/2 Running 0 35m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 42m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 42m\nkube-system kube-loxilb-c7cd4fccd-hjg8w 1/1 Running 0 116s\nkube-system kube-proxy-5j9gf 1/1 Running 0 35m\nkube-system kube-proxy-5tm8w 1/1 Running 0 35m\nkube-system kube-proxy-894k9 1/1 Running 0 35m\nkube-system kube-proxy-xgfb8 1/1 Running 0 35m\nkube-system loxilb-lb-7s45t 1/1 Running 0 16m\nkube-system loxilb-lb-fp6nv 1/1 Running 0 16m\nkube-system loxilb-lb-pbzql 1/1 Running 0 16m\nkube-system loxilb-lb-zzth8 1/1 Running 0 16m\n
"},{"location":"eks-incluster/#install-a-test-service","title":"Install a test service","text":" - Create a file nginx.yml with the following contents:
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
- Deploy test nginx service to EKS
$ kubectl apply -f nginx.yml\nservice/nginx-lb1 created\n
-
Check the state of the EKS cluster
$ kubectl get pods -A \nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault nginx-test 1/1 Running 0 50s\nkube-system aws-node-2fpm4 2/2 Running 0 39m\nkube-system aws-node-6vhlr 2/2 Running 0 39m\nkube-system aws-node-9kzb2 2/2 Running 0 39m\nkube-system aws-node-vvkq5 2/2 Running 0 39m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 46m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 46m\nkube-system kube-loxilb-c7cd4fccd-hjg8w 1/1 Running 0 6m13s\nkube-system kube-proxy-5j9gf 1/1 Running 0 39m\nkube-system kube-proxy-5tm8w 1/1 Running 0 39m\nkube-system kube-proxy-894k9 1/1 Running 0 39m\nkube-system kube-proxy-xgfb8 1/1 Running 0 39m\nkube-system loxilb-lb-7s45t 1/1 Running 0 20m\nkube-system loxilb-lb-fp6nv 1/1 Running 0 20m\nkube-system loxilb-lb-pbzql 1/1 Running 0 20m\nkube-system loxilb-lb-zzth8 1/1 Running 0 20m\n
-
Check the external service for service ingress (via loxilb)
$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.100.0.1 <none> 443/TCP 6h19m\nnginx-lb1 LoadBalancer 10.100.63.175 llbanyextip 55002:32704/TCP 4hs\n
"},{"location":"eks-incluster/#test-the-service","title":"Test the service","text":" - Try to access the service from outside (internet). We can use any public IP associated with the cluster (worker) nodes
$ curl http://43.201.76.xx:55002 \n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"eks-incluster/#note-we-would-need-to-make-sure-aws-security-groups-are-setup-properly-to-allow-access-for-ingress-traffic","title":"Note - We would need to make sure AWS security groups are setup properly to allow access for ingress traffic.","text":""},{"location":"eks-incluster/#restricting-loxilb-service-for-a-local-zone-node-group","title":"Restricting loxilb service for a local-zone node-group","text":"For limiting loxilb services to a specific node group of a local-zone, we can use kubenetes node-labels to limit the endpoints of that service to that node-group only. For example, if all the nodes in a local-zone node-groups have a label node.kubernetes.io/local-zone2=true
, then we can create a loxilb service with the following annotation :
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\n annotations:\n loxilb.io/nodelabel: \"node.kubernetes.io/local-zone2\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n
This will make sure that loxilb will pick only the endpoint nodes which belong to that node-group only."},{"location":"ext-ep/","title":"How-To - access end-points outside K8s","text":"In Kubernetes, there are two key concepts - Service and Endpoint."},{"location":"ext-ep/#what-is-service","title":"What is Service?","text":"
A \"Service\" is a method that exposes an application running in one or more pods.
"},{"location":"ext-ep/#what-is-an-endpoint","title":"What is an Endpoint?","text":"An \"Endpoint\" defines a list of network endpoints(IP address and port), typically referenced by a Service to define which Pods the traffic can be sent to.
When we create a service in Kubernetes, usually we do not have to worry about the Endpoints' management as it is taken care by Kubernetes itself. But, sometimes not all the services run in a single cluster, some of them are hosted in other cluster(s) e.g. DB, storage, web services, trancoder etc.
When endpoints are outside of the Kubernetes cluster, Endpoint objects can still be used to define and manage those external endpoints. This scenario is common when Kubernetes services need to interact with external systems, APIs, or services located outside of the cluster. Here's a practical example:
Suppose you have a Kubernetes cluster hosting a microservices-based application, and one of the services needs to communicate with an external database hosted outside of the cluster. In this case, you can use an Endpoint object to define the external database endpoint within Kubernetes. In that case, your cloud-native apps needs to connect to the external services with external endpoints.
"},{"location":"ext-ep/#service-with-external-endpoint","title":"Service with External Endpoint","text":"You can create an external service with loxilb as well. For this, You can simply create an Endpoint Object and then create a service using this endpoint object:
endpoint.yml
apiVersion: v1\nkind: Endpoints\nmetadata:\n name: ext-tcp-lb\nsubsets:\n - addresses:\n - ip: 192.168.82.2\n ports:\n - port: 80\n
Create endpoint object:
$ kubectl apply -f endpoint.yml\n
View endpoints:
$ kubectl get ep\nNAME ENDPOINTS AGE\nkubernetes 10.0.2.15:6443 16m\next-tcp-lb 192.168.82.2:80 16m\n
service.yml
apiVersion: v1\nkind: Service\nmetadata:\n name: ext-tcp-lb\nspec:\n loadBalancerClass: loxilb.io/loxilb\n type: LoadBalancer \n ports:\n - protocol: TCP\n port: 8000\n targetPort: 80\n
Create Service:
$ kubectl apply -f service.yml\n
View Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 16m\next-tcp-lb LoadBalancer 10.43.164.108 llb-20.20.20.1 8000:30355/TCP 15m\n
"},{"location":"faq/","title":"loxilb FAQs","text":" - Does loxilb depend on what kind of CNI is deployed in the cluster ?
Yes, loxilb configuration and operation might be related to which CNI (Calico, Cilium etc) is in use. loxilb just needs a way to find a route to its end-points. This also depends on how the network topology is laid out. For example, if a separated network for nodePort and external LB services is in effect or not. We will have a detailed guide on best practices for loxilb deployment soon. In the meantime, kindly reach out to us via github or loxilb forum
- Can loxilb be possibly run outside the released docker image ?
Yes, loxilb can be run outside the provided docker image. Docker image gives it good portability across various linux like OS's without any performance impact. However, if need is to run outside its own docker, kindly follow README of various loxilb-io repositories.
- Can loxilb also act as a CNI ?
loxilb supports many functionalities of a CNI but loxilb dev team is happy solving external LB and related connectivity problems for the time being. If there is a future requirement not met by currently available CNIs, we might chip in as well
 - Is there a commercially supported version of loxilb?
At this point in time, the loxilb team is working hard to provide a high-quality open-source product. If users need commercial support, kindly get in touch with us.
 - Can loxilb run in a standalone mode (without Kubernetes)?
Very much so. loxilb can run in standalone mode. Please follow the various guides available in the loxilb repo to run it this way; a minimal docker-based sketch is also shown after this FAQ list.
 - How does loxilb ensure conformance with Kubernetes?
loxilb uses kubetest/kubetest2 plus various other test utilities as part of its CI/CD workflows. We are also planning to get ourselves officially supported by distros like Red Hat OpenShift.
 - Where is loxilb deployed so far?
loxilb is currently used in academia for R&D, and various organizations are in the process of using it for PoCs. We will update the list of deployments as and when they are officially known.
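For the standalone-mode question above, here is a minimal docker-based sketch. The run flags mirror the ones used elsewhere in this documentation; the service VIP (20.20.20.1) and the endpoint addresses (192.168.10.1, 192.168.10.2) are hypothetical placeholders, and the exact loxicmd options may vary per release:
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\ndocker exec -it loxilb loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=192.168.10.1:1,192.168.10.2:1\n
The standalone guides in the loxilb repositories remain the authoritative reference for this mode.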
"},{"location":"gtp/","title":"Creating a simple test topology for testing GTP with loxilb","text":"To test loxilb in a completely virtual environment, it is possible to quickly create a virtual test topology. We will explain the steps required to create a very simple topology (more complex topologies can be built using this example) :
graph LR;\n UE1[UE1<br>32.32.32.1]-->B[llb1<br>10.10.10.59/24];\n UE2[UE2<br>31.31.31.1]-->B;\n B-- GTP Tunnel ---C[llb2<br>10.10.10.56/24]\n C-->D[EP1<br>31.31.31.1];\n C-->E[EP2<br>32.32.32.1];\n C-->F[EP3<br>17.17.17.1];\n
Prerequisites :
- The system should be x86 based (bare-metal or virtual)
- Docker should be preinstalled
"},{"location":"gtp/#next-step-is-to-run-the-following-script-to-create-and-configure-the-above-topology","title":"Next step is to run the following script to create and configure the above topology.","text":"Please refer scenario3 in loxilb/cicd script
Script will spawn dockers for UEs, loxilbs and endpoints.
In the script, UEs will try to access service IP(88.88.88.88). We are creating sessions and configuring load-balancer rule inside loxilb docker as follows :
dexec=\"docker exec -it \"\n##llb1 config\n\n#Creating session for ue1\n$dexec llb1 loxicmd create session user1 88.88.88.88 --accessNetworkTunnel 1:10.10.10.56 --coreNetworkTunnel=1:10.10.10.59\n\n#Creating ULCL filter with ue1 IP\n$dexec llb1 loxicmd create sessionulcl user1 --ulclArgs=11:32.32.32.1\n\n#Creating session for ue2\n$dexec llb1 loxicmd create session user2 88.88.88.88 --accessNetworkTunnel 2:10.10.10.56 --coreNetworkTunnel=2:10.10.10.59\n\n#Creating ULCL filter with ue2 IP\n$dexec llb1 loxicmd create sessionulcl user2 --ulclArgs=12:31.31.31.1\n\n##llb2 config\n#Creating session for ue1\n$dexec llb2 loxicmd create session user1 32.32.32.1 --accessNetworkTunnel 1:10.10.10.59 --coreNetworkTunnel=1:10.10.10.56\n\n#Creating ULCL filter with service IP for ue1\n$dexec llb2 loxicmd create sessionulcl user1 --ulclArgs=11:88.88.88.88\n\n#Creating session for ue1\n$dexec llb2 loxicmd create session user2 31.31.31.1 --accessNetworkTunnel 2:10.10.10.59 --coreNetworkTunnel=2:10.10.10.56\n\n#Creating ULCL filter with service IP for ue2\n$dexec llb2 loxicmd create sessionulcl user2 --ulclArgs=12:88.88.88.88\n\n\n##Create LB rule\n$dexec llb2 loxicmd create lb 88.88.88.88 --tcp=2020:8080 --endpoints=25.25.25.1:1,26.26.26.1:1,27.27.27.1:1\n
So, we now have two instances of loxilb running as dockers. The first instance, llb1, is simulated as a gNB as it is used to encap the incoming traffic from UE1 or UE2. Breakout, forward or load-balancer rules can be configured on the second instance, llb2. We can run any workloads as we wish inside the host containers and start testing loxilb.
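To generate some test traffic, you can list the configured rule on llb2 and then hit the service IP from a UE container. This is only a sketch: it assumes the UE container is named ue1 as in the diagram and that a simple HTTP server is listening on port 8080 at the configured endpoints.
$dexec llb2 loxicmd get lb\n$dexec ue1 curl http://88.88.88.88:2020\n
The request from UE1 is GTP-encapsulated by llb1 towards llb2 and then load-balanced across the endpoints configured in the LB rule above (25.25.25.1, 26.26.26.1, 27.27.27.1).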
"},{"location":"ha-deploy-KOR/","title":"loxilb \uace0\uac00\uc6a9\uc131(HA) \ubc30\ud3ec \ubc29\ubc95","text":"\uc774 \ubb38\uc11c\uc5d0\uc11c\ub294 loxilb\ub97c \uace0\uac00\uc6a9\uc131(HA)\uc73c\ub85c \ubc30\ud3ec\ud558\ub294 \ub2e4\uc591\ud55c \uc2dc\ub098\ub9ac\uc624\uc5d0 \ub300\ud574 \uc124\uba85\ud569\ub2c8\ub2e4. \uc774 \ud398\uc774\uc9c0\ub97c \uacc4\uc18d\ud558\uae30 \uc804\uc5d0 kube-loxilb\uc640 loxilb\uac00 \uc9c0\uc6d0\ud558\ub294 \ub2e4\uc591\ud55c NAT \ubaa8\ub4dc\uc5d0 \ub300\ud55c \uae30\ubcf8\uc801\uc778 \uc774\ud574\ub97c \uac00\uc9c0\ub294 \uac83\uc774 \uc88b\uc2b5\ub2c8\ub2e4. loxilb\ub294 \uc544\ud0a4\ud14d\ucc98 \uc120\ud0dd\uc5d0 \ub530\ub77c \uc778-\ud074\ub7ec\uc2a4\ud130 \ub610\ub294 Kubernetes \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \uc2e4\ud589\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc774 \ubb38\uc11c\uc5d0\uc11c\ub294 \uc778-\ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubc30\ud3ec\ub97c \uac00\uc815\ud558\uc9c0\ub9cc, \uc720\uc0ac\ud55c \uad6c\uc131\uc774 \uc678\ubd80 \uad6c\uc131 \uc5d0\uc11c\ub3c4 \ub3d9\uc77c\ud558\uac8c \uac00\ub2a5\ud569\ub2c8\ub2e4.
- \uc2dc\ub098\ub9ac\uc624 1 - Flat L2 \ub124\ud2b8\uc6cc\ud0b9 (\uc561\ud2f0\ube0c-\ubc31\uc5c5)
- \uc2dc\ub098\ub9ac\uc624 2 - L3 \ub124\ud2b8\uc6cc\ud06c (BGP\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5 \ubaa8\ub4dc)
- \uc2dc\ub098\ub9ac\uc624 3 - L3 \ub124\ud2b8\uc6cc\ud06c (BGP ECMP\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\uc561\ud2f0\ube0c)
- \uc2dc\ub098\ub9ac\uc624 4 - \uc5f0\uacb0 \ub3d9\uae30\ud654\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5
- \uc2dc\ub098\ub9ac\uc624 5 - \ube60\ub978 Fail-over \uac10\uc9c0\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5(BFD)
"},{"location":"ha-deploy-KOR/#1-flat-l2-","title":"\uc2dc\ub098\ub9ac\uc624 1 - Flat L2 \ub124\ud2b8\uc6cc\ud0b9 (\uc561\ud2f0\ube0c-\ubc31\uc5c5)","text":""},{"location":"ha-deploy-KOR/#_1","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 loxilb\uac00 \ubaa8\ub4e0 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0\uc11c DaemonSet\uc73c\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4. \uadf8\ub9ac\uace0 kube-loxilb\ub294 Deployment\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_2","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 \uc11c\ube44\uc2a4\uac00 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc788\uc5b4\uc57c \ud558\ub294 \uacbd\uc6b0.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uac00 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc788\uac70\ub098 \uc5c6\uc744 \uc218 \uc788\ub294 \uacbd\uc6b0.
- \uac04\ub2e8\ud55c \ubc30\ud3ec\ub97c \uc6d0\ud558\ub294 \uacbd\uc6b0.
"},{"location":"ha-deploy-KOR/#kube-loxilb","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ub85c\uceec \uc11c\ube0c\ub137\uc5d0\uc11c CIDR \uc120\ud0dd.
- SetRoles \uc635\uc158\uc744 \uc120\ud0dd\ud558\uc5ec \ud65c\uc131 loxilb pod\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\ub3c4\ub85d \ud569\ub2c8\ub2e4.
- loxilb\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 Fail-over \uc2dc \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130\ub97c \uc120\ucd9c\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_3","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=192.168.80.200/24\n - --setRoles=0.0.0.0\n - --setLBMode=1\n
- \"--externalCIDR=192.168.80.200/24\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setRoles=0.0.0.0\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 loxilb \uc778\uc2a4\ud134\uc2a4 \uc911\uc5d0\uc11c \ud65c\uc131-\ubc31\uc5c5\uc744 \uc120\ud0dd\ud558\uace0 svc IP\ub97c \ud65c\uc131 loxilb \ub178\ub4dc\uc5d0 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setLBMode=1\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
"},{"location":"ha-deploy-KOR/#_4","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--egr-hooks\" - \uc6cc\ud06c\ub85c\ub4dc\uac00 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0 \uc2a4\ucf00\uc904\ub9c1\ub420 \uc218 \uc788\ub294 \uacbd\uc6b0 \ud544\uc694\ud569\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\ub85c \uc6cc\ud06c\ub85c\ub4dc \uc2a4\ucf00\uc904\ub9c1\uc744 \uad00\ub9ac\ud558\ub294 \uacbd\uc6b0 \ud574\ub2f9 \ubcc0\uc218\ub97c \uc124\uc815\ud560 \ud544\uc694\ub294 \uc5c6\uc2b5\ub2c8\ub2e4.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\uc5d0\uc11c \uc2e4\ud589\ud558\ub294 \uacbd\uc6b0 \ud544\uc218\uc785\ub2c8\ub2e4. loxilb\ub294 \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uc9c0\ub9cc \uae30\ubcf8 \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c \uc2e4\ud589 \uc911\uc774\ubbc0\ub85c \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4(CNI \uc778\ud130\ud398\uc774\uc2a4 \ud3ec\ud568)\uac00 \ub178\ucd9c\ub418\uace0 loxilb\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uac8c \ub429\ub2c8\ub2e4. \uc774\ub294 \uc6d0\ud558\ub294 \ubc14\uac00 \uc544\ub2c8\ubbc0\ub85c \uc0ac\uc6a9\uc790\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud558\ub294 \uc815\uaddc \ud45c\ud604\uc2dd\uc744 \uc5b8\uae09\ud574\uc57c \ud569\ub2c8\ub2e4. \uc8fc\uc5b4\uc9c4 \uc608\uc81c\uc758 \uc815\uaddc \ud45c\ud604\uc2dd\uc740 flannel \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud569\ub2c8\ub2e4. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" \uc815\uaddc \ud45c\ud604\uc2dd\uc740 calico CNI\uc640 \ud568\uaed8 \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4.
\uc0d8\ud50c loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over","title":"Fail-Over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
kube-loxilb\ub294 loxilb\uc758 \uc0c1\ud0dc\ub97c \uc9c0\uc18d\uc801\uc73c\ub85c \ubaa8\ub2c8\ud130\ub9c1\ud569\ub2c8\ub2e4. \uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 loxilb\uc758 \uc0c1\ud0dc \ubcc0\uacbd\uc744 \uac10\uc9c0\ud558\uace0 \uc0ac\uc6a9 \uac00\ub2a5\ud55c loxilb pod \ud480\uc5d0\uc11c \uc0c8\ub85c\uc6b4 \u201c\ud65c\uc131\u201d\uc744 \ud560\ub2f9\ud569\ub2c8\ub2e4. \uc0c8\ub85c\uc6b4 pod\ub294 \uc774\uc804\uc5d0 \ub2e4\ub978 loxilb pod\uc5d0 \ud560\ub2f9\ub41c svcIP\ub97c \uc0c1\uc18d\ubc1b\uc544 \uc11c\ube44\uc2a4\uac00 \uc0c8\ub86d\uac8c \ud65c\uc131\ud654\ub41c loxilb pod\uc5d0 \uc758\ud574 \uc81c\uacf5\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#2-l3-bgp-","title":"\uc2dc\ub098\ub9ac\uc624 2 - L3 \ub124\ud2b8\uc6cc\ud06c (BGP\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5 \ubaa8\ub4dc)","text":""},{"location":"ha-deploy-KOR/#_5","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. SVC\ub294 \ud074\ub7ec\uc2a4\ud130/\ub85c\uceec \uc11c\ube0c\ub137\uc774 \uc544\ub2cc \uc678\ubd80 IP\ub97c \uac00\uc9d1\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 loxilb\uac00 \ubaa8\ub4e0 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0\uc11c DaemonSet\uc73c\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4. \uadf8\ub9ac\uace0 kube-loxilb\ub294 Deployment\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_6","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 \ud074\ub7ec\uc2a4\ud130\uac00 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\ub294 \uacbd\uc6b0.
- \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 svc VIP\uac00 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc5b4\uc57c \ud558\ub294 \uacbd\uc6b0(\ud074\ub7ec\uc2a4\ud130 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub3c4 \ub2e4\ub978 \ub124\ud2b8\uc6cc\ud06c\uc5d0 \uc788\uc744 \uc218 \uc788\uc74c).
- \ud074\ub77c\uc6b0\ub4dc \ubc30\ud3ec\uc5d0 \uc774\uc0c1\uc801\uc785\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#kube-loxilb_1","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0\uc11c CIDR\uc744 \uc120\ud0dd\ud569\ub2c8\ub2e4.
- SetRoles \uc635\uc158\uc744 \uc120\ud0dd\ud558\uc5ec \ud65c\uc131 loxilb pod\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\ub3c4\ub85d \ud569\ub2c8\ub2e4.
- loxilb\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 Fail-over \uc2dc \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130\ub97c \uc120\ucd9c\ud569\ub2c8\ub2e4.
- loxilb Pod \uac04\uc758 BGP \ud53c\uc5b4\ub9c1 \ud504\ub85c\ube44\uc800\ub2dd\uc744 \uc790\ub3d9\ud654\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_7","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=123.123.123.1/24\n - --setRoles=0.0.0.0\n - --setLBMode=1\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
* \"--externalCIDR=123.123.123.1/24\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ubaa8\ub450 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4. * \"--setRoles=0.0.0.0\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 loxilb \uc778\uc2a4\ud134\uc2a4 \uc911\uc5d0\uc11c \ud65c\uc131-\ubc31\uc5c5\uc744 \uc120\ud0dd\ud558\uace0 svc IP\ub97c \ud65c\uc131 loxilb \ub178\ub4dc\uc5d0 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. * \"--setLBMode=1\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. * \"--setBGP=65100\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 BGP \uc778\uc2a4\ud134\uc2a4\uc5d0 \ub85c\uceec AS \ubc88\ud638\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. * \"--extBGPPeers=50.50.50.1:65101\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 BGP \uc778\uc2a4\ud134\uc2a4\uc758 \uc678\ubd80 \uc774\uc6c3\uc744 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb_1","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \uc0c1\ud0dc(\ud65c\uc131 \ub610\ub294 \ubc31\uc5c5)\uc5d0 \ub530\ub77c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4.
- svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
"},{"location":"ha-deploy-KOR/#_8","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--bgp\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--bgp\" - \uc635\uc158\uc740 loxilb\uac00 goBGP \uc778\uc2a4\ud134\uc2a4\uc640 \ud568\uaed8 \uc2e4\ud589\ub418\uc5b4 \ud65c\uc131/\ubc31\uc5c5 \uc0c1\ud0dc\uc5d0 \ub530\ub77c \uc801\uc808\ud55c \uc6b0\uc120\uc21c\uc704\ub85c \uacbd\ub85c\ub97c \uad11\uace0\ud560 \uc218 \uc788\uac8c \ud569\ub2c8\ub2e4.
-
\"--egr-hooks\" - \uc6cc\ud06c\ub85c\ub4dc\uac00 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0 \uc2a4\ucf00\uc904\ub9c1\ub420 \uc218 \uc788\ub294 \uacbd\uc6b0 \ud544\uc694\ud569\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\ub85c \uc6cc\ud06c\ub85c\ub4dc \uc2a4\ucf00\uc904\ub9c1\uc744 \uad00\ub9ac\ud558\ub294 \uacbd\uc6b0 \uc774 \uc778\uc218\ub97c \uc5b8\uae09\ud560 \ud544\uc694\ub294 \uc5c6\uc2b5\ub2c8\ub2e4.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\uc5d0\uc11c \uc2e4\ud589\ud558\ub294 \uacbd\uc6b0 \ud544\uc218\uc785\ub2c8\ub2e4. loxilb\ub294 \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uc9c0\ub9cc \uae30\ubcf8 \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c \uc2e4\ud589 \uc911\uc774\ubbc0\ub85c \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4(CNI \uc778\ud130\ud398\uc774\uc2a4 \ud3ec\ud568)\uac00 \ub178\ucd9c\ub418\uace0 loxilb\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uac8c \ub429\ub2c8\ub2e4. \uc774\ub294 \uc6d0\ud558\ub294 \ubc14\uac00 \uc544\ub2c8\ubbc0\ub85c \uc0ac\uc6a9\uc790\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud558\ub294 \uc815\uaddc \ud45c\ud604\uc2dd\uc744 \uc5b8\uae09\ud574\uc57c \ud569\ub2c8\ub2e4. \uc8fc\uc5b4\uc9c4 \uc608\uc81c\uc758 \uc815\uaddc \ud45c\ud604\uc2dd\uc740 flannel \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud569\ub2c8\ub2e4. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" \uc815\uaddc \ud45c\ud604\uc2dd\uc740 calico CNI\uc640 \ud568\uaed8 \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4.
\uc0d8\ud50c loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over_1","title":"Fail-over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
kube-loxilb\ub294 loxilb\uc758 \uc0c1\ud0dc\ub97c \uc9c0\uc18d\uc801\uc73c\ub85c \ubaa8\ub2c8\ud130\ub9c1\ud569\ub2c8\ub2e4. \uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 loxilb\uc758 \uc0c1\ud0dc \ubcc0\uacbd\uc744 \uac10\uc9c0\ud558\uace0 \uc0ac\uc6a9 \uac00\ub2a5\ud55c loxilb pod \ud480\uc5d0\uc11c \uc0c8\ub85c\uc6b4 \u201c\ud65c\uc131\u201d\uc744 \ud560\ub2f9\ud569\ub2c8\ub2e4. \uc0c8\ub85c\uc6b4 pod\ub294 \uc774\uc804\uc5d0 \ub2e4\ub978 loxilb pod\uc5d0 \ud560\ub2f9\ub41c svcIP\ub97c \uc0c1\uc18d\ubc1b\uc544 \uc0c8 \uc0c1\ud0dc\uc5d0 \ub530\ub77c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4. \ud074\ub77c\uc774\uc5b8\ud2b8\ub294 SVCIP\uc5d0 \ub300\ud55c \uc0c8\ub85c\uc6b4 \uacbd\ub85c\ub97c \uc218\uc2e0\ud558\uace0 \uc11c\ube44\uc2a4\ub294 \uc0c8\ub85c \ud65c\uc131\ud654\ub41c loxilb pod\uc5d0 \uc758\ud574 \uc81c\uacf5\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#3-l3-bgp-ecmp-","title":"\uc2dc\ub098\ub9ac\uc624 3 - L3 \ub124\ud2b8\uc6cc\ud06c (BGP ECMP\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\uc561\ud2f0\ube0c)","text":""},{"location":"ha-deploy-KOR/#_9","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. SVC\ub294 \ud074\ub7ec\uc2a4\ud130/\ub85c\uceec \uc11c\ube0c\ub137\uc774 \uc544\ub2cc \uc678\ubd80 IP\ub97c \uac00\uc9d1\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 loxilb\uac00 \ubaa8\ub4e0 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0\uc11c DaemonSet\uc73c\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4. \uadf8\ub9ac\uace0 kube-loxilb\ub294 Deployment\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_10","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 \ud074\ub7ec\uc2a4\ud130\uac00 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\ub294 \uacbd\uc6b0.
- \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 svc VIP\uac00 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc5b4\uc57c \ud558\ub294 \uacbd\uc6b0(\ud074\ub7ec\uc2a4\ud130 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub3c4 \ub2e4\ub978 \ub124\ud2b8\uc6cc\ud06c\uc5d0 \uc788\uc744 \uc218 \uc788\uc74c).
- \ud074\ub77c\uc6b0\ub4dc \ubc30\ud3ec\uc5d0 \uc774\uc0c1\uc801\uc785\ub2c8\ub2e4.
- \uc561\ud2f0\ube0c-\uc561\ud2f0\ube0c \ud074\ub7ec\uc2a4\ud130\ub9c1\uc73c\ub85c \uc778\ud574 \ub354 \ub098\uc740 \uc131\ub2a5\uc774 \ud544\uc694\ud558\uc9c0\ub9cc \ub124\ud2b8\uc6cc\ud06c \uc7a5\uce58/\ud638\uc2a4\ud2b8\uac00 ECMP\ub97c \uc9c0\uc6d0\ud574\uc57c \ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#kube-loxilb_2","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0\uc11c CIDR\uc744 \uc120\ud0dd\ud569\ub2c8\ub2e4.
- \uc774 \uacbd\uc6b0 SetRoles \uc635\uc158\uc744 \uc120\ud0dd\ud558\uc9c0 \ub9c8\uc138\uc694(svcIP\uac00 \ub3d9\uc77c\ud55c attributes/prio/med \ub85c \uad11\uace0\ub429\ub2c8\ub2e4).
- loxilb Pod \uac04\uc758 BGP \ud53c\uc5b4\ub9c1 \ud504\ub85c\ube44\uc800\ub2dd\uc744 \uc790\ub3d9\ud654\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_11","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=123.123.123.1/24\n - --setLBMode=1\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--externalCIDR=123.123.123.1/24\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ubaa8\ub450 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setLBMode=1\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setBGP=65100\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 BGP \uc778\uc2a4\ud134\uc2a4\uc5d0 \ub85c\uceec AS \ubc88\ud638\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--extBGPPeers=50.50.50.1:65101\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 BGP \uc778\uc2a4\ud134\uc2a4\uc758 \uc678\ubd80 \uc774\uc6c3\uc744 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb_2","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ub3d9\uc77c\ud55c \uc18d\uc131\uc73c\ub85c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4.
- svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
"},{"location":"ha-deploy-KOR/#_12","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--bgp\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--bgp\" - \uc635\uc158\uc740 loxilb\uac00 \ub3d9\uc77c\ud55c \uc18d\uc131\uc73c\ub85c \uacbd\ub85c\ub97c \uad11\uace0\ud560 \uc218 \uc788\uac8c \ud569\ub2c8\ub2e4.
-
\"--egr-hooks\" - \uc6cc\ud06c\ub85c\ub4dc\uac00 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0 \uc2a4\ucf00\uc904\ub9c1\ub420 \uc218 \uc788\ub294 \uacbd\uc6b0 \ud544\uc694\ud569\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\ub85c \uc6cc\ud06c\ub85c\ub4dc \uc2a4\ucf00\uc904\ub9c1\uc744 \uad00\ub9ac\ud558\ub294 \uacbd\uc6b0 \uc774 \uc778\uc218\ub97c \uc5b8\uae09\ud560 \ud544\uc694\ub294 \uc5c6\uc2b5\ub2c8\ub2e4.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\uc5d0\uc11c \uc2e4\ud589\ud558\ub294 \uacbd\uc6b0 \ud544\uc218\uc785\ub2c8\ub2e4. loxilb\ub294 \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uc9c0\ub9cc \uae30\ubcf8 \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c \uc2e4\ud589 \uc911\uc774\ubbc0\ub85c \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4(CNI \uc778\ud130\ud398\uc774\uc2a4 \ud3ec\ud568)\uac00 \ub178\ucd9c\ub418\uace0 loxilb\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uac8c \ub429\ub2c8\ub2e4. \uc774\ub294 \uc6d0\ud558\ub294 \ubc14\uac00 \uc544\ub2c8\ubbc0\ub85c \uc0ac\uc6a9\uc790\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud558\ub294 \uc815\uaddc \ud45c\ud604\uc2dd\uc744 \uc5b8\uae09\ud574\uc57c \ud569\ub2c8\ub2e4. \uc8fc\uc5b4\uc9c4 \uc608\uc81c\uc758 \uc815\uaddc \ud45c\ud604\uc2dd\uc740 flannel \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud569\ub2c8\ub2e4. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" \uc815\uaddc \ud45c\ud604\uc2dd\uc740 calico CNI\uc640 \ud568\uaed8 \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4.
\uc0d8\ud50c loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over_2","title":"Fail-over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
\uc7a5\uc560\uac00 \ubc1c\uc0dd\ud55c \uacbd\uc6b0, \ud074\ub77c\uc774\uc5b8\ud2b8\uc5d0\uc11c \uc2e4\ud589 \uc911\uc778 BGP\ub294 ECMP \uacbd\ub85c\ub97c \uc5c5\ub370\uc774\ud2b8\ud558\uace0 \ud2b8\ub798\ud53d\uc744 \ud65c\uc131 ECMP \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \ubcf4\ub0b4\uae30 \uc2dc\uc791\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#4-","title":"\uc2dc\ub098\ub9ac\uc624 4 - \uc5f0\uacb0 \ub3d9\uae30\ud654\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5","text":""},{"location":"ha-deploy-KOR/#_13","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
\uc774 \uae30\ub2a5\uc740 loxilb\uac00 \uae30\ubcf8 \ubaa8\ub4dc \ub610\ub294 Full NAT \ubaa8\ub4dc\uc5d0\uc11c Kubernetes \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \uc2e4\ud589\ub420 \ub54c\ub9cc \uc9c0\uc6d0\ub429\ub2c8\ub2e4. Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. SVC\ub294 \uc678\ubd80 IP\ub97c \uac00\uc9d1\ub2c8\ub2e4.
\uc678\ubd80 \ud074\ub77c\uc774\uc5b8\ud2b8, loxilb \ubc0f Kubernetes \ud074\ub7ec\uc2a4\ud130\uc758 \uc5f0\uacb0\uc5d0 \ub530\ub77c \uba87 \uac00\uc9c0 \uac00\ub2a5\ud55c \uc2dc\ub098\ub9ac\uc624\uac00 \uc788\uc2b5\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 L3 \uc5f0\uacb0\uc744 \uace0\ub824\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_14","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - loxilb pod \uc7a5\uc560 \uc2dc \uc7a5\uae30 \uc2e4\ud589 \uc5f0\uacb0\uc744 \uc720\uc9c0\ud574\uc57c \ud558\ub294 \uacbd\uc6b0
- DSR \ubaa8\ub4dc\ub85c \uc54c\ub824\uc9c4 \ub2e4\ub978 LB \ubaa8\ub4dc\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc5f0\uacb0\uc744 \uc720\uc9c0\ud560 \uc218 \uc788\uc9c0\ub9cc \ub2e4\uc74c\uacfc \uac19\uc740 \uc81c\ud55c \uc0ac\ud56d\uc774 \uc788\uc2b5\ub2c8\ub2e4:
- \uc0c1\ud0dc \uae30\ubc18 \ud544\ud130\ub9c1 \ubc0f \uc5f0\uacb0 \ucd94\uc801\uc744 \ubcf4\uc7a5\ud560 \uc218 \uc5c6\uc2b5\ub2c8\ub2e4.
- \uba40\ud2f0\ud638\ubc0d \uae30\ub2a5\uc744 \uc9c0\uc6d0\ud560 \uc218 \uc5c6\uc2b5\ub2c8\ub2e4. \ub2e4\ub978 5-\ud29c\ud50c\uc774 \ub3d9\uc77c\ud55c \uc5f0\uacb0\uc5d0 \uc18d\ud560 \uc218 \uc788\uae30 \ub54c\ubb38\uc785\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#kube-loxilb_3","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ud544\uc694\ud55c \ub300\ub85c CIDR\uc744 \uc120\ud0dd\ud569\ub2c8\ub2e4.
- SetRoles \uc635\uc158\uc744 \uc120\ud0dd\ud558\uc5ec \ud65c\uc131 loxilb\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\ub3c4\ub85d \ud569\ub2c8\ub2e4(svcIP\uac00 \ub2e4\ub978 attributes/prio/med \ub85c \uad11\uace0\ub429\ub2c8\ub2e4).
- loxilb \ucee8\ud14c\uc774\ub108 \uac04\uc758 BGP \ud53c\uc5b4\ub9c1 \ud504\ub85c\ube44\uc800\ub2dd\uc744 \uc790\ub3d9\ud654\ud569\ub2c8\ub2e4(\ud544\uc694\ud55c \uacbd\uc6b0).
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c Full NAT \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_15","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --setRoles=0.0.0.0\n - --loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\n - --externalCIDR=123.123.123.1/24\n - --setLBMode=2\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--setRoles=0.0.0.0\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 loxilb \uc778\uc2a4\ud134\uc2a4 \uc911\uc5d0\uc11c \ud65c\uc131-\ubc31\uc5c5\uc744 \uc120\ud0dd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\" - \uc5f0\uacb0\ud560 loxilb URL\uc785\ub2c8\ub2e4.
- \"--externalCIDR=123.123.123.1/24\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ubaa8\ub450 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setLBMode=2\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c Full NAT \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setBGP=65100\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 BGP \uc778\uc2a4\ud134\uc2a4\uc5d0 \ub85c\uceec AS \ubc88\ud638\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--extBGPPeers=50.50.50.1:65101\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 BGP \uc778\uc2a4\ud134\uc2a4\uc758 \uc678\ubd80 \uc774\uc6c3\uc744 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb_3","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \uc0c1\ud0dc(\ud65c\uc131/\ubc31\uc5c5)\uc5d0 \ub530\ub77c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4.
- svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
- \uc7a5\uae30 \uc2e4\ud589 \uc5f0\uacb0\uc744 \ub2e4\ub978 \uad6c\uc131\ub41c loxilb \ud53c\uc5b4\uc640 \ub3d9\uae30\ud654\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_16","title":"\uc2e4\ud589 \uc635\uc158","text":"#llb1\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb2IP --self=0 -b\n\n#llb2\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb1IP --self=1 -b\n
- \"--cluster=\\<llb-peer-IP>\" - \uc635\uc158\uc740 \ub3d9\uae30\ud654\ub97c \uc704\ud55c \ud53c\uc5b4 loxilb IP\ub97c \uad6c\uc131\ud569\ub2c8\ub2e4.
- \"--self=0/1\" - \uc635\uc158\uc740 \uc778\uc2a4\ud134\uc2a4\ub97c \uc2dd\ubcc4\ud569\ub2c8\ub2e4.
- \"-b\" - \uc635\uc158\uc740 loxilb\uac00 \ud65c\uc131/\ubc31\uc5c5 \uc0c1\ud0dc\uc5d0 \ub530\ub77c \uc801\uc808\ud55c \uc6b0\uc120\uc21c\uc704\ub85c \uacbd\ub85c\ub97c \uad11\uace0\ud560 \uc218 \uc788\ub3c4\ub85d goBGP \uc778\uc2a4\ud134\uc2a4\uc640 \ud568\uaed8 \uc2e4\ud589\ub418\ub3c4\ub85d \ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over_3","title":"Fail-over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
\uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 kube-loxilb\ub294 \uc7a5\uc560\ub97c \uac10\uc9c0\ud569\ub2c8\ub2e4. \ud65c\uc131 loxilb \ud480\uc5d0\uc11c \uc0c8\ub85c\uc6b4 loxilb\ub97c \uc120\ud0dd\ud558\uace0 \uc774\ub97c \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130\ub85c \uc5c5\ub370\uc774\ud2b8\ud569\ub2c8\ub2e4. \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub294 \ub192\uc740 \uc6b0\uc120\uc21c\uc704\ub85c svcIP\ub97c \uad11\uace0\ud558\uc5ec \ud074\ub77c\uc774\uc5b8\ud2b8\uc5d0\uc11c \uc2e4\ud589 \uc911\uc778 BGP\uac00 \ud2b8\ub798\ud53d\uc744 \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub85c \ubcf4\ub0b4\ub3c4\ub85d \uac15\uc81c\ud569\ub2c8\ub2e4. \uc5f0\uacb0\uc774 \ubaa8\ub450 \ub3d9\uae30\ud654\ub418\uc5b4 \uc788\uc73c\ubbc0\ub85c \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub294 \uc9c0\uc815\ub41c \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \ud2b8\ub798\ud53d\uc744 \ubcf4\ub0b4\uae30 \uc2dc\uc791\ud569\ub2c8\ub2e4.
\uc774 \uae30\ub2a5\uc5d0 \ub300\ud574 \uc790\uc138\ud788 \uc54c\uc544\ubcf4\ub824\uba74 \"Hitless HA\" \ube14\ub85c\uadf8\ub97c \uc77d\uc5b4\ubcf4\uc138\uc694.
"},{"location":"ha-deploy-KOR/#5-fail-over-","title":"\uc2dc\ub098\ub9ac\uc624 5 - \ube60\ub978 Fail-over \uac10\uc9c0\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5","text":""},{"location":"ha-deploy-KOR/#_17","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
\uc774 \uae30\ub2a5\uc740 loxilb\uac00 Kubernetes \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \uc2e4\ud589\ub420 \ub54c\ub9cc \uc9c0\uc6d0\ub429\ub2c8\ub2e4. Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. SVC\ub294 \uc678\ubd80 IP\ub97c \uac00\uc9d1\ub2c8\ub2e4.
\uc678\ubd80 \ud074\ub77c\uc774\uc5b8\ud2b8, loxilb \ubc0f Kubernetes \ud074\ub7ec\uc2a4\ud130\uc758 \uc5f0\uacb0\uc5d0 \ub530\ub77c \uba87 \uac00\uc9c0 \uac00\ub2a5\ud55c \uc2dc\ub098\ub9ac\uc624\uac00 \uc788\uc2b5\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \uc5f0\uacb0 \ub3d9\uae30\ud654\uc640 \ud568\uaed8 L2 \uc5f0\uacb0\uc744 \uace0\ub824\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_18","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - \ube60\ub978 Fail-over \uac10\uc9c0 \ubc0f \uc11c\ube44\uc2a4 \uc5f0\uc18d\uc131\uc774 \ud544\uc694\ud55c \uacbd\uc6b0.
- \uc774 \uae30\ub2a5\uc740 L2 \ub610\ub294 L3 \ub124\ud2b8\uc6cc\ud06c \uc124\uc815\uc5d0\uc11c \ubaa8\ub450 \uc791\ub3d9\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#kube-loxilb_4","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ud544\uc694\ud55c \ub300\ub85c CIDR\uc744 \uc120\ud0dd\ud569\ub2c8\ub2e4.
- SetRoles \uc635\uc158\uc744 \ube44\ud65c\uc131\ud654\ud558\uc5ec \ud65c\uc131 loxilb\ub97c \uc120\ud0dd\ud558\uc9c0 \uc54a\ub3c4\ub85d \ud569\ub2c8\ub2e4.
- loxilb \ucee8\ud14c\uc774\ub108 \uac04\uc758 BGP \ud53c\uc5b4\ub9c1 \ud504\ub85c\ube44\uc800\ub2dd\uc744 \uc790\ub3d9\ud654\ud569\ub2c8\ub2e4(\ud544\uc694\ud55c \uacbd\uc6b0).
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c \uad6c\uc131\ub41c \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_19","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n # Disable setRoles option\n #- --setRoles=0.0.0.0\n - --loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\n - --externalCIDR=192.168.80.5/32\n - --setLBMode=2\n
- \"--setRoles=0.0.0.0\" - SetRoles \uc635\uc158\uc744 \ube44\ud65c\uc131\ud654\ud574\uc57c \ud569\ub2c8\ub2e4. \uc774 \uc635\uc158\uc744 \ud65c\uc131\ud654\ud558\uba74 kube-loxilb\uac00 loxilb \uc778\uc2a4\ud134\uc2a4 \uc911\uc5d0\uc11c \ud65c\uc131-\ubc31\uc5c5\uc744 \uc120\ud0dd\ud558\uac8c \ub429\ub2c8\ub2e4.
- \"--loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\" - \uc5f0\uacb0\ud560 loxilb URL\uc785\ub2c8\ub2e4.
- \"--externalCIDR=192.168.80.5/32\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ubaa8\ub450 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setLBMode=2\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c Full NAT \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb_4","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \uc0c1\ud0dc(\ud65c\uc131/\ubc31\uc5c5)\uc5d0 \ub530\ub77c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4.
- svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
- \uc7a5\uae30 \uc2e4\ud589 \uc5f0\uacb0\uc744 \ub2e4\ub978 \uad6c\uc131\ub41c loxilb \ud53c\uc5b4\uc640 \ub3d9\uae30\ud654\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_20","title":"\uc2e4\ud589 \uc635\uc158","text":"#llb1\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.80.2 --self=0 --ka=192.168.80.2:192.168.80.1\n\n#llb2\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.80.1 --self=1 --ka=192.168.80.1:192.168.80.2\n
- \"--ka=\\<llb-peer-IP>:\\<llb-self-IP>\" - \uc635\uc158\uc740 BFD\ub97c \uc704\ud55c \ud53c\uc5b4 loxilb IP\uc640 \uc18c\uc2a4 IP\ub97c \uad6c\uc131\ud569\ub2c8\ub2e4.
- \"--cluster=\\<llb-peer-IP>\" - \uc635\uc158\uc740 \ub3d9\uae30\ud654\ub97c \uc704\ud55c \ud53c\uc5b4 loxilb IP\ub97c \uad6c\uc131\ud569\ub2c8\ub2e4.
- \"--self=0/1\" - \uc635\uc158\uc740 \uc778\uc2a4\ud134\uc2a4\ub97c \uc2dd\ubcc4\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over_4","title":"Fail-over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
\uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 BFD\uac00 \uc7a5\uc560\ub97c \uac10\uc9c0\ud569\ub2c8\ub2e4. \ubc31\uc5c5 loxilb\ub294 \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130\ub85c \uc0c1\ud0dc\ub97c \uc5c5\ub370\uc774\ud2b8\ud569\ub2c8\ub2e4. \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub294 gARP \ub610\ub294 BGP\ub97c \uc2e4\ud589 \uc911\uc778 \uacbd\uc6b0 \ub354 \ub192\uc740 \uc6b0\uc120\uc21c\uc704\ub85c svcIP\ub97c \uad11\uace0\ud558\uc5ec \ud074\ub77c\uc774\uc5b8\ud2b8\uac00 \ud2b8\ub798\ud53d\uc744 \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub85c \ubcf4\ub0b4\ub3c4\ub85d \uac15\uc81c\ud569\ub2c8\ub2e4. \uc5f0\uacb0\uc774 \ubaa8\ub450 \ub3d9\uae30\ud654\ub418\uc5b4 \uc788\uc73c\ubbc0\ub85c \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub294 \uc9c0\uc815\ub41c \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \ud2b8\ub798\ud53d\uc744 \ubcf4\ub0b4\uae30 \uc2dc\uc791\ud569\ub2c8\ub2e4.
\uc774 \uae30\ub2a5\uc5d0 \ub300\ud574 \uc790\uc138\ud788 \uc54c\uc544\ubcf4\ub824\uba74 \"Fast Failover Detection with BFD\" \ube14\ub85c\uadf8\ub97c \uc77d\uc5b4\ubcf4\uc138\uc694.
"},{"location":"ha-deploy-KOR/#_21","title":"\ucc38\uace0:","text":"loxilb\ub97c DSR \ubaa8\ub4dc, DNS \ub4f1\uc5d0\uc11c \uc0ac\uc6a9\ud558\ub294 \ubc29\ubc95\uc740 \uc774 \ubb38\uc11c\uc5d0\uc11c \uc790\uc138\ud788 \ub2e4\ub8e8\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4. \uc2dc\ub098\ub9ac\uc624\ub97c \uacc4\uc18d \uc5c5\ub370\uc774\ud2b8\ud560 \uc608\uc815\uc785\ub2c8\ub2e4.
"},{"location":"ha-deploy/","title":"How to deploy loxilb with High Availability","text":"This article describes different scenarios about how to deploy loxilb with High Availability. Before continuing to this page, all readers are advised to have a basic understanding about kube-loxilb and the different NAT modes supported by loxilb. loxilb can run in-cluster or external to kubernetes cluster depending on architectural choices. For this documentation, we have assumed an incluster deployment wherever applicable but similar configuration should suffice for an external deployment as well.
- Scenario 1 - Flat L2 Networking (active-backup)
- Scenario 2 - L3 network (active-backup mode using BGP)
- Scenario 3 - L3 network (active-active with BGP ECMP)
- Scenario 4 - ACTIVE-BACKUP with Connection Sync
 - Scenario 5 - ACTIVE-BACKUP with Fast Failover Detection (BFD)
"},{"location":"ha-deploy/#scenario-1-flat-l2-networking-active-backup","title":"Scenario 1 - Flat L2 Networking (active-backup)","text":""},{"location":"ha-deploy/#setup","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes, all the nodes use the same 192.168.80.0/24 subnet. In this scenario, loxilb will be deployed as a DaemonSet in all the master nodes. And, kube-loxilb will be deployed as Deployment.
"},{"location":"ha-deploy/#ideal-for-use-when","title":"Ideal for use when","text":" - Clients and services need to be in same-subnet.
- End-points may or may not be in same subnet.
- Simpler deployment is desired.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR from local subnet.
- Choose SetRoles option so it can choose active loxilb pod.
- Monitors loxilb's health and elect new master on failover.
- Sets up loxilb in one-arm svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=192.168.80.200/24\n - --setRoles=0.0.0.0\n - --setLBMode=1\n
- \"--externalCIDR=192.168.80.200/24\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are in the same subnet.
- \"--setRoles=0.0.0.0\" - This option will enable kube-loxilb to choose active-backup amongst the loxilb instance and the svc IP to be configured on the active loxilb node.
- \"--setLBMode=1\" - This option will enable kube-loxilb to configure svc in one-arm mode towards the endpoints.
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb","title":"Roles and Responsiblities for loxilb:","text":" - Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
"},{"location":"ha-deploy/#configuration-options_1","title":"Configuration options","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--egr-hooks\" - required for those cases in which workloads can be scheduled in the master nodes. No need to mention this argument when you are managing the workload scheduling to worker nodes.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - mandatory for running in in-cluster mode. As loxilb attaches it's ebpf programs on all the interfaces but since we running it in the default namespace then all the interfaces including CNI interfaces will be exposed and loxilb will attach it's ebpf program in those interfaces which is definitely not desired. So, user needs to mention a regex for excluding all those interfaces. The regex in the given example will exclude the flannel interfaces. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" regex must be used with calico CNI.
Sample loxilb.yaml can be found here.
"},{"location":"ha-deploy/#failover","title":"Failover","text":"This diagram describes the failover scenario:
kube-loxilb actively monitors loxilb's health. In case of failure, it detects the change in loxilb's state and assigns a new \u201cactive\u201d from the pool of available healthy loxilb pods. The new pod inherits the svcIP previously assigned to the other loxilb pod, and the services are served by the newly active loxilb pod.
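To observe the failover by hand, one simple sketch (assuming loxilb is deployed in the kube-system namespace as in the sample manifests) is to delete the currently active loxilb pod and confirm the service stays reachable:
$ kubectl get pods -n kube-system -o wide | grep loxilb\n$ kubectl delete pod -n kube-system <active-loxilb-pod>\n$ kubectl get svc\n
The EXTERNAL-IP of the svc should remain unchanged while a backup loxilb pod takes over as the new active.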
"},{"location":"ha-deploy/#scenario-2-l3-network-active-backup-mode-using-bgp","title":"Scenario 2 - L3 network (active-backup mode using BGP)","text":""},{"location":"ha-deploy/#setup_1","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes, all the nodes use the same 192.168.80.0/24 subnet. SVCs will have an external IP, not from the cluster/local subnet. In this scenario, loxilb will be deployed as a DaemonSet in all the master nodes. And, kube-loxilb will be deployed as Deployment.
"},{"location":"ha-deploy/#ideal-for-use-when_1","title":"Ideal for use when","text":" - Clients and Cluster are in different subnets.
- Clients and svc VIP need to be in different subnet (cluster end-points may also be in different networks).
- Ideal for cloud deployments.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb_1","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR from a different subnet.
- Choose SetRoles option so it can choose active loxilb pod.
- Monitors loxilb's health and elect new master on failover.
- Automates provisioning of bgp-peering between loxilb pods.
- Sets up loxilb in one-arm svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options_2","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=123.123.123.1/24\n - --setRoles=0.0.0.0\n - --setLBMode=1\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--externalCIDR=123.123.123.1/24\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are all in the different subnet.
- \"--setRoles=0.0.0.0\" - This option will enable kube-loxilb to choose active-backup amongst the loxilb instances and the svc IP to be configured on the active loxilb node.
- \"--setLBMode=1\" - This option will enable kube-loxilb to configure svc in one-arm mode towards the endpoints.
- \"--setBGP=65100\" - This option will let kube-loxilb to configure local AS number in the bgp instance.
- \"--extBGPPeers=50.50.50.1:65101\" - This option will configure the bgp instance's external neighbors.
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb_1","title":"Roles and Responsiblities for loxilb:","text":" - Advertises SVC IP as per the state(active or backup).
- Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
"},{"location":"ha-deploy/#configuration-options_3","title":"Configuration options","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--bgp\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--bgp\" - option enables loxilb to run with goBGP instance which will advertise the routes with appropriate preference as per active/backup state.
-
\"--egr-hooks\" - required for those cases in which workloads can be scheduled in the master nodes. No need to mention this argument when you are managing the workload scheduling to worker nodes.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - mandatory for running in in-cluster mode. As loxilb attaches it's ebpf programs on all the interfaces but since we running it in the default namespace then all the interfaces including CNI interfaces will be exposed and loxilb will attach it's ebpf program in those interfaces which is definitely not desired. So, user needs to mention a regex for excluding all those interfaces. The regex in the given example will exclude the flannel interfaces. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" regex must be used with calico CNI.
Sample loxilb.yaml can be found here.
"},{"location":"ha-deploy/#failover_1","title":"Failover","text":"This diagram describes the failover scenario:
kube-loxilb actively monitors loxilb's health. In case of failure, it detects the change in loxilb's state and assigns a new \u201cactive\u201d from the pool of available healthy loxilb pods. The new pod inherits the svcIP previously assigned to the other loxilb pod and advertises the SVC IP with the preference corresponding to its new state. The client receives the new route to the SVC IP, and the services are served by the newly active loxilb pod.
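From the client's point of view, the switch-over shows up as a route change. A quick sketch, assuming a Linux client peering over BGP and using the example svc IP 123.123.123.1 from the CIDR above:
$ ip route get 123.123.123.1\n
Before the failover the next-hop points to the old active loxilb node; after the failover the same command should show the newly active loxilb node as the next-hop.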
"},{"location":"ha-deploy/#scenario-3-l3-network-active-active-with-bgp-ecmp","title":"Scenario 3 - L3 network (active-active with BGP ECMP)","text":""},{"location":"ha-deploy/#setup_2","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes, all the nodes use the same 192.168.80.0/24 subnet. SVCs will have an external IP, not from the cluster/local subnet. In this scenario, loxilb will be deployed as a DaemonSet in all the master nodes. And, kube-loxilb will be deployed as Deployment.
"},{"location":"ha-deploy/#ideal-for-use-when_2","title":"Ideal for use when","text":" - Clients and Cluster are in different subnets.
- Clients and svc VIP need to be in different subnet (cluster end-points may also be in different networks).
- Ideal for cloud deployments.
 - Better performance is desired due to active-active clustering, but the network devices/hosts must be capable of supporting ECMP.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb_2","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR from a different subnet.
- Do not choose SetRoles option in this case (svcIPs will be advertised with same attributes/prio/med).
- Automates provisioning of bgp-peering between loxilb pods.
- Sets up loxilb in one-arm svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options_4","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=123.123.123.1/24\n - --setLBMode=1\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--externalCIDR=123.123.123.1/24\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are all in the different subnet.
- \"--setLBMode=1\" - This option will enable kube-loxilb to configure svc in one-arm mode towards the endpoints.
- \"--setBGP=65100\" - This option will let kube-loxilb to configure local AS number in the bgp instance.
- \"--extBGPPeers=50.50.50.1:65101\" - This option will configure the bgp instance's external neighbors
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb_2","title":"Roles and Responsiblities for loxilb:","text":" - Advertises SVC IP with same attributes.
- Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
"},{"location":"ha-deploy/#configuration-options_5","title":"Configuration options","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--bgp\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--bgp\" - option enables loxilb to run with goBGP instance which will advertise the routes with same attributes.
-
\"--egr-hooks\" - required for those cases in which workloads can be scheduled in the master nodes. No need to mention this argument when you are managing the workload scheduling to worker nodes.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - mandatory for running in in-cluster mode. As loxilb attaches it's ebpf programs on all the interfaces but since we running it in the default namespace then all the interfaces including CNI interfaces will be exposed and loxilb will attach it's ebpf program in those interfaces which is definitely not desired. So, user needs to mention a regex for excluding all those interfaces. The regex in the given example will exclude the flannel interfaces. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" regex must be used with calico CNI.
Sample loxilb.yaml can be found here.
"},{"location":"ha-deploy/#failover_2","title":"Failover","text":"This diagram describes the failover scenario:
In case of failure, BGP running on the client will update the ECMP route and start sending the traffic to the active ECMP endpoints.
"},{"location":"ha-deploy/#scenario-4-active-backup-with-connection-sync","title":"Scenario 4 - ACTIVE-BACKUP with Connection Sync","text":""},{"location":"ha-deploy/#setup_3","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
This feature is only supported when loxilb runs externally outside the Kubernetes cluster in either default or fullnat mode. Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes; all the nodes use the same 192.168.80.0/24 subnet. SVCs will have an external IP.
There are a few possible scenarios depending on the connectivity between the External Client, loxilb and the Kubernetes cluster. For this scenario, we are considering L3 connectivity.
"},{"location":"ha-deploy/#ideal-for-use-when_3","title":"Ideal for use when","text":" - Need to preserve long running connections during lb pod failures
- Another LB mode known as DSR mode can be used to preserve connections but has the following limitations :
- Can't ensure stateful filtering and connection-tracking.
- Can't support multihoming features since different 5-tuples might belong to the same connection.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb_3","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR as required.
- Choose SetRoles option so it can choose active loxilb (svcIPs will be advertised with different attributes/prio/med).
- Automates provisioning of bgp-peering between loxilb containers (if required).
- Sets up loxilb in fullnat svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options_6","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --setRoles=0.0.0.0\n - --loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\n - --externalCIDR=123.123.123.1/24\n - --setLBMode=2\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--setRoles=0.0.0.0\" - This option will enable kube-loxilb to choose active-backup amongst the loxilb instance.
- \"--loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\" - loxilb URLs to connect with.
- \"--externalCIDR=123.123.123.1/24\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are all in the different subnet.
- \"--setLBMode=2\" - This option will enable kube-loxilb to configure svc in fullnat mode towards the endpoints.
- \"--setBGP=65100\" - This option will let kube-loxilb to configure local AS number in the bgp instance.
- \"--extBGPPeers=50.50.50.1:65101\" - This option will configure the bgp instance's external neighbors
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb_3","title":"Roles and Responsiblities for loxilb:","text":" - Advertises SVC IP as per the state(active/backup).
- Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
- Syncs the long-lived connections to all other configured loxilb peers.
"},{"location":"ha-deploy/#running-options","title":"Running options","text":"#llb1\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb2IP --self=0 -b\n\n#llb2\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb1IP --self=1 -b\n
- \"--cluster=\\<llb-peer-IP>\" - option configures the peer loxilb IP for syncing.
- \"--self=0/1\" - option to identify the instance.
- \"-b\" - option enables loxilb to run with goBGP instance which will advertise the routes with appropriate preference as per active/backup state.
"},{"location":"ha-deploy/#failover_3","title":"Failover","text":"This diagram describes the failover scenario:
In case of failure, kube-loxilb will detect the failure. It will select a new loxilb from the pool of active loxilbs and update its state to the new master. The new master loxilb will advertise the svcIPs with higher preference, which will force the BGP running on the client to send the traffic towards the new master loxilb. Since the connections are all synced up, the new master loxilb will start sending the traffic to the designated endpoints.
Please read this detailed blog about \"Hitless HA\" to know about this feature.
"},{"location":"ha-deploy/#scenario-5-active-backup-with-fast-failover-detection","title":"Scenario 5 - ACTIVE-BACKUP with Fast Failover Detection","text":""},{"location":"ha-deploy/#setup_4","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
This feature is only supported when loxilb runs externally outside the Kubernetes cluster. Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes; all the nodes use the same 192.168.80.0/24 subnet. SVCs will have an external IP.
There are a few possible scenarios depending on the connectivity between the External Client, loxilb and the Kubernetes cluster. For this scenario, we are considering L2 connectivity with connection sync.
"},{"location":"ha-deploy/#ideal-for-use-when_4","title":"Ideal for use when","text":" - Need fast failover detection and service continuity.
- This feature works in both L2 and L3 network settings.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb_4","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR as required.
- Disable SetRoles option so it should not choose active loxilb.
- Automates provisioning of bgp-peering between loxilb containers (if required).
- Sets up loxilb in configured svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options_7","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n # Disable setRoles option\n #- --setRoles=0.0.0.0\n - --loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\n - --externalCIDR=192.168.80.5/32\n - --setLBMode=2\n
- \"--setRoles=0.0.0.0\" - We have to make sure to disable this option as it will enable kube-loxilb to choose active-backup amongst the loxilb instance.
- \"--loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\" - loxilb URLs to connect with.
- \"--externalCIDR=192.168.80.5/32\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are all in the same subnet.
- \"--setLBMode=2\" - This option will enable kube-loxilb to configure svc in fullnat mode towards the endpoints.
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb_4","title":"Roles and Responsiblities for loxilb:","text":" - Advertises SVC IP as per the state(active/backup).
- Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
- Syncs the long-lived connections to all other configured loxilb peers.
"},{"location":"ha-deploy/#running-options_1","title":"Running options","text":"#llb1\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.80.2 --self=0 --ka=192.168.80.2:192.168.80.1\n\n#llb2\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.80.1 --self=1 --ka=192.168.80.1:192.168.80.2\n
- \"--ka=\\<llb-peer-IP>:\\<llb-self-IP>\" - option configures the peer loxilb IP and source IP for BFD.
- \"--cluster=\\<llb-peer-IP>\" - option configures the peer loxilb IP for syncing.
- \"--self=0/1\" - option to identify the instance.
"},{"location":"ha-deploy/#failover_4","title":"Failover","text":"This diagram describes the failover scenario:
In case of failure, BFD will detect the failure. The BACKUP loxilb will update its state to the new master. The new master loxilb will advertise the svcIPs through gARP, or with higher preference if running with BGP, which will force the client to send the traffic towards the new master loxilb. Since the connections are all synced up, the new master loxilb will start sending the traffic to the designated endpoints.
Please read this detailed blog about \"Fast Failover Detection with BFD\" to know about this feature.
"},{"location":"ha-deploy/#note","title":"Note :","text":"There are ways to use loxilb in DSR mode, DNS etc which is still not covered in details in this doc. We will keep updating the scenarios.
"},{"location":"https/","title":"HTTPS guide","text":"Key and Cert files are required for HTTPS, and they are not detailed, but explain how to generate them and where LoxiLB can read and use user-generated Key and Cert files.
--tls enable TLS [$TLS]\n --tls-host= the IP to listen on for tls, when not specified it's the same as --host [$TLS_HOST]\n --tls-port= the port to listen on for secure connections (default: 8091) [$TLS_PORT]\n --tls-certificate= the certificate to use for secure connections (default:\n /opt/loxilb/cert/server.crt) [$TLS_CERTIFICATE]\n --tls-key= the private key to use for secure connections (default:\n /opt/loxilb/cert/server.key) [$TLS_PRIVATE_KEY]\n
HTTPS on LoxiLB is enabled with the --tls
option. tls-host and tls-port decide which IP and port to listen on. The default tls-host is 0.0.0.0, which listens on all addresses, but for better security we recommend restricting it to specific values. The default port is 8091; you can change it to any value that does not overlap with the services you already use.
By default, LoxiLB reads the key from /opt/loxilb/cert/server.key and the Cert file from server.crt in the same path. In this article, we will learn how to create the server.key and server.crt files.
You can enable and run HTTPS (TLS) with the following command.
./loxilb --tls\n
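If the certificate lives somewhere other than the default path, or a different port is preferred, the related flags shown above can be combined; the paths and port below are examples:
./loxilb --tls --tls-port=8443 --tls-certificate=/path/to/server.crt --tls-key=/path/to/server.key\n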
"},{"location":"https/#preparation","title":"Preparation","text":"First of all, the simplest way is to create it using openssl. To install openssl, you can install it using the command below.
apt install openssl\n
The LoxiLB team has confirmed that it works with openssl version 1.1.1f. openssl version\nOpenSSL 1.1.1f 31 Mar 2020\n
"},{"location":"https/#1-create-serverkey","title":"1. Create server.key","text":"openssl genrsa -out server.key 2048\n
Generating server.key is simple: typing the command above creates a new key. When you run the command, the generation progress is printed and server.key is created.
openssl genrsa -out server.key 2048\nGenerating RSA private key, 2048 bit long modulus (2 primes)\n..............................................+++++\n...........................................+++++\ne is 65537 (0x010001)\n
"},{"location":"https/#2-create-servercsr","title":"2. Create server.csr","text":"openssl req -new -key server.key -out server.csr\n
Create a CSR file by filling in the desired value for each field. This file is not used directly for HTTPS, but it is needed to create the Cert file later. When you type the command above, a series of prompts asks you to enter information; fill in the values according to your situation.
openssl req -new -key server.key -out server.csr\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:\nEmail Address []:\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge password []:\nAn optional company name []:\n
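If you prefer to skip the interactive prompts (for scripted setups), the same CSR can be produced non-interactively by passing the subject on the command line; the subject values below are placeholders:
openssl req -new -key server.key -out server.csr -subj \"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=loxilb.example.com\"\n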
"},{"location":"https/#3-create-servercrt","title":"3. Create server.crt","text":"openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt\n
This creates server.crt using the server.key and server.csr generated above. You can issue a certificate with a limited validity period by putting the desired number of days after -days. The server.crt file is created with the following output. openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt\nSignature ok\nsubject=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd\nGetting Private key\n
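Before using the certificate, its subject and validity period can be double-checked with:
openssl x509 -in server.crt -noout -subject -dates\n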
"},{"location":"https/#4-validation","title":"4. Validation","text":"You can enable https with the server.key and server.cert files generated through the above process.
If you move all of these files to the /opt/loxilb
path and check them, you can see that they work well.
sudo cp server.key /opt/loxilb/cert/.\nsudo cp server.crt /opt/loxilb/cert/.\n./loxilb --tls\n
curl http://0.0.0.0:11111/netlox/v1/config/loadbalancer/all\n{\"lbAttr\":[]}\n\n curl -k https://0.0.0.0:8091/netlox/v1/config/loadbalancer/all\n{\"lbAttr\":[]}\n
It should appear in the log as follows.
2024/04/12 16:19:48 Serving loxilb rest API at http://[::]:11111\n2024/04/12 16:19:48 Serving loxilb rest API at https://[::]:8091\n
"},{"location":"integrate_bgp/","title":"loxilb & calico BGP \uc5f0\ub3d9","text":"\uc774 \ubb38\uc11c\uc5d0\uc11c\ub294 calico CNI\ub97c \uc0ac\uc6a9\ud558\ub294 kubernetes\uc640 loxilb\ub97c \uc5f0\ub3d9\ud558\ub294 \ubc29\ubc95\uc744 \uc124\uba85\ud569\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#_1","title":"\ud658\uacbd","text":"\uc774 \uc608\uc81c\uc5d0\uc11c\ub294 kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc5f0\uacb0\ub418\uc5b4 \uc788\ub2e4\uace0 \uac00\uc815\ud569\ub2c8\ub2e4. kubernetes\ub294 \ub2e8\uc21c\ud568\uc744 \uc704\ud574\uc11c \ub2e8\uc77c \ub9c8\uc2a4\ud130 \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70 \ubaa8\ub4e0 \ud074\ub7ec\uc2a4\ud130\ub294 192.168.57.0/24 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. loxilb \ucee8\ud14c\uc774\ub108\uac00 \uc2e4\ud589\uc911\uc778 \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc \uc5ed\uc2dc kubernetes\uc640 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc5f0\uacb0\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4. \uc678\ubd80\uc5d0\uc11c kubernetes\uc811\uc18d\uc740 \ubaa8\ub450 \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc\uc640 loxilb\ub97c \uac70\uce58\ub3c4\ub85d \uc124\uc815\ud588\uc2b5\ub2c8\ub2e4.
\ud574\ub2f9 \uc608\uc81c\uc5d0\uc11c\ub294 docker\ub97c \uc0ac\uc6a9\ud574 loxilb \ucee8\ud14c\uc774\ub108\ub97c \uc2e4\ud589\ud569\ub2c8\ub2e4. \ud574\ub2f9 \uc608\uc81c\uc5d0\uc11c\ub294 kubernetes & calico\ub294 \uc774\ubbf8 \uc124\uce58\ub418\uc5b4 \uc788\ub2e4\uace0 \uac00\uc815\ud558\uace0 \uc124\uba85\ud569\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#1-loxilb-container","title":"1. loxilb container \uc0dd\uc131","text":""},{"location":"integrate_bgp/#11-docker-network","title":"1.1 docker network \uc0dd\uc131","text":"\uc6b0\uc120 loxilb\uc640 kubernetes \uc5f0\ub3d9\uc744 \uc704\ud574\uc11c\ub294 \uc11c\ub85c \ud1b5\uc2e0\ud560 \uc218 \uc788\uc5b4\uc57c \ud569\ub2c8\ub2e4. kubernetes & \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc\uac00 \uc5f0\uacb0\ub418\uc5b4 \uc788\ub294 \ub124\ud2b8\uc6cc\ud06c\uc5d0 loxilb \ucee8\ud14c\uc774\ub108\ub3c4 \uc5f0\uacb0\ub418\ub3c4\ub85d docker network\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4. \ud604\uc7ac \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc\ub294 eno6 \uc778\ud130\ud398\uc774\uc2a4\ub97c \ud1b5\ud574 kubernetes\uc640 \uc5f0\uacb0\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4. \ub530\ub77c\uc11c eno6 \uc778\ud130\ud398\uc774\uc2a4\ub97c parent\ub85c \uc0ac\uc6a9\ud558\ub294 macvlan \ud0c0\uc785 docker network \ub9cc\ub4e4\uc5b4\uc11c loxilb \ucee8\ud14c\uc774\ub108\uc5d0 \uc81c\uacf5\ud558\ub3c4\ub85d \ud558\uaca0\uc2b5\ub2c8\ub2e4. \ub2e4\uc74c \uba85\ub839\uc5b4\ub85c docker network\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4.
sudo docker network create -d macvlan -o parent=eno6 \\\n --subnet 192.168.57.0/24 \\\n --gateway 192.168.57.1 \\\n --aux-address 'cp1=192.168.57.101' \\\n --aux-address 'cp2=192.168.57.102' \\\n --aux-address 'cp3=192.168.57.103' k8snet\n
|Option|Description| |----|----| |-d macvlan|Specify the network type as macvlan| |-o parent=eno6|Create a macvlan type network using the eno6 interface as parent| |--subnet 192.168.57.0/24|Specify the network subnet| |--gateway 192.168.57.1|Set the gateway (optional)| |--aux-address 'serverName=serverIP'|Option to register IP addresses already in use on the network in advance so they are not assigned to containers again| |k8snet|Name the network k8snet| Traffic accessing the kubernetes service from outside should also go through loxilb, so a docker network that can communicate with the outside is created as well. The load balancer node is connected to the outside through eno8.
sudo docker network create -d macvlan -o parent=eno8 \\\n --subnet 192.168.20.0/24 \\\n --gateway 192.168.20.1 llbnet\n
The created networks can be checked with the docker network list command.
netlox@nd8:~$ sudo docker network list\nNETWORK ID NAME DRIVER SCOPE\n5c97ae74fc32 bridge bridge local\n6142f53e8be6 host host local\n24ee7dbd7707 k8snet macvlan local\n81c96ceda375 llbnet macvlan local\n7bcd1738501b none null local\n
"},{"location":"integrate_bgp/#12-loxilb-container","title":"1.2 loxilb container \uc0dd\uc131","text":"loxilb container \uc774\ubbf8\uc9c0\ub294 github\uc5d0\uc11c \uc81c\uacf5\ub418\uace0 \uc788\uc2b5\ub2c8\ub2e4. \ucee8\ud14c\uc774\ub108 \uc774\ubbf8\uc9c0\ub9cc \uba3c\uc800 \ub2e4\uc6b4\ub85c\ub4dc\ud558\uace0 \uc2f6\uc744 \uacbd\uc6b0 \ub2e4\uc74c \uba85\ub839\uc5b4\ub97c \uc0ac\uc6a9\ud569\ub2c8\ub2e4.
docker pull ghcr.io/loxilb-io/loxilb:latest\n
The loxilb container can then be created with the following command.
sudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \\\n --privileged -dit -v /dev/log:/dev/log \\\n --net=k8snet --ip=192.168.57.4 --name loxilb ghcr.io/loxilb-io/loxilb:latest \\\n --host=0.0.0.0\n
The options that the user needs to specify are as follows. |Option|Description| |----|----| |--net=k8snet|Network to connect the container to| |--ip=192.168.57.4|Specifies the IP address the container will use. If not specified, an arbitrary IP within the network subnet range is used| |--name loxilb|Sets the container name| The created container can be checked with the docker ps command.
netlox@nd8:~$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\neae349a283ae loxilbio/loxilb:beta \"/root/loxilb-io/lox\u2026\" 11 days ago Up 11 days loxilb\n
Since only the kubernetes network was connected when the container was created above, it must also be connected to the docker network for external communication. The following command attaches an additional network to the container.
sudo docker network connect llbnet loxilb\n
Once the connection is complete, the two interfaces of the container can be checked as follows.
netlox@netlox:~$ sudo docker exec -ti loxilb ip route\ndefault via 192.168.20.1 dev eth0\n192.168.20.0/24 dev eth0 proto kernel scope link src 192.168.20.4\n192.168.30.0/24 dev eth1 proto kernel scope link src 192.168.30.2\n
"},{"location":"integrate_bgp/#2-kubernetes-loxi-ccm","title":"2. kubernetes\uc5d0 loxi-ccm \uc124\uce58","text":"loxi-ccm\uc740 loxilb \ub85c\ub4dc\ubc38\ub7f0\uc11c\ub97c kubernetes\uc5d0\uac8c \uc81c\uacf5\ud558\uae30 \uc704\ud55c cloud-controller-manager \ub85c\uc11c, kubernetes\uc640 loxilb \uc5f0\ub3d9\uc5d0 \ubc18\ub4dc\uc2dc \ud544\uc694\ud569\ub2c8\ub2e4. \ud574\ub2f9 \ubb38\uc11c\ub97c \ucc38\uace0\ud574\uc11c, configMap\uc758 apiServerURL\uc744 \uc704\uc5d0\uc11c \uc0dd\uc131\ud55c loxilb\uc758 IP \uc8fc\uc18c\ub85c \ubcc0\uacbd\ud55c \ud6c4 kubernetes\uc5d0 \uc124\uce58\ud558\uc2dc\uba74 \ub429\ub2c8\ub2e4. loxi-ccm\uae4c\uc9c0 \uc815\uc0c1\uc801\uc73c\ub85c \uc124\uce58\ub418\uc5c8\ub2e4\uba74 \uc5f0\ub3d9 \uc791\uc5c5\uc774 \uc644\ub8cc\ub429\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#3","title":"3. \uae30\ubcf8 \uc5f0\ub3d9 \ud655\uc778","text":"2\ubc88 \ud56d\ubaa9\uae4c\uc9c0 \uc644\ub8cc\ub418\uc5c8\ub2e4\uba74, \uc774\uc81c kubernetes\uc5d0\uc11c LoadBalancer \ud0c0\uc785 \uc11c\ube44\uc2a4\ub97c \uc0dd\uc131\ud558\uba74 External IP\uac00 \ubd80\uc5ec\ub429\ub2c8\ub2e4. \ub2e4\uc74c\uacfc \uac19\uc774 \ud14c\uc2a4\ud2b8\uc6a9\uc73c\ub85c test-nginx-svc.yaml \ud30c\uc77c\uc744 \uc0dd\uc131\ud569\ub2c8\ub2e4.
apiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n name: http-web-svc\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 8888\n targetPort: http-web-svc\n
After creating the file, create the nginx pod and the LoadBalancer service with the command below.
kubectl apply -f test-nginx-svc.yaml\n
You can verify that the service nginx-service has been created as a LoadBalancer type and has been assigned an External IP. The kubernetes service can now be accessed from outside using IP 123.123.123.15 and port 8888.
vagrant@node1:~$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.233.0.1 <none> 443/TCP 28d\nnginx-service LoadBalancer 10.233.21.235 123.123.123.15 8888:31655/TCP 3s\n
The LoadBalancer rule is also created in the loxilb container. It can be checked on the load balancer node as follows.
netlox@nd8:~$ sudo docker exec -ti loxilb loxicmd get lb\n| EXTERNAL IP | PORT | PROTOCOL | SELECT | # OF ENDPOINTS |\n|----------------|------|----------|--------|----------------|\n| 123.123.123.15 | 8888 | tcp | 0 | 2 |\n
"},{"location":"integrate_bgp/#4-calico-bgp-loxilb","title":"4. calico BGP & loxilb \uc5f0\ub3d9","text":"calico\uc5d0\uc11c BGP \ubaa8\ub4dc\ub85c \ub124\ud2b8\uc6cc\ud06c\ub97c \uad6c\uc131\ud560 \uacbd\uc6b0, loxilb \uc5ed\uc2dc BGP \ubaa8\ub4dc\ub85c \ub3d9\uc791\ud574\uc57c \ud569\ub2c8\ub2e4. loxilb\ub294 goBGP \uae30\ubc18\uc73c\ub85c BGP \ub124\ud2b8\uc6cc\ud06c \uae30\ub2a5\uc744 \uc9c0\uc6d0\ud569\ub2c8\ub2e4. \uc774\ud558 \ub0b4\uc6a9\uc740 calico\uac00 BGP mode\ub85c \uc124\uc815\ub418\uc5b4 \uc788\ub2e4\uace0 \uac00\uc815\ud558\uace0 \uc124\uba85\ud569\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#41-loxilb-bgp","title":"4.1 loxilb BGP \ubaa8\ub4dc\ub85c \uc2e4\ud589","text":"\ub2e4\uc74c \uba85\ub839\uc5b4\ub85c loxilb \ucee8\ud14c\uc774\ub108\ub97c \uc0dd\uc131\ud558\uba74 BGP \ubaa8\ub4dc\ub85c \uc2e4\ud589\ub429\ub2c8\ub2e4. \uba85\ub839\uc5b4 \ub9c8\uc9c0\ub9c9\uc758 -b \uc635\uc158\uc774 BGP \ubaa8\ub4dc \uc635\uc158\uc785\ub2c8\ub2e4.
sudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \\\n --privileged -dit -v /dev/log:/dev/log \\\n --net=k8snet --ip=192.168.57.4 --name loxilb ghcr.io/loxilb-io/loxilb:latest \\\n --host=0.0.0.0 -b\n
"},{"location":"integrate_bgp/#42-gobgp_loxilbyaml","title":"4.2 gobgp_loxilb.yaml \ud30c\uc77c \uc0dd\uc131","text":"loxilb \ucee8\ud14c\uc774\ub108\uc758 /opt/loxilb/ \ub514\ub809\ud1a0\ub9ac\uc5d0 gobgp_loxilb.yaml \ud30c\uc77c\uc744 \uc0dd\uc131\ud569\ub2c8\ub2e4.
global:\n config:\n as: 65002\n router-id: 172.1.0.2\nneighbors:\n - config:\n neighbor-address: 192.168.57.101\n peer-as: 64512\n - config:\n neighbor-address: 192.168.20.55\n peer-as: 64001\n
BGP information such as the as-id and router-id of the loxilb container must be registered under the global item. The neighbors item registers the IP address and as-id of the BGP routers peering with loxilb. In this example, calico's BGP peer (192.168.57.101) and an external BGP peer (192.168.20.55) were registered.
"},{"location":"integrate_bgp/#43-loxilb-lo-router-id","title":"4.3 Add the router-id to the lo interface of the loxilb container","text":"The IP registered as router-id in the gobgp_loxilb.yaml file must be added to the lo interface.
sudo docker exec -ti loxilb ip addr add 172.1.0.2/32 dev lo\n
"},{"location":"integrate_bgp/#44-loxilb","title":"4.4 loxilb \ucee8\ud14c\uc774\ub108 \uc7ac\uc2dc\uc791","text":"gobgp_loxilb.yaml\uc5d0 \uc791\uc131\ud55c \uc124\uc815\uc774 \uc801\uc6a9\ub418\ub3c4\ub85d \ucee8\ud14c\uc774\ub108\ub97c \uc7ac\uc2dc\uc791\ud569\ub2c8\ub2e4.
sudo docker stop loxilb\nsudo docker start loxilb\n
"},{"location":"integrate_bgp/#45-calico-bgp-peer","title":"4.5 calico\uc5d0 BGP Peer \uc815\ubcf4 \ucd94\uac00","text":"calico\uc5d0\ub3c4 loxilb\uc758 BGP Peer \uc815\ubcf4\ub97c \ucd94\uac00\ud574\uc57c \ud569\ub2c8\ub2e4. \ub2e4\uc74c\uacfc \uac19\uc774 calico-bgp-config.yaml \ud30c\uc77c\uc744 \uc0dd\uc131\ud569\ub2c8\ub2e4.
apiVersion: projectcalico.org/v3\nkind: BGPPeer\nmetadata:\n name: my-global-peers2\nspec:\n peerIP: 192.168.57.4\n asNumber: 65002\n
Enter the IP address of loxilb in peerIP. Enter the as-ID of the loxilb BGP configured above in asNumber. After creating the file, add the BGP peer information to calico with the command below.
sudo calicoctl apply -f calico-bgp-config.yaml\n
"},{"location":"integrate_bgp/#46-bgp","title":"4.6 BGP \uc124\uc815 \ud655\uc778","text":"\uc774\uc81c \ub2e4\uc74c\uacfc \uac19\uc774 loxilb \ucee8\ud14c\uc774\ub108\uc5d0\uc11c BGP \uc5f0\uacb0\uc744 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
netlox@nd8:~$ sudo docker exec -ti loxilb3 gobgp neigh\nPeer AS Up/Down State |#Received Accepted\n192.168.57.101 64512 00:00:59 Establ | 4 4\n
If the connection is successful, the State is shown as Established. The route information from calico can be checked with the gobgp global rib command.
netlox@nd8:~$ sudo docker exec -ti loxilb3 gobgp global rib\n Network Next Hop AS_PATH Age Attrs\n*> 10.233.71.0/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.74.64/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.75.0/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.102.128/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n
"},{"location":"integrate_bgp_eng/","title":"How to run loxilb with calico CNI in BGP mode","text":"This article describes how to integrate loxilb using calico CNI in Kubernetes.
"},{"location":"integrate_bgp_eng/#setup","title":"Setup","text":"For this example, kubernetes and loxilb are setup as follows:
Kubernetes uses a single-master cluster for simplicity, and all cluster nodes use the same 192.168.57.0/24 subnet. The load balancer node where the loxilb container is running is also connected to the same subnet as kubernetes. Externally, all kubernetes connections are configured to go through the \"loxilb\" load balancer node.
This example uses docker to run the loxilb container. These examples assume that kubernetes & calico are already installed.
"},{"location":"integrate_bgp_eng/#1-loxilb-setup","title":"1. loxilb setup","text":""},{"location":"integrate_bgp_eng/#11-docker-network-setup","title":"1.1 docker network setup","text":"In order to integrate loxilb and kubernetes, loxilb needs to be able to communicate with Kubernetes. First, we create a docker network so that the loxilb container is also connected to the same network which is used by kubernetes nodes. Currently, the load balancer node is connected to kubernetes through the eno6 interface. Therefore, we will create a macvlan-type docker network that uses the eno6 interface as a parent and provide it to the loxilb docker. Create a docker network with the following command:
sudo docker network create -d macvlan -o parent=eno6 \\\n --subnet 192.168.57.0/24 \\\n --gateway 192.168.57.1 \\\n --aux-address 'cp1=192.168.57.101' \\\n --aux-address 'cp2=192.168.57.102' \\\n --aux-address 'cp3=192.168.57.103' k8snet\n
|Options|Description| |----|----| |-d macvlan|Specify network type as macvlan| |-o parent=eno6|Create a macvlan type network using the eno6 interface as parent| |--subnet 192.168.57.0/24|Specify network subnet| |--gateway 192.168.57.1|Set gateway (optional)| |--aux-address 'serverName=serverIP'|Option to register in advance so that IP addresses already in use on the network are not duplicated| |k8snet|Name the network k8snet| A docker network that can communicate with the outside is also created so that traffic accessing the kubernetes service from outside also goes through loxilb. The load balancer node is connected to the outside through eno8.
sudo docker network create -d macvlan -o parent=eno8 \\\n --subnet 192.168.20.0/24 \\\n --gateway 192.168.20.1 llbnet\n
We can check the network created with the docker network list command.
netlox@nd8:~$ sudo docker network list\nNETWORK ID NAME DRIVER SCOPE\n5c97ae74fc32 bridge bridge local\n6142f53e8be6 host host local\n24ee7dbd7707 k8snet macvlan local\n81c96ceda375 llbnet macvlan local\n7bcd1738501b none null local\n
"},{"location":"integrate_bgp_eng/#12-loxilb-docker-setup","title":"1.2 loxilb docker setup","text":"The loxilb container image is provided at github. To download the docker image, use the following command.
docker pull ghcr.io/loxilb-io/loxilb:latest\n
To run loxilb docker, we can use the following command.
sudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \\\n --privileged -dit -v /dev/log:/dev/log \\\n --net=k8snet --ip=192.168.57.4 --name loxilb ghcr.io/loxilb-io/loxilb:latest \\\n --host=0.0.0.0\n
The options that need to be specified are: |Options|Description| |----|----| |--net=k8snet|Network to connect the container to| |--ip=192.168.57.4|Specifies the IP address the container will use. If not specified, any IP within the network subnet range is used| |--name loxilb|Set the container name| We can check the created container with the docker ps command.
netlox@nd8:~$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\neae349a283ae loxilbio/loxilb:beta \"/root/loxilb-io/lox\u2026\" 11 days ago Up 11 days loxilb\n
Since we only connected the kubernetes network (k8snet) when running the container above, we also need to connect it to the docker network for external communication. Currently, docker only supports connecting a single network when running the \"docker run\" command. But it's easy to connect another network to the container with the following command: sudo docker network connect llbnet loxilb\n
Once the connection is complete, we can see the docker container's interfaces as follows:
netlox@netlox:~$ sudo docker exec -ti loxilb ip route\ndefault via 192.168.20.1 dev eth0\n192.168.20.0/24 dev eth0 proto kernel scope link src 192.168.20.4\n192.168.30.0/24 dev eth1 proto kernel scope link src 192.168.30.2\n
"},{"location":"integrate_bgp_eng/#2-loxi-ccm-setup-in-kubernetes","title":"2. loxi-ccm setup in kubernetes","text":"loxi-ccm is a ccm provider to provide a loxilb load balancer to kubernetes, and it is essential for interworking with kubernetes and loxilb. Refer to the relevant document, change the apiServerURL of configMap to the IP address of loxilb created above and install it in kubernetes. If loxi-ccm is installed properly, the setup is complete.
"},{"location":"integrate_bgp_eng/#3basic-load-balancer-test","title":"3.Basic load-balancer test","text":"We can now give an External IP when you create a LoadBalancer type service in kubernetes. Create a test-nginx-svc.yaml file for testing as follows:
apiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n name: http-web-svc\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 8888\n targetPort: http-web-svc\n
The above manifest creates the nginx pod and then associates a LoadBalancer service with it. The command to apply it is as follows:
kubectl apply -f test-nginx-svc.yaml\n
We can verify that the service nginx-service has been created as a LoadBalancer type and has been assigned an External IP. Now you can access the kubernetes service from outside using IP 123.123.123.15 and port 8888.
vagrant@node1:~$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.233.0.1 <none> 443/TCP 28d\nnginx-service LoadBalancer 10.233.21.235 123.123.123.15 8888:31655/TCP 3s\n
The LoadBalancer rule is also created in the loxilb container. We can check in the loxilb load-balancer node as follows
netlox@nd8:~$ sudo docker exec -ti loxilb loxicmd get lb\n| EXTERNAL IP | PORT | PROTOCOL | SELECT | # OF ENDPOINTS |\n|----------------|------|----------|--------|----------------|\n| 123.123.123.15 | 8888 | tcp | 0 | 2 |\n
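End-to-end access can also be verified from an external host, assuming it has a route towards the external IP range:
curl http://123.123.123.15:8888\n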
"},{"location":"integrate_bgp_eng/#4-calico-bgp-loxilb-setup","title":"4. calico BGP & loxilb setup","text":"If calico configures the network in BGP mode, loxilb must also operate in BGP mode. loxilb supports BGP functions based on goBGP. The following description assumes that calico is already set to use BGP mode.
"},{"location":"integrate_bgp_eng/#41-loxilb-bgp-mode-setup","title":"4.1 loxilb BGP mode setup","text":"If we create a loxilb container with the following command, it will run in BGP mode. The -b option at the end of the command is to enable BGP mode in loxilb.
sudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \\\n --privileged -dit -v /dev/log:/dev/log \\\n --net=k8snet --ip=192.168.57.4 --name loxilb ghcr.io/loxilb-io/loxilb:latest \\\n --host=0.0.0.0 -b\n
"},{"location":"integrate_bgp_eng/#42-gobgp_loxilbyaml-file-setup","title":"4.2 gobgp_loxilb.yaml file setup","text":"Create a gobgp_loxilb.yaml file in the /etc/gobgp/ directory of the loxilb container.
global:\n config:\n as: 65002\n router-id: 172.1.0.2\nneighbors:\n - config:\n neighbor-address: 192.168.57.101\n peer-as: 64512\n - config:\n neighbor-address: 192.168.20.55\n peer-as: 64001\n
BGP information such as the as-id and router-id of the loxilb container must be registered as global items. The neighbors item needs the IP address and as-id information of the BGP routers peering with loxilb. In this example, calico's BGP peer (192.168.57.101) and an external BGP peer (192.168.20.55) were registered.
"},{"location":"integrate_bgp_eng/#43-add-router-id-to-the-lo-interface-of-the-loxilb-container","title":"4.3 Add router-id to the lo interface of the loxilb container","text":"We need to add the IP registered as loxilb router-id in the gobgp_loxilb.yaml file to the lo interface of loxilb docker.
sudo docker exec -ti loxilb ip addr add 172.1.0.2/32 dev lo\n
"},{"location":"integrate_bgp_eng/#44-loxilb-docker-restart","title":"4.4 loxilb docker restart","text":"Restart the loxilb docker for the settings in gobgp_loxilb.yaml to take effect.
sudo docker stop loxilb\nsudo docker start loxilb\n
"},{"location":"integrate_bgp_eng/#45-setup-bgp-peer-information-in-calico","title":"4.5 Setup BGP Peer information in Calico","text":"We also need to add loxilb's BGP peer information to calico. Create the calico-bgp-config.yaml file as follows:
apiVersion: projectcalico.org/v3\nkind: BGPPeer\nmetadata:\n name: my-global-peers2\nspec:\n peerIP: 192.168.57.4\n asNumber: 65002\n
In peerIP, enter the IP address of loxilb. In asNumber, enter the as-ID of the loxilb BGP set above. After creating the file, add BGP peer information to calico with the command below.
sudo calicoctl apply -f calico-bgp-config.yaml\n
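The peer object can be listed back to confirm it was accepted by calico:
sudo calicoctl get bgpPeer\n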
"},{"location":"integrate_bgp_eng/#46-check-bgp-status","title":"4.6 Check BGP status","text":"We now check the BGP connectivity in the loxilb docker like this:
netlox@nd8:~$ sudo docker exec -ti loxilb3 gobgp neigh\nPeer AS Up/Down State |#Received Accepted\n192.168.57.101 64512 00:00:59 Establ | 4 4\n
If the connection is successful, the State will be shown as \"Established\". We can check the route information of calico with the gobgp global rib command.
netlox@nd8:~$ sudo docker exec -ti loxilb3 gobgp global rib\n Network Next Hop AS_PATH Age Attrs\n*> 10.233.71.0/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.74.64/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.75.0/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.102.128/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n
"},{"location":"k0s_quick_start/","title":"K0s/loxilb with kube-router","text":""},{"location":"k0s_quick_start/#loxilb-quick-start-guide-with-k0skube-router","title":"LoxiLB Quick Start Guide with k0s/kube-router","text":"This guide will explain how to:
- Deploy a single-node K0s cluster with kube-router networking
- Expose services with loxilb as an external load balancer
"},{"location":"k0s_quick_start/#prerequisites","title":"Prerequisite(s)","text":" - Single node with Linux
- Install docker runtime to manage loxilb
"},{"location":"k0s_quick_start/#topology","title":"Topology","text":"For quickly bringing up loxilb with K0s/kube-router, we will be deploying all components in a single node :
loxilb is run as a docker and will use macvlan for the incoming traffic. This is to mimic a topology close to cloud-hosted k8s where LB nodes run outside a cluster. loxilb can be used in more complex in-cluster mode as well, but not used here for simplicity.
"},{"location":"k0s_quick_start/#install-loxilb-docker","title":"Install loxilb docker","text":"## Set underlying interface of the VM/cluster-node to promisc mode for mac-vlan to work\nsudo ifconfig eth1 promisc\n\n## Run loxilb\nsudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged --entrypoint /root/loxilb-io/loxilb/loxilb -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# Create mac-vlan on top of underlying eth1 interface\nsudo docker network create -d macvlan -o parent=eth1 --subnet 192.168.82.0/24 --gateway 192.168.82.1 --aux-address 'host=192.168.82.252' llbnet\n\n# Assign mac-vlan to loxilb docker with specified IP (which will be used as LB VIP)\n# Please note that this node should already have an IP assigned belonging to the same subnet on underlying interface\nsudo docker network connect llbnet loxilb --ip=192.168.82.100\n\n# Add iptables rule to allow traffic from source/host IP(192.168.82.1) to loxilb\nsudo iptables -A DOCKER -s 192.168.82.1 -j ACCEPT\n
All the above steps related to docker setup can be further automated using docker-compose.
"},{"location":"k0s_quick_start/#setup-k0skube-router-in-single-node","title":"Setup k0s/kube-router in single-node","text":"#K0s installation steps\ncurl -sSLf https://get.k0s.sh | sudo sh\nsudo k0s install controller --single\nsudo k0s start\n
"},{"location":"k0s_quick_start/#check-k0s-status","title":"Check k0s status","text":"sudo k0s status\n
"},{"location":"k0s_quick_start/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used to deploy loxilb with Kubernetes.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
Change args in kube-loxilb.yaml
args:\n - --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setMode=1\n
In the above snippet, loxiURL uses docker interface IP of loxilb, which can be different for each setup. Apply in k0s:
$ sudo k0s kubectl apply -f kube-loxilb.yaml\n
"},{"location":"k0s_quick_start/#create-the-service","title":"Create the service","text":"$ sudo k0s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/docker-k0s-lb/tcp-svc-lb.yml\n
"},{"location":"k0s_quick_start/#check-the-status","title":"Check the status","text":"In k0s:
$ sudo k0s kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 80m\ntcp-lb-onearm LoadBalancer 10.43.183.123 llb-192.168.82.100 56002:30001/TCP 6m50s\n
In loxilb docker: $ sudo docker exec -it loxilb loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 12:880 |\n
"},{"location":"k0s_quick_start/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
All of the above steps are also available as part of the loxilb CICD workflow. Follow the steps below to replicate the above (please note that you will need the vagrant tool installed to run):
$ git clone https://github.com/loxilb-io/loxilb.git\n$ cd cicd/docker-k0s-lb/\n\n# To setup the single node k0s setup with kube-router networking and loxilb as external load balancer\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"k0s_quick_start_incluster/","title":"K0s/loxilb in-cluster mode","text":""},{"location":"k0s_quick_start_incluster/#quick-start-guide-with-k0s-and-loxilb-in-cluster-mode","title":"Quick Start Guide with K0s and LoxiLB in-cluster mode","text":"This document will explain how to install a K0s cluster with loxilb as a serviceLB provider running in-cluster mode.
"},{"location":"k0s_quick_start_incluster/#prerequisites","title":"Prerequisite(s)","text":" - Single node with Linux
"},{"location":"k0s_quick_start_incluster/#topology","title":"Topology","text":"For quickly bringing up loxilb in-cluster and K0s, we will be deploying all components in a single node :
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"k0s_quick_start_incluster/#setup-k0s-in-a-single-node","title":"Setup k0s in a single-node","text":"# k0s installation steps\ncurl -sSLf https://get.k0s.sh | sudo sh\nsudo k0s install controller --single\nsudo k0s start\n
"},{"location":"k0s_quick_start_incluster/#check-k0s-status","title":"Check k0s status","text":"$ sudo k0s status\nVersion: v1.29.2+k0s.0\nProcess ID: 2631\nWorkloads: true\nSingleNode: true\nKube-api probing successful: true\nKube-api probing last error: \n
"},{"location":"k0s_quick_start_incluster/#how-to-deploy-loxilb","title":"How to deploy loxilb ?","text":"loxilb can be deloyed by using the following command in the K3s node
sudo k0s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k0s-incluster/loxilb.yml\n
"},{"location":"k0s_quick_start_incluster/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb.
wget https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k0s-incluster/kube-loxilb.yml\n
kube-loxilb.yaml args:\n #- --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setRoles=0.0.0.0\n #- --monitor\n #- --setBGP\n
In the above snippet, loxiURL is commented out which denotes to utilize in-cluster mode to discover loxilb pods automatically. External CIDR represents the IP pool from where serviceLB VIP will be allocated. Apply after making changes (if any) :
sudo k0s kubectl apply -f kube-loxilb.yaml\n
"},{"location":"k0s_quick_start_incluster/#create-the-service","title":"Create the service","text":"sudo k0s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k0s-incluster/tcp-svc-lb.yml\n
"},{"location":"k0s_quick_start_incluster/#check-status-of-various-components-in-k0s-node","title":"Check status of various components in k0s node","text":"In k0s node:
## Check the pods created\n$ sudo k0s kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system kube-proxy-vczxm 1/1 Running 0 4m48s\nkube-system kube-router-gjp7g 1/1 Running 0 4m48s\nkube-system metrics-server-7556957bb7-25hsk 1/1 Running 0 4m50s\nkube-system coredns-6cd46fb86c-xllg2 1/1 Running 0 4m50s\nkube-system loxilb-lb-4fmdp 1/1 Running 0 3m43s\nkube-system kube-loxilb-6f44cdcdf5-ffdcv 1/1 Running 0 2m22s\ndefault tcp-onearm-test 1/1 Running 0 92s\n\n\n## Check the services created\n$ sudo k0s kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m28s\ntcp-lb-onearm LoadBalancer 10.96.108.109 llb-192.168.82.100 56002:32033/TCP 111s\n
In loxilb pod, we can check internal LB rules: $ sudo k0s kubectl exec -it -n kube-system loxilb-lb-4fmdp -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 32033 | 1 | active | 25:1842 |\n
"},{"location":"k0s_quick_start_incluster/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
For more detailed information on incluster deployment of loxilb with bgp in a full-blown cluster, kindly follow this blog. All of the above steps are also available as part of loxilb CICD workflow. Follow the steps below to replicate the above (please note that you will need vagrant tool installed to run:
$ git clone https://github.com/loxilb-io/loxilb.git\n$ cd cicd/k0s-incluster/\n\n# To setup the single node k0s setup with kube-router networking and loxilb as external load balancer\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# To login to the node and check the installation\n$ vagrant ssh k0s-node1\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"k3s-multi-master/","title":"How-To - Deploy multi-server K3s HA with loxilb","text":""},{"location":"k3s-multi-master/#guide-to-deploy-multi-master-ha-k3s-with-loxilb","title":"Guide to deploy multi-master HA K3s with loxilb","text":"This document will explain how to install a multi-master HA K3s cluster with loxilb as a serviceLB provider running in-cluster mode. K3s is a lightweight Kubernetes distribution and is increasingly used for prototyping as well as for production workloads. K3s nodes are deployed as: 1) k3s-server nodes for k3s control plane components like apiserver and etcd. 2) k3s-agent nodes hosting user workloads/apps. When we deploy multi-master nodes, it is necessary that they be accessed from the k3s-agents in HA configuration and behind a load-balancer. Usually deploying such a load-balancer is outside the scope of kubernetes.
In this guide, we will see how to deploy loxilb not only as cluster's serviceLB provider but also as a VIP-LB for accessing server/master node(s) services.
"},{"location":"k3s-multi-master/#topology","title":"Topology","text":"For multi-master setup we need an odd number of server nodes to maintain quorum. So, we will have 3 k3s-server nodes for this setup. Overall, we will be deploying the components as per the following topology :
"},{"location":"k3s-multi-master/#k3s-installation-and-setup","title":"K3s installation and Setup","text":""},{"location":"k3s-multi-master/#in-k3s-server1-node-","title":"In k3s-server1 node -","text":"$ curl -fL https://get.k3s.io | sh -s - server --node-ip=192.168.80.10 \\\n --disable servicelb --disable traefik --cluster-init external-hostname=192.168.80.10 \\\n --node-external-ip=192.168.80.80 --disable-cloud-controller\n
It is to be noted that --node-external-ip=192.168.80.80
is used since we will utilize 192.168.80.80 as the VIP to access the multi-master setup from k3s-agents and other clients."},{"location":"k3s-multi-master/#setup-the-node-for-loxilb","title":"Setup the node for loxilb :","text":"sudo mkdir -p /etc/loxilb\n
Create the following files in /etc/loxilb
- lbconfig.txt with following contents (change as per your requirement)
{\n \"lbAttr\":[\n {\n \"serviceArguments\":{\n \"externalIP\":\"192.168.80.80\",\n \"port\":6443,\n \"protocol\":\"tcp\",\n \"sel\":0,\n \"mode\":2,\n \"BGP\":false,\n \"Monitor\":true,\n \"inactiveTimeOut\":240,\n \"block\":0\n },\n \"secondaryIPs\":null,\n \"endpoints\":[\n {\n \"endpointIP\":\"192.168.80.10\",\n \"targetPort\":6443,\n \"weight\":1,\n \"state\":\"active\",\n \"counter\":\"\"\n },\n {\n \"endpointIP\":\"192.168.80.11\",\n \"targetPort\":6443,\n \"weight\":1,\n \"state\":\"active\",\n \"counter\":\"\"\n },\n {\n \"endpointIP\":\"192.168.80.12\",\n \"targetPort\":6443,\n \"weight\":1,\n \"state\":\"active\",\n \"counter\":\"\"\n }\n ]\n }\n ]\n}\n
2. EPconfig.txt with the following contents (change as per your requirement) {\n \"Attr\":[\n {\n \"hostName\":\"192.168.80.10\",\n \"name\":\"192.168.80.10_tcp_6443\",\n \"inactiveReTries\":2,\n \"probeType\":\"tcp\",\n \"probeReq\":\"\",\n \"probeResp\":\"\",\n \"probeDuration\":10,\n \"probePort\":6443\n },\n {\n \"hostName\":\"192.168.80.11\",\n \"name\":\"192.168.80.11_tcp_6443\",\n \"inactiveReTries\":2,\n \"probeType\":\"tcp\",\n \"probeReq\":\"\",\n \"probeResp\":\"\",\n \"probeDuration\":10,\n \"probePort\":6443\n },\n {\n \"hostName\":\"192.168.80.12\",\n \"name\":\"192.168.80.12_tcp_6443\",\n \"inactiveReTries\":2,\n \"probeType\":\"tcp\",\n \"probeReq\":\"\",\n \"probeResp\":\"\",\n \"probeDuration\":10,\n \"probePort\":6443\n }\n ]\n}\n
The above serve as bootstrap LB rules for load-balancing into the k3s-server nodes as we will see later.
"},{"location":"k3s-multi-master/#in-k3s-server2-node-","title":"In k3s-server2 node -","text":"$ curl -fL https://get.k3s.io | K3S_TOKEN=${NODE_TOKEN} sh -s - server --server https://192.168.80.10:6443 \\\n --disable traefik --disable servicelb --node-ip=192.168.80.11 \\\n external-hostname=192.168.80.11 --node-external-ip=192.168.80.80 -t ${NODE_TOKEN}\n
where NODE_TOKEN contain simply contents of /var/lib/rancher/k3s/server/node-token from server1. For example, it can be set using a command equivalent to the following : export NODE_TOKEN=$(cat node-token)\n
"},{"location":"k3s-multi-master/#setup-the-node-for-loxilb_1","title":"Setup the node for loxilb:","text":"Simply follow the steps as outlined for server1.
"},{"location":"k3s-multi-master/#in-k3s-server3-node-","title":"In k3s-server3 node -","text":"$ curl -fL https://get.k3s.io | K3S_TOKEN=${NODE_TOKEN} sh -s - server --server https://192.168.80.10:6443 \\\n --disable traefik --disable servicelb --node-ip=192.168.80.12 \\\n external-hostname=192.168.80.12 --node-external-ip=192.168.80.80 -t ${NODE_TOKEN}\n
where NODE_TOKEN contain simply contents of /var/lib/rancher/k3s/server/node-token from server1. For example, it can be set using a command equivalent to the following : export NODE_TOKEN=$(cat node-token)\n
"},{"location":"k3s-multi-master/#setup-the-node-for-loxilb_2","title":"Setup the node for loxilb:","text":"First, follow the steps as outlined for server1. Additionally, we will have to start loxilb pod instances as follows :
$ sudo kubectl apply -f - <<EOF\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: loxilb-lb\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n app: loxilb-app\n template:\n metadata:\n name: loxilb-lb\n labels:\n app: loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n - key: \"node-role.kubernetes.io/master\"\n operator: Exists\n - key: \"node-role.kubernetes.io/control-plane\"\n operator: Exists\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: \"node-role.kubernetes.io/master\"\n operator: Exists\n - key: \"node-role.kubernetes.io/control-plane\"\n operator: Exists\n volumes:\n - name: hllb\n hostPath:\n path: /etc/loxilb\n type: DirectoryOrCreate\n containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command:\n - /root/loxilb-io/loxilb/loxilb\n args:\n - --egr-hooks\n - --blacklist=cni[0-9a-z]|veth.|flannel.\n volumeMounts:\n - name: hllb\n mountPath: /etc/loxilb\n ports:\n - containerPort: 11111\n - containerPort: 179\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: loxilb-lb-service\n namespace: kube-system\nspec:\n clusterIP: None\n selector:\n app: loxilb-app\n ports:\n - name: loxilb-app\n port: 11111\n targetPort: 11111\n protocol: TCP\nEOF\n
Kindly note that the args for loxilb might change depending on the scenario. This scenario considers loxilb running in-cluster mode. For service-proxy mode, please follow this yaml for exact args. Next, we will install loxilb's operator kube-loxilb as follows :
sudo kubectl apply -f - <<EOF\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - services/status\n verbs:\n - get\n - watch\n - list\n - patch\n - update\n - apiGroups:\n - gateway.networking.k8s.io\n resources:\n - gatewayclasses\n - gatewayclasses/status\n - gateways\n - gateways/status\n - tcproutes\n - udproutes\n verbs: [\"get\", \"watch\", \"list\", \"patch\", \"update\"]\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - watch\n - list\n - apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n - apiGroups:\n - authorization.k8s.io\n resources:\n - subjectaccessreviews\n verbs:\n - create\n - apiGroups:\n - bgppeer.loxilb.io\n resources:\n - bgppeerservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n - apiGroups:\n - bgppolicydefinedsets.loxilb.io\n resources:\n - bgppolicydefinedsetsservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n - apiGroups:\n - bgppolicydefinition.loxilb.io\n resources:\n - bgppolicydefinitionservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n - apiGroups:\n - bgppolicyapply.loxilb.io\n resources:\n - bgppolicyapplyservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kube-loxilb\nsubjects:\n - kind: ServiceAccount\n name: kube-loxilb\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n #- --loxiURL=http://192.168.80.10:11111\n - --externalCIDR=192.168.80.200/32\n #- --externalSecondaryCIDRs=124.124.124.1/24,125.125.125.1/24\n #- --setBGP=64512\n #- --listenBGPPort=1791\n - --setRoles=0.0.0.0\n #- --monitor\n #- --extBGPPeers=50.50.50.1:65101,51.51.51.1:65102\n #- --setLBMode=1\n #- --config=/opt/loxilb/agent/kube-loxilb.conf\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\n add: [\"NET_ADMIN\", \"NET_RAW\"]\nEOF\n
At this point we can check the pods running in our kubernetes cluster (in server1, server2 & server3 at this point):
$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-7jhcx 1/1 Running 0 3h15m\nkube-system kube-loxilb-5d99c445f7-j4x6k 1/1 Running 0 3h6m\nkube-system local-path-provisioner-6c86858495-pjn9j 1/1 Running 0 3h15m\nkube-system loxilb-lb-8bddf 1/1 Running 0 3h6m\nkube-system loxilb-lb-nsrr9 1/1 Running 0 3h6m\nkube-system loxilb-lb-fp2z6 1/1 Running 0 3h6m\nkube-system metrics-server-54fd9b65b-g5lfn 1/1 Running 0 3h15m\n
"},{"location":"k3s-multi-master/#in-k3s-agent1-node-","title":"In k3s-agent1 node -","text":"The following steps need to be followed to install k3s in the agent nodes:
$ curl -sfL https://get.k3s.io | K3S_TOKEN=${NODE_TOKEN} sh -s - agent --server https://192.168.80.80:6443 --node-ip=${WORKER_ADDR} --node-external-ip=${WORKER_ADDR} -t ${NODE_TOKEN}\n
where WORKER_ADDR is the IP address of the agent node itself (in this case 192.168.80.101) and NODE_TOKEN holds the contents of /var/lib/rancher/k3s/server/node-token from server1. Note that we use the VIP 192.168.80.80 provided by loxilb to reach the K3s server (master) nodes, not their actual private node addresses.
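For reference, here is a minimal sketch of setting these variables on the agent before running the command above (the token must be copied over from server1; the file path and IP below are just this example's assumptions):
export WORKER_ADDR=192.168.80.101                 # this agent's own IP\nexport NODE_TOKEN=$(cat /tmp/node-token)          # contents of /var/lib/rancher/k3s/server/node-token copied from server1\n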
For the rest of the agent nodes, we can follow the same set of steps as outlined above for k3s-agent1.
"},{"location":"k3s-multi-master/#validation","title":"Validation","text":"After setting up all the k3s-server and k3s-agents, we should be able to see all nodes up and running
$ sudo kubectl get nodes -A\nNAME STATUS ROLES AGE VERSION\nmaster1 Ready control-plane,etcd,master 4h v1.29.3+k3s1\nmaster2 Ready control-plane,etcd,master 4h v1.29.3+k3s1\nmaster3 Ready control-plane,etcd,master 4h v1.29.3+k3s1 \nworker1 Ready <none> 4h v1.29.3+k3s1\nworker2 Ready <none> 4h v1.29.3+k3s1\nworker3 Ready <none> 4h v1.29.3+k3s1\n
To verify high availability, let's shut down the master1 k3s server.
## Shut down the master1 node\n$ sudo shutdown now\n
Then try to access cluster information from the other master nodes or worker nodes:
$ sudo kubectl get nodes -A\nNAME STATUS ROLES AGE VERSION\nmaster1 NotReady control-plane,etcd,master 4h10m v1.29.3+k3s1\nmaster2 Ready control-plane,etcd,master 4h10m v1.29.3+k3s1\nmaster3 Ready control-plane,etcd,master 4h10m v1.29.3+k3s1\nworker1 Ready <none> 4h10m v1.29.3+k3s1\nworker2 Ready <none> 4h10m v1.29.3+k3s1\n
We can also confirm that pods get rescheduled to the other \"Ready\" nodes:
$ sudo kubectl get pods -A -o wide\nNAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\nkube-system coredns-6799fbcd5-6dvm7 1/1 Running 0 27m 10.42.2.2 master3 <none> <none>\nkube-system coredns-6799fbcd5-mrjgt 1/1 Terminating 0 3h58m 10.42.0.4 master1 <none> <none>\nkube-system kube-loxilb-5d99c445f7-x7qd6 1/1 Running 0 3h58m 192.168.80.11 master2 <none> <none>\nkube-system local-path-provisioner-6c86858495-6f8rz 1/1 Terminating 0 3h58m 10.42.0.2 master1 <none> <none>\nkube-system local-path-provisioner-6c86858495-z2p6m 1/1 Running 0 27m 10.42.3.2 worker1 <none> <none>\nkube-system loxilb-lb-65jnz 1/1 Running 0 3h58m 192.168.80.10 master1 <none> <none>\nkube-system loxilb-lb-pfkf8 1/1 Running 0 3h58m 192.168.80.12 master3 <none> <none>\nkube-system loxilb-lb-xhr95 1/1 Running 0 3h58m 192.168.80.11 master2 <none> <none>\nkube-system metrics-server-54fd9b65b-l5pqz 1/1 Running 0 27m 10.42.4.2 worker2 <none> <none>\nkube-system metrics-server-54fd9b65b-x9bd7 1/1 Terminating 0 3h58m 10.42.0.3 master1 <none> <none>\n
If the above set of commands works fine on any of the \"Ready\" nodes, it indicates that the API server remains available even when one of the k3s server (master) nodes goes down. The same approach can be followed, if need be, for any other service apart from the K8s/K3s apiserver.
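As an additional check, the apiserver VIP itself can be probed directly from any node or an external client. A quick sketch using the /version endpoint, which is readable without credentials in a default setup:
$ curl -sk https://192.168.80.80:6443/version\n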
"},{"location":"k3s-rmq/","title":"K3s rmq","text":""},{"location":"k3s-rmq/#quick-start-guide-with-k3s-and-loxilb-in-cluster-service-proxy-mode","title":"Quick Start Guide with K3s and LoxiLB in-cluster \"service-proxy\" mode","text":"This document will explain how to install a K3s cluster with loxilb as a serviceLB provider running in-cluster \"service-proxy\" mode. \u00a0 \u00a0
"},{"location":"k3s-rmq/#what-is-service-proxy-mode","title":"What is service-proxy mode?","text":"service-proxy mode is where kubernetes cluster networking is entirely streamlined by loxilb for better performance.
Looking at the left side of the image, you will notice the traffic flow of a packet as it enters the Kubernetes cluster. kube-proxy, the de-facto networking agent in Kubernetes, runs on each node of the cluster, monitors the services and translates them into either iptables or IPVS rules. For a cluster with low-volume traffic, kube-proxy works fine, but when it comes to scalability or high-volume traffic it becomes a bottleneck. loxilb \"service-proxy\" mode works with Flannel and kube-proxy in IPVS mode. It picks up all the IPVS rules and injects them into its in-kernel eBPF data-path. Traffic reaching the interface is processed by eBPF and sent directly to the pod or to the other node, bypassing all the layers of Linux networking. This way, all the services, be it External, NodePort or ClusterIP, can be managed through LoxiLB.
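Once the cluster described below is up, the IPVS rules that loxilb consumes can be cross-checked on any node. A quick sketch, assuming the ipvsadm tool is installed:
$ sudo ipvsadm -Ln    # lists the IPVS services/endpoints programmed by kube-proxy and picked up by loxilb\n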
"},{"location":"k3s-rmq/#topology","title":"Topology","text":"For quickly bringing up loxilb in-cluster and K3s, we will be deploying a 4 nodes k3s cluster : \u00a0
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"k3s-rmq/#setup-k3s","title":"Setup K3s","text":""},{"location":"k3s-rmq/#configure-master-node","title":"Configure Master node","text":"$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik \\\n--disable servicelb \\\n--disable-cloud-controller \\\n--kube-proxy-arg proxy-mode=ipvs \\\n--flannel-iface=eth1 \\\n--disable-network-policy \\\n--node-ip=${MASTER_IP} \\\n--node-external-ip=${MASTER_IP} \\\n--bind-address=${MASTER_IP}\" sh -\n
"},{"location":"k3s-rmq/#configure-worker-nodes","title":"Configure Worker nodes","text":"$ curl -sfL https://get.k3s.io | K3S_URL=\"https://${MASTER_IP}:6443\"\\\n\u00a0K3S_TOKEN=\"${NODE_TOKEN}\" \\\n\u00a0INSTALL_K3S_EXEC=\"--node-ip=${WORKER_ADDR} \\\n--node-external-ip=${WORKER_IP} \\\n--kube-proxy-arg proxy-mode=ipvs \\\n--flannel-iface=eth1\" sh -\n
"},{"location":"k3s-rmq/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/mesh/kube-loxilb.yml\n
kube-loxilb.yaml
        args:\n          #- --loxiURL=http://172.17.0.2:11111\n          - --externalCIDR=192.168.82.100/32\n          - --setRoles=0.0.0.0\n          #- --monitor\n          #- --setBGP\n
In the above snippet, loxiURL is commented out, which means in-cluster mode is used to discover loxilb pods automatically. externalCIDR represents the IP pool from which serviceLB VIPs will be allocated. Apply after making changes (if any):
sudo kubectl apply -f kube-loxilb.yaml\n
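Once applied, a quick way to confirm the operator came up (a sketch, assuming the default manifest names) is to wait for the rollout:
sudo kubectl -n kube-system rollout status deployment/kube-loxilb\n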
"},{"location":"k3s-rmq/#how-to-deploy-loxilb","title":"How to deploy loxilb ?","text":"loxilb can be deloyed by using the following command in the K3s node
sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/mesh/loxilb-mesh.yml\n
"},{"location":"k3s-rmq/#seup-rabbitmq-operator","title":"Seup RabbitMQ Operator","text":"sudo kubectl apply -f \"https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml\"\n
"},{"location":"k3s-rmq/#setup-rabbitmq-application-with-loxilb","title":"Setup RabbitMQ application with loxilb","text":"wget https://raw.githubusercontent.com/rabbitmq/cluster-operator/main/docs/examples/hello-world/rabbitmq.yaml\n
Change the following: apiVersion: rabbitmq.com/v1beta1\nkind: RabbitmqCluster\nmetadata:\n  name: hello-world\nspec:\n  replicas: 3\n  service:\n    type: LoadBalancer\n  override:\n    service:\n      spec:\n        loadBalancerClass: loxilb.io/loxilb\n        externalTrafficPolicy: Local\n        ports:\n        - port: 5672\n
"},{"location":"k3s-rmq/#create-the-service","title":"Create the service","text":"sudo kubectl apply -f rabbitmq.yaml\n
"},{"location":"k3s-rmq/#check-the-status","title":"Check the status","text":"In k3s node:
## Check the pods created\n$ sudo kubectl get pods -A\nNAMESPACE         NAME                                        READY   STATUS    RESTARTS   AGE\nkube-system       local-path-provisioner-6c86858495-65tbc     1/1     Running   0          137m\nkube-system       coredns-6799fbcd5-5h2dw                     1/1     Running   0          137m\nkube-system       metrics-server-67c658944b-mtv9q             1/1     Running   0          137m\nrabbitmq-system   rabbitmq-cluster-operator-ccf488f4c-sphfm   1/1     Running   0          8m12s\nkube-system       kube-loxilb-5fb5566999-4dj2v                1/1     Running   0          4m18s\nkube-system       loxilb-lb-txtfm                             1/1     Running   0          3m57s\nkube-system       loxilb-lb-fnv97                             1/1     Running   0          3m57s\nkube-system       loxilb-lb-r7mks                             1/1     Running   0          3m57s\nkube-system       loxilb-lb-xxn29                             1/1     Running   0          3m57s\ndefault           hello-world-server-0                        1/1     Running   0          72s\ndefault           hello-world-server-1                        1/1     Running   0          72s\ndefault           hello-world-server-2                        1/1     Running   0          72s\n\n## Check the services created\n$ sudo kubectl get svc\nNAME                TYPE           CLUSTER-IP      EXTERNAL-IP          PORT(S)                                          AGE\nkubernetes          ClusterIP      10.43.0.1       <none>               443/TCP                                          136m\nhello-world-nodes   ClusterIP      None            <none>               4369/TCP,25672/TCP                               7s\nhello-world         LoadBalancer   10.43.190.199   llb-192.168.82.100   15692:31224/TCP,5672:30817/TCP,15672:30698/TCP   7s\n
In loxilb pod, we can check internal LB rules: $ sudo kubectl exec -it -n kube-system loxilb-lb-8l85d -- loxicmd get lb -o wide\nsudo kubectl exec -it loxilb-lb-txtfm -n kube-system -- loxicmd get lb -o wide\n| \u00a0 \u00a0EXT IP \u00a0 \u00a0 | SEC IPS | PORT \u00a0| PROTO | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 NAME \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | MARK | SEL | \u00a0MODE \u00a0 | \u00a0 \u00a0ENDPOINT \u00a0 \u00a0| EPORT | WEIGHT | STATE \u00a0| COUNTERS |\n|---------------|---------|-------|-------|------------------------------|------|-----|---------|----------------|-------|--------|--------|----------|\n| 10.0.2.15 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30698 | tcp \u00a0 | ipvs_10.0.2.15:30698-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.0.2.15 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_10.0.2.15:30817-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.0.2.15 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_10.0.2.15:31224-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 
\u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30698 | tcp \u00a0 | ipvs_10.42.0.0:30698-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_10.42.0.0:30817-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_10.42.0.0:31224-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.1 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 
30698 | tcp \u00a0 | ipvs_10.42.0.1:30698-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.1 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_10.42.0.1:30817-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.1 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_10.42.0.1:31224-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.0.10 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a053 | tcp \u00a0 | ipvs_10.43.0.10:53-tcp \u00a0 \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| default | 10.42.0.3 \u00a0 \u00a0 \u00a0| \u00a0 \u00a053 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.0.10 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a053 | udp \u00a0 | 
ipvs_10.43.0.10:53-udp \u00a0 \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| default | 10.42.0.3 \u00a0 \u00a0 \u00a0| \u00a0 \u00a053 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.0.10 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0 \u00a0 | \u00a09153 | tcp \u00a0 | ipvs_10.43.0.10:9153-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| default | 10.42.0.3 \u00a0 \u00a0 \u00a0| \u00a09153 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.0.1 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 443 | tcp \u00a0 | ipvs_10.43.0.1:443-tcp \u00a0 \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| default | 192.168.80.10 \u00a0| \u00a06443 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.190.199 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a05672 | tcp \u00a0 | ipvs_10.43.190.199:5672-tcp \u00a0| \u00a0 \u00a00 | rr \u00a0| default | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.190.199 | \u00a0 \u00a0 \u00a0 \u00a0 | 15672 | tcp \u00a0 | ipvs_10.43.190.199:15672-tcp | \u00a0 \u00a00 | rr \u00a0| default | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.190.199 | \u00a0 \u00a0 \u00a0 \u00a0 | 15692 | tcp \u00a0 | ipvs_10.43.190.199:15692-tcp | \u00a0 \u00a00 | rr \u00a0| default | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 
\u00a0| 15692 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.5.58 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 443 | tcp \u00a0 | ipvs_10.43.5.58:443-tcp \u00a0 \u00a0 \u00a0| \u00a0 \u00a00 | rr \u00a0| default | 10.42.0.4 \u00a0 \u00a0 \u00a0| 10250 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.10 | \u00a0 \u00a0 \u00a0 \u00a0 | 30698 | tcp \u00a0 | ipvs_192.168.80.10:30698-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.10 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_192.168.80.10:30817-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.10 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_192.168.80.10:31224-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 
\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a05672 | tcp \u00a0 | default_hello-world \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a00 | rr \u00a0| onearm \u00a0| 192.168.80.101 | 30817 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.102 | 30817 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.103 | 30817 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 15672 | tcp \u00a0 | default_hello-world \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a00 | rr \u00a0| onearm \u00a0| 192.168.80.101 | 30698 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.102 | 30698 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.103 | 30698 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 15692 | tcp \u00a0 | default_hello-world \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a00 | rr \u00a0| onearm \u00a0| 192.168.80.101 | 31224 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.102 | 31224 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.103 | 31224 | \u00a0 \u00a0 
\u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 30698 | tcp \u00a0 | ipvs_192.168.80.20:30698-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_192.168.80.20:30817-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_192.168.80.20:31224-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n
"},{"location":"k3s-rmq/#get-rabbitmq-credentials","title":"Get RabbitMQ Credentials","text":"username=\"$(sudo kubectl get secret hello-world-default-user -o jsonpath='{.data.username}' | base64 --decode)\"\npassword=\"$(sudo kubectl get secret hello-world-default-user -o jsonpath='{.data.password}' | base64 --decode)\"\n
"},{"location":"k3s-rmq/#test-rabbitmq-from-hostclient","title":"Test RabbitMQ from host/client","text":"sudo docker run -it --rm --net=host pivotalrabbitmq/perf-test:latest --uri amqp://$username:$password@192.168.80.20:5672 -x 10 \u00a0-y 10 -u \"throughput-test-4\" -a --id \"test 4\" \u00a0-z100\n
For a more detailed performance comparison with other solutions, kindly follow this blog, and for more detailed information on in-cluster deployment of loxilb with BGP in a full-blown cluster, kindly follow this blog. "},{"location":"k3s_quick_start_calico/","title":"K3s/loxilb with calico","text":""},{"location":"k3s_quick_start_calico/#loxilb-quick-start-guide-with-calico","title":"LoxiLB Quick Start Guide with Calico","text":"This guide will explain how to:
- Deploy a single-node K3s cluster with calico networking
- Expose services with loxilb as an external load balancer
"},{"location":"k3s_quick_start_calico/#pre-requisite","title":"Pre-requisite","text":" - Single node with Linux
- Install docker runtime to manage loxilb
"},{"location":"k3s_quick_start_calico/#topology","title":"Topology","text":"For quickly bringing up loxilb with K3s and calico, we will be deploying all components in a single node :
loxilb runs as a docker container and will use macvlan for the incoming traffic. This mimics a topology close to cloud-hosted k8s, where LB nodes run outside the cluster. loxilb can also be used in the more complex in-cluster mode, but that is not used here for simplicity.
"},{"location":"k3s_quick_start_calico/#install-loxilb-docker","title":"Install loxilb docker","text":"## Set underlying interface of the VM/cluster-node to promisc mode for mac-vlan to work\nsudo ifconfig eth1 promisc\n\n## Run loxilb\nsudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged --entrypoint /root/loxilb-io/loxilb/loxilb -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# Create mac-vlan on top of underlying eth1 interface\nsudo docker network create -d macvlan -o parent=eth1 --subnet 192.168.82.0/24 --gateway 192.168.82.1 --aux-address 'host=192.168.82.252' llbnet\n\n# Assign mac-vlan to loxilb docker with specified IP (which will be used as LB VIP)\n# Please note that this node should already have an IP assigned belonging to the same subnet on underlying interface\nsudo docker network connect llbnet loxilb --ip=192.168.82.100\n\n# Add iptables rule to allow traffic from source/host IP(192.168.82.1) to loxilb\nsudo iptables -A DOCKER -s 192.168.82.1 -j ACCEPT\n
All the above steps related to docker setup can be further automated using docker-compose.
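For reference, here is a minimal docker-compose sketch that mirrors the manual steps above; the interface promisc setting and the iptables rule still have to be applied separately, and the file and network names below are just this example's assumptions:
# docker-compose.yml (sketch)\nservices:\n  loxilb:\n    image: ghcr.io/loxilb-io/loxilb:latest\n    container_name: loxilb\n    entrypoint: /root/loxilb-io/loxilb/loxilb\n    privileged: true\n    cap_add:\n      - SYS_ADMIN\n    restart: unless-stopped\n    volumes:\n      - /dev/log:/dev/log\n    networks:\n      default: {}                    # bridge side, reachable for the loxiURL API\n      llbnet:\n        ipv4_address: 192.168.82.100\nnetworks:\n  llbnet:\n    driver: macvlan\n    driver_opts:\n      parent: eth1\n    ipam:\n      config:\n        - subnet: 192.168.82.0/24\n          gateway: 192.168.82.1\n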
"},{"location":"k3s_quick_start_calico/#setup-k3s-with-calico","title":"Setup K3s with Calico","text":"# Install IPVS\nsudo apt-get -y install ipset ipvsadm\n\n# Install K3s with Calico and kube-proxy in IPVS mode\ncurl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik,metrics-server,servicelb --disable-cloud-controller --kubelet-arg cloud-provider=external --flannel-backend=none --disable-network-policy\" K3S_KUBECONFIG_MODE=\"644\" sh -s - server --kube-proxy-arg proxy-mode=ipvs\n\n# Install Calico\nkubectl $KUBECONFIG create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/tigera-operator.yaml\n\nkubectl $KUBECONFIG create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/custom-resources.yaml\n\n# Remove taints in k3s if any (usually happens if started without cloud-manager)\nsudo kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized=false:NoSchedule-\n
"},{"location":"k3s_quick_start_calico/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used to deploy loxilb with Kubernetes.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
kube-loxilb.yaml
args:\n - --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setMode=1\n
In the above snippet, loxiURL uses the docker interface IP of loxilb, which can be different for each setup (see the lookup sketch after the apply step below). Apply in k8s:
kubectl apply -f kube-loxilb.yaml\n
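If you are unsure which value to use for loxiURL, the bridge-side IP of the loxilb container can be looked up as follows (a quick sketch; the network name bridge assumes the default docker bridge):
sudo docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' loxilb\n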
"},{"location":"k3s_quick_start_calico/#create-the-service","title":"Create the service","text":"kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/docker-k3s-cilium/tcp-svc-lb.yml\n
"},{"location":"k3s_quick_start_calico/#check-the-status","title":"Check the status","text":"In k3s:
kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2m48s\ntcp-lb-onearm LoadBalancer 10.43.183.123 llb-192.168.82.100 56002:30001/TCP 30s\n
In loxilb docker: $ sudo docker exec -it loxilb loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 0:0 |\n
"},{"location":"k3s_quick_start_calico/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
All of the above steps are also available as part of loxilb CICD workflow. Follow the steps below to replicate the above:
$ cd cicd/docker-k3s-calico/\n\n# To setup the single node k3s setup with calico as CNI and loxilb as external load balancer\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"k3s_quick_start_flannel/","title":"K3s/loxilb with default flannel","text":""},{"location":"k3s_quick_start_flannel/#loxilb-quick-start-guide-with-k3sflannel","title":"LoxiLB Quick Start Guide with K3s/Flannel","text":"This guide will explain how to:
- Deploy a single-node K3s cluster with flannel networking
- Expose services with loxilb as an external load balancer
"},{"location":"k3s_quick_start_flannel/#pre-requisite","title":"Pre-requisite","text":" - Single node with Linux
- Install docker runtime to manage loxilb
"},{"location":"k3s_quick_start_flannel/#topology","title":"Topology","text":"For quickly bringing up loxilb with K3s/Flannel, we will be deploying all components in a single node :
loxilb runs as a docker container and will use macvlan for the incoming traffic. This mimics a topology close to cloud-hosted k8s, where LB nodes run outside the cluster. loxilb can also be used in the more complex in-cluster mode, but that is not used here for simplicity.
"},{"location":"k3s_quick_start_flannel/#install-loxilb-docker","title":"Install loxilb docker","text":"## Set underlying interface of the VM/cluster-node to promisc mode for mac-vlan to work\nsudo ifconfig eth1 promisc\n\n## Run loxilb\nsudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged --entrypoint /root/loxilb-io/loxilb/loxilb -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# Create mac-vlan on top of underlying eth1 interface\nsudo docker network create -d macvlan -o parent=eth1 --subnet 192.168.82.0/24 --gateway 192.168.82.1 --aux-address 'host=192.168.82.252' llbnet\n\n# Assign mac-vlan to loxilb docker with specified IP (which will be used as LB VIP)\n# Please note that this node should already have an IP assigned belonging to the same subnet on underlying interface\nsudo docker network connect llbnet loxilb --ip=192.168.82.100\n\n# Add iptables rule to allow traffic from source/host IP(192.168.82.1) to loxilb\nsudo iptables -A DOCKER -s 192.168.82.1 -j ACCEPT\n
All the above steps related to docker setup can be further automated using docker-compose.
"},{"location":"k3s_quick_start_flannel/#setup-k3sflannel","title":"Setup K3s/Flannel","text":"#K3s installation\ncurl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"server --disable traefik --disable servicelb --disable-cloud-controller --kube-proxy-arg metrics-bind-address=0.0.0.0 --kubelet-arg cloud-provider=external\" K3S_KUBECONFIG_MODE=\"644\" sh -\n\n# Remove taints in k3s if any (usually happens if started without cloud-manager)\nsudo kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized=false:NoSchedule-\n
"},{"location":"k3s_quick_start_flannel/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used to deploy loxilb with Kubernetes.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
kube-loxilb.yaml
args:\n - --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setMode=1\n
In the above snippet, loxiURL uses the docker interface IP of loxilb, which can be different for each setup. Apply in k8s:
kubectl apply -f kube-loxilb.yaml\n
"},{"location":"k3s_quick_start_flannel/#create-the-service","title":"Create the service","text":"kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/docker-k3s-cilium/tcp-svc-lb.yml\n
"},{"location":"k3s_quick_start_flannel/#check-the-status","title":"Check the status","text":"In k3s:
kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 80m\ntcp-lb-onearm LoadBalancer 10.43.183.123 llb-192.168.82.100 56002:30001/TCP 6m50s\n
In loxilb docker: $ sudo docker exec -it loxilb loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 12:880 |\n
"},{"location":"k3s_quick_start_flannel/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"k3s_quick_start_incluster/","title":"K3s/loxilb in-cluster mode","text":""},{"location":"k3s_quick_start_incluster/#quick-start-guide-with-k3s-and-loxilb-in-cluster-mode","title":"Quick Start Guide with K3s and LoxiLB in-cluster mode","text":"This document will explain how to install a K3s cluster with loxilb as a serviceLB provider running in-cluster mode.
"},{"location":"k3s_quick_start_incluster/#topology","title":"Topology","text":"For quickly bringing up loxilb in-cluster and K3s, we will be deploying all components in a single node :
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"k3s_quick_start_incluster/#setup-k3s","title":"Setup K3s","text":"# K3s installation\n$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"server --disable traefik --disable servicelb --disable-cloud-controller --kube-proxy-arg metrics-bind-address=0.0.0.0 --kubelet-arg cloud-provider=external\" K3S_KUBECONFIG_MODE=\"644\" sh -\n\n# Remove taints in k3s if any (usually happens if started without cloud-manager)\n$ sudo kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized=false:NoSchedule-\n
"},{"location":"k3s_quick_start_incluster/#how-to-deploy-loxilb","title":"How to deploy loxilb ?","text":"loxilb can be deloyed by using the following command in the K3s node
sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k3s-incluster/loxilb.yml\n
"},{"location":"k3s_quick_start_incluster/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb.
wget https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k3s-incluster/kube-loxilb.yml\n
kube-loxilb.yaml
args:\n #- --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setRoles=0.0.0.0\n #- --monitor\n #- --setBGP\n
In the above snippet, loxiURL is commented out, which means in-cluster mode is used to discover loxilb pods automatically. externalCIDR represents the IP pool from which serviceLB VIPs will be allocated. Apply after making changes (if any):
sudo kubectl apply -f kube-loxilb.yaml\n
"},{"location":"k3s_quick_start_incluster/#create-the-service","title":"Create the service","text":"sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k3s-incluster/tcp-svc-lb.yml\n
"},{"location":"k3s_quick_start_incluster/#check-the-status","title":"Check the status","text":"In k3s node:
## Check the pods created\n$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-6c86858495-snvcm 1/1 Running 0 4m37s\nkube-system coredns-6799fbcd5-cpj6x 1/1 Running 0 4m37s\nkube-system metrics-server-67c658944b-42ptz 1/1 Running 0 4m37s\nkube-system loxilb-lb-8l85d 1/1 Running 0 3m40s\nkube-system kube-loxilb-6f44cdcdf5-5fdtl 1/1 Running 0 2m19s\ndefault tcp-onearm-test 1/1 Running 0 88s\n\n\n## Check the services created\n$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5m12s\ntcp-lb-onearm LoadBalancer 10.43.47.60 llb-192.168.82.100 56002:30001/TCP 108s\n
In loxilb pod, we can check internal LB rules: $ sudo kubectl exec -it -n kube-system loxilb-lb-8l85d -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 39:2874 |\n
"},{"location":"k3s_quick_start_incluster/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
For more detailed information on in-cluster deployment of loxilb with BGP in a full-blown cluster, kindly follow this blog. "},{"location":"k8s_bgp_policy_crd/","title":"License","text":"This document is based on the original work by GOBGP. Changes have been made to the original document.
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0\n
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
"},{"location":"k8s_bgp_policy_crd/#policy-configuration","title":"Policy Configuration","text":"This page explains LoxiLB with GoBGP policy feature for controlling the route advertisement. It might be called Route Map in other BGP implementations.
This document was written with reference to the GoBGP official documentation.
We explain the overview first, then the details.
"},{"location":"k8s_bgp_policy_crd/#prerequisites","title":"Prerequisites","text":"Assumed that you run loxilb with -b
option. Or If you control loxilb through kube-loxilb, be sure to set the --set-bgp
option in the kube-loxilb.yaml file.
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest -b\n
or in the kube-loxilb.yaml And adding - --enableBGPCRDs
option in kube-loxilb.yaml args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n - --setBGP=65100\n - --enableBGPCRDs\n
And apply CRD yamls as first step.
kubectl apply -f manifest/crds/bgp-policy-apply-service.yaml\nkubectl apply -f manifest/crds/bgp-policy-defined-sets-service.yaml\nkubectl apply -f manifest/crds/bgp-policy-definition-service.yaml\n
(Note) Currently, gobgp does not support the Policy command in global state. Therefore, only the policy for neighbors is applied, and we plan to apply the global policy through additional development. To apply a policy in a neighbor, you must form a peer by adding the route-server-client
option when using gobgp in loxilb. This does not provide a separate API and will be provided in the future. For examples in gobgp, please refer to the following documents.
"},{"location":"k8s_bgp_policy_crd/#contents","title":"Contents","text":" - Policy Configuration
- Prerequisites
- Contents
- Overview
- Route Server Policy Model
- Policy Structure
- Configure Policies
- 1. Defining defined-sets
- prefix-sets
- Examples
- neighbor-sets
- Examples
- 2. Defining bgp-defined-sets
- community-sets
- Examples
- ext-community-sets
- Examples
- as-path-sets
- Examples
- 3. Defining policy-definitions
- Execution condition of Action
- Examples
- 4. Attaching policy
- 4.1. Attach policy to route-server-client
- Policy and Soft Reset
"},{"location":"k8s_bgp_policy_crd/#overview","title":"Overview","text":"Policy is a way to control how BGP routes inserted to RIB or advertised to peers. Policy has two parts, Condition and Action. When a policy is configured, Action is applied to routes which meet Condition before routes proceed to next step.
GoBGP supports Condition like prefix
, neighbor
(source/destination of the route), aspath
etc.., and Action like accept
, reject
, MED/aspath/community manipulation
etc...
You can configure policy by configuration file, CLI or gRPC API. Here, we show how to configure policy via configuration file.
"},{"location":"k8s_bgp_policy_crd/#route-server-policy-model","title":"Route Server Policy Model","text":"The following figure shows how policy works in route server BGP configuration.
In route server mode, Import and Export policies are defined with respect to a peer. The Import policy defines what routes will be imported into the master RIB. The Export policy defines what routes will be exported from the master RIB.
You can check each policy with the following commands inside the loxilb container.
$ gobgp neighbor <neighbor-addr> policy import\n$ gobgp neighbor <neighbor-addr> policy export\n
"},{"location":"k8s_bgp_policy_crd/#policy-structure","title":"Policy Structure","text":"A policy consists of statements. Each statement has condition(s) and action(s).
Conditions are categorized into attributes below:
- prefix
- neighbor
- aspath
- aspath length
- community
- extended community
- rpki validation result
- route type (internal/external/local)
- large community
- afi-safi in
As shown in the figure above, some of the conditions point to defined sets, which are a container for each condition item (e.g. prefixes).
Actions are categorized into attributes below:
- accept or reject
- add/replace/remove community or remove all communities
- add/subtract or replace MED value
- set next-hop (specific address/own local address/don't modify)
- set local-pref
- prepend AS number in the AS_PATH attribute
When ALL conditions in the statement are true
, the action(s) in the statement are executed.
You can check policy configuration by the following commands.
$ kubectl get bgppolicydefinedsetsservice\n\n$ kubectl get bgppolicydefinitionservice\n\n$ kubectl get bgppolicyapplyservice\n
"},{"location":"k8s_bgp_policy_crd/#configure-policies","title":"Configure Policies","text":"Policy Configuration comes from two parts, definition and attachment. For definition, we have defined-sets and policy-definition. defined-sets defines condition item for some of the condition type. policy-definitions defines policies based on actions and conditions.
-
defined-sets A single defined-sets entry has a prefix match part named prefix-sets and a neighbor match part named neighbor-sets. It also has bgp-defined-sets, a subset of defined-sets that defines conditions referring to BGP attributes such as aspath. Each defined-sets entry has a name, which is used to refer to it from elsewhere.
-
policy-definitions policy-definitions is a list of policies. A single element has a statements part that combines conditions with actions.
Below are the steps for policy configuration
- define defined-sets
- define prefix-sets
- define neighbor-sets
- define bgp-defined-sets
- define community-sets
- define ext-community-sets
- define as-path-sets
- define large-community-sets
- define policy-definitions
- attach neighbor
"},{"location":"k8s_bgp_policy_crd/#1-defining-defined-sets","title":"1. Defining defined-sets","text":"defined-sets has prefix information and neighbor information in prefix-sets and neighbor-sets section, and GoBGP uses these information to evaluate routes. Defining defined-sets is needed at first. prefix-sets and neighbor-sets section are prefix match part and neighbor match part.
- defined-sets example
# prefix match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n
"},{"location":"k8s_bgp_policy_crd/#neighbor-match-part","title":"neighbor match part","text":"apiVersion: bgppolicydefinedsets.loxilb.io/v1 kind: BGPPolicyDefinedSetsService metadata: name: policy-neighbor spec: name: \"ns1\" definedType: \"neighbor\" List: - \"10.0.255.1/32\"
"},{"location":"k8s_bgp_policy_crd/#prefix-sets","title":"prefix-sets","text":"prefix-sets has prefix-set-list, and prefix-set-list has prefix-set-name and prefix-list as its element. prefix-set-list is used as a condition. Note that prefix-sets has either v4 or v6 addresses.
prefix has 1 element and list of sub-elements.
Element Description Example Optional name name of prefix-set \"ps1\" prefixList list of prefix and range of length PrefixList has 2 elements.
Element Description Example Optional ipPrefix prefix value \"10.33.0.0/16\" masklengthRange range of length \"21..24\" Yes"},{"location":"k8s_bgp_policy_crd/#examples","title":"Examples","text":" - example 1
- Match routes whose high-order 2 octets of the NLRI are 10.33 and whose prefix length is between 21 and 24
- If you define a prefix-list that doesn't have masklengthRange, it matches only routes that have exactly 10.33.0.0/16 as the NLRI.
# example 1\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n
- example 2
- If you want to evaluate multiple routes with a single prefix-set-list, you can do this by adding another prefix-list like this:
- This prefix-set-list matches routes within 10.33.0.0/16 with prefix length 21 to 24, or within 10.50.0.0/16 with prefix length 21 to 24.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n - ipPrefix: \"10.50.0.0/16\"\n masklengthRange: \"21..24\"\n
- example 3
- prefix-set-name under prefix-set-list is a reference to a single prefix-set.
- If you want to add more prefix-sets, you can add further blocks with the same structure as example 1.
apiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n---\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps2\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.50.0.0/16\"\n masklengthRange: \"21..24\"\n
"},{"location":"k8s_bgp_policy_crd/#neighbor-sets","title":"neighbor-sets","text":"neighbor-sets has neighbor-set-list, and neighbor-set-list has neighbor-set-name and neighbor-info-list as its element. It is necessary to specify a neighbor address in neighbor-info-list. neighbor-set-list is used as a condition. Attention: an empty neighbor-set will match against ANYTHING and not invert based on the match option
neighbor has 1 element and list of sub-elements.
Element Description Example Optional name name of neighbor \"ns1\" List list of neighbor address neighbor-info-list has 1 element.
Element Description Example Optional - neighbor address \"10.0.255.1\""},{"location":"k8s_bgp_policy_crd/#examples_1","title":"Examples","text":" - example 1
apiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns1\"\n definedType: \"neighbor\"\n List:\n - \"10.0.255.1/32\"\n---\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns2\"\n definedType: \"neighbor\"\n List:\n - \"10.0.0.0/24\"\n
- example 2
- As with prefix-set-list, neighbor-set-list can have multiple neighbor-info-list entries like this.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns1\"\n definedType: \"neighbor\"\n List:\n - \"10.0.255.1/32\"\n - \"10.0.255.2/32\"\n
- example 3
- As with prefix-set-list, multiple neighbor-set-lists can be defined.
apiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns1\"\n definedType: \"neighbor\"\n List:\n - \"10.0.255.1/32\"\n---\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns2\"\n definedType: \"neighbor\"\n List:\n - \"10.0.254.1/32\"\n
"},{"location":"k8s_bgp_policy_crd/#2-defining-bgp-defined-sets","title":"2. Defining bgp-defined-sets","text":"bgp-defined-sets has Community information, Extended Community information and AS_PATH information in each Sets section respectively. And it is a child element of defined-sets. community-sets, ext-community-sets and as-path-sets section are each match part. Like prefix-sets and neighbor-sets, each can have multiple sets and each set can have multiple values.
- bgp-defined-sets example
# Community match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-community\nspec:\n name: \"community1\"\n definedType: \"community\"\n List:\n - \"65100:10\"\n\n# Extended Community match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-extcommunity\nspec:\n name: \"ecommunity1\"\n definedType: \"extcommunity\"\n List:\n - \"RT:65100:100\"\n\n# AS_PATH match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-aspath\nspec:\n name: \"aspath1\"\n definedType: \"asPath\"\n List:\n - \"^65100\"\n\n# Large Community match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-largecommunity\nspec:\n name: \"lcommunity1\"\n definedType: \"largecommunity\"\n List:\n - \"65100:100:100\"\n
"},{"location":"k8s_bgp_policy_crd/#community-sets","title":"community-sets","text":"community-sets has community-set-name and community-list as its element. The Community value are used to evaluate communities held by the destination.
Element Description Example Optional name name of CommunitySet \"community1\" List list of community value community-list has 1 element.
Element Description Example Optional - community value \"65100:10\" You can use regular expressions to specify community in community-list.
"},{"location":"k8s_bgp_policy_crd/#examples_2","title":"Examples","text":" - example 1
- Match routes which have \"65100:10\" as a community value.
# example 1\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-community\nspec:\n name: \"community1\"\n definedType: \"community\"\n List:\n - \"65100:10\"\n
- example 2
- Specifying community by regular expression
- You can use POSIX 1003.2 regular expressions.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-community\nspec:\n name: \"community2\"\n definedType: \"community\"\n List:\n - \"6[0-9]+:[0-9]+\"\n
"},{"location":"k8s_bgp_policy_crd/#ext-community-sets","title":"ext-community-sets","text":"ext-community-sets has ext-community-set-name and ext-community-list as its element. The values are used to evaluate extended communities held by the destination.
Element Description Example Optional name name of ExtCommunitySet \"ecommunity1\" List list of extended community value List has 1 element.
Element Description Example Optional - extended community value \"RT:65001:200\" You can use regular expressions to specify an extended community in ext-community-list. However, the first element separated by \":\" (the \"RT\" part) does not support regular expressions. The \"RT\" part indicates the subtype of the extended community, and the subtypes that can be used are as follows:
- RT: means the route target.
- SoO: means the site of origin (route origin).
- encap: means the encapsulation tunnel type; currently gobgp supports the following encap tunnels: l2tp3 gre ip-in-ip vxlan nvgre mpls mpls-in-gre vxlan-gre mpls-in-udp sr-policy geneve
- LB: means the link-bandwidth (in bytes).
"},{"location":"k8s_bgp_policy_crd/#examples_3","title":"Examples","text":" - example 1
- Match routes which have \"RT:65100:100\" as an extended community value.
# example 1\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-extcommunity\nspec:\n name: \"ecommunity1\"\n definedType: \"extcommunity\"\n List:\n - \"RT:65100:100\"\n
- example 2
- Specifying extended community by regular expression
- You can use regular expressions that are available in Golang.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-extcommunity\nspec:\n name: \"ecommunity2\"\n definedType: \"extcommunity\"\n List:\n - \"RT:6[0-9]+:[0-9]+\"\n
"},{"location":"k8s_bgp_policy_crd/#as-path-sets","title":"as-path-sets","text":"as-path-sets has as-path-set-name and as-path-list as its element. The numbers are used to evaluate AS numbers in the destination's AS_PATH attribute.
Element Description Example Optional name name of as-path-set \"aspath1\" List list of as path value List has 1 element.
Element Description Example Optional - as path value \"^65100\" The AS path regular expression is compatible with Quagga and Cisco. Note: the character _
has a special meaning. It is an abbreviation for (^|[,{}() ]|$).
Some examples follow:
- From:
^65100_
means the route is passed from AS 65100 directly. - Any:
_65100_
means the route comes through AS 65100. - Origin:
_65100$
means the route is originated by AS 65100. - Only:
^65100$
means the route is originated by AS 65100 and comes from it directly. ^65100_65001
65100_[0-9]+_.*$
^6[0-9]_5.*_65.?00$
"},{"location":"k8s_bgp_policy_crd/#examples_4","title":"Examples","text":" - example 1
- Match routes which come from AS 65100.
# example 1\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-aspath\nspec:\n name: \"aspath1\"\n definedType: \"asPath\"\n List:\n - \"^65100\"\n
- example 2
- Match routes whose origin AS is 65100, using regular expressions for the other AS numbers.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-aspath\nspec:\n name: \"aspath1\"\n definedType: \"asPath\"\n List:\n - \"[0-9]+_65[0-9]+_65100$\"\n
"},{"location":"k8s_bgp_policy_crd/#3-defining-policy-definitions","title":"3. Defining policy-definitions","text":"policy-definitions consists of condition and action. Condition part is used to evaluate routes from neighbors, if matched, action will be applied.
- an example of policy-definitions
apiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: example-policy\nspec:\n name: example-policy\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: invert\n bgpConditions:\n matchCommunitySet:\n communitySet: community1\n matchSetOptions: any\n matchExtCommunitySet:\n communitySet: ecommunity1\n matchSetOptions: any\n matchAsPathSet:\n asPathSet: aspath1\n matchSetOptions: any\n asPathLength:\n operator: eq\n value: 2\n afiSafiIn:\n - l3vpn-ipv4-unicast\n - ipv4-unicast\n actions:\n routeDisposition: accept-route\n bgpActions:\n setMed: \"-200\"\n setAsPathPrepend:\n as: \"65005\"\n repeatN: 5\n setCommunity:\n options: add\n setCommunityMethod:\n communitiesList:\n - 65100:20 \n
The elements of policy-definitions are as follows:
-
policy-definitions
Element Description Example name policy's name \"example-policy\" - statements Element Description Example name statements's name \"statement1\" - conditions - match-prefix-set Element Description Example prefixSet name for defined-sets.prefix-sets.prefix-set-list that is used in this policy \"ps1\" matchSetOptions option for the check: \"any\" or \"invert\". default is \"any\" \"any\" - conditions - match-neighbor-set Element Description Example neighborSet name for defined-sets.neighbor-sets.neighbor-set-list that is used in this policy \"ns1\" matchSetOptions option for the check: \"any\" or \"invert\". default is \"any\" \"any\" - conditions - bgp-conditions - match-community-set Element Description Example communitySet name for defined-sets.bgp-defined-sets.community-sets.CommunitySetList that is used in this policy \"community1\" matchSetOptions option for the check: \"any\" or \"all\" or \"invert\". default is \"any\" \"invert\" - conditions - bgp-conditions - match-ext-community-set Element Description Example communitySet name for defined-sets.bgp-defined-sets.ext-community-sets that is used in this policy \"ecommunity1\" matchSetOptions option for the check: \"any\" or \"all\" or \"invert\". default is \"any\" \"invert\" - conditions - bgp-conditions - match-as-path-set Element Description Example asPathSet name for defined-sets.bgp-defined-sets.as-path-sets that is used in this policy \"aspath1\" matchSetOptions option for the check: \"any\" or \"all\" or \"invert\". default is \"any\" \"invert\" - conditions - bgp-conditions - match-as-path-length Element Description Example operator operator to compare the length of AS number in AS_PATH attribute. \"eq\",\"ge\",\"le\" can be used. \"eq\" means that length of AS number is equal to Value element \"ge\" means that length of AS number is equal or greater than the Value element \"le\" means that length of AS number is equal or smaller than the Value element \"eq\" value value used to compare with the length of AS number in AS_PATH attribute 2 - statements - actions Element Description Example routeDisposition stop following policy/statement evaluation and accept/reject the route: \"accept-route\" or \"reject-route\" \"accept-route\" - statements - actions - bgp-actions Element Description Example setMed set-med used to change the med value of the route. If only numbers have been specified, replace the med value of route. if number and operater(+ or -) have been specified, adding or subtracting the med value of route. \"-200\" - statements - actions - bgp-actions - set-community Element Description Example options operator to manipulate Community attribute in the route \"ADD\" communities communities used to manipulate the route's community according to options below \"65100:20\" - statements - actions - bgp-actions - set-as-path-prepend Element Description Example as AS number to prepend. You can use \"last-as\" to prepend the leftmost AS number in the aspath attribute. \"65100\" repeatN repeat count to prepend AS 5
"},{"location":"k8s_bgp_policy_crd/#execution-condition-of-action","title":"Execution condition of Action","text":"Action statement is executed when the result of each Condition, including match-set-options is all true. match-set-options is defined how to determine the match result, in the condition with multiple evaluation set as follows:
Value Description any match is true if given value matches any member of the defined set all match is true if given value matches all members of the defined set invert match is true if given value does not match any member of the defined set"},{"location":"k8s_bgp_policy_crd/#examples_5","title":"Examples","text":" - example 1
- This policy definition has prefix-set ps1 and neighbor-set ns1 as its conditions, and routes matching the conditions are rejected.
# example 1\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy1\nspec:\n name: policy1\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n
- example 2
- policy-definition has two statements
- If a route matches the conditions inside the first statement (statement1), GoBGP applies its action and stops the policy evaluation.
# example 2\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy1\nspec:\n name: policy1\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n - name: statement2\n conditions:\n matchPrefixSet:\n prefixSet: ps2\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns2\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n
- example 3
- If you want to add other policies, just add another policy-definitions block after the first one, like this:
apiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy1\nspec:\n name: policy1\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n---\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy2\nspec:\n name: policy2\n statements:\n - name: statement2\n conditions:\n matchPrefixSet:\n prefixSet: ps2\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns2\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n
- example 4
- This PolicyDefinition has multiple conditions including BgpConditions as follows:
- prefix-set: ps1
- neighbor-set: ns1
- community-set: community1
- ext-community-set: ecommunity1
- as-path-set: aspath1
- as-path length: equal 2
- If a route matches all these conditions, it will be accepted, community \"65100:20\" will be added, 200 will be subtracted from the MED, and AS 65005 will be prepended to the AS_PATH five times.
# example 4\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: example-policy\nspec:\n name: example-policy\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: invert\n bgpConditions:\n matchCommunitySet:\n communitySet: community1\n matchSetOptions: any\n matchExtCommunitySet:\n communitySet: ecommunity1\n matchSetOptions: any\n matchAsPathSet:\n asPathSet: aspath1\n matchSetOptions: any\n asPathLength:\n operator: eq\n value: 2\n afiSafiIn:\n - l3vpn-ipv4-unicast\n - ipv4-unicast\n actions:\n routeDisposition: accept-route\n bgpActions:\n setMed: \"-200\"\n setAsPathPrepend:\n as: \"65005\"\n repeatN: 5\n setCommunity:\n options: add\n setCommunityMethod:\n communitiesList:\n - 65100:20 \n
"},{"location":"k8s_bgp_policy_crd/#4-attaching-policy","title":"4. Attaching policy","text":"Here we explain how to attach defined policies to neighbor local rib.
"},{"location":"k8s_bgp_policy_crd/#41-attach-policy-to-route-server-client","title":"4.1. Attach policy to route-server-client","text":"You can use policies defined above as Import or Export or In policy by attaching them to neighbors which is configured to be route-server client.
To attach policies to neighbors, you need to add the policy's name to neighbors.apply-policy
in the neighbor's settings. The following example attaches policy1 as the Import policy; an Export policy can be attached in the same way (see the additional sketch after the example).
apiVersion: bgppolicyapply.loxilb.io/v1\nkind: BGPPolicyApplyService\nmetadata:\n name: policy-apply\nspec:\n ipAddress: \"10.0.255.2\"\n policyType: \"import\"\n polices:\n - \"policy1\"\n routeAction: \"accept\"\n
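To attach a policy in the export direction as well, a second BGPPolicyApplyService object can be created along the same lines. The following is a sketch that assumes a policy named policy2 has already been defined as in the earlier examples:
# sketch: export-direction policy attachment (assumes policy2 exists)\napiVersion: bgppolicyapply.loxilb.io/v1\nkind: BGPPolicyApplyService\nmetadata:\n name: policy-apply-export\nspec:\n ipAddress: \"10.0.255.2\"\n policyType: \"export\"\n polices:\n - \"policy2\"\n routeAction: \"accept\"\n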
The neighbor settings have a section to specify policies, and the section's name is apply-policy. apply-policy has 4 elements.
Element Description Example ipAddress neighbor IP address \"10.0.255.2\" policyType option for the Policy type: \"import\" or \"export\" . \"import\" polices The list of the policy - \"policy1\" routeAction action when the route doesn't match any policy or none of the matched policy specifies route-disposition: \"accept\" or \"reject\". \"accept\""},{"location":"k8s_bgp_policy_crd/#policy-and-soft-reset","title":"Policy and Soft Reset","text":"When you change an import policy and reset the inbound routing table (aka soft reset in), a withdraw for a route rejected by the latest import policies will be sent to peers. However, when you change an export policy and reset the outbound routing table (aka soft reset out), even if a route is rejected by the latest export policies, a withdraw for the route will not be sent.
Because the outbound routing table is not kept (to save memory), it is impossible to know whether the route was actually sent to the peer, or whether it was also rejected by the previous export policies and never sent. Rather than risk unintentionally leaking information, GoBGP does not send such a withdraw.
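For reference, soft resets can be triggered with the gobgp CLI inside the loxilb container. A sketch, assuming the loxilb pod naming used in the earlier examples and a placeholder neighbor address:
# re-apply import policies (soft reset in)\nkubectl exec -it {loxilb} -n kube-system -- gobgp neighbor 10.0.255.2 softresetin\n\n# re-apply export policies (soft reset out)\nkubectl exec -it {loxilb} -n kube-system -- gobgp neighbor 10.0.255.2 softresetout\n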
"},{"location":"kube-loxilb-KOR/","title":"kube loxilb KOR","text":""},{"location":"kube-loxilb-KOR/#kube-loxilb","title":"kube-loxilb\ub780 \ubb34\uc5c7\uc778\uac00?","text":"kube-loxilb\ub294 loxilb\uc758 Kubernetes Operator \ub85c\uc368, Kubernetes \uc11c\ube44\uc2a4 \ub85c\ub4dc \ubc38\ub7f0\uc11c \uc0ac\uc591\uc744 \ud3ec\ud568\ud558\uace0 \uc788\uc73c\uba70, \ub85c\ub4dc \ubc38\ub7f0\uc11c \ud074\ub798\uc2a4, \uace0\uae09 IPAM(\uacf5\uc720 \ub610\ub294 \ub2e8\ub3c5 \ubaa8\ub4dc) \ub4f1\uc744 \uc9c0\uc6d0\ud569\ub2c8\ub2e4. kube-loxilb\ub294 kube-system \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c Deployment \ud615\ud0dc\ub85c \uc2e4\ud589\ub429\ub2c8\ub2e4. \uc774\ub294 \ud56d\uc0c1 k8s \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c \uc2e4\ud589\ub418\uba70 \ub178\ub4dc/\uc5d4\ub4dc\ud3ec\uc778\ud2b8/\ub3c4\ub2ec \uac00\ub2a5\uc131/LB \uc11c\ube44\uc2a4 \ub4f1\uc758 \ubcc0\uacbd \uc0ac\ud56d\uc744 \ubaa8\ub2c8\ud130\ub9c1\ud558\ub294 \uc81c\uc5b4 \ud50c\ub808\uc778 \uc5ed\ud560\uc744 \uc218\ud589\ud569\ub2c8\ub2e4. \uc774\ub294 K8s \uc624\ud37c\ub808\uc774\ud130\ub85c\uc11c loxilb\ub97c \uad00\ub9ac\ud569\ub2c8\ub2e4. loxilb \uad6c\uc131 \uc694\uc18c\ub294 \uc2e4\uc81c \uc11c\ube44\uc2a4 \uc5f0\uacb0 \ubc0f \ub85c\ub4dc \ubc38\ub7f0\uc2f1 \uc791\uc5c5\uc744 \uc218\ud589\ud569\ub2c8\ub2e4. \ub530\ub77c\uc11c \ubc30\ud3ec \uad00\uc810\uc5d0\uc11c kube-loxilb\ub294 K8s \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c \uc2e4\ud589\ub418\uc5b4\uc57c \ud558\uc9c0\ub9cc, loxilb\ub294 \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ub610\ub294 \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \ubc30\ud3ec\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uad8c\uc7a5 \ubc29\ubc95\uc740 kube-loxilb \uad6c\uc131 \uc694\uc18c\ub97c \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c \uc2e4\ud589\ud558\uace0 \uc774 \uac00\uc774\ub4dc\uc5d0 \uc124\uba85\ub41c \ub300\ub85c \uc678\ubd80 \ub178\ub4dc/VM\uc5d0 loxilb \ub3c4\ucee4\ub97c \ud504\ub85c\ube44\uc800\ub2dd\ud558\ub294 \uac83\uc785\ub2c8\ub2e4. \uc774\ub294 \uc0ac\uc6a9\uc790\uac00 \uc628-\ud504\ub808\ubbf8\uc2a4 \ub610\ub294 \ud37c\ube14\ub9ad \ud074\ub77c\uc6b0\ub4dc \ud658\uacbd\uc5d0\uc11c loxilb\ub97c \uc2e4\ud589\ud560 \ub54c \uc720\uc0ac\ud55c \ud615\ud0dc\ub85c \uc81c\uacf5\ud558\uae30 \uc704\ud568\uc785\ub2c8\ub2e4. \ud37c\ube14\ub9ad \ud074\ub77c\uc6b0\ub4dc \ud658\uacbd\uc5d0\uc11c\ub294 \ubcf4\ud1b5 \ub85c\ub4dc \ubc38\ub7f0\uc11c/\ubc29\ud654\ubcbd\uc744 \uc2e4\uc81c \uc6cc\ud06c\ub85c\ub4dc \uc678\ubd80\uc5d0 \uc788\ub294 \ubcf4\uc548/DMZ \uc601\uc5ed\uc5d0\uc11c \uc2e4\ud589\ud569\ub2c8\ub2e4. \ud558\uc9c0\ub9cc \uc0ac\uc6a9\uc790\ub294 \ud3b8\uc758\uc640 \uc2dc\uc2a4\ud15c \uc544\ud0a4\ud14d\ucc98\uc5d0 \ub530\ub77c \uc678\ubd80 \ub178\ub4dc \ubaa8\ub4dc\uc640 \uc778-\ud074\ub7ec\uc2a4\ud130 \ubaa8\ub4dc\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ub2e4\uc74c \ube14\ub85c\uadf8\ub4e4\uc740 \uc774\ub7ec\ud55c \ubaa8\ub4dc\ub4e4\uc5d0 \ub300\ud574 \uc124\uba85\ud569\ub2c8\ub2e4:
- AWS EKS\uc5d0\uc11c \uc678\ubd80 \ub178\ub4dc\ub85c loxilb \uc2e4\ud589
- \uc628-\ud504\ub808\ubbf8\uc2a4\uc5d0\uc11c \uc778-\ud074\ub7ec\uc2a4\ud130\ub85c loxilb \uc2e4\ud589
\uc0ac\uc6a9\uc790\ub4e4\uc740 \uc678\ubd80 \ubaa8\ub4dc\uc5d0\uc11c \ud574\ub2f9 loxilb \ub97c \ub204\uac00 \uad00\ub9ac\ud560\uc9c0\uc5d0 \ub300\ud55c \uc9c8\ubb38\uc774 \uc0dd\uae38 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ud37c\ube14\ub9ad \ud074\ub77c\uc6b0\ub4dc\uc5d0\uc11c\ub294 VPC\uc5d0\uc11c \uc0c8\ub85c\uc6b4 \uc778\uc2a4\ud134\uc2a4\ub97c \uc0dd\uc131\ud558\uace0 loxilb \ub3c4\ucee4\ub97c \uc2e4\ud589\ud558\ub294 \uac83\uc73c\ub85c \uac04\ub2e8\ud788 \uc0ac\uc6a9 \uac00\ub2a5\ud569\ub2c8\ub2e4. \uc628-\ud504\ub808\ubbf8\uc2a4\uc758 \uacbd\uc6b0, \uc5ec\ubd84\uc758 \ub178\ub4dc/VM\uc5d0\uc11c loxilb \ub3c4\ucee4\ub97c \uc2e4\ud589\ud574\uc57c \ud569\ub2c8\ub2e4. loxilb \ub3c4\ucee4\ub294 \uc790\uccb4 \ud3ec\ud568 \uc5d4\ud130\ud2f0\ub85c, docker, containerd, podman \ub4f1 \uc798 \uc54c\ub824\uc9c4 \ub3c4\uad6c\ub85c \uc27d\uac8c \uad00\ub9ac\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ub3c5\ub9bd\uc801\uc73c\ub85c \uc7ac\uc2dc\uc791/\uc5c5\uadf8\ub808\uc774\ub4dc\ud560 \uc218 \uc788\uc73c\uba70, kube-loxilb\ub294 Kubernetes \ub85c\ub4dc\ubc38\ub7f0\uc11c \uc11c\ube44\uc2a4\uac00 \ub9e4\ubc88 \uc801\uc808\ud788 \uad6c\uc131\ub418\ub3c4\ub85d \ubcf4\uc7a5\ud569\ub2c8\ub2e4. \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\ub85c \ubc30\ud3ec\ud560 \ub54c\ub294 \ubaa8\ub4e0 \uac83\uc774 Kubernetes\uc5d0 \uc758\ud574 \uad00\ub9ac\ub418\uba70 \uc218\ub3d9 \uac1c\uc785\uc774 \uac70\uc758 \ud544\uc694\ud558\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.
"},{"location":"kube-loxilb-KOR/#_1","title":"\uc804\uccb4 \ud1a0\ud3f4\ub85c\uc9c0","text":" - \uc678\ubd80 \ubaa8\ub4dc\uc758 \uacbd\uc6b0, \ubaa8\ub4e0 \uad6c\uc131 \uc694\uc18c\ub97c \ud3ec\ud568\ud55c \uc804\uccb4 \ud1a0\ud3f4\ub85c\uc9c0\ub294 \ub2e4\uc74c\uacfc \uc720\uc0ac\ud574\uc57c \ud569\ub2c8\ub2e4:
- \uc778-\ud074\ub7ec\uc2a4\ud130 \ubaa8\ub4dc\uc758 \uacbd\uc6b0, \ubaa8\ub4e0 \uad6c\uc131 \uc694\uc18c\ub97c \ud3ec\ud568\ud55c \uc804\uccb4 \ud1a0\ud3f4\ub85c\uc9c0\ub294 \ub2e4\uc74c\uacfc \uc720\uc0ac\ud574\uc57c \ud569\ub2c8\ub2e4:
"},{"location":"kube-loxilb-KOR/#kube-loxilb_1","title":"kube-loxilb \ubc30\ud3ec \ubc29\ubc95","text":" -
\uc678\ubd80 \ubaa8\ub4dc\ub97c \uc120\ud0dd\ud55c \uacbd\uc6b0, loxilb \ub3c4\ucee4\uac00 \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc758 \ub178\ub4dc\uc5d0 \uc801\uc808\ud788 \ub2e4\uc6b4\ub85c\ub4dc \ubc0f \uc124\uce58\ub418\uc5c8\ub294\uc9c0 \ud655\uc778\ud558\uc138\uc694. \uc5ec\uae30\uc758 \uac00\uc774\ub4dc\ub97c \ub530\ub974\uac70 \ub2e4\uc74c \ubb38\uc11c\ub97c \ucc38\uc870\ud558\uc138\uc694. \uc774 \ub178\ub4dc\uc5d0\uc11c k8s \ud074\ub7ec\uc2a4\ud130 \ub178\ub4dc(\ub098\uc911\uc5d0 kube-loxilb\uac00 \uc2e4\ud589\ub420)\ub85c\uc758 \ub124\ud2b8\uc6cc\ud06c \uc5f0\uacb0\uc774 \ud544\uc694\ud569\ub2c8\ub2e4. (PS - \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\ub85c \uc2e4\ud589 \uc911\uc778 \uacbd\uc6b0 \uc774 \ub2e8\uacc4\ub294 \uac74\ub108\ub6f8 \uc218 \uc788\uc2b5\ub2c8\ub2e4)
-
kube-loxilb \uc124\uc815 yaml\uc744 \ub2e4\uc6b4\ub85c\ub4dc\ud558\uc138\uc694:
wget https://github.com/loxilb-io/kube-loxilb/raw/main/manifest/ext-cluster/kube-loxilb.yaml\n
- \uc0ac\uc6a9\uc790\uc758 \ud544\uc694\uc5d0 \ub9de\uac8c \ubcc0\uc218\ub97c \uc218\uc815\ud558\uc138\uc694:
args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n #- --externalSecondaryCIDRs=124.124.124.1/24,125.125.125.1/24\n #- --externalCIDR6=3ffe::1/96\n #- --monitor\n #- --setBGP=65100\n #- --extBGPPeers=50.50.50.1:65101,51.51.51.1:65102\n #- --setRoles=0.0.0.0\n #- --setLBMode=1\n #- --setUniqueIP=false\n
\ubcc0\uc218\uc758 \uc758\ubbf8\ub294 \ub2e4\uc74c\uacfc \uac19\uc2b5\ub2c8\ub2e4:
\uc774\ub984 \uc124\uba85 loxiURL loxilb\uc758 API \uc11c\ubc84 \uc8fc\uc18c\uc785\ub2c8\ub2e4. \uc774\ub294 1\ub2e8\uacc4\uc758 loxilb \ub3c4\ucee4\uc758 \ub3c4\ucee4 IP \uc8fc\uc18c\uc785\ub2c8\ub2e4. \uc9c0\uc815\ub418\uc9c0 \uc54a\uc73c\uba74 kube-loxilb\ub294 loxilb\uac00 \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\ub85c \uc2e4\ud589 \uc911\uc774\ub77c\uace0 \uac00\uc815\ud558\uace0 \uc790\ub3d9\uc73c\ub85c \uad6c\uc131\ud569\ub2c8\ub2e4. externalCIDR \uc8fc\uc18c\ub97c \ud560\ub2f9\ud560 CIDR \ub610\ub294 IP \uc8fc\uc18c \ubc94\uc704\uc785\ub2c8\ub2e4. \uae30\ubcf8\uc801\uc73c\ub85c \ud560\ub2f9\ub41c \uc8fc\uc18c\ub294 \uc11c\ub85c \ub2e4\ub978 \uc11c\ube44\uc2a4\uc5d0 \uacf5\uc720\ub429\ub2c8\ub2e4(\uacf5\uc720 \ubaa8\ub4dc). externalCIDR6 \uc8fc\uc18c\ub97c \ud560\ub2f9\ud560 IPv6 CIDR \ub610\ub294 IP \uc8fc\uc18c \ubc94\uc704\uc785\ub2c8\ub2e4. \uae30\ubcf8\uc801\uc73c\ub85c \ud560\ub2f9\ub41c \uc8fc\uc18c\ub294 \uc11c\ub85c \ub2e4\ub978 \uc11c\ube44\uc2a4\uc5d0 \uacf5\uc720\ub429\ub2c8\ub2e4(\uacf5\uc720 \ubaa8\ub4dc). monitor LB \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \ub77c\uc774\ube0c\ub2c8\uc2a4 \ud504\ub85c\ube0c\ub97c \ud65c\uc131\ud654\ud569\ub2c8\ub2e4(\uae30\ubcf8\uac12: \ube44\ud65c\uc131\ud654). setBGP \uc774 \uc11c\ube44\uc2a4\ub97c \uad11\uace0\ud560 BGP AS-ID\ub97c \uc0ac\uc6a9\ud569\ub2c8\ub2e4. \uc9c0\uc815\ub418\uc9c0 \uc54a\uc73c\uba74 BGP\uac00 \ube44\ud65c\uc131\ud654\ub429\ub2c8\ub2e4. \uc791\ub3d9 \ubc29\uc2dd\uc740 \uc5ec\uae30\ub97c \ucc38\uc870\ud558\uc138\uc694. extBGPPeers \uc801\uc808\ud55c \uc6d0\uaca9 AS\uc640 \ud568\uaed8 \uc678\ubd80 BGP \ud53c\uc5b4\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. setRoles \uc874\uc7ac\ud558\ub294 \uacbd\uc6b0, kube-loxilb\ub294 \ud074\ub7ec\uc2a4\ud130 \ubaa8\ub4dc\uc5d0\uc11c loxilb \uc5ed\ud560\uc744 \uc870\uc815\ud569\ub2c8\ub2e4. \ub610\ud55c \ud2b9\ubcc4\ud55c VIP(\uc18c\uc2a4 IP\ub85c \uc120\ud0dd\ub428)\ub97c \uc124\uc815\ud558\uc5ec \ud480 NAT \ubaa8\ub4dc\uc5d0\uc11c \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc640 \ud1b5\uc2e0\ud569\ub2c8\ub2e4. setLBMode 0, 1, 2 0 - \uae30\ubcf8\uac12 (DNAT\ub9cc \uc218\ud589, \uc18c\uc2a4 IP \uc720\uc9c0) 1 - OneARM(\uc18c\uc2a4 IP\ub97c \ub85c\ub4dc \ubc38\ub7f0\uc11c\uc758 \uc778\ud130\ud398\uc774\uc2a4 IP\ub85c \ubcc0\uacbd) 2 - Full NAT(\uc18c\uc2a4 IP\ub97c \uac00\uc0c1 IP\ub85c \ubcc0\uacbd) setUniqueIP LB \uc11c\ube44\uc2a4\ub2f9 \uace0\uc720\ud55c \uc11c\ube44\uc2a4 IP\ub97c \ud560\ub2f9\ud569\ub2c8\ub2e4(\uae30\ubcf8\uac12: false). externalSecondaryCIDRs \uba40\ud2f0\ud638\ubc0d \uc9c0\uc6d0\uc758 \uacbd\uc6b0, \uc8fc\uc18c\ub97c \ud560\ub2f9\ud560 \ubcf4\uc870 CIDR \ub610\ub294 IP \uc8fc\uc18c \ubc94\uc704\uc785\ub2c8\ub2e4. \uc704\uc758 \ub9ce\uc740 \ud50c\ub798\uadf8\uc640 \uc778\uc218\ub294 loxilb \ud2b9\uc815 \ubcc0\uc218\uc744 \uae30\ubc18\uc73c\ub85c \uc11c\ube44\uc2a4\ubcc4\ub85c \uc7ac\uc815\uc758\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- kube-loxilb \uc9c0\uc6d0 \ubcc0\uc218:
\ubcc0\uc218 \uc124\uba85 loxilb.io/multus-nets Multus \ub97c \uc0ac\uc6a9\ud560 \ub54c, Multus \ub124\ud2b8\uc6cc\ud06c\ub3c4 \uc11c\ube44\uc2a4 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc0ac\uc6a9\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc0ac\uc6a9\ud560 Multus \ub124\ud2b8\uc6cc\ud06c \uc774\ub984\uc744 \ub4f1\ub85d\ud569\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: multus-service\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/multus-nets: macvlan1,macvlan2spec:\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0app: pod-01\u00a0\u00a0ports:\u00a0\u00a0- port: 55002\u00a0\u00a0\u00a0\u00a0targetPort: 5002\u00a0\u00a0type: LoadBalancer loxilb.io/num-secondary-networks SCTP \uba40\ud2f0\ud638\ubc0d \uae30\ub2a5\uc744 \uc0ac\uc6a9\ud560 \ub54c, \uc11c\ube44\uc2a4\uc5d0 \ud560\ub2f9\ud560 \ubcf4\uc870 IP\uc758 \uc218\ub97c \uc9c0\uc815\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4(\ucd5c\ub300 3\uac1c). loxilb.io/secondaryIPs \uc8fc\uc11d\uacfc \ud568\uaed8 \uc0ac\uc6a9\ud560 \ub54c\ub294 loxilb.io/num-secondary-networks\uc5d0 \uc124\uc815\ub41c \uac12\uc774 \ubb34\uc2dc\ub429\ub2c8\ub2e4. (loxilb.io/secondaryIPs \uc8fc\uc11d\uc774 \uc6b0\uc120\uc21c\uc704\ub97c \uac00\uc9d1\ub2c8\ub2e4)\uc608:metadata:\u00a0\u00a0name: sctp-lb1\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/num-secondary-networks: \u201c2\u201dspec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 55002\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/secondaryIPs SCTP \uba40\ud2f0\ud638\ubc0d \uae30\ub2a5\uc744 \uc0ac\uc6a9\ud560 \ub54c, \uc11c\ube44\uc2a4\uc5d0 \ud560\ub2f9\ud560 \ubcf4\uc870 IP\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. \uc5ec\ub7ec IP(\ucd5c\ub300 3\uac1c)\ub97c \ucf64\ub9c8(,)\ub97c \uc0ac\uc6a9\ud558\uc5ec \ub3d9\uc2dc\uc5d0 \uc9c0\uc815\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. loxilb.io/num-secondary-networks \uc8fc\uc11d\uacfc \ud568\uaed8 \uc0ac\uc6a9\ud560 \ub54c\ub294 loxilb.io/secondaryIPs\uac00 \uc6b0\uc120\uc21c\uc704\ub97c \uac00\uc9d1\ub2c8\ub2e4.\uc608:metadata:name: sctp-lb-secips\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"loxilb.io/secondaryIPs: \"1.1.1.1,2.2.2.2\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb-secips\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0type: LoadBalancer loxilb.io/staticIP \ub85c\ub4dc \ubc38\ub7f0\uc11c \uc11c\ube44\uc2a4\uc5d0 \ud560\ub2f9\ud560 \uc678\ubd80 IP\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. 
\uae30\ubcf8\uc801\uc73c\ub85c \uc678\ubd80 IP\ub294 kube-loxilb\uc5d0 \uc124\uc815\ub41c externalCIDR \ubc94\uc704 \ub0b4\uc5d0\uc11c \ud560\ub2f9\ub418\uc9c0\ub9cc, \uc8fc\uc11d\uc744 \uc0ac\uc6a9\ud558\uc5ec \ubc94\uc704 \uc678\ubd80\uc758 IP\ub97c \uc815\uc801\uc73c\ub85c \uc9c0\uc815\ud560 \uc218\ub3c4 \uc788\uc2b5\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"\u00a0\u00a0\u00a0\u00a0loxilb.io/staticIP: \"192.168.255.254\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/liveness \uc5d4\ub4dc\ud3ec\uc778\ud2b8 \uc120\ud0dd\uc5d0 \uae30\ubc18\ud55c loxilb\uac00 \uc0c1\ud0dc \ud655\uc778(\ud504\ub85c\ube0c)\uc744 \uc218\ud589\ud558\ub3c4\ub85d \uc124\uc815\ud569\ub2c8\ub2e4(\ud50c\ub798\uadf8\uac00 \uc124\uc815\ub41c \uacbd\uc6b0, \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub9cc \uc120\ud0dd\ub429\ub2c8\ub2e4). \uae30\ubcf8\uac12\uc740 \ube44\ud65c\uc131\ud654\uc774\uba70, \uac12\uc774 yes\ub85c \uc124\uc815\ub418\uba74 \ud574\ub2f9 \uc11c\ube44\uc2a4\uc758 \ud504\ub85c\ube0c \uae30\ub2a5\uc774 \ud65c\uc131\ud654\ub429\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/lbmode \uac01 \uc11c\ube44\uc2a4\uc5d0 \ub300\ud574 \uac1c\ubcc4\uc801\uc73c\ub85c LB \ubaa8\ub4dc\ub97c \uc124\uc815\ud569\ub2c8\ub2e4. \uc9c0\uc815\ud560 \uc218 \uc788\ub294 \uac12 \uc911 \ud558\ub098\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4: \u201cdefault\u201d, \u201conearm\u201d, \u201cfullnat\u201d \ub610\ub294 \"dsr\". \uc790\uc138\ud55c \ub0b4\uc6a9\uc740 \uc774 \ubb38\uc11c\ub97c \ucc38\uc870\ud558\uc138\uc694.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"\u00a0\u00a0\u00a0\u00a0spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/ipam \uc11c\ube44\uc2a4\uac00 \uc0ac\uc6a9\ud560 IPAM \ubaa8\ub4dc\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. \"ipv4\", \"ipv6\", \ub610\ub294 \"ipv6to4\" \uc911 \ud558\ub098\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4. 
\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/ipam : \"ipv4\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/timeout \uc11c\ube44\uc2a4\uc758 \uc138\uc158 \uc720\uc9c0 \uc2dc\uac04\uc744 \uc124\uc815\ud569\ub2c8\ub2e4. \uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/timeout : \"60\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probetype \uc5d4\ub4dc\ud3ec\uc778\ud2b8 \ud504\ub85c\ube0c \uc791\uc5c5\uc5d0 \uc0ac\uc6a9\ud560 \ud504\ub85c\ud1a0\ucf5c \uc720\ud615\uc744 \uc9c0\uc815\ud569\ub2c8\ub2e4. \"udp\", \"tcp\", \"https\", \"http\", \"sctp\", \"ping\", \ub610\ub294 \"none\" \uc911 \ud558\ub098\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. lbMode\ub97c \"fullnat\" \ub610\ub294 \"onearm\"\uc73c\ub85c \uc0ac\uc6a9\ud558\ub294 \uacbd\uc6b0, probetype\uc744 \ud504\ub85c\ud1a0\ucf5c \uc720\ud615\uc73c\ub85c \uc124\uc815\ud569\ub2c8\ub2e4. \ud574\uc81c\ud558\ub824\uba74 probetype : \"none\"\uc744 \uc0ac\uc6a9\ud558\uc138\uc694.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"ping\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probeport \ud504\ub85c\ube0c \uc791\uc5c5\uc5d0 \uc0ac\uc6a9\ud560 \ud3ec\ud2b8\ub97c \uc124\uc815\ud569\ub2c8\ub2e4. loxilb.io/probetype \uc8fc\uc11d\uc774 \uc0ac\uc6a9\ub418\uc9c0 \uc54a\uac70\ub098 \uc720\ud615\uc774 icmp \ub610\ub294 none\uc778 \uacbd\uc6b0 \uc801\uc6a9\ub418\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probereq \ud504\ub85c\ube0c \uc694\uccad\uc744 \uc704\ud55c API\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. 
loxilb.io/probetype \uc8fc\uc11d\uc774 \uc0ac\uc6a9\ub418\uc9c0 \uc54a\uac70\ub098 \uc720\ud615\uc774 icmp \ub610\ub294 none\uc778 \uacbd\uc6b0 \uc801\uc6a9\ub418\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probereq : \"health\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/proberesp \ud504\ub85c\ube0c \uc694\uccad\uc5d0 \ub300\ud55c \uc751\ub2f5\uc744 \uc9c0\uc815\ud569\ub2c8\ub2e4. loxilb.io/probetype \uc8fc\uc11d\uc774 \uc0ac\uc6a9\ub418\uc9c0 \uc54a\uac70\ub098 \uc720\ud615\uc774 icmp \ub610\ub294 none\uc778 \uacbd\uc6b0 \uc801\uc6a9\ub418\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probereq : \"health\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberesp : \"ok\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probetimeout \ud504\ub85c\ube0c \uc694\uccad\uc758 \ud0c0\uc784\uc544\uc6c3 \uc2dc\uac04(\ucd08)\uc744 \uc9c0\uc815\ud569\ub2c8\ub2e4. \uae30\ubcf8\uac12\uc740 60\ucd08\uc785\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/proberetries \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ube44\ud65c\uc131\uc73c\ub85c \uac04\uc8fc\ud558\uae30 \uc804\uc5d0 \ud504\ub85c\ube0c \uc694\uccad\uc744 \ub2e4\uc2dc \uc2dc\ub3c4\ud558\ub294 \ud69f\uc218\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. \uae30\ubcf8\uac12\uc740 2\uc785\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberetries : \"3\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/epselect \uc5d4\ub4dc\ud3ec\uc778\ud2b8 \uc120\ud0dd \uc54c\uace0\ub9ac\uc998\uc744 \uc9c0\uc815\ud569\ub2c8\ub2e4(e.g \"rr\", \"hash\", \"persist\", \"lc\" \ub4f1). 
\uae30\ubcf8\uac12\uc740 \ub77c\uc6b4\ub4dc \ub85c\ube48\uc785\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberetries : \"3\"\u00a0\u00a0\u00a0\u00a0loxilb.io/epselect : \"hash\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/prefLocalPod \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\uc5d0\uc11c \ud56d\uc0c1 \ub85c\uceec \ud30c\ub4dc\ub97c \uc120\ud0dd\ud558\ub3c4\ub85d \uc124\uc815\ud569\ub2c8\ub2e4. \uae30\ubcf8\uac12\uc740 false\uc785\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/prefLocalPod : \"yes\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer - \ud544\uc694\ud55c \ubcc0\uacbd\uc744 \uc644\ub8cc\ud55c \ud6c4 yaml\uc744 \uc801\uc6a9\ud558\uc138\uc694:
kubectl apply -f kube-loxilb.yaml\n
- \uc704 \uba85\ub839\uc5b4\ub294 kube-loxilb\uac00 \uc131\uacf5\uc801\uc73c\ub85c \uc2e4\ud589\ub418\ub3c4\ub85d \ubcf4\uc7a5\ud569\ub2c8\ub2e4. kube-loxilb\uac00 \uc2e4\ud589 \uc911\uc778\uc9c0 \ud655\uc778\ud558\uc138\uc694:
k8s@master:~$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-84db5d44d9-pczhz 1/1 Running 0 16h\nkube-system coredns-6799fbcd5-44qpx 1/1 Running 0 16h\nkube-system metrics-server-67c658944b-t4x5d 1/1 Running 0 16h\nkube-system kube-loxilb-5fb5566999-ll4gs 1/1 Running 0 14h\n
- \ub9c8\uc9c0\ub9c9\uc73c\ub85c \uc6cc\ud06c\ub85c\ub4dc\ub97c \uc704\ud55c \uc11c\ube44\uc2a4 LB\ub97c \uc0dd\uc131\ud558\ub824\uba74 \ub2e4\uc74c \ud15c\ud50c\ub9bf yaml\uc744 \uc0ac\uc6a9\ud558\uc5ec \uc801\uc6a9\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
(\ucc38\uace0 - loadBalancerClass \ubc0f \uae30\ud0c0 loxilb \ud2b9\uc815 \uc8fc\uc11d\uc744 \ud655\uc778\ud558\uc138\uc694):
apiVersion: v1\n kind: Service\n metadata:\n name: iperf-service\n annotations:\n # If there is a need to do liveness check from loxilb\n loxilb.io/liveness: \"yes\"\n # Specify LB mode - one of default, onearm or fullnat \n loxilb.io/lbmode: \"default\"\n # Specify loxilb IPAM mode - one of ipv4, ipv6 or ipv6to4 \n loxilb.io/ipam: \"ipv4\"\n # Specify number of secondary networks for multi-homing\n # Only valid for SCTP currently\n # loxilb.io/num-secondary-networks: \"2\n # Specify a static externalIP for this service\n # loxilb.io/staticIP: \"123.123.123.2\"\n spec:\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: perf-test\n ports:\n - port: 55001\n targetPort: 5001\n type: LoadBalancer\n ---\n apiVersion: v1\n kind: Pod\n metadata:\n name: iperf1\n labels:\n what: perf-test\n spec:\n containers:\n - name: iperf\n image: eyes852/ubuntu-iperf-test:0.5\n command:\n - iperf\n - \"-s\"\n ports:\n - containerPort: 5001\n
\uc0ac\uc6a9\uc790\ub294 \uc704 \ub0b4\uc6a9\uc744 \ud544\uc694\uc5d0 \ub530\ub77c \ubcc0\uacbd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- LB \uc11c\ube44\uc2a4\uac00 \uc0dd\uc131\ub418\uc5c8\ub294\uc9c0 \ud655\uc778\ud558\uc138\uc694:
k8s@master:~$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 13h\niperf1 LoadBalancer 10.43.8.156 llb-192.168.80.20 55001:5001/TCP 8m20s\n
- \ub354 \ub9ce\uc740 \uc608\uc81c yaml \ud15c\ud50c\ub9bf\uc740 kube-loxilb\uc758 \ub9e4\ub2c8\ud398\uc2a4\ud2b8 \ub514\ub809\ud130\ub9ac\ub97c \ucc38\uc870\ud558\uc138\uc694.
"},{"location":"kube-loxilb-KOR/#loxilb","title":"\ucd94\uac00 \ub2e8\uacc4: loxilb\ub97c (\ud074\ub7ec\uc2a4\ud130 \ub0b4) \ubaa8\ub4dc\ub85c \ubc30\ud3ec","text":"loxilb\ub97c \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\ub85c \uc2e4\ud589\ud558\ub824\uba74, kube-loxilb.yaml\uc758 URL \uc778\uc218\ub97c \uc8fc\uc11d \ucc98\ub9ac\ud574\uc57c \ud569\ub2c8\ub2e4:
args:\n #- --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n
\uc774\ub294 kube-loxilb\uc758 \uc790\uccb4 \uac80\uc0c9 \ubaa8\ub4dc\ub97c \ud65c\uc131\ud654\ud558\uc5ec \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c \uc2e4\ud589 \uc911\uc778 loxilb \ud30c\ub4dc\ub97c \ucc3e\uace0 \uc811\uadfc\ud560 \uc218 \uc788\uac8c \ud569\ub2c8\ub2e4. \ub9c8\uc9c0\ub9c9\uc73c\ub85c \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c loxilb \ud30c\ub4dc\ub97c \uc0dd\uc131\ud574\uc57c \ud569\ub2c8\ub2e4:
sudo kubectl apply -f https://github.com/loxilb-io/kube-loxilb/raw/main/manifest/in-cluster/loxilb.yaml\n
\ubaa8\ub4e0 \ud30c\ub4dc\uac00 \uc0dd\uc131\ub41c \ud6c4, \ub2e4\uc74c\uacfc \uac19\uc774 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4(\ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c kube-loxilb\uc640 loxilb \uad6c\uc131 \uc694\uc18c\uac00 \uc2e4\ud589 \uc911\uc778 \uac83\uc744 \ubcfc \uc218 \uc788\uc2b5\ub2c8\ub2e4):
k8s@master:~$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-84db5d44d9-pczhz 1/1 Running 0 16h\nkube-system coredns-6799fbcd5-44qpx 1/1 Running 0 16h\nkube-system metrics-server-67c658944b-t4x5d 1/1 Running 0 16h\nkube-system kube-loxilb-5fb5566999-ll4gs 1/1 Running 0 14h\nkube-system loxilb-lb-mklj2 1/1 Running 0 13h\nkube-system loxilb-lb-stp5k 1/1 Running 0 13h\nkube-system loxilb-lb-j8fc6 1/1 Running 0 13h\nkube-system loxilb-lb-5m85p 1/1 Running 0 13h\n
\uc774\ud6c4 \uc11c\ube44\uc2a4 \uc0dd\uc131 \uacfc\uc815\uc740 \uc774\uc804 \uc139\uc158\uc5d0\uc11c \uc124\uba85\ud55c \uac83\uacfc \ub3d9\uc77c\ud569\ub2c8\ub2e4.
"},{"location":"kube-loxilb-KOR/#kube-loxilb-crd","title":"kube-loxilb CRD \uc0ac\uc6a9 \ubc29\ubc95","text":"kube-loxilb\ub294 \ucee4\uc2a4\ud140 \ub9ac\uc18c\uc2a4 \uc815\uc758(CRD)\ub97c \uc81c\uacf5\ud569\ub2c8\ub2e4. \ud604\uc7ac \uc9c0\uc6d0\ub418\ub294 \uc791\uc5c5\uc740 \ub2e4\uc74c\uacfc \uac19\uc2b5\ub2c8\ub2e4(\uacc4\uc18d \uc5c5\ub370\uc774\ud2b8\ub420 \uc608\uc815\uc785\ub2c8\ub2e4): - BGP \ud53c\uc5b4 \ucd94\uac00 - BGP \ud53c\uc5b4 \uc0ad\uc81c
CRD \uc608\uc81c\ub294 manifest/crds\uc5d0 \uc800\uc7a5\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4. BGP \ud53c\uc5b4 \uc124\uc815 \uc608\uc81c\ub294 \ub2e4\uc74c\uacfc \uac19\uc2b5\ub2c8\ub2e4:
- \uc0ac\uc804 \ucc98\ub9ac(kube-loxilb CRD\ub97c K8s\uc5d0 \ub4f1\ub85d). \uccab \ubc88\uc9f8 \ub2e8\uacc4\ub85c lbpeercrd.yaml\uc744 \uc801\uc6a9\ud569\ub2c8\ub2e4:
kubectl apply -f manifest/crds/lbpeercrd.yaml\n
- CRD \uc815\uc758
BGP \ud53c\uc5b4\ub97c \ucd94\uac00\ud558\ub294 yaml \ud30c\uc77c\uc744 \uc0dd\uc131\ud574\uc57c \ud569\ub2c8\ub2e4. \uc544\ub798 \uc608\uc81c\ub294 123.123.123.2\uc758 \ud53c\uc5b4 IP \uc8fc\uc18c\uc640 \uc6d0\uaca9 AS \ubc88\ud638 65123\uc73c\ub85c \ud53c\uc5b4\ub97c \uc0dd\uc131\ud558\ub294 \uc608\uc81c\uc785\ub2c8\ub2e4. bgp-peer.yaml\uc774\ub77c\ub294 \ud30c\uc77c\uc744 \uc0dd\uc131\ud558\uace0 \uc544\ub798 \ub0b4\uc6a9\uc744 \ucd94\uac00\ud569\ub2c8\ub2e4:
apiVersion: \"bgppeer.loxilb.io/v1\"\nkind: BGPPeerService\nmetadata:\n name: bgp-peer-test\nspec:\n ipAddress: 123.123.123.2\n remoteAs: 65123\n remotePort: 179\n
- \uc0c8\ub85c\uc6b4 BGP \ud53c\uc5b4\ub97c \ucd94\uac00\ud558\uae30 \uc704\ud574 CRD\ub97c \uc801\uc6a9\ud569\ub2c8\ub2e4:
kubectl apply -f bgp-peer.yaml\n
- \uc801\uc6a9\ub41c CRD \ud655\uc778
\ub450 \uac00\uc9c0 \ubc29\ubc95\uc73c\ub85c \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uccab \ubc88\uc9f8\ub294 loxicmd(loxilb \ucee8\ud14c\uc774\ub108 \ub0b4)\ub97c \ud1b5\ud574 \ud655\uc778\ud558\ub294 \ubc29\ubc95\uc774\uace0, \ub450 \ubc88\uc9f8\ub294 kubectl\uc744 \ud1b5\ud574 \ud655\uc778\ud558\ub294 \ubc29\ubc95\uc785\ub2c8\ub2e4.
# loxicmd\nkubectl exec -it {loxilb} -n kube-system -- loxicmd get bgpneigh \n| PEER | AS | UP/DOWN | STATE | \n|----------------|-------|-------------|-------------|\n| 123.123.123.2 | 65123 | never | ACTIVE |\n\n# kubectl\nkubectl get bgppeerservice\nNAME PEER AS \nbgp-peer-test 123.123.123.2 65123 \n
"},{"location":"kube-loxilb/","title":"Understanding loxilb deployment in K8s with kube-loxilb","text":""},{"location":"kube-loxilb/#what-is-kube-loxilb","title":"What is kube-loxilb ?","text":"kube-loxilb is loxilb's implementation of kubernetes service load-balancer spec which includes support for load-balancer class, advanced IPAM (shared or exclusive) etc. kube-loxilb runs as a deloyment set in kube-system namespace. It is a control-plane component that always runs inside k8s cluster and watches k8s system for changes to nodes/end-points/reachability/LB services etc. It acts as a K8s Operator of loxilb. The loxilb component takes care of doing actual job of providing service connectivity and load-balancing. So, from deployment perspective we need to run kube-loxilb inside K8s cluster but we have option to deploy loxilb in-cluster or external to the cluster.
The preferred way is to run the kube-loxilb component inside the cluster and provision the loxilb docker in any external node/vm as mentioned in this guide. The rationale is to provide users a similar look and feel whether running loxilb in an on-prem or public-cloud environment. Public-cloud environments usually run load-balancers/firewalls externally in order to provide a secure/DMZ perimeter layer outside the actual workloads. But users are free to choose either mode (in-cluster mode or external mode) as per their convenience and system architecture. The following blogs give detailed steps for:
- Running loxilb in external node with AWS EKS
- Running in-cluster LB with K3s for on-prem use-cases
This usually leads to another query - in external mode, who is responsible for managing this entity? On public cloud(s), it is as simple as spawning a new instance in your VPC and launching the loxilb docker in it. For on-prem cases, you need to run the loxilb docker in a spare node/vm as applicable. The loxilb docker is a self-contained entity that is easily managed with well-known tools like docker, containerd, podman etc. It can be independently restarted/upgraded anytime, and kube-loxilb will make sure all the k8s LB services are properly configured each time. When deploying in in-cluster mode, everything is managed by Kubernetes itself with little-to-no manual intervention.
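As an illustration, an upgrade of an externally running loxilb docker could look like the sketch below. It assumes the container was started with the default image and name shown elsewhere in these docs; the exact run flags (for example -b when BGP is used) should match your original deployment.
# pull the newer image (image name as used elsewhere in these docs)\ndocker pull ghcr.io/loxilb-io/loxilb:latest\n# recreate the container; reuse the run flags from your original deployment\ndocker stop loxilb && docker rm loxilb\ndocker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n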
"},{"location":"kube-loxilb/#overall-topology","title":"Overall topology","text":" - For external mode, the overall topology including all components should be similar to the following :
- For in-cluster mode, the overall topology including all components should be similar to the following :
"},{"location":"kube-loxilb/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":" -
If you have chosen external-mode, please make sure loxilb docker is downloaded and installed properly in a node external to your cluster. One can follow guides here or refer to various other documentation . It is important to have network connectivity from this node to the k8s cluster nodes (where kube-loxilb will eventually run) as seen in the above figure. (PS - This step can be skipped if running in-cluster mode)
-
Download the kube-loxilb config yaml :
wget https://github.com/loxilb-io/kube-loxilb/raw/main/manifest/ext-cluster/kube-loxilb.yaml\n
- Modify arguments as per user's needs :
args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n #- --externalSecondaryCIDRs=124.124.124.1/24,125.125.125.1/24\n #- --externalCIDR6=3ffe::1/96\n #- --monitor\n #- --setBGP=65100\n #- --extBGPPeers=50.50.50.1:65101,51.51.51.1:65102\n #- --setRoles=0.0.0.0\n #- --setLBMode=1\n #- --setUniqueIP=false\n
The arguments have the following meaning :
Name Description loxiURL API server address of loxilb. This is the docker IP address of the loxilb docker from Step 1. If unspecified, kube-loxilb assumes loxilb is running in-cluster mode and autoconfigures this. externalCIDR CIDR or IPAddress range to allocate addresses from. By default, addresses allocated are shared for different services (shared mode) externalCIDR6 IPv6 CIDR or IPAddress range to allocate addresses from. By default, addresses allocated are shared for different services (shared mode) monitor Enable liveness probe for the LB end-points (default : unset) setBGP Use specified BGP AS-ID to advertise this service. If not specified, BGP will be disabled. Please check here how it works. extBGPPeers Specifies external BGP peers with appropriate remote AS setRoles If present, kube-loxilb arbitrates loxilb role(s) in cluster-mode. Further, it sets a special VIP (selected as sourceIP) to communicate with end-points in full-nat mode. setLBMode 0, 1, 2 0 - default (only DNAT, preserves source-IP) 1 - onearm (source IP is changed to load balancer\u2019s interface IP) 2 - fullNAT (sourceIP is changed to virtual IP) setUniqueIP Allocate unique service-IP per LB service (default : false) externalSecondaryCIDRs Secondary CIDR or IPAddress ranges to allocate addresses from in case of multi-homing support Many of the above flags and arguments can be overridden on a per-service basis based on loxilb-specific annotations as mentioned below.
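As an illustration, a BGP-enabled variant of the args block above could combine several of these flags; all addresses and AS numbers below are purely illustrative:
args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n - --setLBMode=2\n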
- kube-loxilb supported annotations:
Annotations Description loxilb.io/multus-nets When using multus, the multus network can also be used as a service endpoint.Register the multus network name to be used.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: multus-service\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/multus-nets: macvlan1,macvlan2spec:\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0app: pod-01\u00a0\u00a0ports:\u00a0\u00a0- port: 55002\u00a0\u00a0\u00a0\u00a0targetPort: 5002\u00a0\u00a0type: LoadBalancer loxilb.io/num-secondary-networks When using the SCTP multi-homing function, you can specify the number of secondary IPs(upto 3) to be assigned to the service. When used with the loxilb.io/secondaryIPs annotation, the value set in loxilb.io/num-secondary-networks is ignored. (loxilb.io/secondaryIPs annotation takes precedence)Example:metadata:\u00a0\u00a0name: sctp-lb1\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/num-secondary-networks: \u201c2\u201dspec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 55002\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/secondaryIPs When using the SCTP multi-homing function, specify the secondary IP to be assigned to the service. Multiple IPs(upto 3) can be specified at the same time using a comma(,). When used with the loxilb.io/num-secondary-networks annotation, loxilb.io/secondaryIPs takes priority.)Example:metadata:name: sctp-lb-secips\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"loxilb.io/secondaryIPs: \"1.1.1.1,2.2.2.2\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb-secips\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0type: LoadBalancer loxilb.io/staticIP Specifies the External IP to assign to the LoadBalancer service. By default, an external IP is assigned within the externalCIDR range set in kube-loxilb, but using the annotation, IPs outside the range can also be statically specified. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"\u00a0\u00a0\u00a0\u00a0loxilb.io/staticIP: \"192.168.255.254\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/liveness Set LoxiLB to perform a health check (probe) based endpoint selection(If flag is set, only active endpoints will be selected). 
The default value is no, and when the value is set to yes, the probe function of the corresponding service is activated.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/lbmode Set LB mode individually for each service. Select one among types of values \u200b\u200bthat can be specified: \u201cdefault\u201d, \u201conearm\u201d, \u201cfullnat\u201d or \"dsr\". Please refer to this document for more details.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"\u00a0\u00a0\u00a0\u00a0spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/ipam Specify which IPAM mode the service will use. Select one of three options: \u201cipv4\u201d, \u201cipv6\u201d, or \u201cipv6to4\u201d. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/ipam : \"ipv4\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/timeout Set the session retention time for the service. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/timeout : \"60\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probetype Specifies the protocol type to use for endpoint probe operations. You can select one of \u201cudp\u201d, \u201ctcp\u201d, \u201chttps\u201d, \u201chttp\u201d, \u201csctp\u201d, \u201cping\u201d, or \u201cnone\u201d. Probetype is set to protocol type, if you are using lbMode as \"fullnat\" or \"onearm\". To set it off, use probetype : \"none\" Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"ping\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probeport Set the port to use for probe operation. 
It is not applied if the loxilb.io/probetype annotation is not used or if it is of type icmp or none.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probereq Specifies API for the probe request. It is not applied if the loxilb.io/probetype annotation is not used or if it is of type icmp or none.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probereq : \"health\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/proberesp Specifies the response to the probe request. It is not applied if the loxilb.io/probetype annotation is not used or if it is of type icmp or none.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probereq : \"health\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberesp : \"ok\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probetimeout Specifies the timeout for starting a probe request (in seconds). The default value is 60 seconds Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/proberetries Specifies the number of probe request retries before considering an endpoint as inoperative. 
The default value is 2. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberetries : \"3\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/epselect Specifies the algorithm for end-point selection e.g. \"rr\", \"hash\", \"persist\", \"lc\" etc. The default value is roundrobin. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberetries : \"3\"\u00a0\u00a0\u00a0\u00a0loxilb.io/epselect : \"hash\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/prefLocalPod Specifies whether to always prefer to select a local pod in in-cluster mode. The default value is false. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/prefLocalPod : \"yes\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer - Apply the yaml after making necessary changes :
kubectl apply -f kube-loxilb.yaml\n
* The above should make sure kube-loxilb is successfully running. Check kube-loxilb is running : k8s@master:~$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-84db5d44d9-pczhz 1/1 Running 0 16h\nkube-system coredns-6799fbcd5-44qpx 1/1 Running 0 16h\nkube-system metrics-server-67c658944b-t4x5d 1/1 Running 0 16h\nkube-system kube-loxilb-5fb5566999-ll4gs 1/1 Running 0 14h\n
- Finally, to create a service LB for a workload, we can apply the following template yaml
(Note - Check loadBalancerClass and other loxilb-specific annotations) :
apiVersion: v1\n kind: Service\n metadata:\n name: iperf-service\n annotations:\n # If there is a need to do liveness check from loxilb\n loxilb.io/liveness: \"yes\"\n # Specify LB mode - one of default, onearm or fullnat \n loxilb.io/lbmode: \"default\"\n # Specify loxilb IPAM mode - one of ipv4, ipv6 or ipv6to4 \n loxilb.io/ipam: \"ipv4\"\n # Specify number of secondary networks for multi-homing\n # Only valid for SCTP currently\n # loxilb.io/num-secondary-networks: \"2\n # Specify a static externalIP for this service\n # loxilb.io/staticIP: \"123.123.123.2\"\n spec:\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: perf-test\n ports:\n - port: 55001\n targetPort: 5001\n type: LoadBalancer\n ---\n apiVersion: v1\n kind: Pod\n metadata:\n name: iperf1\n labels:\n what: perf-test\n spec:\n containers:\n - name: iperf\n image: eyes852/ubuntu-iperf-test:0.5\n command:\n - iperf\n - \"-s\"\n ports:\n - containerPort: 5001\n
Users can change the above as per their needs.
- Verify LB service is created
k8s@master:~$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 13h\niperf1 LoadBalancer 10.43.8.156 llb-192.168.80.20 55001:5001/TCP 8m20s\n
- For more example yaml templates, kindly refer to kube-loxilb's manifest directory
"},{"location":"kube-loxilb/#additional-steps-to-deploy-loxilb-in-cluster-mode","title":"Additional steps to deploy loxilb (in-cluster) mode","text":"To run loxilb in-cluster mode, the URL argument in kube-loxilb.yaml needs to be commented out:
args:\n #- --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n
This enables a self-discovery mode of kube-loxilb where it can find and reach loxilb pods running inside the cluster. Last but not least, we need to create the loxilb pods in the cluster :
sudo kubectl apply -f https://github.com/loxilb-io/kube-loxilb/raw/main/manifest/in-cluster/loxilb.yaml\n
Once all the pods are created, the same can be verified as follows (you can see both kube-loxilb and loxilb components running):
k8s@master:~$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-84db5d44d9-pczhz 1/1 Running 0 16h\nkube-system coredns-6799fbcd5-44qpx 1/1 Running 0 16h\nkube-system metrics-server-67c658944b-t4x5d 1/1 Running 0 16h\nkube-system kube-loxilb-5fb5566999-ll4gs 1/1 Running 0 14h\nkube-system loxilb-lb-mklj2 1/1 Running 0 13h\nkube-system loxilb-lb-stp5k 1/1 Running 0 13h\nkube-system loxilb-lb-j8fc6 1/1 Running 0 13h\nkube-system loxilb-lb-5m85p 1/1 Running 0 13h\n
Thereafter, the process of service creation remains the same as explained in previous sections.
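As a quick sanity check after creating a service in this mode, the LB rules programmed by loxilb can be inspected from any of the loxilb pods (the pod name will differ in your setup):
kubectl exec -it {loxilb-pod} -n kube-system -- loxicmd get lb -o wide\n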
"},{"location":"kube-loxilb/#how-to-use-kube-loxilb-crds","title":"How to use kube-loxilb CRDs ?","text":"Kube-loxilb provides Custom Resource Definition (CRD). Current the following operations are supported (which would be continually updated): - Add a BGP Peer - Delete a BGP Peer
An example CRD is stored in manifest/crds. Setting up a BGP peer, as an example, works as follows:
- Pre-Processing (Register kube-loxilb CRDs with K8s). Apply lbpeercrd.yaml as the first step
kubectl apply -f manifest/crds/lbpeercrd.yaml\n
- CRD definition
You need to create a yaml file that adds a BGP peer. The example below creates a peer with peer IP address 123.123.123.2 and remote AS number 65123. Create a file named bgp-peer.yaml and add the contents below.
apiVersion: \"bgppeer.loxilb.io/v1\"\nkind: BGPPeerService\nmetadata:\n name: bgp-peer-test\nspec:\n ipAddress: 123.123.123.2\n remoteAs: 65123\n remotePort: 179\n
- Apply CRD to add a new BGP Peer
kubectl apply -f bgp-peer.yaml\n
- Verify the applied CRD
You can check it in two ways: the first through loxicmd (inside the loxilb container), and the second through kubectl.
# loxicmd\nkubectl exec -it {loxilb} -n kube-system -- loxicmd get bgpneigh \n| PEER | AS | UP/DOWN | STATE | \n|----------------|-------|-------------|-------------|\n| 123.123.123.2 | 65123 | never | ACTIVE |\n\n# kubectl\nkubectl get bgppeerservice\nNAME PEER AS \nbgp-peer-test 123.123.123.2 65123 \n
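To remove the peer later (the Delete operation mentioned above), the same manifest can simply be deleted:
kubectl delete -f bgp-peer.yaml\n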
"},{"location":"lb-algo/","title":"loxilb load-balancer algorithms","text":""},{"location":"lb-algo/#load-balancer-algorithms-in-loxilb","title":"Load-balancer algorithms in loxilb","text":"loxilb implements a variety of algortihms to achieve load-balancing and distribute incoming traffic to the server end-points
"},{"location":"lb-algo/#1-round-robin-rr","title":"1. Round-Robin (rr)","text":"This is default algo used by loxilb. In this mode, loxilb selects the end-points configured for a service in simple round-robin fashion for each new incoming connection
"},{"location":"lb-algo/#2-weighted-round-robin-wrr","title":"2. Weighted round-robin (wrr)","text":"In this mode, loxilb selects the end-points as per weight(in terms of percentage of overall traffic connections) associated with the end-points of a service. For example, if we have three end-points, we can have 70%, 10% and 20% distribution.
"},{"location":"lb-algo/#3-persistence-persist","title":"3. Persistence (persist)","text":"In this mode, every client (sourceIP) will always get connected to a particular end-point. In essence there is no real load-balancing involved but it can be useful for applications which require client session-affinity e.g FTP which requires two connections with the end-point.
"},{"location":"lb-algo/#4-flow-hash-hash","title":"4. Flow-hash (hash)","text":"In this mode, loxilb will select the end-point based on 5-tuple hash on incoming traffic. This 5-tuple consists of SourceIP, SourcePort, DestinationIP, DestinationPort and IP protocol number. Please note that in this mode connections from same client can also get mapped to different end-points since SourcePort is usually selected randomly by operating systems resulting in a different hash value.
"},{"location":"lb-algo/#5-least-connections-lc","title":"5. Least-Connections (lc)","text":"In this mode, loxilb will select end-point which has the least active connections (or least loaded) at a given point of time.
"},{"location":"lb/","title":"What is service type external load-balancer in Kubernetes ?","text":"There are many different types of Kubernetes services like NodePort, ClusterIP etc. However, service type external load-balancer provides a way of exposing your application internally and/or externally in the perspective of the k8s cluster. Usually, Kubernetes CCM provider ensures that a load balancer of some sort is created, deleted and updated in your cloud. For on-prem or edge deployments however, organizations need to provide their own CCM load-balancer functions. MetalLB (initially developed at Google) has been the choice for such cases for long.
But edge services need to support so many exotic protocols in play like GTP, SCTP, SRv6 etc and integrating everything into a seamlessly working solution has been quite difficult. This is an area where loxilb aims to play a pivotal role.
The following is a simple yaml config file which needs to be applied to create a service type load-balancer :
\"type\": \"LoadBalancer\"\n {\n \"kind\": \"Service\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"sample-service\"\n },\n \"spec\": {\n \"ports\": [{\n \"port\": 9001,\n \"targetPort\": 5001\n }],\n \"selector\": {\n \"app\": \"sample\"\n },\n \"type\": \"LoadBalancer\"\n }\n }\n
However, if there is no K8s CCM plugin implementing external service load-balancer, such services won't be created and remain in pending state forever.
"},{"location":"loxilb-ingress/","title":"k3s/Run loxilb-ingress","text":""},{"location":"loxilb-ingress/#how-to-run-loxilb-ingress","title":"How to run loxilb-ingress","text":"In Kubernetes, there is usually a lot of overlap between network load-balancer and an Ingress functionality. This creates a lot of confusion. Overall, the differences between an Ingress and a load-balancer service can be categorized as follows:
Feature Ingress Load-balancer Protocol HTTP(s) level - Layer7 Network Layer4 Additional Features Ingress Rules, Resource-Backends, Path-Types, HostName Based on L4 Session Params Yaml Manifest apiVersion: networking.k8s.io/v1 type: LoadBalancer When using Ingress, the clients connect to one of the pods through Ingress. The clients first perform a DNS lookup which returns the IP address of the ingress. This IP address is usually funnelled through an L4 load-balancer. The client sends an HTTP(s) request to the Ingress specifying the URL, hostname and other HTTP headers. Based on the HTTP payload, the ingress finds an associated Service and its EndPoint objects. The Ingress then forwards the client's request to the appropriate pod. It can also serve as an HTTPS termination point or as an mTLS hub.
With Kubernetes ingress, we can expose multiple paths with the same service IP. This might be helpful if one is using a public cloud, where one has to pay for managed LB services. Hence, creating a single service and exposing multiple URL paths might be optimal in such use-cases.
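As an illustration, a single Ingress can route two URL paths to two different backend services; the service names below are hypothetical:
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: multi-path-ingress\nspec:\n rules:\n - host: domain1.loxilb.io\n http:\n paths:\n - path: /app1\n pathType: Prefix\n backend:\n service:\n name: app1-service\n port:\n number: 80\n - path: /app2\n pathType: Prefix\n backend:\n service:\n name: app2-service\n port:\n number: 80\n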
loxilb-ingress is optimized for cases which require long-lived connections and https termination with eBPF.
"},{"location":"loxilb-ingress/#getting-started","title":"Getting Started","text":"The following getting started example is based on K3s as the kubernetes platform but should work on any kubernetes implementation or distribution like EKS, GKE etc but should work well with any. We will use K3s as the kubernetes platform.
"},{"location":"loxilb-ingress/#install-k3s","title":"Install K3s","text":"curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik,servicelb\" K3S_KUBECONFIG_MODE=\"644\" sh -\n
"},{"location":"loxilb-ingress/#install-loxilb-as-a-l4-service-lb","title":"Install loxilb as a L4 service LB","text":"Follow any of the loxilb getting started guides as per requirement. In this example, we will run loxilb-lb in external mode. Check all the pods are up and running as expected :
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-cp5lv 1/1 Running 0 3h26m\nkube-system kube-loxilb-755f6fb85-gbg7f 1/1 Running 0 3h26m\nkube-system local-path-provisioner-6f5d79df6-47n2b 1/1 Running 0 3h26m\nkube-system metrics-server-54fd9b65b-b6c6x 1/1 Running 0 3h26m\n
"},{"location":"loxilb-ingress/#prepare-tlsssl-certificates-for-ingress","title":"Prepare TLS/SSL certificates for Ingress","text":"Self-signed TLS/SSL certificates and private keys can be built using various tools like OpenSSL or Minica. Basically, one will need to have two files - server.crt and server.key for loxilb-ingress usage. Once these files are in place, a Kubernetes secret can be created using the following yaml:
apiVersion: v1\ndata:\n server.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUI3RENDQVhPZ0F3SUJBZ0lJU.....\n server.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JRzJBZ0VBTUJBR0J5cUdTTTQ5Q.....\nkind: Secret\nmetadata:\n creationTimestamp: null\n name: loxilb-ssl\n namespace: kube-system\ntype: Opaque\n
The above values are just dummy values but it is important to note that they need to be in base64 format not in pem format. How do we get the base64 values from server.crt and server.key files ?
$ base64 server.crt\nLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUI3RENDQVhPZ0F3SUJBZ0lJU.....\n$ base64 server.key\nLS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JRzJBZ0VBTUJBR0J5cUdTTTQ5Q.....\n
Now, after applying the yaml, we can check the created secret :
$ kubectl get secret -n kube-system loxilb-ssl\nNAME TYPE DATA AGE\nloxilb-ssl Opaque 2 106m\n
In the subsequent steps, this secret loxilb-ssl
will be used throughout.
"},{"location":"loxilb-ingress/#install-loxilb-ingress","title":"Install loxilb-ingress","text":"kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb-ingress/main/manifests/loxilb-ingress-deploy.yml\n
Check status of running pods :
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-cp5lv 1/1 Running 0 3h26m\nkube-system kube-loxilb-755f6fb85-gbg7f 1/1 Running 0 3h26m\nkube-system local-path-provisioner-6f5d79df6-47n2b 1/1 Running 0 3h26m\nkube-system loxilb-ingress-hn5ld 1/1 Running 0 61m\nkube-system metrics-server-54fd9b65b-b6c6x 1/1 Running 0 3h26m\n
"},{"location":"loxilb-ingress/#install-service-backend-app-and-ingress-rules","title":"Install service, backend app and ingress rules","text":" - Create a LB service for exposing ingress ports
apiVersion: v1\nkind: Service\nmetadata:\n name: loxilb-ingress-manager\n namespace: kube-system\n annotations:\n loxilb.io/lbmode: \"onearm\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n app.kubernetes.io/instance: loxilb-ingress\n app.kubernetes.io/name: loxilb-ingress\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n - name: https\n port: 443\n protocol: TCP\n targetPort: 443\n type: LoadBalancer\n
Check the services created :
$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3h28m\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h28m\nkube-system loxilb-ingress-manager LoadBalancer 10.43.136.1 llb-192.168.80.9 80:31686/TCP,443:31994/TCP 62m\nkube-system metrics-server ClusterIP 10.43.236.60 <none> 443/TCP 3h28m\n
At this point, all services exposed via ingress can be accessed via \"192.168.80.9\". This IP could be different as per use-case and scenario, and can then be associated with DNS for name-based access.
- Create backend apps for
domain1.loxilb.io
and configure ingress rules with the following yaml : apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: site\nspec:\n replicas: 1\n selector:\n matchLabels:\n name: site-handler\n template:\n metadata:\n labels:\n name: site-handler\n spec:\n containers:\n - name: blog\n image: ghcr.io/loxilb-io/nginx:stable\n imagePullPolicy: Always\n ports:\n - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: site-handler-service\nspec:\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n selector:\n name: site-handler\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: site-loxilb-ingress\nspec:\n #ingressClassName: loxilb\n tls:\n - hosts:\n - domain1.loxilb.io\n secretName: loxilb-ssl\n rules:\n - host: domain1.loxilb.io\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: site-handler-service\n port:\n number: 80\n
Double check status of pods, services and ingress:
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault site-869fd54548-t82bq 1/1 Running 0 64m\nkube-system coredns-6799fbcd5-cp5lv 1/1 Running 0 3h31m\nkube-system kube-loxilb-755f6fb85-gbg7f 1/1 Running 0 3h31m\nkube-system local-path-provisioner-6f5d79df6-47n2b 1/1 Running 0 3h31m\nkube-system loxilb-ingress-hn5ld 1/1 Running 0 66m\nkube-system metrics-server-54fd9b65b-b6c6x 1/1 Running 0 3h31m\n\n$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3h31m\ndefault site-handler-service ClusterIP 10.43.101.77 <none> 80/TCP 64m\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h31m\nkube-system loxilb-ingress-manager LoadBalancer 10.43.136.1 llb-192.168.80.9 80:31686/TCP,443:31994/TCP 65m\nkube-system metrics-server ClusterIP 10.43.236.60 <none> 443/TCP 3h31m\n\n\n$ kubectl get ingress -A\nNAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE\ndefault site-loxilb-ingress <none> domain1.loxilb.io 80, 443 65m\n
We can follow the above example and create backend apps for other hostnames e.g. domain2.loxilb.io
and configure ingress rules for them.
"},{"location":"loxilb-ingress/#testing-loxilb-ingress","title":"Testing loxilb ingress","text":"If you are testing locally you can simply add the following for dns resolution in your bastion/host :
$ tail -n 2 /etc/hosts\n192.168.80.9 domain1.loxilb.io\n
The above step is similar to adding A records in a DNS like route53. - Finally, try to access the service \"domain1.loxilb.io\" :
$ curl -H \"HOST: domain1.loxilb.io\" https://domain1.loxilb.io\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"loxilb-nginx-ingress/","title":"How-To - Deploy loxilb with ingress-nginx","text":""},{"location":"loxilb-nginx-ingress/#how-to-run-loxilb-with-ingress-nginx","title":"How to run loxilb with ingress-nginx","text":"In Kubernetes, there is usually a lot of overlap between network load-balancer and an Ingress functionality. This creates a lot of confusion. Overall, the differences between an Ingress and a load-balancer service can be categorized as follows:
Feature Ingress Load-balancer Protocol HTTP(s) level - Layer7 Network Layer4 Additional Features Ingress Rules, Resource-Backends Based on L4 Session Params Yaml Manifest apiVersion: networking.k8s.io/v1 type: LoadBalancer With Kubernetes ingress, we can expose multiple paths with the same service IP. This might be helpful if one is using a public cloud, where one has to pay for managed LB services. Hence, creating a single service and exposing multiple URL paths might be optimal in such use-cases.
For this example, we will use ingress-nginx, which is a kubernetes community-driven ingress. loxilb has its own ingress implementation, which is optimized (with eBPF helpers) for cases which require long-lived connections, https termination etc. However, if someone needs to use any other ingress implementation, they can follow this guide, which uses ingress-nginx as the ingress implementation.
"},{"location":"loxilb-nginx-ingress/#considerations","title":"Considerations","text":"This example is not specific to any particular managed kubernetes implementation like EKS, GKE etc but should work well with any. We will simply use K3s as a based kubernetes platform.
"},{"location":"loxilb-nginx-ingress/#install-k3s","title":"Install K3s","text":"curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik,servicelb\" K3S_KUBECONFIG_MODE=\"644\" sh -\n
"},{"location":"loxilb-nginx-ingress/#install-loxilb","title":"Install loxilb","text":"Follow any of the getting started guides as per requirement. Check all the pods are up and running as expected :
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-4n4kl 1/1 Running 0 56m\nkube-system kube-loxilb-b466c99bb-fpgll 1/1 Running 0 56m\nkube-system local-path-provisioner-6f5d79df6-f52sw 1/1 Running 0 56m\nkube-system loxilb-lb-gbkw7 1/1 Running 0 30s\nkube-system metrics-server-54fd9b65b-dchv2 1/1 Running 0 56m\n
"},{"location":"loxilb-nginx-ingress/#install-ingress-nginx","title":"Install ingress-nginx","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/baremetal/deploy.yaml\n
Double confirm :
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ningress-nginx ingress-nginx-admission-create-9vq66 0/1 Completed 0 113s\ningress-nginx ingress-nginx-admission-patch-k4d74 0/1 Completed 1 113s\ningress-nginx ingress-nginx-controller-845698f4f6-xq6hm 1/1 Running 0 113s\nkube-system coredns-6799fbcd5-4n4kl 1/1 Running 0 59m\nkube-system kube-loxilb-b466c99bb-fpgll 1/1 Running 0 59m\nkube-system local-path-provisioner-6f5d79df6-f52sw 1/1 Running 0 59m\nkube-system loxilb-lb-gbkw7 1/1 Running 0 3m33s\nkube-system metrics-server-54fd9b65b-dchv2 1/1 Running 0 59m\n
"},{"location":"loxilb-nginx-ingress/#install-service-backend-app-and-ingress-rules","title":"Install service, backend app and ingress rules","text":" - Create a LB service for exposing ingress ports
apiVersion: v1\nkind: Service\nmetadata:\n name: ingress-nginx-controller-loadbalancer\n namespace: ingress-nginx\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n app.kubernetes.io/component: controller\n app.kubernetes.io/instance: ingress-nginx\n app.kubernetes.io/name: ingress-nginx\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n - name: https\n port: 443\n protocol: TCP\n targetPort: 443\n type: LoadBalancer\n
Check the services created :
$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 61m\ningress-nginx ingress-nginx-controller NodePort 10.43.114.138 <none> 80:30958/TCP,443:31794/TCP 3m22s\ningress-nginx ingress-nginx-controller-admission ClusterIP 10.43.107.66 <none> 443/TCP 3m22s\ningress-nginx ingress-nginx-controller-loadbalancer LoadBalancer 10.43.27.248 llb-192.168.80.10 80:32218/TCP,443:32617/TCP 9s\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 61m\nkube-system loxilb-lb-service ClusterIP None <none> 11111/TCP,179/TCP,50051/TCP 5m2s\nkube-system metrics-server ClusterIP 10.43.20.55 <none> 443/TCP 61m\n
At this point, all services exposed via ingress can be accessed via \"192.168.80.10\". This IP could be different as per use-case and scenario, and can then be associated with DNS for name-based access.
- Create backend apps for
domain1.loxilb.io
and configure ingress rules with the following yaml apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: site\nspec:\n replicas: 1\n selector:\n matchLabels:\n name: site-nginx-frontend\n template:\n metadata:\n labels:\n name: site-nginx-frontend\n spec:\n containers:\n - name: blog\n image: ghcr.io/loxilb-io/nginx:stable\n imagePullPolicy: Always\n ports:\n - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: site-nginx-service\nspec:\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n selector:\n name: site-nginx-frontend\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: site-nginx-ingress\n annotations:\n #app.kubernetes.io/ingress.class: \"nginx\"\n nginx.ingress.kubernetes.io/ssl-redirect: \"false\"\nspec:\n ingressClassName: nginx\n rules:\n - host: domain1.loxilb.io\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: site-nginx-service\n port:\n number: 80\n
Double check status of pods, services and ingress:
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault site-69d64fcd49-j4qhj 1/1 Running 0 46s\ningress-nginx ingress-nginx-admission-create-9vq66 0/1 Completed 0 8m21s\ningress-nginx ingress-nginx-admission-patch-k4d74 0/1 Completed 1 8m21s\ningress-nginx ingress-nginx-controller-845698f4f6-xq6hm 1/1 Running 0 8m21s\nkube-system coredns-6799fbcd5-4n4kl 1/1 Running 0 66m\nkube-system kube-loxilb-b466c99bb-fpgll 1/1 Running 0 66m\nkube-system local-path-provisioner-6f5d79df6-f52sw 1/1 Running 0 66m\nkube-system loxilb-lb-gbkw7 1/1 Running 0 10m\nkube-system metrics-server-54fd9b65b-dchv2 1/1 Running 0 66m\n\n$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 67m\ndefault site-nginx-service ClusterIP 10.43.16.35 <none> 80/TCP 108s\ningress-nginx ingress-nginx-controller NodePort 10.43.114.138 <none> 80:30958/TCP,443:31794/TCP 9m23s\ningress-nginx ingress-nginx-controller-admission ClusterIP 10.43.107.66 <none> 443/TCP 9m23s\ningress-nginx ingress-nginx-controller-loadbalancer LoadBalancer 10.43.27.248 llb-192.168.80.10 80:32218/TCP,443:32617/TCP 6m10s\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 67m\nkube-system loxilb-lb-service ClusterIP None <none> 11111/TCP,179/TCP,50051/TCP 11m\nkube-system metrics-server ClusterIP 10.43.20.55 <none> 443/TCP 67m\n\n\n$ kubectl get ingress -A\nNAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE\ndefault site-nginx-ingress nginx domain1.loxilb.io 10.0.2.15 80 2m10s\n
- Now, lets create backend apps for
domain2.loxilb.io
and configure ingress rules with the following yaml apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: site2\nspec:\n replicas: 1\n selector:\n matchLabels:\n name: site-nginx-frontend2\n template:\n metadata:\n labels:\n name: site-nginx-frontend2\n spec:\n containers:\n - name: blog\n image: ghcr.io/loxilb-io/nginx:stable\n imagePullPolicy: Always\n ports:\n - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: site-nginx-service2\nspec:\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n selector:\n name: site-nginx-frontend2\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: site-nginx-ingress2\n annotations:\n #app.kubernetes.io/ingress.class: \"nginx\"\n nginx.ingress.kubernetes.io/ssl-redirect: \"false\"\nspec:\n ingressClassName: nginx\n rules:\n - host: domain2.loxilb.io\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: site-nginx-service2\n port:\n number: 80\n
Again, we can check the status of pods, service and ingress:
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault site-69d64fcd49-j4qhj 1/1 Running 0 9m12s\ndefault site2-7fff6cfbbf-8d6rp 1/1 Running 0 2m34s\ningress-nginx ingress-nginx-admission-create-9vq66 0/1 Completed 0 16m\ningress-nginx ingress-nginx-admission-patch-k4d74 0/1 Completed 1 16m\ningress-nginx ingress-nginx-controller-845698f4f6-xq6hm 1/1 Running 0 16m\nkube-system coredns-6799fbcd5-4n4kl 1/1 Running 0 74m\nkube-system kube-loxilb-b466c99bb-fpgll 1/1 Running 0 74m\nkube-system local-path-provisioner-6f5d79df6-f52sw 1/1 Running 0 74m\nkube-system loxilb-lb-gbkw7 1/1 Running 0 18m\nkube-system metrics-server-54fd9b65b-dchv2 1/1 Running 0 74m\n\n$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 75m\ndefault site-nginx-service ClusterIP 10.43.16.35 <none> 80/TCP 9m32s\ndefault site-nginx-service2 ClusterIP 10.43.107.99 <none> 80/TCP 2m54s\ningress-nginx ingress-nginx-controller NodePort 10.43.114.138 <none> 80:30958/TCP,443:31794/TCP 17m\ningress-nginx ingress-nginx-controller-admission ClusterIP 10.43.107.66 <none> 443/TCP 17m\ningress-nginx ingress-nginx-controller-loadbalancer LoadBalancer 10.43.27.248 llb-192.168.80.10 80:32218/TCP,443:32617/TCP 13m\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 75m\nkube-system loxilb-lb-service ClusterIP None <none> 11111/TCP,179/TCP,50051/TCP 18m\nkube-system metrics-server ClusterIP 10.43.20.55 <none> 443/TCP 75m\n\n$ kubectl get ingress -A\nNAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE\ndefault site-nginx-ingress nginx domain1.loxilb.io 10.0.2.15 80 9m49s\ndefault site-nginx-ingress2 nginx domain2.loxilb.io 10.0.2.15 80 3m11s\n
"},{"location":"loxilb-nginx-ingress/#test","title":"Test","text":"If you are testing locally you can simply add the following for dns resolution in your bastion/host :
$ tail -n 2 /etc/hosts\n192.168.80.10 domain1.loxilb.io\n192.168.80.10 domain2.loxilb.io\n
The above step is similar to adding A records in a DNS like route53. -
Finally, try to access the service \"domain1.loxilb.io\" :
$ curl -H \"HOST: domain1.loxilb.io\" domain1.loxilb.io\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
-
And then try to access services domain2.loxilb.io:
$ curl -H \"HOST: domain2.loxilb.io\" domain2.loxilb.io\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"loxilbebpf/","title":"loxilb eBPF implementation details","text":"In this section, we will look into details of loxilb ebpf implementation in little details and try to check what goes on under the hood. When loxilb is build, it builds two object files as follows :
llb@nd2:~/loxilb$ ls -l /opt/loxilb/\ntotal 396\ndrwxrwxrwt 3 root root 0 6?? 20 11:17 dp\n-rw-rw-r-- 1 llb llb 305536 6?? 29 09:39 llb_ebpf_main.o\n-rw-rw-r-- 1 llb llb 95192 6?? 29 09:39 llb_xdp_main.o\n
As the name suggests and based on the hook point, the xdp version does XDP packet processing while the ebpf version is used at the TC layer for TC eBPF processing. Interestingly enough, the packet forwarding code is largely agnostic of its final hook point due to the usage of a light abstraction layer that hides differences between the eBPF and XDP layers.
Now this begs the question: why separate hook points, and how does it all work together ? loxilb does the bulk of its processing at the TC eBPF layer, as this layer is most optimized for the L4+ processing needed for loxilb operation. XDP's frame format is different from what is used by the skb (the linux kernel's generic socket buffer). This makes it very difficult (if not impossible) to do tcp checksum offload and other such features that have been used by the linux networking stack for quite some time now. In short, if we need to do such operations, XDP performance will be inherently slow. XDP as such is perfect for quick operations at the L2 layer. loxilb uses XDP to do certain operations like mirroring. Due to how TC eBPF works, it is difficult to work with multiple packet copies, so loxilb's TC eBPF offloads some functionality to the XDP layer in such special cases.
"},{"location":"loxilbebpf/#loading-of-loxilb-ebpf-program","title":"Loading of loxilb eBPF program","text":"loxilb's goLang based agent by default loads the loxilb ebpf programs in all the interfaces(only physical/real/bond/wireguard) available in the system. As loxilb is designed to run in its own docker/container, this is convenient for users who dont want to have to manually load/unload eBPF programs. However, it is still possible to do so manually if need arises :
To load :
ntc filter add dev eth1 ingress bpf da obj /opt/loxilb/llb_ebpf_main.o sec tc_packet_hook0\n
To unload:
ntc filter del dev eth1 ingress\n
To check:
root@nd2:/home/llb# ntc filter show dev eth1 ingress\nfilter protocol all pref 49152 bpf chain 0 \nfilter protocol all pref 49152 bpf chain 0 handle 0x1 llb_ebpf_main.o:[tc_packet_hook0] direct-action not_in_hw id 8715 tag 43a829222e969bce jited \n
Please note that ntc is the customized tc tool from the iproute2 package, which can be found in loxilb's repository
"},{"location":"loxilbebpf/#entry-points-of-loxilb-ebpf","title":"Entry points of loxilb eBPF","text":"loxilb's eBPF code is usually divided into two program sections with the following entry functions :
- tc_packet_func
This, along with the subsequent code, does the majority of the packet processing. If conntrack entries are in established state, this is also responsible for packet tx. However, if the conntrack entry for a particular packet flow is not established, it makes a bpf tail call to the tc_packet_func_slow
- tc_packet_func_slow
This is responsible mainly for doing NAT lookup and stateful conntrack implementation. Once conntrack entry transitions to established state, the forwarding then can happen directly from tc_packet_func
loxilb's XDP code is contained in the following section :
- xdp_packet_func
This is the entry point for packet processing when hook point is XDP instead of TC eBPF
"},{"location":"loxilbebpf/#pinned-maps-of-loxilb-ebpf","title":"Pinned Maps of loxilb eBPF","text":"All maps used by loxilb eBPF are mounted in the file-system as below :
root@nd2:/home/llb/loxilb# ls -lart /opt/loxilb/dp/\ntotal 4\ndrwxrwxrwt 3 root root 0 6?? 20 11:17 .\ndrwxr-xr-x 3 root root 4096 6?? 29 10:19 ..\ndrwx------ 3 root root 0 6?? 29 10:19 bpf\nroot@nd2:/home/llb/loxilb# mount | grep bpf\nnone on /opt/netlox/loxilb type bpf (rw,relatime)\n\nroot@nd2:/home/llb/loxilb# ls -lart /opt/loxilb/dp/bpf/\ntotal 0\ndrwxrwxrwt 3 root root 0 6?? 20 11:17 ..\nlrwxrwxrwx 1 root root 0 6?? 20 11:17 xdp -> /opt/loxilb/dp/bpf//tc/\ndrwx------ 3 root root 0 6?? 20 11:17 tc\nlrwxrwxrwx 1 root root 0 6?? 20 11:17 ip -> /opt/loxilb/dp/bpf//tc/\n-rw------- 1 root root 0 6?? 29 10:19 xfis\n-rw------- 1 root root 0 6?? 29 10:19 xfck\n-rw------- 1 root root 0 6?? 29 10:19 xctk\n-rw------- 1 root root 0 6?? 29 10:19 tx_intf_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 tx_intf_map\n-rw------- 1 root root 0 6?? 29 10:19 tx_bd_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 tmac_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 tmac_map\n-rw------- 1 root root 0 6?? 29 10:19 smac_map\n-rw------- 1 root root 0 6?? 29 10:19 rt_v6_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 rt_v4_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 rt_v4_map\n-rw------- 1 root root 0 6?? 29 10:19 polx_map\n-rw------- 1 root root 0 6?? 29 10:19 pkts\n-rw------- 1 root root 0 6?? 29 10:19 pkt_ring\n-rw------- 1 root root 0 6?? 29 10:19 pgm_tbl\n-rw------- 1 root root 0 6?? 29 10:19 nh_map\n-rw------- 1 root root 0 6?? 29 10:19 nat_v4_map\n-rw------- 1 root root 0 6?? 29 10:19 mirr_map\n-rw------- 1 root root 0 6?? 29 10:19 intf_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 intf_map\n-rw------- 1 root root 0 6?? 29 10:19 fc_v4_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 fc_v4_map\n-rw------- 1 root root 0 6?? 29 10:19 fcas\n-rw------- 1 root root 0 6?? 29 10:19 dmac_map\n-rw------- 1 root root 0 6?? 29 10:19 ct_v4_map\n-rw------- 1 root root 0 6?? 29 10:19 bd_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 acl_v6_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 acl_v4_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 acl_v4_map\n
Using bpftool, it is easy to check state of these maps as follows :
root@nd2:/home/llb# bpftool map dump pinned /opt/loxilb/dp/bpf/intf_map \n[{\n \"key\": {\n \"ifindex\": 2,\n \"ing_vid\": 0,\n \"pad\": 0\n },\n \"value\": {\n \"ca\": {\n \"act_type\": 11,\n \"ftrap\": 0,\n \"oif\": 0,\n \"cidx\": 0\n },\n \"\": {\n \"set_ifi\": {\n \"xdp_ifidx\": 1,\n \"zone\": 0,\n \"bd\": 3801,\n \"mirr\": 0,\n \"polid\": 0,\n \"r\": [0,0,0,0,0,0\n ]\n }\n }\n }\n },{\n \"key\": {\n \"ifindex\": 3,\n \"ing_vid\": 0,\n \"pad\": 0\n },\n \"value\": {\n \"ca\": {\n \"act_type\": 11,\n \"ftrap\": 0,\n \"oif\": 0,\n \"cidx\": 0\n },\n \"\": {\n \"set_ifi\": {\n \"xdp_ifidx\": 3,\n \"zone\": 0,\n \"bd\": 3803,\n \"mirr\": 0,\n \"polid\": 0,\n \"r\": [0,0,0,0,0,0\n ]\n }\n }\n }\n }\n]\n
As our development progresses, we will keep updating details about these maps' internals
"},{"location":"loxilbebpf/#loxilb-ebpf-pipeline-at-a-glance","title":"loxilb eBPF pipeline at a glance","text":"The following figure shows a very high-level diagram of packet flow through loxilb eBPF pipeline :
We use eBPF tail calls to jump from one section to another, mainly because there is a clear separation between CT (conntrack) functionality and packet-forwarding logic. At the same time, since the kernel's built-in eBPF verifier imposes a maximum code size limit for a single program/section, tail calls also help to circumvent this limit.
"},{"location":"microk8s_quick_start_incluster/","title":"MicroK8s/loxilb in-cluster mode","text":""},{"location":"microk8s_quick_start_incluster/#quick-start-guide-with-microk8s-and-loxilb-in-cluster-mode","title":"Quick Start Guide with MicroK8s and LoxiLB in-cluster mode","text":"This document will explain how to install a MicroK8s cluster with loxilb as a serviceLB provider running in-cluster mode.
"},{"location":"microk8s_quick_start_incluster/#prerequisites","title":"Prerequisite(s)","text":" - Single node with Linux
"},{"location":"microk8s_quick_start_incluster/#topology","title":"Topology","text":"For quickly bringing up loxilb in-cluster and MicroK8s, we will be deploying all components in a single node :
loxilb and kube-loxilb components run as pods managed by kubernetes(MicroK8s) in this scenario.
"},{"location":"microk8s_quick_start_incluster/#setup-microk8s-in-a-single-node","title":"Setup MicroK8s in a single-node","text":"# MicroK8s installation steps\nsudo apt-get update\nsudo apt install -y snapd\nsudo snap install microk8s --classic --channel=1.28/stable\n
"},{"location":"microk8s_quick_start_incluster/#check-microk8s-status","title":"Check MicroK8s status","text":"$ sudo microk8s status --wait-ready\nmicrok8s is running\nhigh-availability: no\n datastore master nodes: 127.0.0.1:19001\n datastore standby nodes: none\naddons:\n enabled:\n dns # (core) CoreDNS\n ha-cluster # (core) Configure high availability on the current node\n helm # (core) Helm - the package manager for Kubernetes\n helm3 # (core) Helm 3 - the package manager for Kubernetes\n disabled:\n cert-manager # (core) Cloud native certificate management\n cis-hardening # (core) Apply CIS K8s hardening\n community # (core) The community addons repository\n dashboard # (core) The Kubernetes dashboard\n gpu # (core) Automatic enablement of Nvidia CUDA\n host-access # (core) Allow Pods connecting to Host services smoothly\n hostpath-storage # (core) Storage class; allocates storage from host directory\n ingress # (core) Ingress controller for external access\n kube-ovn # (core) An advanced network fabric for Kubernetes\n mayastor # (core) OpenEBS MayaStor\n metrics-server # (core) K8s Metrics Server for API access to service metrics\n minio # (core) MinIO object storage\n observability # (core) A lightweight observability stack for logs, traces and metrics\n prometheus # (core) Prometheus operator for monitoring and logging\n rbac # (core) Role-Based Access Control for authorisation\n registry # (core) Private image registry exposed on localhost:32000\n rook-ceph # (core) Distributed Ceph storage using Rook\n storage # (core) Alias to hostpath-storage add-on, deprecated\n
"},{"location":"microk8s_quick_start_incluster/#how-to-deploy-loxilb","title":"How to deploy loxilb ?","text":"loxilb can be deloyed by using the following command in the MicroK8s node
sudo microk8s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/microk8s-incluster/loxilb.yml\n
"},{"location":"microk8s_quick_start_incluster/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb.
wget https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/microk8s-incluster/kube-loxilb.yml\n
kube-loxilb.yaml args:\n #- --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setRoles=0.0.0.0\n #- --monitor\n #- --setBGP\n
In the above snippet, loxiURL is commented out which denotes to utilize in-cluster mode to discover loxilb pods automatically. External CIDR represents the IP pool from where serviceLB VIP will be allocated. Apply after making changes (if any) :
sudo microk8s kubectl apply -f kube-loxilb.yaml\n
"},{"location":"microk8s_quick_start_incluster/#create-the-service","title":"Create the service","text":"sudo microk8s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/microk8s-incluster/tcp-svc-lb.yml\n
"},{"location":"microk8s_quick_start_incluster/#check-status-of-various-components-in-microk8s-node","title":"Check status of various components in MicroK8s node","text":"In MicroK8s node:
## Check the pods created\n$ sudo microk8s kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system calico-node-fjfvz 1/1 Running 0 10m\nkube-system coredns-864597b5fd-xtmt4 1/1 Running 0 10m\nkube-system calico-kube-controllers-77bd7c5b-4kldr 1/1 Running 0 10m\nkube-system loxilb-lb-7xctp 1/1 Running 0 9m11s\nkube-system kube-loxilb-6f44cdcdf5-4864j 1/1 Running 0 7m40s\ndefault tcp-onearm-test 1/1 Running 0 6m49s\n\n## Check the services created\n$ sudo microk8s kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.152.183.1 <none> 443/TCP 18m\ntcp-lb-onearm LoadBalancer 10.152.183.216 llb-192.168.82.100 56002:32186/TCP 14m\n
In loxilb pod, we can check internal LB rules: $ sudo microk8s kubectl exec -it -n kube-system loxilb-lb-7xctp -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 32186 | 1 | active | 25:1842 |\n
"},{"location":"microk8s_quick_start_incluster/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
For more detailed information on incluster deployment of loxilb with bgp in a full-blown cluster, kindly follow this blog. All of the above steps are also available as part of the loxilb CICD workflow. Follow the steps below to replicate the above (please note that you will need the vagrant tool installed to run them):
$ git clone https://github.com/loxilb-io/loxilb.git\n$ cd cicd/microk8s-incluster/\n\n# To setup the single node microk8s setup and loxilb in-cluster\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# To login to the node and check the installation\n$ vagrant ssh k8s-node1\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"multi-cloud-ha/","title":"How-To - Deploy loxilb with multi-cloud HA support","text":""},{"location":"multi-cloud-ha/#deploy-loxilb-with-multi-cloud-ha-support","title":"Deploy LoxiLB with multi-cloud HA support","text":"LoxiLB supports stateful HA configuration in various cloud environments such as AWS. Especially for AWS, one can configure HA using the Floating IP pattern, together with LoxiLB.
"},{"location":"multi-cloud-ha/#overall-scenario","title":"Overall Scenario","text":"Overall scenario will look like this:
The setup configuration for Multi-Cloud/Multi-region will be similar to the Multi-AZ-HA configuration.
"},{"location":"multi-cloud-ha/#important-considerations","title":"Important considerations","text":" - The steps mentioned in the above documentation are for a single AWS region. For cross-region, similar configuration needs to be done in other AWS regions.
- Two LoxiLB instances - loxilb1 and loxilb2 - will be deployed in different AZs per region. These two loxilbs form an HA pair and operate in active-backup roles.
- One instance of kube-loxilb will be deployed per region.
- Every region\u2019s private CIDR will be different and one region\u2019s privateCIDR should be reachable to others through VPC peering.
- As elastic IP is bound to a particular region, it is impossible to provide connection synchronization for cross-region HA. Only warm stand-by cross-region HA is supported.
- Full support for elastic IP in GCP is not available yet. For testing HA with GCP, run a single loxilb and kube-loxilb with the standard configuration. There will not be any privateCIDR in kube-loxilb.yaml; mention the loxilb IP as the externalCIDR.
To summarize, when a failover occurs within the region, the public ElasticIP address is always associated with the active LoxiLB instance, so users who were previously accessing EKS using the same ElasticIP address can continue to do so without being affected by any node failure or other issues. When a region-wise failover occurs, DNS will redirect the requests to a different region.
"},{"location":"multi-cloud-ha/#an-example-configuration","title":"An example configuration","text":"Please follow the steps to create cluster and prepare VM instances mentioned here.
"},{"location":"multi-cloud-ha/#configuring-loxilb-ec2-instances","title":"Configuring LoxiLB EC2 Instances","text":""},{"location":"multi-cloud-ha/#kube-loxilb-deployment","title":"kube-loxilb deployment","text":"kube-loxilb is a K8s operator for LoxiLB. Download the manifest file required for your deployment in EKS. Create the ServiceAccount and other necessary settings for the cluster before start deploying kube-loxilb per cluster.
---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - services/status\n verbs:\n - get\n - watch\n - list\n - patch\n - update\n - apiGroups:\n - gateway.networking.k8s.io\n resources:\n - gatewayclasses\n - gatewayclasses/status\n - gateways\n - gateways/status\n - tcproutes\n - udproutes\n verbs: [\"get\", \"watch\", \"list\", \"patch\", \"update\"]\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - watch\n - list\n - apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n - apiGroups:\n - authorization.k8s.io\n resources:\n - subjectaccessreviews\n verbs:\n - create\n - apiGroups:\n - bgppeer.loxilb.io\n resources:\n - bgppeerservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kube-loxilb\nsubjects:\n - kind: ServiceAccount\n name: kube-loxilb\n namespace: kube-system\n
"},{"location":"multi-cloud-ha/#change-the-args-inside-the-yaml-belowas-applicable-and-install-it-for-every-region","title":"Change the args inside the yaml below(as applicable) and install it for every region.","text":"kube-loxilb-osaka-deployment.yaml
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb-osaka\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:aws-support\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --loxiURL=http://192.168.218.60:11111,192.168.218.61:11111\n - --externalCIDR=13.208.X.X/32\n - --privateCIDR=192.168.248.254/32\n - --setLBMode=2\n - --zone=osaka\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\nadd: [\"NET_ADMIN\", \"NET_RAW\"]\n
kube-loxilb-seoul-deployment.yaml
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb-seoul\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:aws-support\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --loxiURL=http://192.168.119.11:11111,192.168.119.12:11111\n - --externalCIDR=14.112.X.X/32\n - --privateCIDR=192.168.150.254/32\n - --setLBMode=2\n - --zone=seoul\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\nadd: [\"NET_ADMIN\", \"NET_RAW\"]\n
For every region, edit the corresponding kube-loxilb-<region>-deployment.yaml: * Modify loxiURL with the IPs of the LoxiLB EC2 instances created in the region above. * For externalCIDR, specify the Elastic IP created above. * For privateCIDR, specify the VIP that will be associated with the Elastic IP."},{"location":"multi-cloud-ha/#run-loxilb-pods","title":"Run LoxiLB Pods","text":""},{"location":"multi-cloud-ha/#install-docker-on-loxilb-instances","title":"Install docker on LoxiLB instance(s)","text":"LoxiLB is deployed as a container on each instance. To use containers, docker must first be installed on the instance. The docker installation guide can be found here
"},{"location":"multi-cloud-ha/#running-loxilb-container","title":"Running LoxiLB container","text":"The following command is for a LoxiLB instance (loxilb1) using subnet-a.
sudo docker run -u root --cap-add SYS_ADMIN \\\n --restart unless-stopped \\\n --net=host \\\n --privileged \\\n -dit \\\n -v /dev/log:/dev/log -e AWS_REGION=ap-northeast-3 \\\n --name loxilb \\\n ghcr.io/loxilb-io/loxilb:aws-support \\\n --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.218.61 --self=0\n
- In the cloudcidrblock option, specify the IP range that includes the VIP set in kube-loxilb's privateCIDR. The master LoxiLB instance uses the value set here to create a new subnet in the AZ where it is located and uses it for HA operation.
- The cluster option specifies the IP of the partner instance (LoxiLB instance using subnet-b) for which HA is configured.
- The self option is set to 0. It is just an identifier used internally to distinguish each instance.
Similarly, we can run the loxilb2 instance on the second EC2 instance using subnet-b:
sudo docker run -u root --cap-add SYS_ADMIN \\\n --restart unless-stopped \\\n --net=host \\\n --privileged \\\n -dit \\\n -v /dev/log:/dev/log -e AWS_REGION=ap-northeast-3 \\\n --name loxilb \\\n ghcr.io/loxilb-io/loxilb:aws-support \\\n --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.218.60 --self=1\n
Once the containers are running, the HA status of each instance can be checked as follows:
ubuntu@ip-192-168-218-60:~$ sudo docker exec -ti loxilb bash\nroot@ip-192-168-228-108:/# loxicmd get ha\n| INSTANCE | HASTATE |\n|----------|---------|\n| default | MASTER |\nroot@ip-192-168-218-60:/#\n
"},{"location":"multi-cloud-ha/#creating-a-service","title":"Creating a service","text":"Let's create a test service to test HA functionality. Below are the manifest files for the nginx pod and service that we will use for testing.
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
After creating an nginx service with the above, we can see that the ElasticIP has been designated as the externalIP of the service. LEIS6N3:~/workspace/aws-demo$ kubectl apply -f nginx.yaml\nservice/nginx-lb1 created\npod/nginx-test created\nLEIS6N3:~/workspace/aws-demo$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.100.0.1 <none> 443/TCP 22h\nnginx-lb1 LoadBalancer 10.100.178.3 llb-13.208.X.X 55002:32403/TCP 15s\n
"},{"location":"nat/","title":"NAT Modes of loxilb","text":""},{"location":"nat/#nat-modes-in-loxilb","title":"NAT modes in loxilb","text":"loxilb implements a variety of NAT modes to achieve load-balancing for different scenarios as far as L4 load-balancing is concerned. These NAT modes have subtle differences and this guide will shed light on these details
"},{"location":"nat/#1-normal-nat","title":"1. Normal NAT","text":"This is basic NAT mode used by loxilb. In this mode, loxilb employs simple DNAT for incoming requests i.e destination IP (which is also the service IP) is changed to the chosen end-point IP. For the outgoing responses it does the exactly opposite(SNAT). Since loxilb relies on statefulness for this mode, it is necessary that return packets also traverse through loxilb. The following figure illustrates this operation -
In this mode, the original source IP is preserved all the way to the end-point, which provides the best visibility for anyone needing it. This also means the end-points should know how to reach the source.
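As an illustration, a rule in this default mode can be created with loxicmd roughly as follows (a minimal sketch with placeholder IPs/ports, mirroring the examples used elsewhere in this documentation): # Normal NAT (default) mode - the service VIP is DNAT'ed to one of the end-points\nloxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1\n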
"},{"location":"nat/#2-one-arm","title":"2. One-ARM","text":"Traditionally one-arm NAT mode meant that the LB node used to have a single arm (or connection) to the LAN instead of separate ingress and egress networks. loxilb's one-arm NAT mode is a little extended version of the traditional one-arm mode. In one-arm mode, loxilb chooses its LAN IP as source-IP when sending incoming requests towards end-points nodes. Even if the originating source is not on the same LAN, this is loxilb's default behaviour for one arm mode.
"},{"location":"nat/#3-full-nat","title":"3. Full-NAT","text":"In the full-NAT mode, loxilb replaces the source-IP of an incoming request to a special instance IP. This instance IP is associated with each instance in a cluster deployment and maintained internally by loxilb. In this mode, various instances of loxilb cluster will have unique instance IPs and each of them will be advertised by BGP towards the end-point to set the return PATH accordingly. This helps in optimal distribution and spread of traffic in case an active-active clustering mode is desired.
"},{"location":"nat/#4-l2-dsr-mode","title":"4. L2-DSR mode","text":"In L2-DSR (direct server return) mode, loxilb performs load-balancing operation but without changing any IP addresses. It just updates the layer2 header as per selected end-point. Also in DSR mode, loxilb does not need statefulness and end-point can choose a different return path not involving loxilb. This maybe advantageous for certain scenarios where there is a need to reduce load in LB nodes by allowing return traffic to bypass the LB.
"},{"location":"nat/#5-l3-dsr-mode","title":"5. L3-DSR mode","text":"In L3-DSR (direct server return) mode, loxilb performs load-balancing operation but encapsulates the original payload with an IPinIP tunnel towards the end-points. Also like L2-DSR mode, loxilb does not need statefulness and end-point can choose a different/direct return path not involving loxilb.
"},{"location":"perf-multi/","title":"Perf multi","text":""},{"location":"perf-multi/#bare-metal-performance","title":"Bare-Metal Performance","text":"The topology for this test is as follows :
In this test, all the hosts, end-points and the load-balancer run on separate dedicated servers/nodes. Server specs used - Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, 40-core, 125GB RAM, Kernel 5.15.0-52-generic. The following command can be used to configure loxilb for the given topology:
# loxicmd create lb 20.20.20.1 --tcp=2020:5001 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
The default mode of LoxiLB is RR (round-robin), while other popular distribution modes such as consistent hash (Maglev), WRR etc. are also supported. We run the popular netperf tool for the above topology. A quick explanation of the terminologies used :
RPS - requests per second. Given a fixed number of connections, this denotes how many requests/messages per second can be supported. CPS - connections per second. This denotes how many new TCP connection setups/teardowns can be supported per second, and is hence one of the most important indicators of load-balancer performance. CRR - connect/request/response. This is the same as CPS, but the netperf tool uses this term to refer to CPS as part of its test scenario. RR - request/response. This is another netperf test option. We used it to measure min and avg latency.
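For reference, the netperf invocations behind these metrics look roughly like the following (a sketch assuming a standard netperf/netserver setup towards the service VIP of the topology above; the actual test scripts are linked in the notes below): # Connection setup/teardown rate (CPS/CRR)\nnetperf -H 20.20.20.1 -t TCP_CRR -l 30\n\n# Request/response latency (RR)\nnetperf -H 20.20.20.1 -t TCP_RR -l 30\n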
We are comparing loxilb with ipvs and haproxy.
The results are as follows :
"},{"location":"perf-multi/#connections-per-second-tcp_crr","title":"Connections per second (TCP_CRR)","text":""},{"location":"perf-multi/#requests-per-second-tcp_rr","title":"Requests per second (TCP_RR)","text":""},{"location":"perf-multi/#minimum-latency","title":"Minimum Latency","text":""},{"location":"perf-multi/#average-latency","title":"Average Latency","text":""},{"location":"perf-multi/#conclusionnotes-","title":"Conclusion/Notes -","text":" - loxilb provides enhanced performance across the spectrum of tests. There is a noticeable gain in CPS.
- loxilb's CPS scales linearly with number of cores
- haproxy version used - 2.0.29
- netperf test scripts can be found here
"},{"location":"perf-single/","title":"Perf single","text":""},{"location":"perf-single/#single-node-performance","title":"Single-node performance","text":"The hosts/LB/end-points are run as docker pods inside a single server/node. The topology is as follows :
The following command can be used to configure lb for the given topology:
# loxicmd create lb 20.20.20.1 --tcp=2020:5001 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
The testing is done with full stateful connection tracking enabled (non-DSR mode). To create the above topology for testing loxilb, users can follow this guide. A go webserver with an empty response is used for benchmark purposes. The code is as follows: package main\n\nimport (\n \"log\"\n \"net/http\"\n)\n\nfunc main() {\n http.HandleFunc(\"/\", func(w http.ResponseWriter, r *http.Request) {\n\n })\n if err := http.ListenAndServe(\":5001\", nil); err != nil {\n log.Fatal(\"ListenAndServe: \", err)\n }\n}\n
The above code runs in each of the load-balancer end-points as follows: go run ./webserver.go\n
wrk based HTTP benchmarking is one of the tools used in this test. This tool is run with the following parameters:
root@loxilb:/home/loxilb # wrk -t8 -c400 -d30s http://20.20.20.1:2020/\n
- where t: No. of threads, c: No. of connections, d: Duration of the test. We also run other popular performance testing tools like netperf and iperf along with wrk for the above topology. A quick explanation of the terminologies used :
RPS - requests per second. Given a fixed number of connections, this denotes how many requests/messages per second can be supported. CPS - connections per second. This denotes how many new TCP connection setups/teardowns can be supported per second, and is hence one of the most important indicators of load-balancer performance. CRR - connect/request/response. This is the same as CPS, but the netperf tool uses this term to refer to CPS as part of its test scenario. RR - request/response. This is another netperf test option. We used it to measure min and avg latency.
The results are as follows :
"},{"location":"perf-single/#case-1-system-configuration-intelr-coretm-i7-4770hq-cpu-220ghz-3-core-6gb-ram-kernel-5150-52-generic","title":"Case 1. System Configuration - Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz , 3-Core, 6GB RAM, Kernel 5.15.0-52-generic","text":"Tool loopback loxilb ipvs wrk(RPS) 38040 44833 40012 wrk(CPS) n/a 7020 6048 netperf(CRR) n/a 11674 9901 netperf(RR min) 12.31 us 15.2us 19.75us netperf(RR avg) 61.27 us 78.1us 131us iperf 43.5Gbps 41.2Gbps 34.4Gbps"},{"location":"perf-single/#case-2-system-configuration-intelr-xeonr-silver-4210r-cpu-240ghz-40-core-124gb-ram-kernel-5150-52-generic","title":"Case 2. System Configuration - Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, 40-core, 124GB RAM, Kernel 5.15.0-52-generic","text":"Tool loopback loxilb ipvs haproxy wrk(RPS) 406953 421746 388021 217004 wrk(CPS) n/a 45064 24400 22000 netperf(CRR) n/a 375k 174k 21k netperf(RR min) n/a 12 us 15us 27us netperf(RR avg) n/a 15.78 us 18.25us 35.76us iperf 456Gbps 402Gbps 374Gbps 91Gbps"},{"location":"perf-single/#conclusionnotes-","title":"Conclusion/Notes -","text":" - loxilb provides enhanced performance across the spectrum of tests. There is a noticeable gain in CPS
- loxilb's CPS is limited only by the fact that this is a single node scenario with shared resources
- \"loopback\" here refers to client and server running in the same host/pod. This is supposed to be the best case scenario but since there is only a single end-point for lo compared to 3 for LB testing , hence the RPS measurements are on the lower side.
- iperf is run with 100 threads ( iperf X.X.X.X -P 100 )
- haproxy version used - 2.0.29
- netperf test scripts can be found here
"},{"location":"perf-single/#watch-the-video","title":"Watch the video","text":"https://github.com/loxilb-io/loxilbdocs/assets/106566094/6cf85c4e-7cb4-4d23-b5f6-a7854e07cd7b
"},{"location":"perf/","title":"loxilb Performance","text":" - Single-node (cnf) performance report
- Bare-metal performance report
"},{"location":"quick_start_with_cilium/","title":"K3s/loxilb with cilium","text":""},{"location":"quick_start_with_cilium/#loxilb-quick-start-guide-with-cilium","title":"LoxiLB Quick Start Guide with Cilium","text":"This guide will explain how to:
- Deploy a single-node K3s cluster with cilium networking
- Expose services with loxilb as an external load balancer
"},{"location":"quick_start_with_cilium/#pre-requisite","title":"Pre-requisite","text":" - Single node with Linux
- Install docker runtime to manage loxilb
"},{"location":"quick_start_with_cilium/#topology","title":"Topology","text":"For quickly bringing up loxilb with cilium CNI, we will be deploying all components in a single node :
loxilb and cilium both uses ebpf technology for load balancing and implementing policies. So, to avoid the conflict we have to run them in separate network space. This is reason we are going to run loxilb in a docker and use macvlan for the incoming traffic. Also, this is to mimic a topology close to cloud-hosted k8s where LB nodes run outside a cluster.
"},{"location":"quick_start_with_cilium/#install-loxilb-docker","title":"Install loxilb docker","text":"## Set promisc mode for mac-vlan to work\nsudo ifconfig eth1 promisc\n\nsudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged --entrypoint /root/loxilb-io/loxilb/loxilb -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# Create mac-vlan on top of underlying eth1 interface\nsudo docker network create -d macvlan -o parent=eth1 --subnet 192.168.82.0/24 --gateway 192.168.82.1 --aux-address 'host=192.168.82.252' llbnet\n\n# Assign mac-vlan to loxilb docker with specified IP (which will be used as LB VIP)\nsudo docker network connect llbnet loxilb --ip=192.168.82.100\n\n# Add iptables rule to allow traffic from source IP(192.168.82.1) to loxilb\nsudo iptables -A DOCKER -s 192.168.82.1 -j ACCEPT\n
"},{"location":"quick_start_with_cilium/#setup-k3s-with-cilium","title":"Setup K3s with cilium","text":"#K3s installation\ncurl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik --disable servicelb --disable-cloud-controller \\\n--flannel-backend=none \\\n--disable-network-policy\" sh -\n\n#Install Cilium\nCILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)\nCLI_ARCH=amd64\nif [ \"$(uname -m)\" = \"aarch64\" ]; then CLI_ARCH=arm64; fi\ncurl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}\nsha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum\nsudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin\nrm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}\nmkdir -p ~/.kube/\nsudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config\ncilium install\n\necho $MASTER_IP > /vagrant/master-ip\nsudo cp /var/lib/rancher/k3s/server/node-token /vagrant/node-token\nsudo cp /etc/rancher/k3s/k3s.yaml /vagrant/k3s.yaml\nsudo sed -i -e \"s/127.0.0.1/${MASTER_IP}/g\" /vagrant/k3s.yaml\n
"},{"location":"quick_start_with_cilium/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used to deploy loxilb with Kubernetes.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
kube-loxilb.yaml
args:\n - --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setMode=1\n
In the above snippet, loxiURL uses the docker interface IP of loxilb, which can be different for each setup. Apply in k8s:
kubectl apply -f kube-loxilb.yaml\n
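Note: the loxiURL above must point to the IP assigned to the loxilb docker. If unsure, it can be looked up with something like the following (a sketch using docker inspect): docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' loxilb\n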
"},{"location":"quick_start_with_cilium/#create-the-service","title":"Create the service","text":"kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/docker-k3s-cilium/tcp-svc-lb.yml\n
"},{"location":"quick_start_with_cilium/#check-the-status","title":"Check the status","text":"In k3s:
kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 80m\ntcp-lb-onearm LoadBalancer 10.43.183.123 llb-192.168.82.100 56002:30001/TCP 6m50s\n
In loxilb docker: $ sudo docker exec -it loxilb loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 12:880 |\n
"},{"location":"quick_start_with_cilium/#connect-from-client","title":"Connect from client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
All of the above steps are also available as part of loxilb CICD workflow. Follow the steps below to replicate the above:
$ cd cicd/docker-k3s-cilium/\n\n# To setup the single node k3s setup with cilium as CNI and loxilb as external load balancer\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"requirements/","title":"System Requirements","text":""},{"location":"requirements/#loxilb-system-requirements","title":"LoxiLB system requirements","text":"To run loxilb, we need to have the following -
"},{"location":"requirements/#host-os-requirements","title":"Host OS requirements","text":"To install LoxiLB software packages, you need the 64-bit version of one of these OS versions:
- Ubuntu 20.04(LTS)
- Ubuntu 22.04(LTS)
- Fedora 36
- RockyOS
- Enterprise Redhat (Planned)
- Windows Server(Planned)
"},{"location":"requirements/#kernel-requirements","title":"Kernel Requirements","text":" - Linux Kernel Version >= 5.15.x && < 6.5.x
- Windows (Planned)
"},{"location":"requirements/#compatible-kubernetes-versions","title":"Compatible Kubernetes Versions","text":" - Kubernetes 1.19 ~ 1.29 (k0s, k3s, k8s, eks, openshift, kind etc)
"},{"location":"requirements/#hardware-requirements","title":"Hardware Requirements","text":" - None as long as above criteria are met (2vcpu/2GB should be enough for starters)
"},{"location":"roadmap/","title":"Release Notes (For major release milestones)","text":""},{"location":"roadmap/#070-beta-aug-2022","title":"0.7.0 beta (Aug, 2022)","text":"Initial release of loxilb
-
Functional Features:
- Two-Arm Load-Balancer (NAT+Routed mode)
- Support for up to 16 end-points
- Load-balancer selection policy
- Round-robin, traffic-hash (fallback to RR if hash fails)
- Conntrack support in eBPF - TCP/UDP/ICMP/SCTP profiles
- GTP with QFI extension support
- ULCL classifier support
- Native QoS-Policer support (SRTCM/TRTCM)
- GoBGP Integration
- Extended visibility and statistics
-
LB Spec Support:
- IP allocation policy
- Kubernetes 1.20 base support
- Support for Calico Networking
-
Utilities:
- loxicmd support : Configuration utlity with the look and feel of kubectl
"},{"location":"roadmap/#080-dec-2022","title":"0.8.0 (Dec, 2022)","text":" -
Functional Features:
- Enhanced load-balancer support including SCTP statefulness, WRR distribution
- Integrated Firewall support
- Integrated end-point health-checks
- One-ARM, FullNAT, DSR LB mode support
- NAT66/NAT64 support
- Clustering support
- Integration with Linux egress TC hooks
-
LB Spec:
- Stand-alone mode to support LB Spec kube-loxilb
- Load-balancer class support
- Advanced IPAM for ipv4/ipv6 with shared/exclusive mode
- Kubernetes 1.25 Integration
-
Utilities:
- loxicmd support : Data-Store support, more commands
"},{"location":"roadmap/#090-nov-2023","title":"0.9.0 (Nov, 2023)","text":" -
Functional Features:
- Hardened NAT Support - CGNAT'ish
- L3 DSR mode Support
- Https end-point liveness probes
- Maglev clustering
- SCTP multihoming support
- Integration with Linux native QoS
- Support for Cilium, Weave Networking
- Grafana based dashboard
- IPSEC Support (with VTI)
- Initial support for in-cluster mode
-
kube-loxilb/LB Spec Support:
- OpenShift Integration
- Support for per-service liveness-checks, IPAM type, multi-homing annotations
- Kubernetes 1.26 (k0s, k3s, k8s )
- Operator support
- AWS EKS support
"},{"location":"roadmap/#093-may-2024","title":"0.9.3 (May, 2024)","text":" -
Functional Features:
- Kube-proxy replacement support
- IPVS compatibility mode
- Master-plane HA support
- BFD and GARP support for Hitless HA
- Enhancements for Multus support
- SCTP multi-homing end-to-end support
- Cloud Availability zone(s) support
- Redhat9 and Ubuntu24 support
- Support for up to Linux Kernel 6.8
- Full Support for Oracle OCI
- SockAddr eBPF for LocalVIP access
- Container size enhancements
- HA enhancements for multiple cloud-providers and various scenarios (active-active, active-standby, clustered etc)
- CICD infra enhancements
- Robust secret management for HTTPS apis
- Performance enhancements with CT scaling
- Enhanced exception handling
- GoLang Profiling Support
- Full support for in-cluster mode
- Better support for virtio environments
- Enhanced RSS distribution mode via XDP (especially for SCTP workloads)
- Loadbalancer algorithms - LeastConnections and SessionAffinity added
-
kube-loxilb Support:
- Kubernetes 1.29
- BGP (auto) Mesh support
- CRD for BGP peers
- Kubernetes GWAPI support
-
Utilities:
- N4 pfcp test-tool added
- Seagull test tool integrated
- Massive updates to documentation
"},{"location":"roadmap/#095-jul-2024","title":"0.9.5 (Jul, 2024)","text":" -
Functional Features:
- L7 (Transparent) proxy
- HTTPS termination
- Native eBPF implementation for Policy based IP Masquerade/SNAT
- Kubernetes vCluster support
- E2E SCTP multi-homing support with Multus
- Multi-AZ/Region hitless HA support for AWS/EKS
- Service communication proxy support for Telco deployments
-
Kubernetes Support:
- Kubernetes 1.30
- CRD for BGP policies
"},{"location":"roadmap/#096-aug-2024","title":"0.9.6 (Aug, 2024)","text":" -
Functional Features:
- Support for any host onearm LB rule
- HTTP 2.0 parser
- NGAP protocol parser
- ECMP Load-balancing support
- Non-privileged Container support
- AWS Local-Zone support
- Multi-Cloud HA support (AWS+GCP)
- Updated CICD workflows
-
Kubernetes Support:
- Ingress Manager support
- Enhanced GW API support
"},{"location":"roadmap/#097-oct-2024-planned","title":"0.9.7 (Oct, 2024) Planned","text":" -
Functional Features:
- SRv6 implementation
- Rolling upgrades
- URL Filtering
- Wireguard support (ingress + egress)
- SIP protocol support
- Sockmap support for SCTP
- Support for proxy protocol v2
- SYNProxy support
- IPSec service mesh for Telco workloads (ingress + egress)
-
Kubernetes Support:
- Kubernetes 1.31
- Multi-cluster support
- Support for Cilium and LoxiLB in-cluster support
- Kubernetes network policy support
"},{"location":"run/","title":"loxilb - How to build/run","text":""},{"location":"run/#1-build-from-code-and-run-difficult","title":"1. Build from code and run (difficult)","text":" - Install GoLang > v1.17
wget https://go.dev/dl/go1.22.0.linux-amd64.tar.gz && sudo tar -xzf go1.22.0.linux-amd64.tar.gz --directory /usr/local/\nexport PATH=\"${PATH}:/usr/local/go/bin\"\n
- Install standard packages
sudo apt install -y clang llvm libelf-dev gcc-multilib libpcap-dev vim net-tools linux-tools-$(uname -r) elfutils dwarves git libbsd-dev bridge-utils wget unzip build-essential bison flex iproute2 curl\n
- Install loxilb eBPF loader tools
curl -sfL https://github.com/loxilb-io/tools/raw/main/loader/install.sh | sh -\n
- Build and run loxilb
git clone --recurse-submodules https://github.com/loxilb-io/loxilb.git\ncd loxilb\n./loxilb-ebpf/utils/mkllb_bpffs.sh\nmake\nsudo ./loxilb \n
- Build and use loxicmd
git clone https://github.com/loxilb-io/loxicmd.git\ncd loxicmd\ngo get .\nmake\nsudo cp -f loxicmd /usr/local/sbin/\n
loxicmd usage guide can be found here"},{"location":"run/#2-build-and-run-using-docker-easy","title":"2. Build and run using docker (easy)","text":"Build the docker image
git clone --recurse-submodules https://github.com/loxilb-io/loxilb.git\ncd loxilb\nmake docker\n
This would create the docker image ghcr.io/loxilb-io/loxilb:latest
locally. One can then run loxilb in standalone mode by following guide here
"},{"location":"run/#3-running-in-kubernetes","title":"3. Running in Kubernetes","text":" - For running in K8s environment, kindly follow kube-loxilb guide
"},{"location":"service-proxy-calico/","title":"K3s/loxilb service-proxy with calico","text":""},{"location":"service-proxy-calico/#quick-start-guide-k3s-loxilb-service-proxy-and-calico","title":"Quick Start Guide - K3s, LoxiLB \"service-proxy\" and Calico","text":"This document will explain how to install a K3s cluster with loxilb in \"service-proxy\" mode alongside calico networking.
"},{"location":"service-proxy-calico/#what-is-service-proxy-mode","title":"What is service-proxy mode?","text":"service-proxy mode is where kubernetes kube-proxy services are entirely replaced by loxilb for better performance. Users can continue to use their existing networking providers while enjoying streamlined performance and superior feature-set provided by loxilb.
Looking at the left side of the image, you will notice the traffic flow of the packet as it enters the Kubernetes cluster. Kube-proxy, the de-facto networking agent in the Kubernetes which runs on each node of the cluster which monitors the services and translates them to either iptables or IPVS tangible rules. If we talk about the functionality or a cluster with low volume traffic then kube-proxy is fine but when it comes to scalability or a high volume traffic then it acts as a bottle-neck. loxilb \"service-proxy\" mode works with Flannel/Calico and kube-proxy in IPVS mode only as of now. It inherits the IPVS rules and imports these in it's in-kernel eBPF implementation. Traffic will reach at the interface, will be processed by eBPF and sent directly to the pod or to the other node, bypassing all the layers of Linux networking. This way, all the services, be it External, NodePort or ClusterIP, can be managed through LoxiLB and provide optimal performance for the users. The added benefit for the user's is the fact that there is no need to rip and replace their current networking provider (e.g flannel or calico). Kindly note that Kubernetes network policies can't be supported in this miode currently.
"},{"location":"service-proxy-calico/#topology","title":"Topology","text":"For quickly bringing up loxilb \"service-proxy\" in K3s with Calico, we will be deploying a single node k3s cluster (v1.29.3+k3s1) : \u00a0
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"service-proxy-calico/#setup-k3s","title":"Setup K3s","text":""},{"location":"service-proxy-calico/#configure-k3s-node","title":"Configure K3s node","text":"$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik \\\n --disable servicelb --disable-cloud-controller --kube-proxy-arg proxy-mode=ipvs \\\n cloud-provider=external --flannel-backend=none --disable-network-policy --cluster-cidr=10.42.0.0/16 \\\n --node-ip=${MASTER_IP} --node-external-ip=${MASTER_IP} \\\n --bind-address=${MASTER_IP}\" sh -\n
"},{"location":"service-proxy-calico/#deploy-calico","title":"Deploy calico","text":"K3s uses by default flannel for networking but here we are using calico to provide the same:
sudo kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml\nsudo kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml\n
"},{"location":"service-proxy-calico/#deploy-kube-loxilb-and-loxilb","title":"Deploy kube-loxilb and loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb. We need to deploy both kube-loxilb and loxilb components in your kubernetes cluster
sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/service-proxy/kube-loxilb.yml\nsudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/service-proxy/loxilb-service-proxy.yml\n
"},{"location":"service-proxy-calico/#check-the-status","title":"Check the status","text":"In k3s node:
## Check the pods created\n$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ntigera-operator tigera-operator-689d868448-wwvts 1/1 Running 0 2d23h\ncalico-system calico-typha-67d4484996-2cmzs 1/1 Running 0 2d23h\ncalico-system calico-node-l8r8b 1/1 Running 0 2d23h\nkube-system local-path-provisioner-6c86858495-mrtzv 1/1 Running 0 2d23h\ncalico-system csi-node-driver-ssbnf 2/2 Running 0 2d23h\ncalico-apiserver calico-apiserver-7dccc79b59-txnl5 1/1 Running 0 2d10h\ncalico-apiserver calico-apiserver-7dccc79b59-vk68t 1/1 Running 0 2d10h\ncalico-system calico-node-glm64 1/1 Running 0 2d23h\ncalico-system calico-node-hs7pw 1/1 Running 0 2d23h\ncalico-system csi-node-driver-xqjcd 2/2 Running 0 2d23h\ncalico-system calico-typha-67d4484996-wctwv 1/1 Running 0 2d23h\nkube-system kube-loxilb-5fb5566999-4vvls 1/1 Running 0 38h\ncalico-system csi-node-driver-hz87c 2/2 Running 0 2d23h\nkube-system coredns-6799fbcd5-mhgwg 1/1 Running 0 2d8h\ncalico-system calico-kube-controllers-f5c6cdbdc-vztls 1/1 Running 0 32h\ncalico-system calico-node-mjjs5 1/1 Running 0 2d23h\ncalico-system csi-node-driver-l5r75 2/2 Running 0 2d23h\ndefault iperf1 1/1 Running 0 32h\nkube-system metrics-server-54fd9b65b-78mwr 1/1 Running 0 2d23h\nkube-system loxilb-lb-px6th 1/1 Running 0 20h\n
In loxilb pod, we can check internal LB rules: $ sudo kubectl exec -it -n kube-system loxilb-lb-px6th -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-------------------------------|------|-----|---------|-----------------|-------|--------|--------|----------|\n| 10.0.2.15 | | 32598 | tcp | ipvs_10.0.2.15:32598-tcp | 0 | rr | fullnat | 192.168.235.161 | 5001 | 1 | active | 0:0 |\n| 10.43.0.10 | | 53 | tcp | ipvs_10.43.0.10:53-tcp | 0 | rr | default | 192.168.182.39 | 53 | 1 | - | 0:0 |\n| 10.43.0.10 | | 53 | udp | ipvs_10.43.0.10:53-udp | 0 | rr | default | 192.168.182.39 | 53 | 1 | - | 6:1149 |\n| 10.43.0.10 | | 9153 | tcp | ipvs_10.43.0.10:9153-tcp | 0 | rr | default | 192.168.182.39 | 9153 | 1 | - | 0:0 |\n| 10.43.0.1 | | 443 | tcp | ipvs_10.43.0.1:443-tcp | 0 | rr | default | 192.168.80.10 | 6443 | 1 | - | 0:0 |\n| 10.43.182.250 | | 443 | tcp | ipvs_10.43.182.250:443-tcp | 0 | rr | default | 192.168.182.14 | 5443 | 1 | - | 0:0 |\n| | | | | | | | | 192.168.189.75 | 5443 | 1 | - | 0:0 |\n| 10.43.184.155 | | 55001 | tcp | ipvs_10.43.184.155:55001-tcp | 0 | rr | default | 192.168.235.161 | 5001 | 1 | - | 0:0 |\n| 10.43.78.171 | | 5473 | tcp | ipvs_10.43.78.171:5473-tcp | 0 | rr | default | 192.168.80.10 | 5473 | 1 | - | 0:0 |\n| | | | | | | | | 192.168.80.102 | 5473 | 1 | - | 0:0 |\n| 10.43.89.40 | | 443 | tcp | ipvs_10.43.89.40:443-tcp | 0 | rr | default | 192.168.219.68 | 10250 | 1 | - | 0:0 |\n| 192.168.219.64 | | 32598 | tcp | ipvs_192.168.219.64:32598-tcp | 0 | rr | fullnat | 192.168.235.161 | 5001 | 1 | active | 0:0 |\n| 192.168.80.10 | | 32598 | tcp | ipvs_192.168.80.10:32598-tcp | 0 | rr | fullnat | 192.168.235.161 | 5001 | 1 | active | 0:0 |\n| 192.168.80.20 | | 32598 | tcp | ipvs_192.168.80.20:32598-tcp | 0 | rr | fullnat | 192.168.235.161 | 5001 | 1 | active | 0:0 |\n| 192.168.80.20 | | 55001 | tcp | default_iperf-service | 0 | rr | onearm | 192.168.80.101 | 32598 | 1 | - | 0:0 |\n
"},{"location":"service-proxy-calico/#deploy-a-sample-service","title":"Deploy a sample service","text":"To deploy a sample service, we can create service as usual in Kubernetes with few extra annotations as follows :
sudo kubectl apply -f - <<EOF\napiVersion: v1\nkind: Service\nmetadata:\n name: iperf-service\n annotations:\n loxilb.io/lbmode: \"onearm\" \n loxilb.io/staticIP: \"192.168.80.20\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: perf-test\n ports:\n - port: 55001\n targetPort: 5001\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: iperf1\n labels:\n what: perf-test\nspec:\n containers:\n - name: iperf\n image: ghcr.io/nicolaka/netshoot:latest\n command:\n - iperf\n - \"-s\"\n ports:\n - containerPort: 5001\nEOF\n
Check the service created :
$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3d1h\niperf-service LoadBalancer 10.43.131.107 llb-192.168.80.20 55001:31181/TCP 9m14s\n
Test the service created (from a host outside the cluster) :
## Using service VIP\n$ iperf -c 192.168.80.20 -p 55001 -i1 -t3\n------------------------------------------------------------\nClient connecting to 192.168.80.20, TCP port 55001\nTCP window size: 85.0 KByte (default)\n------------------------------------------------------------\n[ 1] local 192.168.80.80 port 58936 connected with 192.168.80.20 port 55001\n[ ID] Interval Transfer Bandwidth\n[ 1] 0.0000-1.0000 sec 282 MBytes 2.36 Gbits/sec\n[ 1] 1.0000-2.0000 sec 276 MBytes 2.31 Gbits/sec\n[ 1] 2.0000-3.0000 sec 279 MBytes 2.34 Gbits/sec\n\n## Using node-port\n$ iperf -c 192.168.80.100 -p 31181 -i1 -t10\n------------------------------------------------------------\nClient connecting to 192.168.80.100, TCP port 31181\nTCP window size: 85.0 KByte (default)\n------------------------------------------------------------\n[ 1] local 192.168.80.80 port 43208 connected with 192.168.80.100 port 31181\n[ ID] Interval Transfer Bandwidth\n[ 1] 0.0000-1.0000 sec 612 MBytes 5.14 Gbits/sec\n[ 1] 1.0000-2.0000 sec 598 MBytes 5.02 Gbits/sec\n[ 1] 2.0000-3.0000 sec 617 MBytes 5.17 Gbits/sec\n[ 1] 3.0000-4.0000 sec 600 MBytes 5.04 Gbits/sec\n[ 1] 4.0000-5.0000 sec 630 MBytes 5.28 Gbits/sec\n[ 1] 5.0000-6.0000 sec 699 MBytes 5.86 Gbits/sec\n[ 1] 6.0000-7.0000 sec 682 MBytes 5.72 Gbits/sec\n
For more detailed performance comparison with other solutions, kindly follow this blog and for more detailed information on incluster deployment of loxilb with bgp in a full-blown cluster, kindly follow this blog.\u00a0 \u00a0
"},{"location":"service-proxy-flannel/","title":"K3s/loxilb service-proxy with flannel","text":""},{"location":"service-proxy-flannel/#quick-start-guide-k3s-with-loxilb-service-proxy","title":"Quick Start Guide - K3s with LoxiLB \"service-proxy\"","text":"This document will explain how to install a K3s cluster with loxilb in \"service-proxy\" mode alongside flannel networking (default for k3s).
"},{"location":"service-proxy-flannel/#what-is-service-proxy-mode","title":"What is service-proxy mode?","text":"service-proxy mode is where kubernetes kube-proxy services are entirely replaced by loxilb for better performance. Users can continue to use their existing networking providers while enjoying streamlined performance and superior feature-set provided by loxilb.
Looking at the left side of the image, you will notice the traffic flow of the packet as it enters the Kubernetes cluster. Kube-proxy, the de-facto networking agent in the Kubernetes which runs on each node of the cluster which monitors the services and translates them to either iptables or IPVS tangible rules. If we talk about the functionality or a cluster with low volume traffic then kube-proxy is fine but when it comes to scalability or a high volume traffic then it acts as a bottle-neck. loxilb \"service-proxy\" mode works with Flannel/Calico and kube-proxy in IPVS mode only as of now. It inherits the IPVS rules and imports these in it's in-kernel eBPF implementation. Traffic will reach at the interface, will be processed by eBPF and sent directly to the pod or to the other node, bypassing all the layers of Linux networking. This way, all the services, be it External, NodePort or ClusterIP, can be managed through LoxiLB and provide optimal performance for the users. The added benefit for the user's is the fact that there is no need to rip and replace their current networking provider (e.g flannel or calico).
"},{"location":"service-proxy-flannel/#topology","title":"Topology","text":"For quickly bringing up loxilb \"service-proxy\" in K3s, we will be deploying a single node k3s cluster (v1.29.3+k3s1) : \u00a0
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"service-proxy-flannel/#setup-k3s","title":"Setup K3s","text":""},{"location":"service-proxy-flannel/#configure-k3s-node","title":"Configure K3s node","text":"$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik \\\n --disable servicelb --disable-cloud-controller --kube-proxy-arg proxy-mode=ipvs \\\n cloud-provider=external --node-ip=${MASTER_IP} --node-external-ip=${MASTER_IP} \\\n --bind-address=${MASTER_IP}\" sh -\n
"},{"location":"service-proxy-flannel/#deploy-kube-loxilb-and-loxilb","title":"Deploy kube-loxilb and loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb. We need to deploy both kube-loxilb and loxilb components in your kubernetes cluster
sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/service-proxy/kube-loxilb.yml\nsudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/service-proxy/loxilb-service-proxy.yml\n
"},{"location":"service-proxy-flannel/#check-the-status","title":"Check the status","text":"In k3s node:
## Check the pods created\n$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-c68ws 1/1 Running 0 15m\nkube-system local-path-provisioner-6c86858495-rxk2w 1/1 Running 0 15m\nkube-system metrics-server-54fd9b65b-xtgk2 1/1 Running 0 15m\nkube-system loxilb-lb-5p6pg 1/1 Running 0 6m58s\nkube-system kube-loxilb-5fb5566999-7xdkk 1/1 Running 0 6m59s\n
In loxilb pod, we can check internal LB rules: $ udo kubectl exec -it -n kube-system loxilb-lb-5p6pg -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-------------------------------|------|-----|---------|----------------|-------|--------|--------|----------|\n| 10.0.2.15 | | 31377 | tcp | ipvs_10.0.2.15:31377-tcp | 0 | rr | fullnat | 10.42.1.2 | 5001 | 1 | active | 0:0 |\n| 10.42.1.0 | | 31377 | tcp | ipvs_10.42.1.0:31377-tcp | 0 | rr | fullnat | 10.42.1.2 | 5001 | 1 | active | 0:0 |\n| 10.42.1.1 | | 31377 | tcp | ipvs_10.42.1.1:31377-tcp | 0 | rr | fullnat | 10.42.1.2 | 5001 | 1 | active | 0:0 |\n| 10.43.0.10 | | 53 | tcp | ipvs_10.43.0.10:53-tcp | 0 | rr | default | 10.42.0.3 | 53 | 1 | - | 0:0 |\n| 10.43.0.10 | | 53 | udp | ipvs_10.43.0.10:53-udp | 0 | rr | default | 10.42.0.3 | 53 | 1 | - | 0:0 |\n| 10.43.0.10 | | 9153 | tcp | ipvs_10.43.0.10:9153-tcp | 0 | rr | default | 10.42.0.3 | 9153 | 1 | - | 0:0 |\n| 10.43.0.1 | | 443 | tcp | ipvs_10.43.0.1:443-tcp | 0 | rr | default | 192.168.80.10 | 6443 | 1 | - | 0:0 |\n| 10.43.202.90 | | 55001 | tcp | ipvs_10.43.202.90:55001-tcp | 0 | rr | default | 10.42.1.2 | 5001 | 1 | - | 0:0 |\n| 10.43.30.93 | | 443 | tcp | ipvs_10.43.30.93:443-tcp | 0 | rr | default | 10.42.0.4 | 10250 | 1 | - | 0:0 |\n| 192.168.80.101 | | 31377 | tcp | ipvs_192.168.80.101:31377-tcp | 0 | rr | fullnat | 10.42.1.2 | 5001 | 1 | active | 15:1014 |\n| 192.168.80.20 | | 55001 | tcp | default_iperf-service | 0 | rr | onearm | 192.168.80.101 | 31377 | 1 | - | 0:0 |\n
"},{"location":"service-proxy-flannel/#deploy-a-sample-service","title":"Deploy a sample service","text":"To deploy a sample service, we can create service as usual in Kubernetes with few extra annotations as follows :
sudo kubectl apply -f - <<EOF\napiVersion: v1\nkind: Service\nmetadata:\n name: iperf-service\n annotations:\n loxilb.io/lbmode: \"onearm\" \n loxilb.io/staticIP: \"192.168.80.20\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: perf-test\n ports:\n - port: 55001\n targetPort: 5001\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: iperf1\n labels:\n what: perf-test\nspec:\n containers:\n - name: iperf\n image: ghcr.io/nicolaka/netshoot:latest\n command:\n - iperf\n - \"-s\"\n ports:\n - containerPort: 5001\nEOF\n
Check the service created :
$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 17m\niperf-service LoadBalancer 10.43.202.90 llb-192.168.80.20 55001:31377/TCP 2m34s\n
Test the service created (from a host outside the cluster) :
## Using service VIP\n$ iperf -c 192.168.80.20 -p 55001 -i1 -t3\n------------------------------------------------------------\nClient connecting to 192.168.80.20, TCP port 55001\nTCP window size: 85.0 KByte (default)\n------------------------------------------------------------\n[ 1] local 192.168.80.80 port 55686 connected with 192.168.80.20 port 55001\n[ ID] Interval Transfer Bandwidth\n[ 1] 0.0000-1.0000 sec 311 MBytes 2.61 Gbits/sec\n[ 1] 1.0000-2.0000 sec 309 MBytes 2.59 Gbits/sec\n[ 1] 2.0000-3.0000 sec 305 MBytes 2.56 Gbits/sec\n[ 1] 0.0000-3.0109 sec 926 MBytes 2.58 Gbits/sec\n\n## Using node-port\n$ iperf -c 192.168.80.101 -p 31377 -i1 -t10\n------------------------------------------------------------\nClient connecting to 192.168.80.101, TCP port 31377\nTCP window size: 85.0 KByte (default)\n------------------------------------------------------------\n[ 1] local 192.168.80.80 port 34066 connected with 192.168.80.101 port 31377\n[ ID] Interval Transfer Bandwidth\n[ 1] 0.0000-1.0000 sec 792 MBytes 6.64 Gbits/sec\n[ 1] 1.0000-2.0000 sec 727 MBytes 6.10 Gbits/sec\n[ 1] 2.0000-3.0000 sec 784 MBytes 6.57 Gbits/sec\n[ 1] 3.0000-4.0000 sec 814 MBytes 6.83 Gbits/sec\n[ 1] 4.0000-5.0000 sec 1.01 GBytes 8.64 Gbits/sec\n[ 1] 5.0000-6.0000 sec 1.02 GBytes 8.79 Gbits/sec\n[ 1] 6.0000-7.0000 sec 1.03 GBytes 8.84 Gbits/sec\n[ 1] 7.0000-8.0000 sec 814 MBytes 6.83 Gbits/sec\n[ 1] 8.0000-9.0000 sec 965 MBytes 8.09 Gbits/sec\n[ 1] 9.0000-10.0000 sec 946 MBytes 7.93 Gbits/sec\n[ 1] 0.0000-10.0170 sec 8.76 GBytes 7.51 Gbits/sec\n
If you are wondering why there is a performance difference between serviceLB and node-port, there is an interesting blog about it here by one our users. For more detailed performance comparison with other solutions, kindly follow this blog and for more detailed information on incluster deployment of loxilb with bgp in a full-blown cluster, kindly follow this blog.\u00a0 \u00a0"},{"location":"service-zones/","title":"How-To - service-group zones","text":""},{"location":"service-zones/#service-group-zoning-in-loxilb","title":"Service-Group zoning in loxilb","text":"kube-loxilb is used to deploy loxilb with Kubernetes. By default a kube-loxilb instance does not differentiate the services in any way and uses a set-of loxilb instances to setup rules related to these services. But there are potential scenarios where grouping of services is necessary. It might be beneficial for increasing capacity, uptime and security of the cluster services.
"},{"location":"service-zones/#overall-topology","title":"Overall topology","text":"For implementing service-groups with zones, the overall topology including all components should be similar to the following :
The overall concept is to run multiple sets of kube-loxilb each for a separate zone. Each set of kube-loxilb communicates with a particular set of designated loxilb instances dedicated for that zone. Finally when the services are created, we need to mention which zone we want to place them in using special loxilb annotation.
"},{"location":"service-zones/#how-to-deploy-kube-loxilb-for-zones","title":"How to deploy kube-loxilb for zones ?","text":" - The manifest files for deploying kube-loxilb for zones need to mention the zone they cater to. For example:
kube-loxilb-south.yml
args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n - --zone=south\n
kube-loxilb-north.yml
args:\n - --loxiURL=http://12.12.12.2:11111\n - --externalCIDR=124.124.124.1/24\n - --zone=north\n
-
Complete kube-loxilb manifests for zones can be found here which can be further modified as per user need
-
After deployment, you can find multiple sets of kube-loxilb running as follows :
# sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-6w52r 1/1 Running 0 11h\nkube-system local-path-provisioner-6c86858495-gkqgc 1/1 Running 0 11h\nkube-system metrics-server-67c658944b-vgjqd 1/1 Running 0 11h\ndefault udp-test 1/1 Running 0 11h\nkube-system kube-loxilb-south-596fb8957b-7xg2k 1/1 Running 0 11h\nkube-system kube-loxilb-north-5887f5d848-f86jv 1/1 Running 0 10h\n
"},{"location":"service-zones/#how-to-deploy-services-for-zones","title":"How to deploy services for zones ?","text":" - The manifest files for services need to have annotation related to zone they will be served by. For example, we need to specify \"loxilb.io/zoneselect\" annotation :
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\n annotations:\n loxilb.io/lbmode: \"fullnat\"\n loxilb.io/probetimeout: \"10\"\n loxilb.io/proberetries: \"2\"\n loxilb.io/zoneselect: \"north\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80 \n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
- Example services manifests for zones can be found here which can be further modified as per user need
"},{"location":"simple_topo/","title":"Creating a simple test topology for loxilb","text":"To test loxilb in a single node cloud-native environment, it is possible to quickly create a test topology. We will explain the steps required to create a very simple topology (more complex topologies can be built using this example) :
Prerequisites :
- Docker should be preinstalled
- Pull and run loxilb docker
# docker pull ghcr.io/loxilb-io/loxilb:latest\n# docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n
Next step is to run the following script to create and configure the above topology :
#!/bin/bash\n\ndocker=$1\nHADD=\"sudo ip netns add \"\nLBHCMD=\"sudo ip netns exec loxilb \"\nHCMD=\"sudo ip netns exec \"\n\nid=`docker ps -f name=loxilb | cut -d \" \" -f 1 | grep -iv \"CONTAINER\"`\necho $id\npid=`docker inspect -f '{{.State.Pid}}' $id`\nif [ ! -f /var/run/netns/loxilb ]; then\n sudo touch /var/run/netns/loxilb\n sudo mount -o bind /proc/$pid/ns/net /var/run/netns/loxilb\nfi\n\n$HADD ep1\n$HADD ep2\n$HADD ep3\n$HADD h1\n\n## Configure load-balancer end-point ep1\nsudo ip -n loxilb link add ellb1ep1 type veth peer name eep1llb1 netns ep1\nsudo ip -n loxilb link set ellb1ep1 mtu 9000 up\nsudo ip -n ep1 link set eep1llb1 mtu 7000 up\n$LBHCMD ip addr add 31.31.31.254/24 dev ellb1ep1\n$HCMD ep1 ifconfig eep1llb1 31.31.31.1/24 up\n$HCMD ep1 ip route add default via 31.31.31.254\n$HCMD ep1 ifconfig lo up\n\n## Configure load-balancer end-point ep2\nsudo ip -n loxilb link add ellb1ep2 type veth peer name eep2llb1 netns ep2\nsudo ip -n loxilb link set ellb1ep2 mtu 9000 up\nsudo ip -n ep2 link set eep2llb1 mtu 7000 up\n$LBHCMD ip addr add 32.32.32.254/24 dev ellb1ep2\n$HCMD ep2 ifconfig eep2llb1 32.32.32.1/24 up\n$HCMD ep2 ip route add default via 32.32.32.254\n$HCMD ep2 ifconfig lo up\n\n## Configure load-balancer end-point ep3\nsudo ip -n loxilb link add ellb1ep3 type veth peer name eep3llb1 netns ep3\nsudo ip -n loxilb link set ellb1ep3 mtu 9000 up\nsudo ip -n ep3 link set eep3llb1 mtu 7000 up\n$LBHCMD ip addr add 33.33.33.254/24 dev ellb1ep3\n$HCMD ep3 ifconfig eep3llb1 33.33.33.1/24 up\n$HCMD ep3 ip route add default via 33.33.33.254\n$HCMD ep3 ifconfig lo up\n\n## Configure load-balancer end-point h1\nsudo ip -n loxilb link add ellb1h1 type veth peer name eh1llb1 netns h1\nsudo ip -n loxilb link set ellb1h1 mtu 9000 up\nsudo ip -n h1 link set eh1llb1 mtu 7000 up\n$LBHCMD ip addr add 10.10.10.254/24 dev ellb1h1\n$HCMD h1 ifconfig eh1llb1 10.10.10.1/24 up\n$HCMD h1 ip route add default via 10.10.10.254\n$HCMD h1 ifconfig lo up\n
Finally, we need to configure load-balancer rule inside loxilb docker as follows :
docker exec -it loxilb bash\nroot@8b74b5ddc4d2:/# loxicmd create lb 20.20.20.1 --tcp=2020:5001 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
So, we now have loxilb running as a docker container with 4 hosts connected to it. 3 of the hosts act as load-balancer end-points and 1 of them acts as a client. We can run any workloads we wish inside the host namespaces and start testing loxilb.
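For a quick sanity check of the topology above, one can (as a minimal sketch, assuming python3 and curl are installed on the host) run a simple HTTP server on port 5001 inside each end-point namespace and then access the VIP from the client namespace h1 :
sudo ip netns exec ep1 python3 -m http.server 5001 &\nsudo ip netns exec ep2 python3 -m http.server 5001 &\nsudo ip netns exec ep3 python3 -m http.server 5001 &\nsudo ip netns exec h1 curl http://20.20.20.1:2020\n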
"},{"location":"standalone/","title":"Standalone Mode","text":""},{"location":"standalone/#how-to-run-loxilb-in-standalone-mode","title":"How to run loxilb in standalone mode","text":"This guide will help users to run loxilb in a standalone mode decoupled from kubernetes
"},{"location":"standalone/#pre-requisites","title":"Pre-requisites","text":"This guide uses Ubuntu 20.04.5 LTS as the base operating system
"},{"location":"standalone/#install-docker","title":"Install docker","text":"One can follow the guide here to install latest docker engine or use snap to install docker.
sudo apt update\nsudo apt install snapd\nsudo snap install docker\n
"},{"location":"standalone/#enable-ipv6-if-running-nat64nat66","title":"Enable IPv6 (if running NAT64/NAT66)","text":"sysctl net.ipv6.conf.all.disable_ipv6=0\nsysctl net.ipv6.conf.default.disable_ipv6=0\n
"},{"location":"standalone/#run-loxilb","title":"Run loxilb","text":"Get the loxilb official docker image
-
Latest build image (multi-arch amd64/arm64)
docker pull ghcr.io/loxilb-io/loxilb:latest\n
-
Release build image
docker pull ghcr.io/loxilb-io/loxilb:v0.9.5\n
-
To run loxilb docker, we can use the following commands :
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n
- To drop into a shell of the loxilb docker :
docker exec -it loxilb bash\n
- For load-balancing to effectively work in a bare-metal environment, we need multiple interfaces assigned to the docker (for external and internal connectivity). loxilb docker relies on docker's macvlan driver for achieving this. The following is an example of creating a macvlan network and using it with loxilb:
# Create a mac-vlan (on an underlying interface e.g. enp0s3).\n# Subnet used for mac-vlan is usually the same as underlying interface\ndocker network create -d macvlan -o parent=enp0s3 --subnet 172.30.1.0/24 --gateway 172.30.1.254 --aux-address 'host=172.30.1.193' llbnet\n\n# Run loxilb docker with the created macvlan \ndocker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --net=llbnet --ip=172.30.1.195 --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# If we still want to connect loxilb docker additionally to docker's default \"bridge\" network or more macvlan networks\ndocker network connect bridge loxilb\ndocker network connect llbnet2 loxilb --ip=172.30.2.195\n
Note:
- While working with macvlan interfaces, the parent/underlying interface should be put in promiscuous mode (see the example after this list)
- One can further use docker-compose to automate attaching multiple networks to loxilb docker or use
--net=host
as per requirement - To use local socket policy or eBPF sockmap related features, we need to use
--pid=host --cgroupns=host
as additional arguments to docker run. - To create a simple and self-contained topology for testing loxilb, users can follow this guide
- If loxilb docker is in the same node as the app/workload docker, it is advised that \"tx checksum offload\" inside app/workload docker is turned off for sctp load-balancing to work properly
docker exec -dt <app-docker-name> ethtool -K <app-docker-interface> tx off\n
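As noted above, the parent interface of the macvlan network (enp0s3 in the earlier example) can be put in promiscuous mode as follows :
sudo ip link set enp0s3 promisc on\n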
"},{"location":"standalone/#configuration","title":"Configuration","text":"loxicmd command line tool can be used to configure loxilb in standalone mode. A simple example of configuration using loxilb is as follows:
- Drop into loxilb shell
sudo docker exec -it loxilb bash\n
- Create a LB rule inside loxilb docker. Various other options for LB manipulation can be found here
loxicmd create lb 2001::1 --tcp=2020:8080 --endpoints=33.33.33.1:1\n
- Validate entry is created using the command:
loxicmd get lb -o wide\n
The detailed usage guide of loxicmd can be found here.
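For completeness, the rule created above can later be removed with the corresponding delete command (a sketch mirroring the create example) :
loxicmd delete lb 2001::1 --tcp=2020\n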
"},{"location":"standalone/#working-with-gobgp","title":"Working with gobgp","text":"loxilb works in tandem with gobgp when bgp services are required. As a first step, create a file gobgp.conf in host where loxilb docker will run and add the basic necessary fields :
[global.config]\n as = 64512\n router-id = \"10.10.10.1\"\n\n[[neighbors]]\n [neighbors.config]\n neighbor-address = \"10.10.10.254\"\n peer-as = 64512\n
Run the loxilb docker with the following arguments:
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v $(pwd)/gobgp.conf:/etc/gobgp/gobgp.conf -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest -b \n
The gobgp daemon should pick up the configuration. The neighbors can be verified by :
sudo docker exec -it loxilb gobgp neighbor\n
At run time, there are two ways to change the gobgp configuration. Ephemeral configuration can simply be done using the \u201cgobgp\u201d command as detailed here. If persistence is required, then one can change the gobgp config file /etc/gobgp/gobgp.conf and send SIGHUP to the gobgpd process to load the edited configuration.
sudo docker exec -it loxilb pkill -1 gobgpd\n
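For instance, an ephemeral change such as announcing an additional route can be made with the gobgp CLI inside the loxilb container (shown here with an assumed example prefix) :
sudo docker exec -it loxilb gobgp global rib add 20.20.20.1/32\n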
"},{"location":"standalone/#persistent-lb-entries","title":"Persistent LB entries","text":"To save the created rules across reboots, one can use the following command:
sudo mkdir -p /etc/loxilb/\nsudo loxicmd save --lb\n
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":""},{"location":"#welcome-to-loxilb-documentation","title":"Welcome to loxilb documentation","text":""},{"location":"#background","title":"Background","text":"loxilb started as a project to ease deployments of cloud-native/kubernetes workloads for the edge. When we deploy services in public clouds like AWS/GCP, the services becomes easily accessible or exported to the outside world. The public cloud providers, usually by default, associate load-balancer instances for incoming requests to these services to ensure everything is quite smooth.
However, for on-prem and edge deployments, there is no service type - external load balancer provider by default. For a long time, MetalLB was the only choice for the needy. But edge services are a different ball game altogether due to the fact that there are so many exotic protocols in play like GTP, SCTP, SRv6 etc and integrating everything into a seamlessly working solution has been quite difficult.
loxilb dev team was approached by many people who wanted to solve this problem. As a first step to solve the problem, it became apparent that networking stack provided by Linux kernel, although very solid, really lacked the development process agility to quickly provide support for a wide variety of permutations and combinations of protocols and stateful load-balancing on them. Our search led us to the awesome tech developed by the Linux community - eBPF. The flexibility to introduce new functionality into the OS Kernel as a safe sandbox program was a complete fit to our design philosophy. It also does not need any dedicated CPU cores which makes it perfect for designing energy-efficient edge architectures.
"},{"location":"#what-is-loxilb","title":"What is loxilb","text":"loxilb is an open source cloud-native load-balancer based on GoLang/eBPF with the goal of achieving cross-compatibity across a wide range of on-prem, public-cloud or hybrid K8s environments. loxilb is being developed to support the adoption of cloud-native tech in telco, mobility, and edge computing.
"},{"location":"#kubernetes-with-loxilb","title":"Kubernetes with loxilb","text":"Kubernetes defines many service constructs like cluster-ip, node-port, load-balancer, ingress etc for pod to pod, pod to service and outside-world to service communication.
All these services are provided by load-balancers/proxies operating at Layer4/Layer7. Since Kubernetes's is highly modular, these services can be provided by different software modules. For example, kube-proxy is used by default to provide cluster-ip and node-port services.
Service type load-balancer is usually provided by public cloud-provider(s) as a managed entity. But for on-prem and self-managed clusters, there are only a few good options available. Even for provider-managed K8s like EKS, there are many who would want to bring their own LB to clusters running anywhere. loxilb provides service type load-balancer as its main use-case. loxilb can be run in-cluster or ext-to-cluster as per user need.
loxilb works as an L4 load-balancer/service-proxy by default. Although L4 load-balancing provides great performance and functionality, at times, an equally performant L7 load-balancer is also necessary in K8s for various use-cases. loxilb also supports L7 load-balancing in the form of a Kubernetes Ingress implementation. This also benefits users who need L4 and L7 load-balancing under the same hood.
Additionally, loxilb also supports: - [x] kube-proxy replacement with eBPF(full cluster-mesh implementation for Kubernetes) - [x] Ingress Support - [x] Kubernetes Gateway API - [ ] Kubernetes Network Policies (in-progress)
"},{"location":"#telco-cloud-with-loxilb","title":"Telco-Cloud with loxilb","text":"For deploying telco-cloud with cloud-native functions, loxilb can be used as a SCP(service communication proxy). SCP is a communication proxy defined by 3GPP and aimed at optimizing telco micro-services running in cloud-native environment. Read more about it here.
Telco-cloud requires load-balancing and communication across various interfaces/standards like N2, N4, E2(ORAN), S6x, 5GLAN, GTP etc. Each of these presents its own unique challenges which loxilb aims to solve e.g.: - N4 requires PFCP level session-intelligence - N2 requires NGAP parsing capability (Related Blogs - Blog-1, Blog-2, Blog-3) - S6x requires Diameter/SCTP multi-homing LB support (Related Blog) - MEC use-cases might require UL-CL understanding (Related Blog) - Hitless failover support might be essential for mission-critical applications - E2 might require SCTP-LB with OpenVPN bundled together - SIP support is needed to enable cloud-native VOIP
"},{"location":"#why-choose-loxilb","title":"Why choose loxilb?","text":" Performs
much better compared to its competitors across various architectures - Single-Node Performance
- Multi-Node Performance
- Performance on ARM
- Short Demo on Performance
- Utilizes eBPF which makes it
flexible
as well as customizable
- Advanced
quality of service
for workloads (per LB, per end-point or per client) - Works with
any
Kubernetes distribution/CNI - k8s/k3s/k0s/kind/OpenShift + Calico/Flannel/Cilium/Weave/Multus etc - Extensive support for
SCTP workloads
(with multi-homing) on k8s - Dual stack with
NAT66, NAT64
support for k8s - k8s
multi-cluster
support (planned \ud83d\udea7) - Runs in
any
cloud (public cloud/on-prem) or standalone
environments
"},{"location":"#overall-features-of-loxilb","title":"Overall features of loxilb","text":" - L4/NAT stateful loadbalancer
- NAT44, NAT66, NAT64 with One-ARM, FullNAT, DSR etc
- Support for TCP, UDP, SCTP (w/ multi-homing), QUIC, FTP, TFTP etc
- High-availability support with hitless/maglev/cgnat clustering
- Extensive and scalable end-point liveness probes for cloud-native environments
- Stateful firewalling and IPSEC/Wireguard support
- Optimized implementation for features like Conntrack, QoS etc
- Full compatibility for ipvs (ipvs policies can be auto inherited)
- Policy oriented L7 proxy support - HTTP1.0, 1.1, 2.0 etc (planned \ud83d\udea7)
"},{"location":"#components-of-loxilb","title":"Components of loxilb","text":" - GoLang based control plane components
- A scalable/efficient eBPF based data-path implementation
- Integrated goBGP based routing stack
- A kubernetes agent kube-loxilb written in Go
"},{"location":"#architectural-considerations","title":"Architectural Considerations","text":" - Understanding loxilb modes and deployment in K8s with kube-loxilb
- Understanding High-availability with loxilb
"},{"location":"#getting-started-with-different-k8s-distributions-tools","title":"Getting started with different K8s distributions & tools","text":""},{"location":"#loxilb-as-ext-cluster-pod","title":"loxilb as ext-cluster pod","text":" - K3s : loxilb with default flannel
- K3s : loxilb with calico
- K3s : loxilb with cilium
- K0s : loxilb with default kube-router networking
- EKS : loxilb ext-mode
"},{"location":"#loxilb-as-in-cluster-pod","title":"loxilb as in-cluster pod","text":" - K3s : loxilb in-cluster mode
- K0s : loxilb in-cluster mode
- MicroK8s : loxilb in-cluster mode
- EKS : loxilb in-cluster mode
"},{"location":"#loxilb-as-service-proxy","title":"loxilb as service-proxy","text":" - K3s : loxilb service-proxy with flannel
- K3s : loxilb service-proxy with calico
"},{"location":"#loxilb-as-kubernetes-ingress","title":"loxilb as Kubernetes Ingress","text":" - K3s: How to run loxilb-ingress
"},{"location":"#loxilb-in-standalone-mode","title":"loxilb in standalone mode","text":" - Run loxilb standalone
"},{"location":"#advanced-guides","title":"Advanced Guides","text":" - How-To : Service-group zones with loxilb
- How-To : Access end-points outside K8s
- How-To : Deploy multi-server K3s HA with loxilb
- How-To : Deploy loxilb with multi-AZ HA support in AWS
- How-To : Deploy loxilb with multi-cloud HA support
- How-To : Deploy loxilb with ingress-nginx
"},{"location":"#knowledge-base","title":"Knowledge-Base","text":" - What is eBPF
- What is k8s service - load-balancer
- Architecture in brief
- Code organization
- eBPF internals of loxilb
- What are loxilb NAT Modes
- loxilb load-balancer algorithms
- Manual steps to build/run
- Debugging loxilb
- loxicmd command-line tool usage
- Developer's guide to loxicmd
- Developer's guide to loxilb API
- API Reference - loxilb web-Api
- Performance Reports
- Development Roadmap
- Contribute
- System Requirements
- Frequently Asked Questions - FAQs
"},{"location":"#blogs","title":"Blogs","text":" - K8s - Elevating cluster networking
- eBPF - Map sync using Go
- K8s in-cluster service LB with LoxiLB
- K8s - Introducing SCTP Multihoming with LoxiLB
- Load-balancer performance comparison on Amazon Graviton2
- Hyperscale anycast load balancing with HA
- Getting started with loxilb on Amazon EKS
- K8s - Deploying \"hitless\" Load-balancing
- Oracle Cloud - Hitless HA load balancing
- Ipv6 migration in Kubernetes made easy
"},{"location":"#community-posts","title":"Community Posts","text":" - 5G SCTP LoadBalancer Using loxilb
- 5G Uplink Classifier Using loxilb
- K3s - Using loxilb as external service lb
- K8s - Bringing load-balancing to multus workloads with loxilb
- 5G SCTP LoadBalancer Using LoxiLB on free5GC
- Kubernetes Services: Achieving optimal performance is elusive
"},{"location":"#research-papers-featuring-loxilb","title":"Research Papers (featuring loxilb)","text":" - Mitigating Spectre-PHT using Speculation Barriers in Linux BPF
"},{"location":"#latest-cicd-status","title":"Latest CICD Status","text":" -
Features(Ubuntu20.04)
-
Features(Ubuntu22.04)
-
K3s Tests
-
K8s Cluster Tests
-
EKS Test
"},{"location":"api-dev/","title":"loxilb API Development Guide","text":"For building and extending LoxiLB API server.
"},{"location":"api-dev/#api-source-architecture","title":"API source Architecture","text":".\n\u251c\u2500\u2500 certification\n\u2502 \u251c\u2500\u2500 serverca.crt\n\u2502 \u2514\u2500\u2500 serverkey.pem\n\u251c\u2500\u2500 cmd\n\u2502 \u2514\u2500\u2500 loxilb_rest_api-server\n\u2502 \u2514\u2500\u2500 main.go\n\u251c\u2500\u2500 \u2026.\n\u251c\u2500\u2500 models\n\u2502 \u251c\u2500\u2500 error.go\n\u2502 \u251c\u2500\u2500 \u2026..\n\u251c\u2500\u2500 restapi\n\u2502 \u251c\u2500\u2500 configure_loxilb_rest_api.go\n\u2502 \u251c\u2500\u2500 \u2026..\n\u2502 \u251c\u2500\u2500 handler\n\u2502 \u2502 \u251c\u2500\u2500 common.go\n\u2502 \u2502 \u2514\u2500\u2500\u2026..\n\u2502 \u251c\u2500\u2500 operations\n\u2502 \u2502 \u251c\u2500\u2500 get_config_conntrack_all.go\n\u2502 \u2502 \u2514\u2500\u2500 \u2026.\n\u2502 \u2514\u2500\u2500 server.go\n\u2514\u2500\u2500 swagger.yml\n
* Except for the ./api/restapi/handler and ./api/certification directories, the rest of the contents are generated automatically. * Add the logic for the function to the handler directory. * Add logic to the file ./api/restapi/configure_loxilb_rest_api.go - Swagger.yml file update
paths:\n '/additional/url/{param}':\n get:\n summary: Test Swagger API Server.\n description: Check Swagger API server. This basic information or architecture is for the later applications.\n parameters:\n - name: param\n in: path\n required: true\n type: string\n format: string\n description: Description of the additional url\n responses:\n '204':\n description: OK\n '400':\n description: Malformed arguments for API call\n schema:\n $ref: '#/definitions/Error'\n '401':\n description: Invalid authentication credentials\n
- path.{Set path and parameter URL}.{get,post,etc RESTful setting}.{Description}
- {Set path and parameter URL} Set the path used in the RESTful API. It begins with \"config/\" and is defined as a sub-category from a large category. Define the parameters using the symbol {param}. The parameters are defined in the description section.
- {get,post,etc RESTful setting} Use get, post, delete, and patch to define queries, registrations, deletions, and modifications.
-
{Description} Summary description of API Detailed description of API Parameters Set the name, path, etc. Define the content of the response
-
Creating Additional Parts with Swagger
# alias swagger='docker run --rm -it --user $(id -u):$(id -g) -e GOPATH=$(go env GOPATH):/go -v $HOME:$HOME -w $(pwd) quay.io/goswagger/swagger'\n# swagger generate server\n
-
Development of Additional Partial Handlers
package handler\n\nimport (\n \"fmt\"\n\n \"github.com/go-openapi/runtime/middleware\"\n\n \"testswagger.com/restapi/operations\"\n)\n\nfunc ConfigAdditionalUrl(params operations.GetAdditionalUrlParams) middleware.Responder {\n /////////////////////////////////////////////\n // Add logic Here //\n ////////////////////////////////////////////.\n return &ResultResponse{Result: fmt.Sprintf(\"params.param : %s\", params.param)}\n}\n
-
Select the logic required for the ConfigAdditionalUrl portion of the handler directory. The required parameters come from operations.GetAdditionalUrlParams.
-
Additional Partial Handler Registration
func configureAPI(api *operations.LoxilbRestAPIAPI) http.Handler {\n ...... \n // Change it by putting a function here\n api.GetAdditionalUrlHandler = operations.GetAdditionalUrlHandlerFunc(handler.ConfigAdditionalUrl)\n \u2026.\n}\n
- if api.{REST}...The Handler form is automatically generated, where if nil is erased and a handler is added to the operation function.
- In many cases, additional generation is not possible. In that case, you can add the function by entering it separately. The name of the function consists of a combination of Method, URL, and Parameter.
"},{"location":"api/","title":"loxilb api-reference","text":""},{"location":"api/#loxilb-web-apis","title":"loxilb Web-APIs","text":"Loxilb can be fully configured using extensive list of RestAPI. Refer to the API Documentation.
"},{"location":"arch/","title":"loxilb architecture and modules","text":"loxilb consists of the following modules :
-
loxilb CCM plugin
It fully implements the K8s CCM load-balancer interface and talks to the goLang based loxilb process using RESTful APIs. Although loxilb CCM is logically shown as part of loxilb cluster nodes, it will usually run in the worker/master nodes of the K8s cluster. LoxiCCM can easily be used as part of any CCM operator implementation.
-
loxicmd
loxicmd is a command line tool to configure and dump loxilb information, based on the same foundation as the wildly popular kubectl tool.
- loxilb
loxilb is a modern goLang based framework (process) which maintains information coming in from various sources e.g. the apiserver and populates the eBPF maps used by the loxilb eBPF kernel. It is also responsible for loading eBPF programs to the interfaces. It also acts as a client to goBGP to exchange routes based on information from loxilb CCM. Last but not least, it is also responsible for maintaining HA state sync with its remote peers. Almost all serious lb implementations need to be deployed as an HA cluster.
- loxilb eBPF kernel
The eBPF kernel module implements the data-plane of loxilb, which provides complete kernel bypass. It is a fully self-contained and feature-rich stack able to process packets from rx to tx without invoking Linux native kernel networking.
- goBGP
Although goBGP is a separate project, loxilb has adopted and integrated with goBGP as its routing stack of choice. We also hope to develop features for this awesome project in the future.
- DashBoards
Grafana based dashboards to provide highly dynamic insight into loxilb state.
The following is a typical loxilb deployment topology (Currently HA implementation is in development) :
"},{"location":"aws-multi-az/","title":"How-To - Deploy loxilb with multi-AZ HA support in AWS","text":""},{"location":"aws-multi-az/#deploy-loxilb-with-multi-az-ha-support-in-aws","title":"Deploy LoxiLB with multi-AZ HA support in AWS","text":"LoxiLB supports stateful HA configuration in various cloud environments such as AWS. Especially for AWS, one can configure HA using the Floating IP pattern, together with LoxiLB.
The HA configuration described in the above document has certain limitations. It could only be configured within a single Availability-Zone(AZ). The HA instances need to share the VIP of the same subnet in order to provide a single access point to users, but this configuration was so far not possible in a multi-AZ environment. This blog explains how to deploy LoxiLB in a multi-AZ environment and configure HA.
"},{"location":"aws-multi-az/#overall-scenario","title":"Overall Scenario","text":"Two LoxiLB instances - loxilb1 and loxilb2 will be deployed in different AZs. These two loxilbs form a HA pair and operate in active-backup roles.
The active loxilb1 instance is additionally assigned a secondary network interface called loxi-eni. The loxi-eni network interface has a private IP (192.168.248.254 in this setup) which is used as a secondary IP.
loxilb1 associates this 192.168.248.254 secondary IP with an user-specified public ElasticIP address. When a user accesses the EKS service externally using an ElasticIP address, this traffic is NATed to the 192.168.248.254 IP and delivered to the active loxilb instance. The active loxilb instance can then load balance the traffic to the appropriate endpoint in EKS.
If loxilb1 goes down due to any reason, the status of loxilb2, which was backup previously, changes to active.
During this transition, the loxilb2 instance is assigned a new loxi-eni secondary network interface, and the 192.168.248.254 IP used by the original master \"loxilb1\" is set to the secondary network interface of loxilb2.
The ElasticIP used by the user is also (re)associated to the 192.168.248.254 private IP address of the \"new\" active instance. This makes it possible to maintain active sessions even during failover or situations where there is a need to upgrade the original master instance etc.
To summarize, when a failover occurs the public ElasticIP address is always associated to the active LoxiLB instance, so users who were previously accessing EKS using the same ElasticIP address can continue to do so without being affected by any node failure or other issues.
"},{"location":"aws-multi-az/#an-example-scenario","title":"An example scenario","text":"We will use eksctl to create an EKS cluster. To use eksctl, we need to register authentication information through AWS CLI. Instructions for installing aws CLI & eksctl etc are omitted in this document and can be found in AWS.
Using eksctl, let's create an EKS cluster with the following command. For this test, we are using AWS's Osaka region, so using ap-northeast-3
in the --region option.
eksctl create cluster \\\n --version 1.24 \\\n --name multi-az-eks \\\n --vpc-nat-mode Single \\\n --region ap-northeast-3 \\\n --node-type t3.medium \\\n --nodes 2 \\\n --with-oidc \\\n --managed\n
After running the above, we will have an EKS cluster with two nodes named \"multi-az-eks\"."},{"location":"aws-multi-az/#configuring-loxilb-ec2-instances","title":"Configuring LoxiLB EC2 Instances","text":""},{"location":"aws-multi-az/#create-loxilb-subnet","title":"Create LoxiLB subnet","text":"After configuring EKS, it's time to configure the LoxiLB instances. Let's create the subnet that each of the LoxiLB instances will use.
LoxiLB instances will be created each located in a different AZ. Therefore, the subnets to be used by the instances will also be created in different AZs: AZ-a and AZ-b.
First, create a subnet loxilb-subnet-a in ap-northeast-3a with the subnet 192.168.218.0/24.
Similarly, create a subnet loxilb-subnet-b in ap-northeast-3b with the subnet 192.168.228.0/24.
After creating it, we can double check the \"Enable auto-assign public IPv4 address\" setting so that interfaces connected to each subnet are automatically assigned a public IP.
"},{"location":"aws-multi-az/#aws-route-table","title":"AWS Route table","text":"Newly created subnets automatically use the default route table. We will connect the default route table to the internet gateway so that users can access the LoxiLB instance from outside.
"},{"location":"aws-multi-az/#loxilb-iam-settings","title":"LoxiLB IAM Settings","text":"LoxiLB instances require permission to access the AWS EC2 API to associate ElasticIPs and create secondary interfaces and subnets.
We will create a role with the following IAM policy for LoxiLB EC2 instances.
{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": \"*\",\n \"Resource\": \"*\"\n }\n ]\n}\n
"},{"location":"aws-multi-az/#loxilb-ec2-instance-creation","title":"LoxiLB EC2 instance creation","text":"We will create two LoxiLB instances for this example and connect the instances wits subnets A and B created above.
And specify to use the IAM role created above in the IAM instance profile of the Advanced details settings.
After the instance is created, go to the Action \u2192 networking \u2192 Change Source/destination check menu in the instance menu and disable this check. Since LoxiLB is a load balancer, this configuration must be disabled for LoxiLB to operate properly.
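If preferred, the same setting can be changed with the aws CLI instead of the console (a sketch assuming the instance ID of the LoxiLB instance) :
aws ec2 modify-instance-attribute --instance-id <loxilb-instance-id> --no-source-dest-check\n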
"},{"location":"aws-multi-az/#create-elastic-ip","title":"Create Elastic IP","text":"Next we will create an Elastic IP to use to access the service from outside.
For this example, the IP 13.208.x.x
was assigned. The Elastic IP is used when deploying kube-loxilb, and is automatically associated to the LoxiLB master instance when configuring LoxiLB HA without any user intervention.
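For reference, an Elastic IP can also be allocated with the aws CLI (a sketch assuming the CLI is configured for the ap-northeast-3 region) :
aws ec2 allocate-address --domain vpc --region ap-northeast-3\n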
"},{"location":"aws-multi-az/#kube-loxilb-deployment","title":"kube-loxilb deployment","text":"kube-loxilb is a K8s operator for LoxiLB. Download the manifest file required for your deployment in EKS.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
Change the args inside this yaml (as applicable)
spec:\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --loxiURL=http://192.168.228.108:11111,http://192.168.218.60:11111\n - --externalCIDR=13.208.X.X/32\n - --privateCIDR=192.168.248.254/32\n - --setRoles=0.0.0.0\n - --setLBMode=2 \n
- Modify loxiURL with the IPs of the LoxiLB EC2 instances created above.
- For externalCIDR, specify the Elastic IP created above.
- PrivateCIDR specifies the VIP that will be associated with the Elastic IP. As described in the scenario above, we will use 192.168.248.254 as the VIP in this article. The IP must be set within the range of the VPC CIDR and must not currently be part of any other subnet.
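After editing the args as described above, the manifest can be applied as usual :
kubectl apply -f kube-loxilb.yaml\n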
"},{"location":"aws-multi-az/#run-loxilb-pods","title":"Run LoxiLB Pods","text":""},{"location":"aws-multi-az/#install-docker-on-loxilb-instances","title":"Install docker on LoxiLB instance(s)","text":"LoxiLB is deployed as a container on each instance. To use containers, docker must first be installed on the instance. Docker installation guide can be found here
"},{"location":"aws-multi-az/#running-loxilb-container","title":"Running LoxiLB container","text":"The following command is for a LoxiLB instance (loxilb1) using subnet-a.
sudo docker run -u root --cap-add SYS_ADMIN \\\n --restart unless-stopped \\\n --net=host \\\n --privileged \\\n -dit \\\n -v /dev/log:/dev/log -e AWS_REGION=ap-northeast-3 \\\n --name loxilb \\\n ghcr.io/loxilb-io/loxilb:aws-support \\\n --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.228.108 --self=0\n
- In the cloudcidrblock option, specify the IP range that includes the VIP set in kube-loxilb's privateCIDR. The master LoxiLB uses the value set here to create a new subnet in the AZ where it is located and uses it for HA operation.
- The cluster option specifies the IP of the partner instance (LoxiLB instance using subnet-b) for which HA is configured.
- The self option is set to 0. It is just an identifier used internally to identify each instance
Similarly, we can run the loxilb2 instance on the second EC2 instance using subnet-b:
sudo docker run -u root --cap-add SYS_ADMIN \\\n --restart unless-stopped \\\n --net=host \\\n --privileged \\\n -dit \\\n -v /dev/log:/dev/log -e AWS_REGION=ap-northeast-3 \\\n --name loxilb \\\n ghcr.io/loxilb-io/loxilb:aws-support \\\n --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.218.60 --self=1\n
Once the containers are running, the HA status of each instance can be checked as follows:
ubuntu@ip-192-168-218-60:~$ sudo docker exec -ti loxilb bash\nroot@ip-192-168-228-108:/# loxicmd get ha\n| INSTANCE | HASTATE |\n|----------|---------|\n| default | MASTER |\nroot@ip-192-168-228-108:/#\n
"},{"location":"aws-multi-az/#creating-a-service","title":"Creating a service","text":"Let's create a test service to test HA functionality. Below are the manifest files for the nginx pod and service that we will use for testing.
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
After creating an nginx service with the above, we can see that the ElasticIP has been designated as the externalIP of the service. LEIS6N3:~/workspace/aws-demo$ kubectl apply -f nginx.yaml\nservice/nginx-lb1 created\npod/nginx-test created\nLEIS6N3:~/workspace/aws-demo$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.100.0.1 <none> 443/TCP 22h\nnginx-lb1 LoadBalancer 10.100.178.3 llb-13.208.X.X 55002:32403/TCP 15s\n
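For example, the service can be reached from an external host with curl (a sketch assuming the Elastic IP assigned earlier) :
curl http://13.208.X.X:55002\n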
We can now access the service from a host client, as shown above. "},{"location":"aws-multi-az/#testing-ha-functionality","title":"Testing HA functionality","text":"Once LoxiLB HA is configured, we can check in the AWS console that a secondary interface has been added to the master. To test HA operation, simply stop the LoxiLB pod in master state.
ubuntu@ip-192-168-228-108:~$ sudo docker stop loxilb\nloxilb\nubuntu@ip-192-168-228-108:~$\n
Even after stopping the master loxilb, the service can be accessed without interruption :
During failover, a secondary interface is created on the new master instance, and you can see that the ElasticIP is also associated to the new interface.
"},{"location":"ccm/","title":"Howto - ccm plugin","text":"loxi-ccm is a cloud-manager that provides kubernetes with loxilb load balancer. kubernetes provides the cloud-provider interface for the implementation of external cloud provider-specific logic, and loxi-ccm is an implementation of the cloud-provider interface.
"},{"location":"ccm/#typical-loxi-ccm-deployment-topology","title":"Typical loxi-ccm deployment topology","text":"As seen in the loxilb architecture documentation, loxi-ccm is logically shown as part of the loxilb cluster. But it's actually running on the k8s master/control-plane node.
loxi-ccm implements the k8s load balancer service function using the RESTful API of loxilb. When a user creates a k8s load balancer type service, loxi-ccm allocates an IP from the registered External IP subnet pool. loxi-ccm sets rules in loxilb to allow external access to the service with the assigned IP. In other words, loxi-ccm needs two pieces of information.
- loxilb API server address
- External IP Subnet
This information is managed through a k8s ConfigMap. loxi-ccm users should modify it to suit their environment.
"},{"location":"ccm/#deploy-loxi-ccm-on-kubernetes","title":"Deploy loxi-ccm on kubernetes","text":"The guide below has been tested in environment on Ubuntu 20.04, kubernetes v1.24 (calico CNI)
"},{"location":"ccm/#1-modify-k8s-configmap","title":"1. Modify k8s ConfigMap","text":"In the manifests/loxi-ccm.yaml manifests file, the ConfigMap is defined as follows
---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: loxilb-config\n namespace: kube-system\ndata:\n apiServerURL: \"http://192.168.20.54:11111\"\n externalIPcidr: 123.123.123.0/24\n---\n
The ConfigMap has two values: apiServerURL and externalIPcidr. - apiServerURL : API Server address of loxilb.
- externalIPcidr : Subnet range from which the External IP of the load balancer is allocated.
apiServerURL and externalIPcidr must be modified according to the environment of the user using loxi-ccm.
"},{"location":"ccm/#2-deploy-loxi-ccm","title":"2. Deploy loxi-ccm","text":"Once you have modified ConfigMap, you can deploy loxi-ccm using the loxi-ccm.yaml manifest file. Run the following command on the kubernetes you want to deploy.
kubectl apply -f https://github.com/loxilb-io/loxi-ccm/raw/master/manifests/loxi-ccm.yaml\n
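One can then verify the deployment (a minimal check, assuming kubectl access to the cluster) :
kubectl get daemonset -n kube-system loxi-cloud-controller-manager\n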
After entering the command, check whether loxi-cloud-controller-manager is created in the daemonset of the kube-system namespace."},{"location":"ccm/#manual-build","title":"Manual build","text":"If you want to build loxi-ccm manually, do the following:
"},{"location":"ccm/#1-build","title":"1. build","text":"./build.sh\n
"},{"location":"ccm/#2-build-upload-container-image","title":"2. Build & upload container image","text":"Below is an example. This case use docker to build container images, and images is uploaded to docker hub.
TAG=\"0.1\"\nDOCKER_ID=YOUR_DOCKER_ID\nsudo docker build -t $DOCKER_ID/loxi-ccm:$TAG -f ./Dockerfile .\nsudo docker push $DOCKER_ID/loxi-ccm:$TAG\n
"},{"location":"ccm/#3-create-loxi-ccm-daemonset-using-custom-image","title":"3. create loxi-ccm daemonset using custom image","text":"In the DaemonSet section of the ./manifests/loxi-ccm.yaml file, change the image name to a custom image. (spec.template.spec.containers.image)
---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n labels:\n k8s-app: loxi-cloud-controller-manager\n name: loxi-cloud-controller-manager\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n k8s-app: loxi-cloud-controller-manager\n template:\n metadata:\n labels:\n k8s-app: loxi-cloud-controller-manager\n spec:\n serviceAccountName: loxi-cloud-controller-manager\n containers:\n - name: loxi-cloud-controller-manager\n imagePullPolicy: Always\n # for in-tree providers we use k8s.gcr.io/cloud-controller-manager\n # this can be replaced with any other image for out-of-tree providers\n image: {DOCKER_ID}/loxi-ccm:{TAG}\n command:\n - /bin/loxi-cloud-controller-manager\n
"},{"location":"cmd-dev/","title":"Developer's guide to loxicmd","text":""},{"location":"cmd-dev/#loxicmd-development-guide","title":"loxicmd development guide","text":"This guide should help developers extend and enhance loxicmd. The guide is divided into three main stages: design, development, and testing. Start with cloning the loxicmd git:
git clone git@github.com:loxilb-io/loxicmd.git\n
"},{"location":"cmd-dev/#api-check-and-command-design","title":"API check and command design","text":"Before developing Command, we need to check if the API of the necessary functions is provided. Check the official API document of LoxiLB to see if the required API is provided. Afterwards, the GET, POST, and DELETE methods are designed with get, create, and delete commands according to the API provided.
loxicmd$ tree\n.\n\u251c\u2500\u2500 AUTHORS\n\u251c\u2500\u2500 cmd\n\u2502 \u251c\u2500\u2500 create\n\u2502 \u2502 \u251c\u2500\u2500 create.go\n\u2502 \u2502 \u2514\u2500\u2500 create_loadbalancer.go\n\u2502 \u251c\u2500\u2500 delete\n\u2502 \u2502 \u251c\u2500\u2500 delete.go\n\u2502 \u2502 \u2514\u2500\u2500 delete_loadbalancer.go\n\u2502 \u251c\u2500\u2500 get\n\u2502 \u2502 \u251c\u2500\u2500 get.go\n\u2502 \u2502 \u251c\u2500\u2500 get_loadbalancer.go\n\u2502 \u2514\u2500\u2500 root.go\n\u251c\u2500\u2500 go.mod\n\u251c\u2500\u2500 go.sum\n\u251c\u2500\u2500 LICENSE\n\u251c\u2500\u2500 main.go\n\u251c\u2500\u2500 Makefile\n\u251c\u2500\u2500 pkg\n\u2502 \u2514\u2500\u2500 api\n\u2502 \u251c\u2500\u2500 client.go\n\u2502 \u251c\u2500\u2500 common.go\n\u2502 \u251c\u2500\u2500 loadBalancer.go\n\u2502 \u2514\u2500\u2500 rest.go\n\u2514\u2500\u2500 README.md\n
Add the code in the ./cmd/get, ./cmd/delete, ./cmd/create, and ./pkg/api directories to add functionality."},{"location":"cmd-dev/#add-structure-in-pkgapi-and-register-method-example-of-connection-track-api","title":"Add structure in pkg/api and register method (example of connection track API)","text":" -
CommonAPI embedding Using embedding the CommonAPI for the Methods and variables, to use in the Connecttrack structure.
type Conntrack struct {\n CommonAPI\n}\n
-
Add Structure Configuration and JSON Structure Define the structure for JSON Unmashal.
type CtInformationGet struct {\n CtInfo []ConntrackInformation `json:\"ctAttr\"`\n}\n\ntype ConntrackInformation struct {\n Dip string `json:\"destinationIP\"`\n Sip string `json:\"sourceIP\"`\n Dport uint16 `json:\"destinationPort\"`\n Sport uint16 `json:\"sourcePort\"`\n Proto string `json:\"protocol\"`\n CState string `json:\"conntrackState\"`\n CAct string `json:\"conntrackAct\"`\n}\n
- Define Method Functions in pkg/api/client.go Define the URL in the Resource constant. Defines the function to be used in the command.
const (\n \u2026\n loxiConntrackResource = \"config/conntrack/all\"\n)\n\n\nfunc (l *LoxiClient) Conntrack() *Conntrack {\n return &Conntrack{\n CommonAPI: CommonAPI{\n restClient: &l.restClient,\n requestInfo: RequestInfo{\n provider: loxiProvider,\n apiVersion: loxiApiVersion,\n resource: loxiConntrackResource,\n },\n },\n }\n}\n
"},{"location":"cmd-dev/#add-get-create-delete-functions-within-cmd","title":"Add get, create, delete functions within cmd","text":"Use the Cobra library to define commands, Alise, descriptions, options, and callback functions, and then create a function that returns. Create a function such as PrintGetCTReturn and add logic when the status code is 200.
func NewGetConntrackCmd(restOptions *api.RESTOptions) *cobra.Command {\n var GetctCmd = &cobra.Command{\n Use: \"conntrack\",\n Aliases: []string{\"ct\", \"conntracks\", \"cts\"},\n Short: \"Get a Conntrack\",\n Long: `It shows connection track Information`,\n Run: func(cmd *cobra.Command, args []string) {\n client := api.NewLoxiClient(restOptions)\n ctx := context.TODO()\n var cancel context.CancelFunc\n if restOptions.Timeout > 0 {\n ctx, cancel = context.WithTimeout(context.TODO(), time.Duration(restOptions.Timeout)*time.Second)\n defer cancel()\n }\n resp, err := client.Conntrack().Get(ctx)\n if err != nil {\n fmt.Printf(\"Error: %s\\n\", err.Error())\n return\n }\n if resp.StatusCode == http.StatusOK {\n PrintGetCTResult(resp, *restOptions)\n return\n }\n\n },\n }\n\n return GetctCmd\n}\n
"},{"location":"cmd-dev/#register-command-in-cmd","title":"Register command in cmd","text":"Register Cobra as defined in 3.
func GetCmd(restOptions *api.RESTOptions) *cobra.Command {\n var GetCmd = &cobra.Command{\n Use: \"get\",\n Short: \"A brief description of your command\",\n Long: `A longer description that spans multiple lines and likely contains examples\n and usage of using your command. For example:\n\n Cobra is a CLI library for Go that empowers applications.\n This application is a tool to generate the needed files\n to quickly Get a Cobra application.`,\n Run: func(cmd *cobra.Command, args []string) {\n fmt.Println(\"Get called\")\n },\n }\n GetCmd.AddCommand(NewGetLoadBalancerCmd(restOptions))\n GetCmd.AddCommand(NewGetConntrackCmd(restOptions))\n return GetCmd\n}\n
"},{"location":"cmd-dev/#build-test","title":"Build & Test","text":"make\n
"},{"location":"cmd/","title":"Table of Contents","text":" - What is loxicmd
- How to build
- How to run and configure loxilb
- Load balancer
- Endpoint
- BFD
- Session
- SessionUlCl
- IPaddress
- FDB
- Route
- Neighbor
- Vlan
- Vxlan
- Firewall
- Mirror
- Policy
- Session Recorder
"},{"location":"cmd/#what-is-loxicmd","title":"What is loxicmd","text":"loxicmd is command tool for loxilb's configuration. loxicmd aims to provide all configuation related to loxilb and is based on kubectl's look and feel. When running k8s, kube-loxilb usually takes care of loxilb configuration but nonetheless loxicmd can be used for enhanced config, debugging and observability.
"},{"location":"cmd/#how-to-build","title":"How to build","text":"Note - loxilb docker has this built-in and there is no need to build it when using loxilb docker
Install package dependencies
go get .\n
Make loxicmd
make\n
Install loxicmd
sudo cp -f ./loxicmd /usr/local/sbin\n
"},{"location":"cmd/#how-to-run-and-configure-loxilb","title":"How to run and configure loxilb","text":""},{"location":"cmd/#load-balancer","title":"Load Balancer","text":""},{"location":"cmd/#get-load-balancer-rules","title":"Get load-balancer rules","text":"Get basic information
loxicmd get lb\n
Get detailed information
loxicmd get lb -o wide\n
Get info in json loxicmd get lb -o json\n
"},{"location":"cmd/#configure-load-balancer-rule","title":"Configure load-balancer rule","text":"Simple NAT44 tcp (round-robin) load-balancer
loxicmd create lb 1.1.1.1 --tcp=1828:1920 --endpoints=2.2.3.4:1\n
Note: - Round-robin is default mode in loxilb - End-point format is specified as <CIDR:weight>. For round-robin, weight(1) has no significance. NAT66 (round-robin) load-balancer
loxicmd create lb 2001::1 --tcp=2020:8080 --endpoints=4ffe::1:1,5ffe::1:1,6ffe::1:1\n
NAT64 (round-robin) load-balancer
loxicmd create lb 2001::1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
WRR (Weighted round-robin) load-balancer (Divide traffic in 40%, 40% and 20% ratio among end-points)
loxicmd create lb 20.20.20.1 --select=priority --tcp=2020:8080 --endpoints=31.31.31.1:40,32.32.32.1:40,33.33.33.1:20\n
Sticky end-point selection load-balancer (select end-points based on traffic hash)
loxicmd create lb 20.20.20.1 --select=hash --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
Load-balancer with forceful tcp-reset session timeout after inactivity of 30s
loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1 --inatimeout=30\n
Load-balancer with one-arm mode
loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=100.100.100.2:1,100.100.100.3:1,100.100.100.4:1 --mode=onearm\n
Load-balancer with fullnat mode
loxicmd create lb 88.88.88.1 --sctp=38412:38412 --endpoints=192.168.70.3:1 --mode=fullnat\n
- For more information on one-arm and full-nat mode, please check this post Load-balancer config in DSR(direct-server return) mode
loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1 --mode=dsr\n
Load-balancer config with active endpoint monitoring
loxicmd create lb 20.20.20.1 --tcp=2020:8080 --endpoints=31.31.31.1:1,32.32.32.1:1 --monitor\n
Note: - By default loxilb does not do active endpoint monitoring i.e it will continue to select end-points which might be inactive - This is due to the fact kubernetes also has its own service monitoring mechanism and it can notify loxilb of any such endpoint health state - Based on user's requirements, one can specify active endpoint checks using \"--monitor\" flag - loxilb has extensive endpoint monitoring methods. Further details can be found in endpoint section "},{"location":"cmd/#load-balancer-yaml-example","title":"Load-balancer yaml example","text":"apiVersion: netlox/v1\nkind: Loadbalancer\nmetadata:\n name: test\nspec:\n serviceArguments:\n externalIP: 1.2.3.1\n port: 80\n protocol: tcp\n sel: 0\n endpoints:\n - endpointIP: 4.3.2.1\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.2\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.3\n weight: 1\n targetPort: 8080\n
"},{"location":"cmd/#delete-a-load-balancer-rule","title":"Delete a load-balancer rule","text":""},{"location":"cmd/#loxicmd-delete-lb-1111-tcp1828","title":"loxicmd delete lb 1.1.1.1 --tcp=1828 \n
","text":""},{"location":"cmd/#endpoint","title":"Endpoint","text":""},{"location":"cmd/#get-load-balancer-end-point-health-information","title":"Get load-balancer end-point health information","text":"loxicmd get ep\n
"},{"location":"cmd/#create-end-point-for-health-probing","title":"Create end-point for health probing","text":"# loxicmd create endpoint IP [--name=<id>] [--probetype=<probetype>] [--probereq=<probereq>] [--proberesp=<proberesp>] [--probeport=<port>] [--period=<period>] [--retries=<retries>]\nloxicmd create endpoint 32.32.32.1 --probetype=http --probeport=8080 --period=60 --retries=2\n
IP(string) : Endpoint target IPaddress name(string) : Endpoint Identifier probetype(string): Probe-type:ping,http,https,udp,tcp,sctp,none probereq(string): If probe is http/https, one can specify additional uri path proberesp(string): If probe is http/https, one can specify custom response string probeport(int): If probe is http,https,tcp,udp,sctp one can specify custom l4port to use period(int): Period of probing retries(int): Number of retries before marking endPoint inactive Notes: - \"name\" is not required when endpoint is created initially. loxilb will allocate the name which can be checked with \"loxicmd get ep\". \"name\" can be given as an Identifier when user wants to modify endpoint probe parameters - Initial state of endpoint will be decided within 15 seconds of rule addition (We cant be sure if service is immediately up so this is the init liveness check timeout. It is not configurable at this time) - After init liveness check, probes will be done as per default (60s) or whatever value is set by the user - When endpoint is inactive we have internal logic and timeouts to minimize blocking calls and maintain stability. Only when endpoint is active, we use probe timeout given by user - For UDP end-points and probe-type, there are two ways to check end-point health currently: - If the service can respond to probe requests with pre-defined responses sent over UDP, we can use the following :
loxicmd create endpoint 172.1.217.133 --name=\"udpep1\" --probetype=udp --probeport=32031 --period=60 --retries=2 --probereq=\"probe\" --proberesp=\"hello\"\n
- If the services cannot support the above mechanism, loxilb will try to check for \"ICMP Port unreachable\" after sending UDP probes. If an \"ICMP Port unreachable\" is received, it means the endpoint is not up. "},{"location":"cmd/#examples","title":"Examples :","text":"loxicmd create endpoint 32.32.32.1 --probetype=http --probeport=8080 --period=60 --retries=2\n\nloxicmd get ep\n| HOST | NAME | PTYPE | PORT | DURATION | RETRIES | MINDELAY | AVGDELAY | MAXDELAY | STATE |\n|------------|----------------------|-------|------|----------|---------|----------|----------|----------|-------|\n| 32.32.32.1 | 32.32.32.1_http_8080 | http: | 8080 | 60 | 2 | 0s | 0s | 0s | ok |\n\n# Modify duration and retry count using name\n\nloxicmd create endpoint 32.32.32.1 --name=32.32.32.1_http_8080 --probetype=http --probeport=8080 --period=40 --retries=4\n\nloxicmd get ep\n| HOST | NAME | PTYPE | PORT | DURATION | RETRIES | MINDELAY | AVGDELAY | MAXDELAY | STATE |\n|------------|----------------------|-------|------|----------|---------|----------|-----------|-----------|-------|\n| 32.32.32.1 | 32.32.32.1_http_8080 | http: | 8080 | 40 | 4 | 0s | 0s | 0s | ok |\n
"},{"location":"cmd/#create-end-point-with-https-probing-information","title":"Create end-point with https probing information","text":"# loxicmd create endpoint IP [--name=<id>] [--probetype=<probetype>] [--probereq=<probereq>] [--proberesp=<proberesp>] [--probeport=<port>] [--period=<period>] [--retries=<retries>]\nloxicmd create endpoint 32.32.32.1 --probetype=https --probeport=8080 --probereq=\"health\" --proberesp=\"OK\" --period=60 --retries=2\n
Note: loxilb requires CA certificate for TLS connection and private certificate and private key for mTLS connection. Admin can keep a common(default) CA certificate for all the endpoints at \"/opt/loxilb/cert/rootCA.crt\" or per-endpoint certificates can be kept as \"/opt/loxilb/cert/\\<IP>/rootCA.crt\", private key must be kept at \"/opt/loxilb/cert/server.key\" and private certificate at \"/opt/loxilb/cert/server.crt\". Please see Minica or Certstrap or this CICD test case to know how to generate certificates."},{"location":"cmd/#endpoint-yaml-example","title":"Endpoint yaml example","text":"apiVersion: netlox/v1\nkind: Endpoint\nmetadata:\n name: test\nspec:\n hostName: \"Test\"\n description: string\n inactiveReTries: 0\n probeType: string\n probeReqUrl: string\n probeDuration: 0\n probePort: 0\n
"},{"location":"cmd/#delete-end-point-informtion","title":"Delete end-point informtion","text":""},{"location":"cmd/#loxicmd-delete-endpoint-31313131","title":"loxicmd delete endpoint 31.31.31.31\n
","text":""},{"location":"cmd/#bfd","title":"BFD","text":""},{"location":"cmd/#get-bfd-session-information","title":"Get BFD Session information","text":"loxicmd get bfd\n
"},{"location":"cmd/#create-bfd-session","title":"Create BFD Session","text":"#loxicmd create bfd <remoteIP> --sourceIP=<sourceIP> --interval=<time in usecs> --retryCount=<count>\nloxicmd create bfd 192.168.80.253 --sourceIP=192.168.80.252 --interval=500000 --retryCount=3\n
remoteIP(string): Remote IP address sourceIP(string): Source IP address for binding interval(int): BFD packet Tx Interval Time in microseconds retryCount(int): Number of retry counts to detect failure. "},{"location":"cmd/#set-bfd-session","title":"Set BFD Session","text":"#loxicmd set bfd <remoteIP> --interval=<time in usecs> --retryCount=<count>\nloxicmd set bfd 192.168.80.253 --interval=400000 --retryCount=5\n
interval(int): BFD packet Tx Interval Time in microseconds retryCount(int): Number of retry counts to detect failure. "},{"location":"cmd/#delete-bfd-session","title":"Delete BFD Session","text":"#loxicmd delete bfd <remoteIP>\nloxicmd delete bfd 192.168.80.253\n
remoteIP(string): Remote IP address sourceIP(string): Source IP address for binding interval(int): BFD packet Tx Interval Time in microseconds retryCount(int): Number of retry counts to detect failure."},{"location":"cmd/#bfd-yaml-example","title":"BFD yaml example","text":"apiVersion: netlox/v1\nkind: BFD\nmetadata:\n name: test\nspec:\n instance: \"default\"\n remoteIp: \"192.168.80.253\"\n sourceIp: \"192.168.80.252\"\n interval: 300000\n retryCount: 4\n
"},{"location":"cmd/#session","title":"Session","text":""},{"location":"cmd/#get-session-information","title":"Get Session information","text":"loxicmd get session\n
"},{"location":"cmd/#create-session-information","title":"Create Session information","text":"#loxicmd create session <userID> <sessionIP> --accessNetworkTunnel=<TeID>:<TunnelIP> --coreNetworkTunnel=<TeID>:<TunnelIP>\nloxicmd create session user1 192.168.20.1 --accessNetworkTunnel=1:1.232.16.1 coreNetworkTunnel=1:1.233.16.1\n
userID(string): User Identifier sessionIP(string): Session IP address accessNetworkTunnel(string): accessNetworkTunnel has pairs that can be specified as 'TeID:IP' coreNetworkTunnel(string): coreNetworkTunnel has pairs that can be specified as 'TeID:IP' "},{"location":"cmd/#session-yaml-example","title":"Session yaml example","text":"apiVersion: netlox/v1\nkind: Session\nmetadata:\n name: test\nspec:\n ident: user1\n sessionIP: 88.88.88.88\n accessNetworkTunnel:\n TeID: 1\n tunnelIP: 11.11.11.11\n coreNetworkTunnel:\n TeID: 1\n tunnelIP: 22.22.22.22\n
"},{"location":"cmd/#delete-session-information","title":"Delete Session information","text":""},{"location":"cmd/#loxicmd-delete-session-user1","title":"loxicmd delete session user1\n
","text":""},{"location":"cmd/#sessionulcl","title":"SessionUlCl","text":""},{"location":"cmd/#get-sessionulcl-information","title":"Get SessionUlCl information","text":"loxicmd get sessionulcl\n
"},{"location":"cmd/#create-sessionulcl-information","title":"Create SessionUlCl information","text":"#loxicmd create sessionulcl <userID> --ulclArgs=<QFI>:<ulclIP>,...\nloxicmd create sessionulcl user1 --ulclArgs=16:192.33.125.1\n
userID(string): User Identifier ulclArgs(string): Port pairs can be specified as 'QFI:UlClIP' "},{"location":"cmd/#sessionulcl-yaml-example","title":"SessionUlCl yaml example","text":"apiVersion: netlox/v1\nkind: SessionULCL\nmetadata:\n name: test\nspec:\n ulclIdent: user1\n ulclArgument:\n qfi: 11\n ulclIP: 8.8.8.8\n
"},{"location":"cmd/#delete-sessionulcl-information","title":"Delete SessionUlCl information","text":"loxicmd delete sessionulcl --ulclArgs=192.33.125.1\n
ulclArgs(string): UlCl IP address can be specified as 'UlClIP'. It don't need QFI."},{"location":"cmd/#ipaddress","title":"IPaddress","text":""},{"location":"cmd/#get-ipaddress-information","title":"Get IPaddress information","text":"loxicmd get ip\n
"},{"location":"cmd/#create-ipaddress-information","title":"Create IPaddress information","text":"#loxicmd create ip <DeviceIPNet> <device>\nloxicmd create ip 192.168.0.1/24 eno7\n
DeviceIPNet(string): Actual IP address with mask device(string): name of the related device "},{"location":"cmd/#ipaddress-yaml-example","title":"IPaddress yaml example","text":"apiVersion: netlox/v1\nkind: IPaddress\nmetadata:\n name: test\nspec:\n dev: eno8\n ipAddress: 192.168.23.1/32\n
"},{"location":"cmd/#delete-ipaddress-information","title":"Delete IPaddress information","text":""},{"location":"cmd/#loxicmd-delete-ip-deviceipnet-device-loxicmd-delete-ip-1921680124-eno7","title":"#loxicmd delete ip <DeviceIPNet> <device>\nloxicmd delete ip 192.168.0.1/24 eno7\n
","text":""},{"location":"cmd/#fdb","title":"FDB","text":""},{"location":"cmd/#get-fdb-information","title":"Get FDB information","text":"loxicmd get fdb\n
"},{"location":"cmd/#create-fdb-information","title":"Create FDB information","text":"#loxicmd create fdb <MacAddress> <DeviceName>\nloxicmd create fdb aa:aa:aa:aa:bb:bb eno7\n
MacAddress(string): mac address DeviceName(string): name of the related device "},{"location":"cmd/#fdb-yaml-example","title":"FDB yaml example","text":"apiVersion: netlox/v1\nkind: FDB\nmetadata:\n name: test\nspec:\n dev: eno8\n macAddress: aa:aa:aa:aa:aa:aa\n
"},{"location":"cmd/#delete-fdb-information","title":"Delete FDB information","text":"#loxicmd delete fdb <MacAddress> <DeviceName>\nloxicmd delete fdb aa:aa:aa:aa:bb:bb eno7\n
"},{"location":"cmd/#route","title":"Route","text":""},{"location":"cmd/#get-route-information","title":"Get Route information","text":"loxicmd get route\n
"},{"location":"cmd/#create-route-information","title":"Create Route information","text":"#loxicmd create route <DestinationIPNet> <gateway>\nloxicmd create route 192.168.212.0/24 172.17.0.254\n
DestinationIPNet(string): Actual IP address route with mask gateway(string): gateway information if any "},{"location":"cmd/#route-yaml-example","title":"Route yaml example","text":"apiVersion: netlox/v1\nkind: Route\nmetadata:\n name: test\nspec:\n destinationIPNet: 192.168.30.0/24\n gateway: 172.17.0.1\n
"},{"location":"cmd/#delete-route-information","title":"Delete Route information","text":""},{"location":"cmd/#loxicmd-delete-route-destinationipnet-loxicmd-delete-route-192168212024","title":"#loxicmd delete route <DestinationIPNet>\nloxicmd delete route 192.168.212.0/24 \n
","text":""},{"location":"cmd/#neighbor","title":"Neighbor","text":""},{"location":"cmd/#get-neighbor-information","title":"Get Neighbor information","text":"loxicmd get neighbor\n
"},{"location":"cmd/#create-neighbor-information","title":"Create Neighbor information","text":"#loxicmd create neighbor <DeviceIP> <DeviceName> [--macAddress=aa:aa:aa:aa:aa:aa]\nloxicmd create neighbor 192.168.0.1 eno7 --macAddress=aa:aa:aa:aa:aa:aa\n
DeviceIP(string): The IP address DeviceName(string): name of the related device macAddress(string): resolved hardware address if any "},{"location":"cmd/#neighbor-yaml-example","title":"Neighbor yaml example","text":"apiVersion: netlox/v1\nkind: Neighbor\nmetadata:\n name: test\nspec:\n dev: eno8\n macAddress: aa:aa:aa:aa:aa:aa\n ipAddress: 192.168.23.21\n
"},{"location":"cmd/#delete-neighbor-information","title":"Delete Neighbor information","text":"#loxicmd delete neighbor <DeviceIP> <device>\nloxicmd delete neighbor 192.168.0.1 eno7\n
"},{"location":"cmd/#vlan","title":"Vlan","text":""},{"location":"cmd/#get-vlan-and-vlan-member-information","title":"Get Vlan and Vlan Member information","text":"loxicmd get vlan\n
loxicmd get vlanmember\n
"},{"location":"cmd/#create-vlan-and-vlan-member-information","title":"Create Vlan and Vlan Member information","text":"#loxicmd create vlan <Vid>\nloxicmd create vlan 100\n
Vid(int): vlan identifier #loxicmd create vlanmember <Vid> <DeviceName> --tagged=<Tagged>\nloxicmd create vlanmember 100 eno7 --tagged=true\nloxicmd create vlanmember 100 eno7\n
Vid(int): vlan identifier DeviceName(string): name of the related device tagged(boolean): tagged or not (default is false) "},{"location":"cmd/#vlan-yaml-example","title":"Vlan yaml example","text":"apiVersion: netlox/v1\nkind: Vlan\nmetadata:\n name: test\nspec:\n vid: 100\n
"},{"location":"cmd/#vlan-member-yaml-example","title":"Vlan Member yaml example","text":"apiVersion: netlox/v1\nkind: VlanMember\nmetadata:\n name: test\n vid: 100\nspec:\n dev: eno8\n Tagged: true\n
"},{"location":"cmd/#delete-vlan-and-vlan-member-information","title":"Delete Vlan and Vlan Member information","text":"#loxicmd delete vlan <Vid>\nloxicmd delete vlan 100\n
#loxicmd delete vlanmember <Vid> <DeviceName> --tagged=<Tagged>\nloxicmd delete vlanmember 100 eno7 --tagged=true\nloxicmd delete vlanmember 100 eno7\n
"},{"location":"cmd/#vxlan","title":"Vxlan","text":""},{"location":"cmd/#get-vxlan-and-vxlan-peer-information","title":"Get Vxlan and Vxlan Peer information","text":"loxicmd get vxlan\n
loxicmd get vxlanpeer\n
"},{"location":"cmd/#create-vxlan-and-vxlan-peer-information","title":"Create Vxlan and Vxlan Peer information","text":"#loxicmd create vxlan <VxlanID> <EndpointDeviceName>\nloxicmd create vxlan 100 eno7\n
VxlanID(int): Vxlan Identifier EndpointDeviceName(string): VTEP Device name(It must have own IP address for peering) #loxicmd create vxlanpeer <VxlanID> <PeerIP>\nloxicmd create vxlan-peer 100 30.1.3.1\n
VxlanID(int): Vxlan Identifier PeerIP(string): Vxlan peer device IP address "},{"location":"cmd/#vxlan-yaml-example","title":"Vxlan yaml example","text":"apiVersion: netlox/v1\nkind: Vxlan\nmetadata:\n name: test\nspec:\n epIntf: eno8\n vxlanID: 100\n
"},{"location":"cmd/#vxlan-peer-yaml-example","title":"Vxlan Peer yaml example","text":"apiVersion: netlox/v1\nkind: VxlanPeer\nmetadata:\n name: test\n vxlanID: 100\nspec:\n peerIP: 21.21.21.1\n
"},{"location":"cmd/#delete-vxlan-and-vxlan-peer-information","title":"Delete Vxlan and Vxlan Peer information","text":"#loxicmd delete vxlan <VxlanID>\nloxicmd delete vxlan 100\n
#loxicmd delete vxlanpeer <VxlanID> <PeerIP>\nloxicmd delete vxlan-peer 100 30.1.3.1\n
"},{"location":"cmd/#firewall","title":"Firewall","text":""},{"location":"cmd/#get-firewall-information","title":"Get Firewall information","text":"loxicmd get firewall\n
"},{"location":"cmd/#create-firewall-information","title":"Create Firewall information","text":"#loxicmd create firewall --firewallRule=<ruleKey>:<ruleValue>, [--allow] [--drop] [--trap] [--redirect=<PortName>] [--setmark=<FwMark>\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --allow\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --allow --setmark=10\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --drop\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --trap\nloxicmd create firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\" --redirect=ensp0\n
firewallRule sourceIP(string) - Source IP in CIDR notation destinationIP(string) - Destination IP in CIDR notation minSourcePort(int) - Minimum source port range maxSourcePort(int) - Maximum source port range minDestinationPort(int) - Minimum destination port range maxDestinationPort(int) - Maximum destination port range protocol(int) - the protocol portName(string) - the incoming port preference(int) - User preference for ordering "},{"location":"cmd/#firewall-yaml-example","title":"Firewall yaml example","text":"apiVersion: netlox/v1\nkind: Firewall\nmetadata:\n name: test\nspec:\n ruleArguments:\n sourceIP: 192.169.1.2/24\n destinationIP: 192.169.2.1/24\n preference: 200\n opts:\n allow: true\n
"},{"location":"cmd/#delete-firewall-information","title":"Delete Firewall information","text":""},{"location":"cmd/#loxicmd-delete-firewall-firewallrulerulekeyrulevalue-loxicmd-delete-firewall-firewallrulesourceip123232destinationip231232preference200","title":"#loxicmd delete firewall --firewallRule=<ruleKey>:<ruleValue>\nloxicmd delete firewall --firewallRule=\"sourceIP:1.2.3.2/32,destinationIP:2.3.1.2/32,preference:200\n
","text":""},{"location":"cmd/#mirror","title":"Mirror","text":""},{"location":"cmd/#get-mirror-information","title":"Get Mirror information","text":"loxicmd get mirror\n
"},{"location":"cmd/#create-mirror-information","title":"Create Mirror information","text":"#loxicmd create mirror <mirrorIdent> --mirrorInfo=<InfoOption>:<InfoValue>,... --targetObject=attachement:<port1,rule2>,mirrObjName:<ObjectName>\nloxicmd create mirror mirr-1 --mirrorInfo=\"type:0,port:ensp0\" --targetObject=\"attachement:1,mirrObjName:ensp1\n
mirrorIdent(string): Mirror identifier type(int) : Mirroring type, e.g. 0 == SPAN, 1 == RSPAN, 2 == ERSPAN port(string) : The port where mirrored traffic needs to be sent vlan(int) : For RSPAN we may need to send tagged mirror traffic remoteIP(string) : For ERSPAN we may need to send tunnelled mirror traffic sourceIP(string): For ERSPAN we may need to send tunnelled mirror traffic tunnelID(int): For ERSPAN we may need to send tunnelled mirror traffic "},{"location":"cmd/#mirror-yaml-example","title":"Mirror yaml example","text":"apiVersion: netlox/v1\nkind: Mirror\nmetadata:\n name: test\nspec:\n mirrorIdent: mirr-1\n mirrorInfo:\n type: 0\n port: eno1\n targetObject:\n attachment: 1\n mirrObjName: eno2\n
"},{"location":"cmd/#delete-mirror-information","title":"Delete Mirror information","text":"#loxicmd delete mirror <mirrorIdent>\nloxicmd delete mirror mirr-1\n
"},{"location":"cmd/#policy","title":"Policy","text":""},{"location":"cmd/#get-policy-information","title":"Get Policy information","text":"loxicmd get policy\n
"},{"location":"cmd/#create-policy-information","title":"Create Policy information","text":"#loxicmd create policy IDENT --rate=<Peak>:<Commited> --target=<ObjectName>:<Attachment> [--block-size=<Excess>:<Committed>] [--color] [--pol-type=<policy type>]\nloxicmd create policy pol-0 --rate=100:100 --target=ensp0:1\nloxicmd create policy pol-1 --rate=100:100 --target=ensp0:1 --block-size=12000:6000\nloxicmd create policy pol-1 --rate=100:100 --target=ensp0:1 --color\nloxicmd create policy pol-1 --rate=100:100 --target=ensp0:1 --color --pol-type 0\n
rate(string): Rate pairs can be specified as 'Peak:Committed'. rate unit : Mbps block-size(string): Block Size pairs can be specified as 'Excess:Committed'. block-size unit : bps target(string): Target Interface pairs can be specified as 'ObjectName:Attachment' color(boolean): Policy color enable or not pol-type(int): Policy traffic control type. 0 : TrTCM, 1 : SrTCM "},{"location":"cmd/#policy-yaml-example","title":"Policy yaml example","text":"apiVersion: netlox/v1\nkind: Policy\nmetadata:\n name: test\nspec:\n policyIdent: pol-eno8\n policyInfo:\n type: 0\n colorAware: false\n committedInfoRate: 100\n peakInfoRate: 100\n targetObject:\n attachment: 1\n polObjName: eno8\n
"},{"location":"cmd/#delete-policy-information","title":"Delete Policy information","text":""},{"location":"cmd/#loxicmd-delete-policy-polident-loxicmd-delete-policy-pol-1","title":"#loxicmd delete policy <Polident>\nloxicmd delete policy pol-1\n
","text":""},{"location":"cmd/#session-recorder","title":"Session Recorder","text":""},{"location":"cmd/#set-n-tuple-policy-for-recording","title":"Set n-tuple policy for recording","text":"loxicmd create firewall --firewallRule=\"destinationIP:31.31.31.0/24,preference:200\" --allow --record\n
loxilb will record any connection track entry which matches this policy (even for reverse direction) as a way to provide extended visibility for debugging "},{"location":"cmd/#check-or-record-with-tcpdump","title":"Check or record with tcpdump","text":"tcpdump -i llb0 -n\n
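Beyond the basic capture above, a hedged sketch of writing the recorded traffic to a pcap file for offline analysis (the output path /tmp/llb0.pcap is only illustrative):
tcpdump -i llb0 -n -w /tmp/llb0.pcap\n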
Any valid tcpdump option can be used including saving to a pcap file"},{"location":"cmd/#get-live-connection-track-information","title":"Get live connection-track information","text":"loxicmd get conntrack\n
"},{"location":"cmd/#get-port-dump-information","title":"Get port-dump information","text":"loxicmd get port\n
"},{"location":"cmd/#save-all-loxilbs-operational-information-in-dbstore","title":"Save all loxilb's operational information in DBStore","text":"loxicmd save -a\n
** This will ensure that whenever loxilb restarts, it will start with the last saved state from DBStore"},{"location":"cmd/#configure-loxicmd-with-yamlbeta","title":"Configure loxicmd with yaml(Beta)","text":"loxicmd supports yaml-based configuration. The format is the same as Kubernetes. This beta version supports only one configuration per file. That means \"Do not use ---
in a yaml file.\" Multi-document files will be supported in a future release.
"},{"location":"cmd/#command","title":"Command","text":"#loxicmd apply -f <file.yaml>\n#loxicmd delete -f <file.yaml>\nloxicmd apply -f lb.yaml\nloxicmd delete -f lb.yaml\n
"},{"location":"cmd/#file-examplelbyaml","title":"File example(lb.yaml)","text":"apiVersion: netlox/v1\nkind: Loadbalancer\nmetadata:\n name: load\nspec:\n serviceArguments:\n externalIP: 123.123.123.1\n port: 80\n protocol: tcp\n sel: 0\n endpoints:\n - endpointIP: 4.3.2.1\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.2\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.3\n weight: 1\n targetPort: 8080\n
It reuses the API's json body as the \"Spec\". If the API URL has no params, it does not need to use \"metadata\". For example, the body of a load balancer rule is shown below. {\n \"serviceArguments\": {\n \"externalIP\": \"123.123.123.1\",\n \"port\": 80,\n \"protocol\": \"tcp\",\n \"sel\": 0\n },\n \"endpoints\": [\n {\n \"endpointIP\": \"4.3.2.1\",\n \"weight\": 1,\n \"targetPort\": 8080\n },\n {\n \"endpointIP\": \"4.3.2.2\",\n \"weight\": 1,\n \"targetPort\": 8080\n },\n {\n \"endpointIP\": \"4.3.2.3\",\n \"weight\": 1,\n \"targetPort\": 8080\n }\n ]\n}\n
This json format can be converted to yaml format as shown below. serviceArguments:\n externalIP: 123.123.123.1\n port: 80\n protocol: tcp\n sel: 0\n endpoints:\n - endpointIP: 4.3.2.1\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.2\n weight: 1\n targetPort: 8080\n - endpointIP: 4.3.2.3\n weight: 1\n targetPort: 8080\n
Finally, this goes into the Spec section of the entire configuration file, as in the file example (lb.yaml). If you want to add a Vlan bridge, an IPaddress or something else, just change the Kind value from Loadbalancer to VlanBridge, IPaddress etc. as in the example below.
apiVersion: netlox/v1\nkind: IPaddress\nmetadata:\n name: test\nspec:\n dev: eno8\n ipAddress: 192.168.23.1/32\n
If the URL has params, such as when adding a vlan-member, it must have metadata.
apiVersion: netlox/v1\nkind: VlanMember\nmetadata:\n name: test\n vid: 100\nspec:\n dev: eno8\n Tagged: true\n
Examples of all the settings are given below, so please refer to them."},{"location":"cmd/#more-information","title":"More information","text":"There are tons of other commands; use the help option!
loxicmd help\n
"},{"location":"code/","title":"loxilb is organized as below:","text":"\u251c\u2500\u2500 api\n\u2502\u00a0 \u251c\u2500\u2500 certification\n\u2502\u00a0 \u251c\u2500\u2500 cmd\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 loxilb-rest-api-server\n\u2502\u00a0 \u251c\u2500\u2500 models\n\u2502\u00a0 \u251c\u2500\u2500 restapi\n\u2502\u00a0 \u251c\u2500\u2500 handler\n\u2502\u00a0 \u251c\u2500\u2500 operations\n\u251c\u2500\u2500 common\n\u251c\u2500\u2500 ebpf\n\u2502\u00a0 \u251c\u2500\u2500 common\n\u2502\u00a0 \u251c\u2500\u2500 headers\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 linux\n\u2502\u00a0 \u251c\u2500\u2500 kernel\n\u2502\u00a0 \u251c\u2500\u2500 libbpf\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 include\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 asm\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 linux\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 uapi\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 linux\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 scripts\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 src\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 build\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 usr\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 include\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 bpf\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 lib64\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 pkgconfig\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 sharedobjs\n\u2502\u00a0 \u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 staticobjs\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 travis-ci\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 managers\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 vmtest\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 configs\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 blacklist\n\u2502\u00a0 \u2502\u00a0 \u251c\u2500\u2500 whitelist\n\u2502\u00a0 \u251c\u2500\u2500 utils\n\u251c\u2500\u2500 loxinet\n\u251c\u2500\u2500 options\n\u251c\u2500\u2500 loxilib\n
"},{"location":"code/#api","title":"api","text":"This directory contains source code to host api server to handle CCM configuration requests.
"},{"location":"code/#common","title":"common","text":"Common api to configure which are exposed by loxinet are defined in this directory.
"},{"location":"code/#loxinet","title":"loxinet","text":"This module implements the glue layer or the middle layer between eBPF datapath module and api modules. It defines functions for configuring networking and load balancing rules in the eBPF datapath.
"},{"location":"code/#ebpf","title":"ebpf","text":"This directory contains source code for loxilb eBPF datapath.
"},{"location":"code/#options","title":"options","text":"This directory contains files for managing the command line options.
"},{"location":"code/#loxilib","title":"loxilib","text":"This package contains common routines for logging, statistics and other utilities.
"},{"location":"contribute/","title":"Contributing","text":"When contributing to any of loxilb's repositories, please first discuss the change you wish to make via issue, email, or any other method with the owners of this repository before making a change.
Please note we have a code of conduct, please follow it in all your interactions with the project.
"},{"location":"contribute/#pull-request-process","title":"Pull Request Process","text":" - Ensure any install or build dependencies are removed before the end of the layer when doing a build.
- Update the README.md with details of changes to the interface, this includes new environment variables, exposed ports, useful file locations and container parameters.
- Increase the version numbers in any examples files and the README.md to the new version that this Pull Request would represent. The versioning scheme we use is SemVer.
- You may merge the Pull Request in once you have the sign-off of two other developers, or if you do not have permission to do that, you may request the second reviewer to merge it for you.
- For Pull Requests to be successfully merged to main branch :
- It has to be code-reviewed by the maintainer(s)
-
Integrated Travis-CI runs should pass without errors
-
Detailed instructions to help new developers setup the development/test environment can be found here
- Alternatively, they can email the developers at [loxilb-devel@netlox.io], check out existing issues on GitHub, or visit the loxilb forum or the loxilb slack channel
"},{"location":"contribute/#sign-your-commits","title":"Sign Your Commits","text":"Instructions
"},{"location":"contribute/#dco","title":"DCO","text":"Licensing is important to open source projects. It provides some assurances that the software will continue to be available based under the terms that the author(s) desired. We require that contributors sign off on commits submitted to our project's repositories. The Developer Certificate of Origin (DCO) is a way to certify that you wrote and have the right to contribute the code you are submitting to the project.
You sign-off by adding the following to your commit messages. Your sign-off must match the git user and email associated with the commit.
This is my commit message\n\nSigned-off-by: Your Name <your.name@example.com>\n
Git has a -s
command line option to do this automatically:
git commit -s -m 'This is my commit message'\n
If you forgot to do this and have not yet pushed your changes to the remote repository, you can amend your commit with the sign-off by running
git commit --amend -s\n
"},{"location":"contribute/#code-of-conduct","title":"Code of Conduct","text":""},{"location":"contribute/#our-pledge","title":"Our Pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
"},{"location":"contribute/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to creating a positive environment include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
"},{"location":"contribute/#our-responsibilities","title":"Our Responsibilities","text":"Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
"},{"location":"contribute/#scope","title":"Scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
"},{"location":"contribute/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [loxilb-devel@netlox.io]. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
"},{"location":"contribute/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4
"},{"location":"debugging/","title":"loxilb - How to debug and troubleshoot","text":""},{"location":"debugging/#loxilb-docker-or-pod-not-coming-in-running-state","title":"* loxilb docker or pod not coming in Running state ?","text":" -
Solution:
-
Check the host machine kernel version. loxilb requires kernel version 5.8 or above.
-
Make sure you are running the correct image as per your environment.
"},{"location":"debugging/#externalip-pending-in-kubectl-get-svc","title":"* externalIP pending in \"kubectl get svc\" ?","text":"If this happens:
1. When running loxilb externally, there could be a connectivity issue.
-
Solution:
- Check if the loxiURL argument in kube-loxilb.yaml was set correctly.
- Check for kube-loxilb(master node) connectivity with loxilb node.
2. When running loxilb in-cluster mode
- Solution: Make sure loxilb pods were spawned.
3. Make sure the annotation \"node.kubernetes.io/exclude-from-external-load-balancers\" is NOT present in the node's configuration.
- Solution: If present, then the node will not be considered as an endpoint by loxilb. You can remove it by editing \"kubectl edit \\<node-name>\"
4. Make sure these fields are present in your service.yaml
spec:\n loadBalancerClass: loxilb.io/loxilb\n type: LoadBalancer\n
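Referring to step 3 above, if the exclusion key is set as a node label, a hedged sketch of removing it with kubectl (the node name is a placeholder; if it was set as an annotation, kubectl annotate can be used the same way):
kubectl label node <node-name> node.kubernetes.io/exclude-from-external-load-balancers-\n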
"},{"location":"debugging/#sctp-packets-dropping","title":"* SCTP packets dropping ?","text":"Usually, This happens due to SCTP checksum validation by host kernel and the possible scenarios are:
1. When the workload and loxilb are scheduled on the same node.
2. Different CNIs create different types of interfaces, i.e. CNIs create bridges/tunnels/veth pairs and apply different network policies. These interfaces have different characteristics and implications for loxilb's checksum calculation logic.
-
Solution:
There are two ways to resolve this issue:
- Disable checksum calculation.
echo 1 > /sys/module/sctp/parameters/no_checksums\n echo 0 > /proc/sys/net/netfilter/nf_conntrack_checksum\n
- Or, let loxilb take care of the checksum calculation completely. For that, we need to install a utility (a kernel module) on all the nodes where loxilb is running. It will make sure the correct checksum is applied at the end.
curl -sfL https://github.com/loxilb-io/loxilb-ebpf/raw/main/kprobe/install.sh | sh -\n
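For the first option, a quick sketch to verify that the knobs were actually set (these are the same paths written above; the expected values are 1 and 0 respectively):
cat /sys/module/sctp/parameters/no_checksums\n cat /proc/sys/net/netfilter/nf_conntrack_checksum\n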
"},{"location":"debugging/#abort-in-sctp","title":"* ABORT in SCTP ?","text":"SCTP ABORT can be seen in many scenarios:
1. When the Service IP is the same as the loxilb IP and the SCTP packets do not match the rules.
-
Solution:
- Check if the rule is installed properly
loxicmd get lb\n
- Make sure the client is connecting to the same IP and port as per the configured service LB rule.
2. In one-arm/fullnat mode, loxilb sends SCTP ABORT after receiving SCTP INIT ACK packet.
- Solution: Check the underlying hypervisor interface driver. Some drivers do not provide enough metadata for eBPF processing, which makes the packet follow the fallback path to the kernel; the kernel, being unaware of the SCTP connection, sends SCTP ABORT. Emulated interfaces in bridge mode are preferred for smooth networking.
3. ABORT after a few seconds (Heartbeat re-transmissions)
When initiating the SCTP connection, if the application is not bound to a particular IP then the SCTP stack advertises all the IPs in the SCTP INIT message. After a successful connection, both endpoints start health checks for each network path. As loxilb is in between and unaware of all the endpoint IPs, it drops those packets, which leads to the endpoint sending SCTP ABORT.
- Solution: In the SCTP uni-homing case, it is absolutely necessary to make sure the applications are bound to only one IP to avoid this issue.
4. ABORT after a few seconds (SCTP Multihoming)
- Solution: Currently, SCTP Multihoming service works only with fullnat mode and externalTrafficPolicy set to \"Local\"
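For scenario 2 above, a hedged way to check which driver backs the interface in question (assuming ethtool is installed; eth0 is a placeholder interface name):
ethtool -i eth0\n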
"},{"location":"debugging/#check-loxilb-logs","title":"* Check loxilb logs","text":"loxilb logs its various important events and logs in the file /var/log/loxilb.log. Users can check it by using tail -f or any other command of choice.
root@752531364e2c:/# tail -f /var/log/loxilb.log \nDBG: 2022/07/10 12:49:27 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:49:37 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:49:47 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:49:57 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:07 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:17 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:27 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:37 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:47 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \nDBG: 2022/07/10 12:50:57 1:dst-10.10.10.1/32,proto-6,dport-2020,,do-dnat:eip-31.31.31.1,ep-5001,w-1,alive|eip-32.32.32.1,ep-5001,w-2,alive|eip-100.100.100.1,ep-5001,w-2,alive| pc 0 bc 0 \n
"},{"location":"debugging/#check-loxicmd-to-debug-loxilbs-internal-state","title":"* Check loxicmd to debug loxilb's internal state","text":"```
"},{"location":"debugging/#spawn-a-bash-shell-of-loxilb-docker","title":"Spawn a bash shell of loxilb docker","text":"docker exec -it loxilb bash
root@752531364e2c:/# loxicmd get lb\n| EXTERNALIP | PORT | PROTOCOL | SELECT | # OF ENDPOINTS |\n|------------|------|----------|--------|----------------|\n| 10.10.10.1 | 2020 | tcp | 0 | 3 |\n
root@752531364e2c:/# loxicmd get lb -o wide\n| EXTERNALIP | PORT | PROTOCOL | SELECT | ENDPOINTIP | TARGETPORT | WEIGHT |\n|------------|------|----------|--------|---------------|------------|--------|\n| 10.10.10.1 | 2020 | tcp | 0 | 31.31.31.1 | 5001 | 1 |\n| | | | | 32.32.32.1 | 5001 | 2 |\n| | | | | 100.100.100.1 | 5001 | 2 |\n
root@0c4f9175c983:/# loxicmd get conntrack\n| DESTINATIONIP | SOURCEIP | DESTINATIONPORT | SOURCEPORT | PROTOCOL | STATE | ACT |\n|---------------|------------|-----------------|------------|----------|-------------|-----|\n| 127.0.0.1 | 127.0.0.1 | 11111 | 47180 | tcp | closed-wait | |\n| 127.0.0.1 | 127.0.0.1 | 11111 | 47182 | tcp | est | |\n| 32.32.32.1 | 31.31.31.1 | 35068 | 35068 | icmp | bidir | |\n
root@65ad9b2f1b7f:/# loxicmd get port | INDEX | PORTNAME | MAC | LINK/STATE | L3INFO | L2INFO | |-------|----------|-------------------|-------------|---------------|---------------| | 1 | lo | 00:00:00:00:00:00 | true/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3801 | | | | | | IPv6 : [] | | | 2 | vlan3801 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3801 | | | | | | IPv6 : [] | | | 3 | llb0 | 42:6e:9b:7f:ff:36 | true/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3803 | | | | | | IPv6 : [] | | | 4 | vlan3803 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3803 | | | | | | IPv6 : [] | | | 5 | eth0 | 02:42:ac:1e:01:c1 | true/true | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3805 | | | | | | IPv6 : [] | | | 6 | vlan3805 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3805 | | | | | | IPv6 : [] | | | 7 | enp1 | fe:84:23:ac:41:31 | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3807 | | | | | | IPv6 : [] | | | 8 | vlan3807 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3807 | | | | | | IPv6 : [] | | | 9 | enp2 | d6:3c:7f:9e:58:5c | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3809 | | | | | | IPv6 : [] | | | 10 | vlan3809 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3809 | | | | | | IPv6 : [] | | | 11 | enp2v15 | 8a:9e:99:aa:f9:c3 | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3811 | | | | | | IPv6 : [] | | | 12 | vlan3811 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3811 | | | | | | IPv6 : [] | | | 13 | enp3 | f2:c7:4b:ac:fd:3e | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3813 | | | | | | IPv6 : [] | | | 14 | vlan3813 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3813 | | | | | | IPv6 : [] | | | 15 | enp4 | 12:d2:c3:79:f3:6a | false/false | Routed: false | IsPVID: true | | | | | | IPv4 : [] | VID : 3815 | | | | | | IPv6 : [] | | | 16 | vlan3815 | aa:bb:cc:dd:ee:ff | true/true | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 3815 | | | | | | IPv6 : [] | | | 17 | vlan100 | 56:2e:76:b2:71:48 | false/false | Routed: false | IsPVID: false | | | | | | IPv4 : [] | VID : 100 | | | | | | IPv6 : [] | |
"},{"location":"ebpf/","title":"What is eBPF ??","text":"eBPF has been making quite some news lately. An elegant way to extend the linux kernel (or windows) has far reaching implications. Although initially, eBPF was used to enhance system observability beyond existing tools, we will explore in this post how eBPF can be used for enhancing Linux networking performance. There are a lot of additional resources about eBPF in the eBPF project page.
"},{"location":"ebpf/#a-quick-recap","title":"A quick recap","text":"The hooks that are of particular interest for this discussion are NIC hook (invoked just after packet is received at NIC) and TC hook (invoked just before Linux starts processing packet with its TCP/IP stack). Programs loaded to the former hook are also known as XDP programs and to the latter are called eBPF TC. Although both use eBPF restricted C syntax, there are significant differences between these types. (We will cover it in a separate blog later). For now, we just need to remember that when dealing with container-to-container or container-to-outside communication eBPF-TC makes much more sense since memory allocation (for skb) will happen either way in such scenarios.
"},{"location":"ebpf/#the-performance-bottlenecks","title":"The performance bottlenecks","text":"Coming back to the focus of our discussion which is of course performance, let us step back and take a look at why Linux sucks at networking performance (or rather why it could perform much faster). Linux networking evolved from the days of dial up modem networking when speed was not of utmost importance. Down the lane, code kept accumulating. Although it is extremely feature rich and RFC compliant, it hardly resembles a powerful data-path networking engine. The following figure shows a call-trace of Linux kernel networking stack:
The point is it has become incredibly complex over the years. Once features like NAT, VXLAN, conntrack etc come into play, Linux networking stops scaling due to cache degradation, lock contention etc.
"},{"location":"ebpf/#one-problem-leads-to-the-another","title":"One problem leads to the another","text":"To avoid performance penalties, many user-space frameworks like DPDK have been widely used, which completely skip the linux kernel networking and directly process packets in the user-space. As simple as that may sound, there are some serious drawbacks in using such frameworks e.g need to dedicate cores (can\u2019t multitask), applications written on a specific user-space driver (PMD) might not run on another as it is, apps are also rendered incompatible across different DPDK releases frequently. Finally, there is a need to redo various parts of the TCP/IP stack and the provisioning involved. In short, it leads to a massive and completely unnecessary need of reinventing the wheel. We will have a detailed post later to discuss these factors. But for now, in short, if we are looking to get more out of a box than doing only networking, DPDK is not the right choice. In the age of distributed edge computing and immersive metaverse, the need to do more out of less is of utmost importance.
"},{"location":"ebpf/#ebpf-comes-to-the-rescue","title":"eBPF comes to the rescue","text":"Now, eBPF changes all of this. eBPF is hosted inside the kernel so the biggest advantage of eBPF is it can co-exist with Linux/OS without the need of using dedicated cores, skipping the Kernel stack or breaking tools used for ages by the community. Handling of new protocols and functionality can be done in the fly without waiting for kernel development to catch up.
"},{"location":"eks-external/","title":"Eks external","text":""},{"location":"eks-external/#create-an-eks-cluster-with-ingress-access-enabled-by-loxilb-external-mode","title":"Create an EKS cluster with ingress access enabled by loxilb (external-mode)","text":"This document details the steps to create an EKS cluster and allow external ingress access using loxilb running in external mode. loxilb will run as EC2 instances in EKS cluster's VPC while loxilb's operator, kube-loxilb, will run as a replica-set inside EKS cluster.
"},{"location":"eks-external/#create-eks-cluster-with-4-worker-nodes-from-a-bastion-node-inside-your-vpc","title":"Create EKS cluster with 4 worker nodes from a bastion node inside your VPC","text":" - It is assumed that aws-cli, kubectl and eksctl are installed in a bastion node
$ eksctl create cluster --version 1.24 --name loxilb-demo --vpc-nat-mode Single --region ap-northeast-2 --node-type t3.small --nodes 4 --with-oidc --managed\n
- Create kube config for kubectl access
$ aws eks update-kubeconfig --region ap-northeast-2 --name loxilb-demo\n
- Double confirm the cluster created
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 14m\nkube-system aws-node-6vhlr 2/2 Running 0 14m\nkube-system aws-node-9kzb2 2/2 Running 0 14m\nkube-system aws-node-vvkq5 2/2 Running 0 14m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 21m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 21m\nkube-system kube-proxy-5j9gf 1/1 Running 0 14m\nkube-system kube-proxy-5tm8w 1/1 Running 0 14m\nkube-system kube-proxy-894k9 1/1 Running 0 14m\nkube-system kube-proxy-xgfb8 1/1 Running 0 14m\n
"},{"location":"eks-external/#deploy-loxilb-as-ec2-instances-in-ekss-vpc","title":"Deploy loxilb as EC2 instances in EKS's VPC","text":" - Create a file
launch-loxilb.sh
with the following contents (in bastion node) sudo apt-get update && apt-get install -y snapd\nsudo snap install docker\nsleep 30\nsudo docker run -u root --cap-add SYS_ADMIN --net=host --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n
- Deploy loxilb ec2 instance(s) using the above init-script
$ aws ec2 run-instances --image-id ami-01ed8ade75d4eee2f --count 1 --instance-type t3.medium --key-name aws-netlox --security-group-ids sg-0e2638db05b256476 --subnet-id subnet-0109b973f5f674f99 --associate-public-ip-address --user-data file://launch-loxilb.sh\n
"},{"location":"eks-external/#note-subnet-id-should-be-any-subnet-with-public-access-enabled-from-the-eks-cluster-rest-of-the-args-can-be-changed-as-applicable","title":"Note : subnet-id should be any subnet with public access enabled from the EKS cluster. Rest of the args can be changed as applicable","text":" - Double confirm loxilb EC2 instances are running properly in amazon aws console or using aws cli.
- Disable source/dest check of the loxilb EC2 instances
aws ec2 modify-network-interface-attribute --network-interface-id eni-02e1cbfa022eb0901 --no-source-dest-check\n
"},{"location":"eks-external/#deploy-loxilbs-operator-kube-loxilb","title":"Deploy loxilb's operator (kube-loxilb)","text":" - Create a file kube-loxilb.yml with the following contents
---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - services/status\n verbs:\n - get\n - watch\n - list\n - patch\n - update\n - apiGroups:\n - gateway.networking.k8s.io\n resources:\n - gatewayclasses\n - gatewayclasses/status\n - gateways\n - gateways/status\n - tcproutes\n - udproutes\n verbs: [\"get\", \"watch\", \"list\", \"patch\", \"update\"]\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - watch\n - list\n - apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n - apiGroups:\n - authorization.k8s.io\n resources:\n - subjectaccessreviews\n verbs:\n - create\n - apiGroups:\n - bgppeer.loxilb.io\n resources:\n - bgppeerservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kube-loxilb\nsubjects:\n - kind: ServiceAccount\n name: kube-loxilb\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --loxiURL=http://192.168.31.175:11111\n - --externalCIDR=0.0.0.0/32\n - --setLBMode=5\n #- --setRoles:0.0.0.0\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\n add: [\"NET_ADMIN\", \"NET_RAW\"]\n
"},{"location":"eks-external/#note1-externalcidr-args-can-be-set-to-any-public-ip-address-via-which-any-of-the-worker-nodes-can-be-accessed-it-can-be-also-set-to-simply-000032-which-means-lb-will-be-performed-on-any-of-the-nodes-where-loxilb-runs-the-decision-of-which-loxilb-nodeinstance-will-be-chosen-as-ingress-in-this-case-can-be-done-by-route53dns","title":"Note1: --externalCIDR args can be set to any Public IP address via which any of the worker nodes can be accessed. It can be also set to simply 0.0.0.0/32 which means LB will be performed on any of the nodes where loxilb runs. The decision of which loxilb node/instance will be chosen as ingress in this case can be done by Route53/DNS.","text":""},{"location":"eks-external/#note2-loxiurl-args-should-be-set-to-privateip-addresses-of-the-loxilb-ec2-instances-accessible-from-the-eks-cluster-currently-kube-loxilb-cant-autodetect-the-ec2-instances-running-loxilb-in-external-mode","title":"Note2: --loxiURL args should be set to privateIP address(es) of the loxilb ec2 instances accessible from the EKS cluster. Currently, kube-loxilb can't autodetect the EC2 instances running loxilb in external mode.","text":" - Deploy kube-loxilb to EKS cluster
$ kubectl apply -f kube-loxilb.yml\nserviceaccount/kube-loxilb created\nclusterrole.rbac.authorization.k8s.io/kube-loxilb created\ndeployment.apps/kube-loxilb created\n
- Check the state of the EKS cluster
$ kubectl get pods -A \nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 14m\nkube-system aws-node-6vhlr 2/2 Running 0 14m\nkube-system aws-node-9kzb2 2/2 Running 0 14m\nkube-system aws-node-vvkq5 2/2 Running 0 14m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 21m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 21m\nkube-system kube-proxy-5j9gf 1/1 Running 0 14m\nkube-system kube-proxy-5tm8w 1/1 Running 0 14m\nkube-system kube-proxy-894k9 1/1 Running 0 14m\nkube-system kube-proxy-xgfb8 1/1 Running 0 14m\nkube-system kube-loxilb-6477d6897f-vz74f 1/1 Running 0 5m\n
"},{"location":"eks-external/#install-a-test-service","title":"Install a test service","text":" - Create a file nginx.yml with the following contents:
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
- Deploy test nginx service to EKS
$ kubectl apply -f nginx.yml\nservice/nginx-lb1 created\n
-
Check the state of the EKS cluster
$ kubectl get pods -A \nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault nginx-test 1/1 Running 0 50s\nkube-system aws-node-2fpm4 2/2 Running 0 14m\nkube-system aws-node-6vhlr 2/2 Running 0 14m\nkube-system aws-node-9kzb2 2/2 Running 0 14m\nkube-system aws-node-vvkq5 2/2 Running 0 14m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 21m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 21m\nkube-system kube-proxy-5j9gf 1/1 Running 0 14m\nkube-system kube-proxy-5tm8w 1/1 Running 0 14m\nkube-system kube-proxy-894k9 1/1 Running 0 14m\nkube-system kube-proxy-xgfb8 1/1 Running 0 14m\nkube-system kube-loxilb-6477d6897f-vz74f 1/1 Running 0 5m\n
-
Check the external service for service ingress (via loxilb)
$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.100.0.1 <none> 443/TCP 10h\nnginx-lb1 LoadBalancer 10.100.244.105 llbanyextip 55005:30055/TCP 24s\n
"},{"location":"eks-external/#test-the-service","title":"Test the service","text":" - Try to access the service from outside (internet). We can use any public IP associated with any of the loxilb ec2 instances
$ curl http://3.37.191.xx:55005 \n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"eks-external/#note-we-would-need-to-make-sure-aws-security-groups-are-setup-properly-to-allow-access-for-ingress-traffic","title":"Note - We would need to make sure AWS security groups are setup properly to allow access for ingress traffic.","text":""},{"location":"eks-external/#restricting-loxilb-service-for-a-local-zone-node-group","title":"Restricting loxilb service for a local-zone node-group","text":"For limiting loxilb services to a specific node group of a local-zone, we can use kubenetes node-labels to limit the endpoints of that service to that node-group only. For example, if all the nodes in a local-zone node-groups have a label node.kubernetes.io/local-zone2=true
, then we can create a loxilb service with a following annotation :
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\n annotations:\n loxilb.io/nodelabel: \"node.kubernetes.io/local-zone2\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n
This will make sure that loxilb will pick only the endpoint nodes which belong to that node-group only."},{"location":"eks-incluster/","title":"Eks incluster","text":""},{"location":"eks-incluster/#create-an-eks-cluster-with-ingress-access-enabled-by-loxilb-incluster-mode","title":"Create an EKS cluster with ingress access enabled by loxilb (incluster-mode)","text":"This document details the steps to create an EKS cluster and allow external ingress access using loxilb running in incluster mode. loxilb will run as a daemon-set in all the worker nodes while loxilb's operator, kube-loxilb, will run as a replica-set.
Although loxilb has built-in support for associating (floating) AWS EIPs to private subnet addresses of EC2 instances, this is not considered in this particular scenario. But if it is needed, the functionality can be enabled by changing a few parameters in yaml config files.
"},{"location":"eks-incluster/#create-eks-cluster-with-4-worker-nodes-from-a-bastion-node-inside-your-vpc","title":"Create EKS cluster with 4 worker nodes from a bastion node inside your VPC","text":" - It is assumed that aws-cli, kubectl and eksctl are installed in a bastion node
$ eksctl create cluster --version 1.24 --name loxilb-demo --vpc-nat-mode Single --region ap-northeast-2 --node-type t3.small --nodes 4 --with-oidc --managed\n
- Create kube config for kubectl access
$ aws eks update-kubeconfig --region ap-northeast-2 --name loxilb-demo\n
- Double confirm the cluster created
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 14m\nkube-system aws-node-6vhlr 2/2 Running 0 14m\nkube-system aws-node-9kzb2 2/2 Running 0 14m\nkube-system aws-node-vvkq5 2/2 Running 0 14m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 21m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 21m\nkube-system kube-proxy-5j9gf 1/1 Running 0 14m\nkube-system kube-proxy-5tm8w 1/1 Running 0 14m\nkube-system kube-proxy-894k9 1/1 Running 0 14m\nkube-system kube-proxy-xgfb8 1/1 Running 0 14m\n
"},{"location":"eks-incluster/#deploy-loxilb-as-a-daemon-set","title":"Deploy loxilb as a daemon-set","text":" - Create a file loxilb.yml with the following contents
apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: loxilb-lb\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n app: loxilb-app\n template:\n metadata:\n name: loxilb-lb\n labels:\n app: loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.|eni.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: loxilb-lb-service\n namespace: kube-system\nspec:\n clusterIP: None\n selector:\n app: loxilb-app\n ports:\n - name: loxilb-app\n port: 11111\n targetPort: 11111\n protocol: TCP\n - name: loxilb-app-bgp\n port: 179\n targetPort: 179\n protocol: TCP\n
- Deploy loxilb
$ kubectl apply -f loxilb.yml\ndaemonset.apps/loxilb-lb created\nservice/loxilb-lb-service created\n
- Double confirm loxilb pods are running properly as a daemonset
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 19m\nkube-system aws-node-6vhlr 2/2 Running 0 19m\nkube-system aws-node-9kzb2 2/2 Running 0 19m\nkube-system aws-node-vvkq5 2/2 Running 0 19m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 26m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 26m\nkube-system kube-proxy-5j9gf 1/1 Running 0 19m\nkube-system kube-proxy-5tm8w 1/1 Running 0 19m\nkube-system kube-proxy-894k9 1/1 Running 0 19m\nkube-system kube-proxy-xgfb8 1/1 Running 0 19m\nkube-system loxilb-lb-7s45t 1/1 Running 0 18s\nkube-system loxilb-lb-fp6nv 1/1 Running 0 18s\nkube-system loxilb-lb-pbzql 1/1 Running 0 18s\nkube-system loxilb-lb-zzth8 1/1 Running 0 18s\n
"},{"location":"eks-incluster/#deploy-loxilbs-operator-kube-loxilb","title":"Deploy loxilb's operator (kube-loxilb)","text":" - Create a file kube-loxilb.yml with the following contents
---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - services/status\n verbs:\n - get\n - watch\n - list\n - patch\n - update\n - apiGroups:\n - gateway.networking.k8s.io\n resources:\n - gatewayclasses\n - gatewayclasses/status\n - gateways\n - gateways/status\n - tcproutes\n - udproutes\n verbs: [\"get\", \"watch\", \"list\", \"patch\", \"update\"]\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - watch\n - list\n - apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n - apiGroups:\n - authorization.k8s.io\n resources:\n - subjectaccessreviews\n verbs:\n - create\n - apiGroups:\n - bgppeer.loxilb.io\n resources:\n - bgppeerservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kube-loxilb\nsubjects:\n - kind: ServiceAccount\n name: kube-loxilb\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --externalCIDR=0.0.0.0/32\n - --setLBMode=5\n #- --setRoles:0.0.0.0\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\n add: [\"NET_ADMIN\", \"NET_RAW\"]\n
"},{"location":"eks-incluster/#note-externalcidr-args-can-be-set-to-any-public-ip-address-via-which-any-of-the-worker-nodes-can-be-accessed-it-can-be-also-set-to-simply-000032-which-means-lb-will-be-performed-on-any-of-the-nodes-where-loxilb-runs-the-decision-of-which-loxilb-nodeinstance-will-be-chosen-as-ingress-in-this-case-can-be-done-by-route53dns","title":"Note: --externalCIDR args can be set to any Public IP address via which any of the worker nodes can be accessed. It can be also set to simply 0.0.0.0/32 which means LB will be performed on any of the nodes where loxilb runs. The decision of which loxilb node/instance will be chosen as ingress in this case can be done by Route53/DNS.","text":" - Deploy kube-loxilb to EKS cluster
$ kubectl apply -f kube-loxilb.yml\nserviceaccount/kube-loxilb created\nclusterrole.rbac.authorization.k8s.io/kube-loxilb created\ndeployment.apps/kube-loxilb created\n
- Check the state of the EKS cluster
$ kubectl get pods -A \nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system aws-node-2fpm4 2/2 Running 0 35m\nkube-system aws-node-6vhlr 2/2 Running 0 35m\nkube-system aws-node-9kzb2 2/2 Running 0 35m\nkube-system aws-node-vvkq5 2/2 Running 0 35m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 42m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 42m\nkube-system kube-loxilb-c7cd4fccd-hjg8w 1/1 Running 0 116s\nkube-system kube-proxy-5j9gf 1/1 Running 0 35m\nkube-system kube-proxy-5tm8w 1/1 Running 0 35m\nkube-system kube-proxy-894k9 1/1 Running 0 35m\nkube-system kube-proxy-xgfb8 1/1 Running 0 35m\nkube-system loxilb-lb-7s45t 1/1 Running 0 16m\nkube-system loxilb-lb-fp6nv 1/1 Running 0 16m\nkube-system loxilb-lb-pbzql 1/1 Running 0 16m\nkube-system loxilb-lb-zzth8 1/1 Running 0 16m\n
"},{"location":"eks-incluster/#install-a-test-service","title":"Install a test service","text":" - Create a file nginx.yml with the following contents:
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
- Deploy test nginx service to EKS
$ kubectl apply -f nginx.yml\nservice/nginx-lb1 created\n
-
Check the state of the EKS cluster
$ kubectl get pods -A \nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault nginx-test 1/1 Running 0 50s\nkube-system aws-node-2fpm4 2/2 Running 0 39m\nkube-system aws-node-6vhlr 2/2 Running 0 39m\nkube-system aws-node-9kzb2 2/2 Running 0 39m\nkube-system aws-node-vvkq5 2/2 Running 0 39m\nkube-system coredns-5ff5b8d45c-gj9kj 1/1 Running 0 46m\nkube-system coredns-5ff5b8d45c-p64fd 1/1 Running 0 46m\nkube-system kube-loxilb-c7cd4fccd-hjg8w 1/1 Running 0 6m13s\nkube-system kube-proxy-5j9gf 1/1 Running 0 39m\nkube-system kube-proxy-5tm8w 1/1 Running 0 39m\nkube-system kube-proxy-894k9 1/1 Running 0 39m\nkube-system kube-proxy-xgfb8 1/1 Running 0 39m\nkube-system loxilb-lb-7s45t 1/1 Running 0 20m\nkube-system loxilb-lb-fp6nv 1/1 Running 0 20m\nkube-system loxilb-lb-pbzql 1/1 Running 0 20m\nkube-system loxilb-lb-zzth8 1/1 Running 0 20m\n
-
Check the external service for service ingress (via loxilb)
$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.100.0.1 <none> 443/TCP 6h19m\nnginx-lb1 LoadBalancer 10.100.63.175 llbanyextip 55002:32704/TCP 4hs\n
"},{"location":"eks-incluster/#test-the-service","title":"Test the service","text":" - Try to access the service from outside (internet). We can use any public IP associated with the cluster (worker) nodes
$ curl http://43.201.76.xx:55002 \n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"eks-incluster/#note-we-would-need-to-make-sure-aws-security-groups-are-setup-properly-to-allow-access-for-ingress-traffic","title":"Note - We would need to make sure AWS security groups are setup properly to allow access for ingress traffic.","text":""},{"location":"eks-incluster/#restricting-loxilb-service-for-a-local-zone-node-group","title":"Restricting loxilb service for a local-zone node-group","text":"For limiting loxilb services to a specific node group of a local-zone, we can use kubenetes node-labels to limit the endpoints of that service to that node-group only. For example, if all the nodes in a local-zone node-groups have a label node.kubernetes.io/local-zone2=true
, then we can create a loxilb service with the following annotation:
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\n annotations:\n loxilb.io/nodelabel: \"node.kubernetes.io/local-zone2\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n
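For reference, the node-group label assumed in this example can be applied to a node with a standard kubectl command (the node name below is a placeholder):
$ kubectl label node <node-name> node.kubernetes.io/local-zone2=true\n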
With this annotation in place, loxilb will pick only the endpoint nodes that belong to that node-group."},{"location":"ext-ep/","title":"How-To - access end-points outside K8s","text":"In Kubernetes, there are two key concepts - Service and Endpoint."},{"location":"ext-ep/#what-is-service","title":"What is Service?","text":"
A \"Service\" is a method that exposes an application running in one or more pods.
"},{"location":"ext-ep/#what-is-an-endpoint","title":"What is an Endpoint?","text":"An \"Endpoint\" defines a list of network endpoints(IP address and port), typically referenced by a Service to define which Pods the traffic can be sent to.
When we create a service in Kubernetes, we usually do not have to worry about Endpoints management as it is taken care of by Kubernetes itself. But sometimes not all the services run in a single cluster; some of them are hosted in other cluster(s), e.g. DB, storage, web services, transcoder etc.
When endpoints are outside of the Kubernetes cluster, Endpoint objects can still be used to define and manage those external endpoints. This scenario is common when Kubernetes services need to interact with external systems, APIs, or services located outside of the cluster. Here's a practical example:
Suppose you have a Kubernetes cluster hosting a microservices-based application, and one of the services needs to communicate with an external database hosted outside of the cluster. In this case, you can use an Endpoint object to define the external database endpoint within Kubernetes, and your cloud-native apps can then connect to the external service through that external endpoint.
"},{"location":"ext-ep/#service-with-external-endpoint","title":"Service with External Endpoint","text":"You can create an external service with loxilb as well. For this, You can simply create an Endpoint Object and then create a service using this endpoint object:
endpoint.yml
apiVersion: v1\nkind: Endpoints\nmetadata:\n name: ext-tcp-lb\nsubsets:\n - addresses:\n - ip: 192.168.82.2\n ports:\n - port: 80\n
Create endpoint object:
$ kubectl apply -f endpoint.yml\n
View endpoints:
$ kubectl get ep\nNAME ENDPOINTS AGE\nkubernetes 10.0.2.15:6443 16m\next-tcp-lb 192.168.82.2:80 16m\n
service.yml
apiVersion: v1\nkind: Service\nmetadata:\n name: ext-tcp-lb\nspec:\n loadBalancerClass: loxilb.io/loxilb\n type: LoadBalancer \n ports:\n - protocol: TCP\n port: 8000\n targetPort: 80\n
Create Service:
$ kubectl apply -f service.yml\n
View Service:
$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 16m\next-tcp-lb LoadBalancer 10.43.164.108 llb-20.20.20.1 8000:30355/TCP 15m\n
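As a quick sanity check (not part of the original flow), the external endpoint can now be reached through loxilb using the allocated external IP and service port shown above:
$ curl http://20.20.20.1:8000\n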
"},{"location":"faq/","title":"loxilb FAQs","text":" - Does loxilb depend on what kind of CNI is deployed in the cluster ?
Yes, loxilb configuration and operation might be related to which CNI (Calico, Cilium etc) is in use. loxilb just needs a way to find a route to its end-points. This also depends on how the network topology is laid out. For example, if a separated network for nodePort and external LB services is in effect or not. We will have a detailed guide on best practices for loxilb deployment soon. In the meantime, kindly reach out to us via github or loxilb forum
- Can loxilb be possibly run outside the released docker image ?
Yes, loxilb can be run outside the provided docker image. Docker image gives it good portability across various linux like OS's without any performance impact. However, if need is to run outside its own docker, kindly follow README of various loxilb-io repositories.
- Can loxilb also act as a CNI ?
loxilb supports many functionalities of a CNI but loxilb dev team is happy solving external LB and related connectivity problems for the time being. If there is a future requirement not met by currently available CNIs, we might chip in as well
- Is there a commercially supported version of loxilb ?
At this point of time, loxilb-team is working hard to provide a high-quality open-source product. If users need commercial support, kindly get in touch with us
- Can loxilb run in a standalone mode (without Kubernetes) ?
Very much so. loxilb can run in a standalone mode. Please follow various guides available in loxilb repo to run loxilb in a standalone mode.
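For a quick start, a minimal standalone run is simply the docker command used elsewhere in this documentation without any cluster/HA arguments (a sketch; adjust the image tag and flags to your environment):
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n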
- How does loxilb ensure conformance with Kubernetes?
loxilb uses kubetest/kubetest2 plus various other test utilities as part of its CI/CD workflows. We are also planning to get ourselves officially supported by distros like Red Hat OpenShift.
- Where is loxilb deployed so far?
loxilb is currently used in academia for R&D, and various organizations are in the process of using it for PoCs. We will update the list of deployments as and when they are officially known.
"},{"location":"gtp/","title":"Creating a simple test topology for testing GTP with loxilb","text":"To test loxilb in a completely virtual environment, it is possible to quickly create a virtual test topology. We will explain the steps required to create a very simple topology (more complex topologies can be built using this example) :
graph LR;\n UE1[UE1<br>32.32.32.1]-->B[llb1<br>10.10.10.59/24];\n UE2[UE2<br>31.31.31.1]-->B;\n B-- GTP Tunnel ---C[llb2<br>10.10.10.56/24]\n C-->D[EP1<br>31.31.31.1];\n C-->E[EP2<br>32.32.32.1];\n C-->F[EP3<br>17.17.17.1];\n
Prerequisites :
- The system should be x86 based (bare-metal or virtual)
- Docker should be preinstalled
"},{"location":"gtp/#next-step-is-to-run-the-following-script-to-create-and-configure-the-above-topology","title":"Next step is to run the following script to create and configure the above topology.","text":"Please refer scenario3 in loxilb/cicd script
Script will spawn dockers for UEs, loxilbs and endpoints.
In the script, UEs will try to access service IP(88.88.88.88). We are creating sessions and configuring load-balancer rule inside loxilb docker as follows :
dexec=\"docker exec -it \"\n##llb1 config\n\n#Creating session for ue1\n$dexec llb1 loxicmd create session user1 88.88.88.88 --accessNetworkTunnel 1:10.10.10.56 --coreNetworkTunnel=1:10.10.10.59\n\n#Creating ULCL filter with ue1 IP\n$dexec llb1 loxicmd create sessionulcl user1 --ulclArgs=11:32.32.32.1\n\n#Creating session for ue2\n$dexec llb1 loxicmd create session user2 88.88.88.88 --accessNetworkTunnel 2:10.10.10.56 --coreNetworkTunnel=2:10.10.10.59\n\n#Creating ULCL filter with ue2 IP\n$dexec llb1 loxicmd create sessionulcl user2 --ulclArgs=12:31.31.31.1\n\n##llb2 config\n#Creating session for ue1\n$dexec llb2 loxicmd create session user1 32.32.32.1 --accessNetworkTunnel 1:10.10.10.59 --coreNetworkTunnel=1:10.10.10.56\n\n#Creating ULCL filter with service IP for ue1\n$dexec llb2 loxicmd create sessionulcl user1 --ulclArgs=11:88.88.88.88\n\n#Creating session for ue1\n$dexec llb2 loxicmd create session user2 31.31.31.1 --accessNetworkTunnel 2:10.10.10.59 --coreNetworkTunnel=2:10.10.10.56\n\n#Creating ULCL filter with service IP for ue2\n$dexec llb2 loxicmd create sessionulcl user2 --ulclArgs=12:88.88.88.88\n\n\n##Create LB rule\n$dexec llb2 loxicmd create lb 88.88.88.88 --tcp=2020:8080 --endpoints=25.25.25.1:1,26.26.26.1:1,27.27.27.1:1\n
So, we now have two instances of loxilb running as dockers. The first instance, llb1, is simulated as a gNB as it is used to encap the incoming traffic from UE1 or UE2. Breakout, forward or load-balancer rules can be configured on the second instance, llb2. We can run any workloads we wish inside the host containers and start testing loxilb.
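Once the script has run, the load-balancer rule configured on llb2 can be verified from inside the container with loxicmd (a quick check; the output format may vary by version):
docker exec -it llb2 loxicmd get lb\n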
"},{"location":"ha-deploy-KOR/","title":"loxilb \uace0\uac00\uc6a9\uc131(HA) \ubc30\ud3ec \ubc29\ubc95","text":"\uc774 \ubb38\uc11c\uc5d0\uc11c\ub294 loxilb\ub97c \uace0\uac00\uc6a9\uc131(HA)\uc73c\ub85c \ubc30\ud3ec\ud558\ub294 \ub2e4\uc591\ud55c \uc2dc\ub098\ub9ac\uc624\uc5d0 \ub300\ud574 \uc124\uba85\ud569\ub2c8\ub2e4. \uc774 \ud398\uc774\uc9c0\ub97c \uacc4\uc18d\ud558\uae30 \uc804\uc5d0 kube-loxilb\uc640 loxilb\uac00 \uc9c0\uc6d0\ud558\ub294 \ub2e4\uc591\ud55c NAT \ubaa8\ub4dc\uc5d0 \ub300\ud55c \uae30\ubcf8\uc801\uc778 \uc774\ud574\ub97c \uac00\uc9c0\ub294 \uac83\uc774 \uc88b\uc2b5\ub2c8\ub2e4. loxilb\ub294 \uc544\ud0a4\ud14d\ucc98 \uc120\ud0dd\uc5d0 \ub530\ub77c \uc778-\ud074\ub7ec\uc2a4\ud130 \ub610\ub294 Kubernetes \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \uc2e4\ud589\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc774 \ubb38\uc11c\uc5d0\uc11c\ub294 \uc778-\ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubc30\ud3ec\ub97c \uac00\uc815\ud558\uc9c0\ub9cc, \uc720\uc0ac\ud55c \uad6c\uc131\uc774 \uc678\ubd80 \uad6c\uc131 \uc5d0\uc11c\ub3c4 \ub3d9\uc77c\ud558\uac8c \uac00\ub2a5\ud569\ub2c8\ub2e4.
- \uc2dc\ub098\ub9ac\uc624 1 - Flat L2 \ub124\ud2b8\uc6cc\ud0b9 (\uc561\ud2f0\ube0c-\ubc31\uc5c5)
- \uc2dc\ub098\ub9ac\uc624 2 - L3 \ub124\ud2b8\uc6cc\ud06c (BGP\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5 \ubaa8\ub4dc)
- \uc2dc\ub098\ub9ac\uc624 3 - L3 \ub124\ud2b8\uc6cc\ud06c (BGP ECMP\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\uc561\ud2f0\ube0c)
- \uc2dc\ub098\ub9ac\uc624 4 - \uc5f0\uacb0 \ub3d9\uae30\ud654\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5
- \uc2dc\ub098\ub9ac\uc624 5 - \ube60\ub978 Fail-over \uac10\uc9c0\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5(BFD)
"},{"location":"ha-deploy-KOR/#1-flat-l2-","title":"\uc2dc\ub098\ub9ac\uc624 1 - Flat L2 \ub124\ud2b8\uc6cc\ud0b9 (\uc561\ud2f0\ube0c-\ubc31\uc5c5)","text":""},{"location":"ha-deploy-KOR/#_1","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 loxilb\uac00 \ubaa8\ub4e0 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0\uc11c DaemonSet\uc73c\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4. \uadf8\ub9ac\uace0 kube-loxilb\ub294 Deployment\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_2","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 \uc11c\ube44\uc2a4\uac00 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc788\uc5b4\uc57c \ud558\ub294 \uacbd\uc6b0.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uac00 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc788\uac70\ub098 \uc5c6\uc744 \uc218 \uc788\ub294 \uacbd\uc6b0.
- \uac04\ub2e8\ud55c \ubc30\ud3ec\ub97c \uc6d0\ud558\ub294 \uacbd\uc6b0.
"},{"location":"ha-deploy-KOR/#kube-loxilb","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ub85c\uceec \uc11c\ube0c\ub137\uc5d0\uc11c CIDR \uc120\ud0dd.
- SetRoles \uc635\uc158\uc744 \uc120\ud0dd\ud558\uc5ec \ud65c\uc131 loxilb pod\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\ub3c4\ub85d \ud569\ub2c8\ub2e4.
- loxilb\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 Fail-over \uc2dc \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130\ub97c \uc120\ucd9c\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_3","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=192.168.80.200/24\n - --setRoles=0.0.0.0\n - --setLBMode=1\n
- \"--externalCIDR=192.168.80.200/24\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setRoles=0.0.0.0\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 loxilb \uc778\uc2a4\ud134\uc2a4 \uc911\uc5d0\uc11c \ud65c\uc131-\ubc31\uc5c5\uc744 \uc120\ud0dd\ud558\uace0 svc IP\ub97c \ud65c\uc131 loxilb \ub178\ub4dc\uc5d0 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setLBMode=1\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
"},{"location":"ha-deploy-KOR/#_4","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--egr-hooks\" - \uc6cc\ud06c\ub85c\ub4dc\uac00 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0 \uc2a4\ucf00\uc904\ub9c1\ub420 \uc218 \uc788\ub294 \uacbd\uc6b0 \ud544\uc694\ud569\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\ub85c \uc6cc\ud06c\ub85c\ub4dc \uc2a4\ucf00\uc904\ub9c1\uc744 \uad00\ub9ac\ud558\ub294 \uacbd\uc6b0 \ud574\ub2f9 \ubcc0\uc218\ub97c \uc124\uc815\ud560 \ud544\uc694\ub294 \uc5c6\uc2b5\ub2c8\ub2e4.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\uc5d0\uc11c \uc2e4\ud589\ud558\ub294 \uacbd\uc6b0 \ud544\uc218\uc785\ub2c8\ub2e4. loxilb\ub294 \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uc9c0\ub9cc \uae30\ubcf8 \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c \uc2e4\ud589 \uc911\uc774\ubbc0\ub85c \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4(CNI \uc778\ud130\ud398\uc774\uc2a4 \ud3ec\ud568)\uac00 \ub178\ucd9c\ub418\uace0 loxilb\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uac8c \ub429\ub2c8\ub2e4. \uc774\ub294 \uc6d0\ud558\ub294 \ubc14\uac00 \uc544\ub2c8\ubbc0\ub85c \uc0ac\uc6a9\uc790\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud558\ub294 \uc815\uaddc \ud45c\ud604\uc2dd\uc744 \uc5b8\uae09\ud574\uc57c \ud569\ub2c8\ub2e4. \uc8fc\uc5b4\uc9c4 \uc608\uc81c\uc758 \uc815\uaddc \ud45c\ud604\uc2dd\uc740 flannel \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud569\ub2c8\ub2e4. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" \uc815\uaddc \ud45c\ud604\uc2dd\uc740 calico CNI\uc640 \ud568\uaed8 \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4.
\uc0d8\ud50c loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over","title":"Fail-Over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
kube-loxilb\ub294 loxilb\uc758 \uc0c1\ud0dc\ub97c \uc9c0\uc18d\uc801\uc73c\ub85c \ubaa8\ub2c8\ud130\ub9c1\ud569\ub2c8\ub2e4. \uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 loxilb\uc758 \uc0c1\ud0dc \ubcc0\uacbd\uc744 \uac10\uc9c0\ud558\uace0 \uc0ac\uc6a9 \uac00\ub2a5\ud55c loxilb pod \ud480\uc5d0\uc11c \uc0c8\ub85c\uc6b4 \u201c\ud65c\uc131\u201d\uc744 \ud560\ub2f9\ud569\ub2c8\ub2e4. \uc0c8\ub85c\uc6b4 pod\ub294 \uc774\uc804\uc5d0 \ub2e4\ub978 loxilb pod\uc5d0 \ud560\ub2f9\ub41c svcIP\ub97c \uc0c1\uc18d\ubc1b\uc544 \uc11c\ube44\uc2a4\uac00 \uc0c8\ub86d\uac8c \ud65c\uc131\ud654\ub41c loxilb pod\uc5d0 \uc758\ud574 \uc81c\uacf5\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#2-l3-bgp-","title":"\uc2dc\ub098\ub9ac\uc624 2 - L3 \ub124\ud2b8\uc6cc\ud06c (BGP\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5 \ubaa8\ub4dc)","text":""},{"location":"ha-deploy-KOR/#_5","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. SVC\ub294 \ud074\ub7ec\uc2a4\ud130/\ub85c\uceec \uc11c\ube0c\ub137\uc774 \uc544\ub2cc \uc678\ubd80 IP\ub97c \uac00\uc9d1\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 loxilb\uac00 \ubaa8\ub4e0 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0\uc11c DaemonSet\uc73c\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4. \uadf8\ub9ac\uace0 kube-loxilb\ub294 Deployment\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_6","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 \ud074\ub7ec\uc2a4\ud130\uac00 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\ub294 \uacbd\uc6b0.
- \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 svc VIP\uac00 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc5b4\uc57c \ud558\ub294 \uacbd\uc6b0(\ud074\ub7ec\uc2a4\ud130 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub3c4 \ub2e4\ub978 \ub124\ud2b8\uc6cc\ud06c\uc5d0 \uc788\uc744 \uc218 \uc788\uc74c).
- \ud074\ub77c\uc6b0\ub4dc \ubc30\ud3ec\uc5d0 \uc774\uc0c1\uc801\uc785\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#kube-loxilb_1","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0\uc11c CIDR\uc744 \uc120\ud0dd\ud569\ub2c8\ub2e4.
- SetRoles \uc635\uc158\uc744 \uc120\ud0dd\ud558\uc5ec \ud65c\uc131 loxilb pod\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\ub3c4\ub85d \ud569\ub2c8\ub2e4.
- loxilb\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 Fail-over \uc2dc \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130\ub97c \uc120\ucd9c\ud569\ub2c8\ub2e4.
- loxilb Pod \uac04\uc758 BGP \ud53c\uc5b4\ub9c1 \ud504\ub85c\ube44\uc800\ub2dd\uc744 \uc790\ub3d9\ud654\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_7","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=123.123.123.1/24\n - --setRoles=0.0.0.0\n - --setLBMode=1\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
* \"--externalCIDR=123.123.123.1/24\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ubaa8\ub450 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4. * \"--setRoles=0.0.0.0\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 loxilb \uc778\uc2a4\ud134\uc2a4 \uc911\uc5d0\uc11c \ud65c\uc131-\ubc31\uc5c5\uc744 \uc120\ud0dd\ud558\uace0 svc IP\ub97c \ud65c\uc131 loxilb \ub178\ub4dc\uc5d0 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. * \"--setLBMode=1\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. * \"--setBGP=65100\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 BGP \uc778\uc2a4\ud134\uc2a4\uc5d0 \ub85c\uceec AS \ubc88\ud638\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. * \"--extBGPPeers=50.50.50.1:65101\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 BGP \uc778\uc2a4\ud134\uc2a4\uc758 \uc678\ubd80 \uc774\uc6c3\uc744 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb_1","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \uc0c1\ud0dc(\ud65c\uc131 \ub610\ub294 \ubc31\uc5c5)\uc5d0 \ub530\ub77c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4.
- svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
"},{"location":"ha-deploy-KOR/#_8","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--bgp\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--bgp\" - \uc635\uc158\uc740 loxilb\uac00 goBGP \uc778\uc2a4\ud134\uc2a4\uc640 \ud568\uaed8 \uc2e4\ud589\ub418\uc5b4 \ud65c\uc131/\ubc31\uc5c5 \uc0c1\ud0dc\uc5d0 \ub530\ub77c \uc801\uc808\ud55c \uc6b0\uc120\uc21c\uc704\ub85c \uacbd\ub85c\ub97c \uad11\uace0\ud560 \uc218 \uc788\uac8c \ud569\ub2c8\ub2e4.
-
\"--egr-hooks\" - \uc6cc\ud06c\ub85c\ub4dc\uac00 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0 \uc2a4\ucf00\uc904\ub9c1\ub420 \uc218 \uc788\ub294 \uacbd\uc6b0 \ud544\uc694\ud569\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\ub85c \uc6cc\ud06c\ub85c\ub4dc \uc2a4\ucf00\uc904\ub9c1\uc744 \uad00\ub9ac\ud558\ub294 \uacbd\uc6b0 \uc774 \uc778\uc218\ub97c \uc5b8\uae09\ud560 \ud544\uc694\ub294 \uc5c6\uc2b5\ub2c8\ub2e4.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\uc5d0\uc11c \uc2e4\ud589\ud558\ub294 \uacbd\uc6b0 \ud544\uc218\uc785\ub2c8\ub2e4. loxilb\ub294 \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uc9c0\ub9cc \uae30\ubcf8 \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c \uc2e4\ud589 \uc911\uc774\ubbc0\ub85c \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4(CNI \uc778\ud130\ud398\uc774\uc2a4 \ud3ec\ud568)\uac00 \ub178\ucd9c\ub418\uace0 loxilb\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uac8c \ub429\ub2c8\ub2e4. \uc774\ub294 \uc6d0\ud558\ub294 \ubc14\uac00 \uc544\ub2c8\ubbc0\ub85c \uc0ac\uc6a9\uc790\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud558\ub294 \uc815\uaddc \ud45c\ud604\uc2dd\uc744 \uc5b8\uae09\ud574\uc57c \ud569\ub2c8\ub2e4. \uc8fc\uc5b4\uc9c4 \uc608\uc81c\uc758 \uc815\uaddc \ud45c\ud604\uc2dd\uc740 flannel \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud569\ub2c8\ub2e4. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" \uc815\uaddc \ud45c\ud604\uc2dd\uc740 calico CNI\uc640 \ud568\uaed8 \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4.
\uc0d8\ud50c loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over_1","title":"Fail-over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
kube-loxilb\ub294 loxilb\uc758 \uc0c1\ud0dc\ub97c \uc9c0\uc18d\uc801\uc73c\ub85c \ubaa8\ub2c8\ud130\ub9c1\ud569\ub2c8\ub2e4. \uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 loxilb\uc758 \uc0c1\ud0dc \ubcc0\uacbd\uc744 \uac10\uc9c0\ud558\uace0 \uc0ac\uc6a9 \uac00\ub2a5\ud55c loxilb pod \ud480\uc5d0\uc11c \uc0c8\ub85c\uc6b4 \u201c\ud65c\uc131\u201d\uc744 \ud560\ub2f9\ud569\ub2c8\ub2e4. \uc0c8\ub85c\uc6b4 pod\ub294 \uc774\uc804\uc5d0 \ub2e4\ub978 loxilb pod\uc5d0 \ud560\ub2f9\ub41c svcIP\ub97c \uc0c1\uc18d\ubc1b\uc544 \uc0c8 \uc0c1\ud0dc\uc5d0 \ub530\ub77c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4. \ud074\ub77c\uc774\uc5b8\ud2b8\ub294 SVCIP\uc5d0 \ub300\ud55c \uc0c8\ub85c\uc6b4 \uacbd\ub85c\ub97c \uc218\uc2e0\ud558\uace0 \uc11c\ube44\uc2a4\ub294 \uc0c8\ub85c \ud65c\uc131\ud654\ub41c loxilb pod\uc5d0 \uc758\ud574 \uc81c\uacf5\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#3-l3-bgp-ecmp-","title":"\uc2dc\ub098\ub9ac\uc624 3 - L3 \ub124\ud2b8\uc6cc\ud06c (BGP ECMP\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\uc561\ud2f0\ube0c)","text":""},{"location":"ha-deploy-KOR/#_9","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. SVC\ub294 \ud074\ub7ec\uc2a4\ud130/\ub85c\uceec \uc11c\ube0c\ub137\uc774 \uc544\ub2cc \uc678\ubd80 IP\ub97c \uac00\uc9d1\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 loxilb\uac00 \ubaa8\ub4e0 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0\uc11c DaemonSet\uc73c\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4. \uadf8\ub9ac\uace0 kube-loxilb\ub294 Deployment\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_10","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 \ud074\ub7ec\uc2a4\ud130\uac00 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\ub294 \uacbd\uc6b0.
- \ud074\ub77c\uc774\uc5b8\ud2b8\uc640 svc VIP\uac00 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc5b4\uc57c \ud558\ub294 \uacbd\uc6b0(\ud074\ub7ec\uc2a4\ud130 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub3c4 \ub2e4\ub978 \ub124\ud2b8\uc6cc\ud06c\uc5d0 \uc788\uc744 \uc218 \uc788\uc74c).
- \ud074\ub77c\uc6b0\ub4dc \ubc30\ud3ec\uc5d0 \uc774\uc0c1\uc801\uc785\ub2c8\ub2e4.
- \uc561\ud2f0\ube0c-\uc561\ud2f0\ube0c \ud074\ub7ec\uc2a4\ud130\ub9c1\uc73c\ub85c \uc778\ud574 \ub354 \ub098\uc740 \uc131\ub2a5\uc774 \ud544\uc694\ud558\uc9c0\ub9cc \ub124\ud2b8\uc6cc\ud06c \uc7a5\uce58/\ud638\uc2a4\ud2b8\uac00 ECMP\ub97c \uc9c0\uc6d0\ud574\uc57c \ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#kube-loxilb_2","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0\uc11c CIDR\uc744 \uc120\ud0dd\ud569\ub2c8\ub2e4.
- \uc774 \uacbd\uc6b0 SetRoles \uc635\uc158\uc744 \uc120\ud0dd\ud558\uc9c0 \ub9c8\uc138\uc694(svcIP\uac00 \ub3d9\uc77c\ud55c attributes/prio/med \ub85c \uad11\uace0\ub429\ub2c8\ub2e4).
- loxilb Pod \uac04\uc758 BGP \ud53c\uc5b4\ub9c1 \ud504\ub85c\ube44\uc800\ub2dd\uc744 \uc790\ub3d9\ud654\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_11","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=123.123.123.1/24\n - --setLBMode=1\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--externalCIDR=123.123.123.1/24\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ubaa8\ub450 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setLBMode=1\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c One-arm \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setBGP=65100\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 BGP \uc778\uc2a4\ud134\uc2a4\uc5d0 \ub85c\uceec AS \ubc88\ud638\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--extBGPPeers=50.50.50.1:65101\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 BGP \uc778\uc2a4\ud134\uc2a4\uc758 \uc678\ubd80 \uc774\uc6c3\uc744 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb_2","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ub3d9\uc77c\ud55c \uc18d\uc131\uc73c\ub85c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4.
- svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
"},{"location":"ha-deploy-KOR/#_12","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--bgp\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--bgp\" - \uc635\uc158\uc740 loxilb\uac00 \ub3d9\uc77c\ud55c \uc18d\uc131\uc73c\ub85c \uacbd\ub85c\ub97c \uad11\uace0\ud560 \uc218 \uc788\uac8c \ud569\ub2c8\ub2e4.
-
\"--egr-hooks\" - \uc6cc\ud06c\ub85c\ub4dc\uac00 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc5d0 \uc2a4\ucf00\uc904\ub9c1\ub420 \uc218 \uc788\ub294 \uacbd\uc6b0 \ud544\uc694\ud569\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\ub85c \uc6cc\ud06c\ub85c\ub4dc \uc2a4\ucf00\uc904\ub9c1\uc744 \uad00\ub9ac\ud558\ub294 \uacbd\uc6b0 \uc774 \uc778\uc218\ub97c \uc5b8\uae09\ud560 \ud544\uc694\ub294 \uc5c6\uc2b5\ub2c8\ub2e4.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\uc5d0\uc11c \uc2e4\ud589\ud558\ub294 \uacbd\uc6b0 \ud544\uc218\uc785\ub2c8\ub2e4. loxilb\ub294 \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uc9c0\ub9cc \uae30\ubcf8 \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c \uc2e4\ud589 \uc911\uc774\ubbc0\ub85c \ubaa8\ub4e0 \uc778\ud130\ud398\uc774\uc2a4(CNI \uc778\ud130\ud398\uc774\uc2a4 \ud3ec\ud568)\uac00 \ub178\ucd9c\ub418\uace0 loxilb\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\uc5d0 ebpf \ud504\ub85c\uadf8\ub7a8\uc744 \ubd80\ucc29\ud558\uac8c \ub429\ub2c8\ub2e4. \uc774\ub294 \uc6d0\ud558\ub294 \ubc14\uac00 \uc544\ub2c8\ubbc0\ub85c \uc0ac\uc6a9\uc790\ub294 \uc774\ub7ec\ud55c \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud558\ub294 \uc815\uaddc \ud45c\ud604\uc2dd\uc744 \uc5b8\uae09\ud574\uc57c \ud569\ub2c8\ub2e4. \uc8fc\uc5b4\uc9c4 \uc608\uc81c\uc758 \uc815\uaddc \ud45c\ud604\uc2dd\uc740 flannel \uc778\ud130\ud398\uc774\uc2a4\ub97c \uc81c\uc678\ud569\ub2c8\ub2e4. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" \uc815\uaddc \ud45c\ud604\uc2dd\uc740 calico CNI\uc640 \ud568\uaed8 \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4.
\uc0d8\ud50c loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over_2","title":"Fail-over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
\uc7a5\uc560\uac00 \ubc1c\uc0dd\ud55c \uacbd\uc6b0, \ud074\ub77c\uc774\uc5b8\ud2b8\uc5d0\uc11c \uc2e4\ud589 \uc911\uc778 BGP\ub294 ECMP \uacbd\ub85c\ub97c \uc5c5\ub370\uc774\ud2b8\ud558\uace0 \ud2b8\ub798\ud53d\uc744 \ud65c\uc131 ECMP \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \ubcf4\ub0b4\uae30 \uc2dc\uc791\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#4-","title":"\uc2dc\ub098\ub9ac\uc624 4 - \uc5f0\uacb0 \ub3d9\uae30\ud654\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5","text":""},{"location":"ha-deploy-KOR/#_13","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
\uc774 \uae30\ub2a5\uc740 loxilb\uac00 \uae30\ubcf8 \ubaa8\ub4dc \ub610\ub294 Full NAT \ubaa8\ub4dc\uc5d0\uc11c Kubernetes \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \uc2e4\ud589\ub420 \ub54c\ub9cc \uc9c0\uc6d0\ub429\ub2c8\ub2e4. Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. SVC\ub294 \uc678\ubd80 IP\ub97c \uac00\uc9d1\ub2c8\ub2e4.
\uc678\ubd80 \ud074\ub77c\uc774\uc5b8\ud2b8, loxilb \ubc0f Kubernetes \ud074\ub7ec\uc2a4\ud130\uc758 \uc5f0\uacb0\uc5d0 \ub530\ub77c \uba87 \uac00\uc9c0 \uac00\ub2a5\ud55c \uc2dc\ub098\ub9ac\uc624\uac00 \uc788\uc2b5\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 L3 \uc5f0\uacb0\uc744 \uace0\ub824\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_14","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - loxilb pod \uc7a5\uc560 \uc2dc \uc7a5\uae30 \uc2e4\ud589 \uc5f0\uacb0\uc744 \uc720\uc9c0\ud574\uc57c \ud558\ub294 \uacbd\uc6b0
- DSR \ubaa8\ub4dc\ub85c \uc54c\ub824\uc9c4 \ub2e4\ub978 LB \ubaa8\ub4dc\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc5f0\uacb0\uc744 \uc720\uc9c0\ud560 \uc218 \uc788\uc9c0\ub9cc \ub2e4\uc74c\uacfc \uac19\uc740 \uc81c\ud55c \uc0ac\ud56d\uc774 \uc788\uc2b5\ub2c8\ub2e4:
- \uc0c1\ud0dc \uae30\ubc18 \ud544\ud130\ub9c1 \ubc0f \uc5f0\uacb0 \ucd94\uc801\uc744 \ubcf4\uc7a5\ud560 \uc218 \uc5c6\uc2b5\ub2c8\ub2e4.
- \uba40\ud2f0\ud638\ubc0d \uae30\ub2a5\uc744 \uc9c0\uc6d0\ud560 \uc218 \uc5c6\uc2b5\ub2c8\ub2e4. \ub2e4\ub978 5-\ud29c\ud50c\uc774 \ub3d9\uc77c\ud55c \uc5f0\uacb0\uc5d0 \uc18d\ud560 \uc218 \uc788\uae30 \ub54c\ubb38\uc785\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#kube-loxilb_3","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ud544\uc694\ud55c \ub300\ub85c CIDR\uc744 \uc120\ud0dd\ud569\ub2c8\ub2e4.
- SetRoles \uc635\uc158\uc744 \uc120\ud0dd\ud558\uc5ec \ud65c\uc131 loxilb\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\ub3c4\ub85d \ud569\ub2c8\ub2e4(svcIP\uac00 \ub2e4\ub978 attributes/prio/med \ub85c \uad11\uace0\ub429\ub2c8\ub2e4).
- loxilb \ucee8\ud14c\uc774\ub108 \uac04\uc758 BGP \ud53c\uc5b4\ub9c1 \ud504\ub85c\ube44\uc800\ub2dd\uc744 \uc790\ub3d9\ud654\ud569\ub2c8\ub2e4(\ud544\uc694\ud55c \uacbd\uc6b0).
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c Full NAT \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_15","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --setRoles=0.0.0.0\n - --loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\n - --externalCIDR=123.123.123.1/24\n - --setLBMode=2\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--setRoles=0.0.0.0\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 loxilb \uc778\uc2a4\ud134\uc2a4 \uc911\uc5d0\uc11c \ud65c\uc131-\ubc31\uc5c5\uc744 \uc120\ud0dd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\" - \uc5f0\uacb0\ud560 loxilb URL\uc785\ub2c8\ub2e4.
- \"--externalCIDR=123.123.123.1/24\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ubaa8\ub450 \ub2e4\ub978 \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setLBMode=2\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c Full NAT \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setBGP=65100\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 BGP \uc778\uc2a4\ud134\uc2a4\uc5d0 \ub85c\uceec AS \ubc88\ud638\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- \"--extBGPPeers=50.50.50.1:65101\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 BGP \uc778\uc2a4\ud134\uc2a4\uc758 \uc678\ubd80 \uc774\uc6c3\uc744 \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb_3","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \uc0c1\ud0dc(\ud65c\uc131/\ubc31\uc5c5)\uc5d0 \ub530\ub77c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4.
- svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
- \uc7a5\uae30 \uc2e4\ud589 \uc5f0\uacb0\uc744 \ub2e4\ub978 \uad6c\uc131\ub41c loxilb \ud53c\uc5b4\uc640 \ub3d9\uae30\ud654\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_16","title":"\uc2e4\ud589 \uc635\uc158","text":"#llb1\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb2IP --self=0 -b\n\n#llb2\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb1IP --self=1 -b\n
- \"--cluster=\\<llb-peer-IP>\" - \uc635\uc158\uc740 \ub3d9\uae30\ud654\ub97c \uc704\ud55c \ud53c\uc5b4 loxilb IP\ub97c \uad6c\uc131\ud569\ub2c8\ub2e4.
- \"--self=0/1\" - \uc635\uc158\uc740 \uc778\uc2a4\ud134\uc2a4\ub97c \uc2dd\ubcc4\ud569\ub2c8\ub2e4.
- \"-b\" - \uc635\uc158\uc740 loxilb\uac00 \ud65c\uc131/\ubc31\uc5c5 \uc0c1\ud0dc\uc5d0 \ub530\ub77c \uc801\uc808\ud55c \uc6b0\uc120\uc21c\uc704\ub85c \uacbd\ub85c\ub97c \uad11\uace0\ud560 \uc218 \uc788\ub3c4\ub85d goBGP \uc778\uc2a4\ud134\uc2a4\uc640 \ud568\uaed8 \uc2e4\ud589\ub418\ub3c4\ub85d \ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over_3","title":"Fail-over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
\uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 kube-loxilb\ub294 \uc7a5\uc560\ub97c \uac10\uc9c0\ud569\ub2c8\ub2e4. \ud65c\uc131 loxilb \ud480\uc5d0\uc11c \uc0c8\ub85c\uc6b4 loxilb\ub97c \uc120\ud0dd\ud558\uace0 \uc774\ub97c \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130\ub85c \uc5c5\ub370\uc774\ud2b8\ud569\ub2c8\ub2e4. \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub294 \ub192\uc740 \uc6b0\uc120\uc21c\uc704\ub85c svcIP\ub97c \uad11\uace0\ud558\uc5ec \ud074\ub77c\uc774\uc5b8\ud2b8\uc5d0\uc11c \uc2e4\ud589 \uc911\uc778 BGP\uac00 \ud2b8\ub798\ud53d\uc744 \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub85c \ubcf4\ub0b4\ub3c4\ub85d \uac15\uc81c\ud569\ub2c8\ub2e4. \uc5f0\uacb0\uc774 \ubaa8\ub450 \ub3d9\uae30\ud654\ub418\uc5b4 \uc788\uc73c\ubbc0\ub85c \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub294 \uc9c0\uc815\ub41c \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \ud2b8\ub798\ud53d\uc744 \ubcf4\ub0b4\uae30 \uc2dc\uc791\ud569\ub2c8\ub2e4.
\uc774 \uae30\ub2a5\uc5d0 \ub300\ud574 \uc790\uc138\ud788 \uc54c\uc544\ubcf4\ub824\uba74 \"Hitless HA\" \ube14\ub85c\uadf8\ub97c \uc77d\uc5b4\ubcf4\uc138\uc694.
"},{"location":"ha-deploy-KOR/#5-fail-over-","title":"\uc2dc\ub098\ub9ac\uc624 5 - \ube60\ub978 Fail-over \uac10\uc9c0\ub97c \uc0ac\uc6a9\ud558\ub294 \uc561\ud2f0\ube0c-\ubc31\uc5c5","text":""},{"location":"ha-deploy-KOR/#_17","title":"\uc124\uc815","text":"\uc774 \ubc30\ud3ec \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 Kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc124\uc815\ub429\ub2c8\ub2e4:
\uc774 \uae30\ub2a5\uc740 loxilb\uac00 Kubernetes \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \uc2e4\ud589\ub420 \ub54c\ub9cc \uc9c0\uc6d0\ub429\ub2c8\ub2e4. Kubernetes\ub294 2\uac1c\uc758 \ub9c8\uc2a4\ud130 \ub178\ub4dc\uc640 2\uac1c\uc758 \uc6cc\ucee4 \ub178\ub4dc\ub85c \uad6c\uc131\ub41c \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70, \ubaa8\ub4e0 \ub178\ub4dc\ub294 \ub3d9\uc77c\ud55c 192.168.80.0/24 \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. SVC\ub294 \uc678\ubd80 IP\ub97c \uac00\uc9d1\ub2c8\ub2e4.
\uc678\ubd80 \ud074\ub77c\uc774\uc5b8\ud2b8, loxilb \ubc0f Kubernetes \ud074\ub7ec\uc2a4\ud130\uc758 \uc5f0\uacb0\uc5d0 \ub530\ub77c \uba87 \uac00\uc9c0 \uac00\ub2a5\ud55c \uc2dc\ub098\ub9ac\uc624\uac00 \uc788\uc2b5\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \uc5f0\uacb0 \ub3d9\uae30\ud654\uc640 \ud568\uaed8 L2 \uc5f0\uacb0\uc744 \uace0\ub824\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_18","title":"\uc801\ud569\ud55c \uc0ac\uc6a9 \uc0ac\ub840","text":" - \ube60\ub978 Fail-over \uac10\uc9c0 \ubc0f \uc11c\ube44\uc2a4 \uc5f0\uc18d\uc131\uc774 \ud544\uc694\ud55c \uacbd\uc6b0.
- \uc774 \uae30\ub2a5\uc740 L2 \ub610\ub294 L3 \ub124\ud2b8\uc6cc\ud06c \uc124\uc815\uc5d0\uc11c \ubaa8\ub450 \uc791\ub3d9\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#kube-loxilb_4","title":"kube-loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \ud544\uc694\ud55c \ub300\ub85c CIDR\uc744 \uc120\ud0dd\ud569\ub2c8\ub2e4.
- SetRoles \uc635\uc158\uc744 \ube44\ud65c\uc131\ud654\ud558\uc5ec \ud65c\uc131 loxilb\ub97c \uc120\ud0dd\ud558\uc9c0 \uc54a\ub3c4\ub85d \ud569\ub2c8\ub2e4.
- loxilb \ucee8\ud14c\uc774\ub108 \uac04\uc758 BGP \ud53c\uc5b4\ub9c1 \ud504\ub85c\ube44\uc800\ub2dd\uc744 \uc790\ub3d9\ud654\ud569\ub2c8\ub2e4(\ud544\uc694\ud55c \uacbd\uc6b0).
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c \uad6c\uc131\ub41c \uc11c\ube44\uc2a4 \ubaa8\ub4dc\uc5d0\uc11c loxilb\ub97c \uc124\uc815\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_19","title":"\uad6c\uc131 \uc635\uc158","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n # Disable setRoles option\n #- --setRoles=0.0.0.0\n - --loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\n - --externalCIDR=192.168.80.5/32\n - --setLBMode=2\n
- \"--setRoles=0.0.0.0\" - SetRoles \uc635\uc158\uc744 \ube44\ud65c\uc131\ud654\ud574\uc57c \ud569\ub2c8\ub2e4. \uc774 \uc635\uc158\uc744 \ud65c\uc131\ud654\ud558\uba74 kube-loxilb\uac00 loxilb \uc778\uc2a4\ud134\uc2a4 \uc911\uc5d0\uc11c \ud65c\uc131-\ubc31\uc5c5\uc744 \uc120\ud0dd\ud558\uac8c \ub429\ub2c8\ub2e4.
- \"--loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\" - \uc5f0\uacb0\ud560 loxilb URL\uc785\ub2c8\ub2e4.
- \"--externalCIDR=192.168.80.5/32\" - svc\uc758 \uc678\ubd80 \uc11c\ube44\uc2a4 IP\ub294 externalCIDR \ubc94\uc704\uc5d0\uc11c \uc120\ud0dd\ub429\ub2c8\ub2e4. \uc774 \uc2dc\ub098\ub9ac\uc624\uc5d0\uc11c\ub294 \ud074\ub77c\uc774\uc5b8\ud2b8, svc \ubc0f \ud074\ub7ec\uc2a4\ud130\uac00 \ubaa8\ub450 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc788\uc2b5\ub2c8\ub2e4.
- \"--setLBMode=2\" - \uc774 \uc635\uc158\uc744 \uc0ac\uc6a9\ud558\uba74 kube-loxilb\uac00 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ud5a5\ud55c Full NAT \ubaa8\ub4dc\uc5d0\uc11c svc\ub97c \uad6c\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uc0d8\ud50c kube-loxilb.yaml\uc740 \uc5ec\uae30\uc5d0\uc11c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#loxilb_4","title":"loxilb\uc758 \uc5ed\ud560\uacfc \ucc45\uc784:","text":" - \uc0c1\ud0dc(\ud65c\uc131/\ubc31\uc5c5)\uc5d0 \ub530\ub77c SVC IP\ub97c \uad11\uace0\ud569\ub2c8\ub2e4.
- svc\ub85c \ud5a5\ud558\ub294 \uc678\ubd80 \ud2b8\ub798\ud53d\uc744 \ucd94\uc801\ud558\uace0 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc804\ub2ec\ud569\ub2c8\ub2e4.
- \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4(\uad6c\uc131\ub41c \uacbd\uc6b0).
- \uc7a5\uae30 \uc2e4\ud589 \uc5f0\uacb0\uc744 \ub2e4\ub978 \uad6c\uc131\ub41c loxilb \ud53c\uc5b4\uc640 \ub3d9\uae30\ud654\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#_20","title":"\uc2e4\ud589 \uc635\uc158","text":"#llb1\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.80.2 --self=0 --ka=192.168.80.2:192.168.80.1\n\n#llb2\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.80.1 --self=1 --ka=192.168.80.1:192.168.80.2\n
- \"--ka=\\<llb-peer-IP>:\\<llb-self-IP>\" - \uc635\uc158\uc740 BFD\ub97c \uc704\ud55c \ud53c\uc5b4 loxilb IP\uc640 \uc18c\uc2a4 IP\ub97c \uad6c\uc131\ud569\ub2c8\ub2e4.
- \"--cluster=\\<llb-peer-IP>\" - \uc635\uc158\uc740 \ub3d9\uae30\ud654\ub97c \uc704\ud55c \ud53c\uc5b4 loxilb IP\ub97c \uad6c\uc131\ud569\ub2c8\ub2e4.
- \"--self=0/1\" - \uc635\uc158\uc740 \uc778\uc2a4\ud134\uc2a4\ub97c \uc2dd\ubcc4\ud569\ub2c8\ub2e4.
"},{"location":"ha-deploy-KOR/#fail-over_4","title":"Fail-over","text":"\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc740 Fail-over \uc2dc\ub098\ub9ac\uc624\ub97c \uc124\uba85\ud569\ub2c8\ub2e4:
\uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 BFD\uac00 \uc7a5\uc560\ub97c \uac10\uc9c0\ud569\ub2c8\ub2e4. \ubc31\uc5c5 loxilb\ub294 \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130\ub85c \uc0c1\ud0dc\ub97c \uc5c5\ub370\uc774\ud2b8\ud569\ub2c8\ub2e4. \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub294 gARP \ub610\ub294 BGP\ub97c \uc2e4\ud589 \uc911\uc778 \uacbd\uc6b0 \ub354 \ub192\uc740 \uc6b0\uc120\uc21c\uc704\ub85c svcIP\ub97c \uad11\uace0\ud558\uc5ec \ud074\ub77c\uc774\uc5b8\ud2b8\uac00 \ud2b8\ub798\ud53d\uc744 \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub85c \ubcf4\ub0b4\ub3c4\ub85d \uac15\uc81c\ud569\ub2c8\ub2e4. \uc5f0\uacb0\uc774 \ubaa8\ub450 \ub3d9\uae30\ud654\ub418\uc5b4 \uc788\uc73c\ubbc0\ub85c \uc0c8\ub85c\uc6b4 \ub9c8\uc2a4\ud130 loxilb\ub294 \uc9c0\uc815\ub41c \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \ud2b8\ub798\ud53d\uc744 \ubcf4\ub0b4\uae30 \uc2dc\uc791\ud569\ub2c8\ub2e4.
\uc774 \uae30\ub2a5\uc5d0 \ub300\ud574 \uc790\uc138\ud788 \uc54c\uc544\ubcf4\ub824\uba74 \"Fast Failover Detection with BFD\" \ube14\ub85c\uadf8\ub97c \uc77d\uc5b4\ubcf4\uc138\uc694.
"},{"location":"ha-deploy-KOR/#_21","title":"\ucc38\uace0:","text":"loxilb\ub97c DSR \ubaa8\ub4dc, DNS \ub4f1\uc5d0\uc11c \uc0ac\uc6a9\ud558\ub294 \ubc29\ubc95\uc740 \uc774 \ubb38\uc11c\uc5d0\uc11c \uc790\uc138\ud788 \ub2e4\ub8e8\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4. \uc2dc\ub098\ub9ac\uc624\ub97c \uacc4\uc18d \uc5c5\ub370\uc774\ud2b8\ud560 \uc608\uc815\uc785\ub2c8\ub2e4.
"},{"location":"ha-deploy/","title":"How to deploy loxilb with High Availability","text":"This article describes different scenarios about how to deploy loxilb with High Availability. Before continuing to this page, all readers are advised to have a basic understanding about kube-loxilb and the different NAT modes supported by loxilb. loxilb can run in-cluster or external to kubernetes cluster depending on architectural choices. For this documentation, we have assumed an incluster deployment wherever applicable but similar configuration should suffice for an external deployment as well.
- Scenario 1 - Flat L2 Networking (active-backup)
- Scenario 2 - L3 network (active-backup mode using BGP)
- Scenario 3 - L3 network (active-active with BGP ECMP)
- Scenario 4 - ACTIVE-BACKUP with Connection Sync
- Scenario 5 - ACTIVE-BACKUP with Fast Failover Detection(BFD)
"},{"location":"ha-deploy/#scenario-1-flat-l2-networking-active-backup","title":"Scenario 1 - Flat L2 Networking (active-backup)","text":""},{"location":"ha-deploy/#setup","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes; all the nodes use the same 192.168.80.0/24 subnet. In this scenario, loxilb will be deployed as a DaemonSet in all the master nodes, and kube-loxilb will be deployed as a Deployment.
"},{"location":"ha-deploy/#ideal-for-use-when","title":"Ideal for use when","text":" - Clients and services need to be in same-subnet.
- End-points may or may not be in same subnet.
- Simpler deployment is desired.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR from local subnet.
- Choose SetRoles option so it can choose active loxilb pod.
- Monitors loxilb's health and elect new master on failover.
- Sets up loxilb in one-arm svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=192.168.80.200/24\n - --setRoles=0.0.0.0\n - --setLBMode=1\n
- \"--externalCIDR=192.168.80.200/24\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are in the same subnet.
- \"--setRoles=0.0.0.0\" - This option will enable kube-loxilb to choose active-backup amongst the loxilb instance and the svc IP to be configured on the active loxilb node.
- \"--setLBMode=1\" - This option will enable kube-loxilb to configure svc in one-arm mode towards the endpoints.
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb","title":"Roles and Responsiblities for loxilb:","text":" - Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
"},{"location":"ha-deploy/#configuration-options_1","title":"Configuration options","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--egr-hooks\" - required for those cases in which workloads can be scheduled in the master nodes. No need to mention this argument when you are managing the workload scheduling to worker nodes.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - mandatory for running in in-cluster mode. As loxilb attaches it's ebpf programs on all the interfaces but since we running it in the default namespace then all the interfaces including CNI interfaces will be exposed and loxilb will attach it's ebpf program in those interfaces which is definitely not desired. So, user needs to mention a regex for excluding all those interfaces. The regex in the given example will exclude the flannel interfaces. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" regex must be used with calico CNI.
Sample loxilb.yaml can be found here.
"},{"location":"ha-deploy/#failover","title":"Failover","text":"This diagram describes the failover scenario:
kube-loxilb actively monitors loxilb's health. In case of failure, it detects the change in loxilb's state and assigns a new \u201cactive\u201d from the pool of available healthy loxilb pods. The new pod inherits the svcIP previously assigned to the other loxilb pod, and the services are served by the newly active loxilb pod.
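During a failover drill, the behaviour described above can be observed with plain kubectl by watching the loxilb pods and the service in parallel (a sketch; the service name is illustrative):
kubectl get pods -n kube-system -o wide -w\nkubectl get svc nginx-lb1 -w\n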
"},{"location":"ha-deploy/#scenario-2-l3-network-active-backup-mode-using-bgp","title":"Scenario 2 - L3 network (active-backup mode using BGP)","text":""},{"location":"ha-deploy/#setup_1","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes; all the nodes use the same 192.168.80.0/24 subnet. SVCs will have an external IP, not from the cluster/local subnet. In this scenario, loxilb will be deployed as a DaemonSet in all the master nodes, and kube-loxilb will be deployed as a Deployment.
"},{"location":"ha-deploy/#ideal-for-use-when_1","title":"Ideal for use when","text":" - Clients and Cluster are in different subnets.
- Clients and svc VIP need to be in different subnet (cluster end-points may also be in different networks).
- Ideal for cloud deployments.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb_1","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR from a different subnet.
- Choose SetRoles option so it can choose active loxilb pod.
- Monitors loxilb's health and elect new master on failover.
- Automates provisioning of bgp-peering between loxilb pods.
- Sets up loxilb in one-arm svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options_2","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=123.123.123.1/24\n - --setRoles=0.0.0.0\n - --setLBMode=1\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--externalCIDR=123.123.123.1/24\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are all in the different subnet.
- \"--setRoles=0.0.0.0\" - This option will enable kube-loxilb to choose active-backup amongst the loxilb instances and the svc IP to be configured on the active loxilb node.
- \"--setLBMode=1\" - This option will enable kube-loxilb to configure svc in one-arm mode towards the endpoints.
- \"--setBGP=65100\" - This option will let kube-loxilb to configure local AS number in the bgp instance.
- \"--extBGPPeers=50.50.50.1:65101\" - This option will configure the bgp instance's external neighbors.
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb_1","title":"Roles and Responsiblities for loxilb:","text":" - Advertises SVC IP as per the state(active or backup).
- Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
"},{"location":"ha-deploy/#configuration-options_3","title":"Configuration options","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--bgp\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
-
\"--bgp\" - option enables loxilb to run with goBGP instance which will advertise the routes with appropriate preference as per active/backup state.
-
\"--egr-hooks\" - required for those cases in which workloads can be scheduled in the master nodes. No need to mention this argument when you are managing the workload scheduling to worker nodes.
-
\"--blacklist=cni[0-9a-z]|veth.|flannel.\" - mandatory for running in in-cluster mode. As loxilb attaches it's ebpf programs on all the interfaces but since we running it in the default namespace then all the interfaces including CNI interfaces will be exposed and loxilb will attach it's ebpf program in those interfaces which is definitely not desired. So, user needs to mention a regex for excluding all those interfaces. The regex in the given example will exclude the flannel interfaces. \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" regex must be used with calico CNI.
Sample loxilb.yaml can be found here.
"},{"location":"ha-deploy/#failover_1","title":"Failover","text":"This diagram describes the failover scenario:
kube-loxilb actively monitors loxilb's health. In case of failure, it detects the change in loxilb's state and assigns a new \u201cactive\u201d from the pool of available healthy loxilb pods. The new pod inherits the svcIP previously assigned to the failed loxilb pod and advertises the SVC IP with the preference appropriate to its new state. The client receives the new route to the svcIP and the services are served by the newly active loxilb pod.
"},{"location":"ha-deploy/#scenario-3-l3-network-active-active-with-bgp-ecmp","title":"Scenario 3 - L3 network (active-active with BGP ECMP)","text":""},{"location":"ha-deploy/#setup_2","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes; all the nodes use the same 192.168.80.0/24 subnet. SVCs will have an external IP, not from the cluster/local subnet. In this scenario, loxilb will be deployed as a DaemonSet in all the master nodes, and kube-loxilb will be deployed as a Deployment.
"},{"location":"ha-deploy/#ideal-for-use-when_2","title":"Ideal for use when","text":" - Clients and Cluster are in different subnets.
- Clients and svc VIP need to be in different subnet (cluster end-points may also be in different networks).
- Ideal for cloud deployments.
- Better performance is desired due to active-active clustering but network devices/hosts must be capable of supporting ecmp.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb_2","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR from a different subnet.
- Do not choose the SetRoles option in this case (svcIPs will be advertised with the same attributes/prio/med).
- Automates provisioning of bgp-peering between loxilb pods.
- Sets up loxilb in one-arm svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options_4","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --externalCIDR=123.123.123.1/24\n - --setLBMode=1\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--externalCIDR=123.123.123.1/24\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are all in the different subnet.
- \"--setLBMode=1\" - This option will enable kube-loxilb to configure svc in one-arm mode towards the endpoints.
- \"--setBGP=65100\" - This option will let kube-loxilb to configure local AS number in the bgp instance.
- \"--extBGPPeers=50.50.50.1:65101\" - This option will configure the bgp instance's external neighbors
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb_2","title":"Roles and Responsiblities for loxilb:","text":" - Advertises SVC IP with same attributes.
- Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
"},{"location":"ha-deploy/#configuration-options_5","title":"Configuration options","text":" containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command: [ \"/root/loxilb-io/loxilb/loxilb\", \"--bgp\", \"--egr-hooks\", \"--blacklist=cni[0-9a-z]|veth.|flannel.\" ]\n ports:\n - containerPort: 11111\n - containerPort: 179\n - containerPort: 50051\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n
- \"--bgp\" - option enables loxilb to run with goBGP instance which will advertise the routes with same attributes.
- \"--egr-hooks\" - required for cases in which workloads can be scheduled on the master nodes. There is no need to mention this argument when workload scheduling is restricted to the worker nodes.
- \"--blacklist=cni[0-9a-z]|veth.|flannel.\" - mandatory for running in in-cluster mode. loxilb attaches its eBPF programs to all interfaces, and since it runs in the default namespace, all interfaces including the CNI interfaces would be exposed and loxilb would attach its eBPF programs to them, which is not desired. So, the user needs to provide a regex for excluding all such interfaces. The regex in the given example excludes the flannel interfaces. The regex \"--blacklist=cali.|tunl.|vxlan[.]calico|veth.|cni[0-9a-z]\" must be used with the calico CNI.
Sample loxilb.yaml can be found here.
"},{"location":"ha-deploy/#failover_2","title":"Failover","text":"This diagram describes the failover scenario:
In case of failure, BGP running on the client will update the ECMP route and start sending the traffic to the active ECMP endpoints.
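If the client's BGP daemon (e.g. FRR or bird) installs the learned routes into the kernel, the active-active behaviour and the failover can be observed by listing the kernel route for the service VIP; a sketch assuming a svc IP of 123.123.123.15 allocated from the externalCIDR above:
# before failover: the route should show one ECMP nexthop per active loxilb\nip route show 123.123.123.15/32\n# after failover: the route converges to the remaining healthy loxilb nexthop(s)\nip route show 123.123.123.15/32\n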
"},{"location":"ha-deploy/#scenario-4-active-backup-with-connection-sync","title":"Scenario 4 - ACTIVE-BACKUP with Connection Sync","text":""},{"location":"ha-deploy/#setup_3","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
This feature is only supported when loxilb runs externally outside the Kubernetes cluster in either default or fullnat mode. Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes, all the nodes use the same 192.168.80.0/24 subnet. SVCs will have an external IP.
There are a few possible scenarios depending upon the connectivity between the external client, loxilb and the Kubernetes cluster. For this scenario, we are considering L3 connectivity.
"},{"location":"ha-deploy/#ideal-for-use-when_3","title":"Ideal for use when","text":" - Need to preserve long running connections during lb pod failures
- Another LB mode known as DSR mode can be used to preserve connections but has the following limitations :
- Can't ensure stateful filtering and connection-tracking.
- Can't support multihoming features since different 5-tuples might belong to the same connection.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb_3","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR as required.
- Choose the SetRoles option so it can choose the active loxilb (svcIPs will be advertised with different attributes/prio/med).
- Automates provisioning of bgp-peering between loxilb containers (if required).
- Sets up loxilb in fullnat svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options_6","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n - --setRoles=0.0.0.0\n - --loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\n - --externalCIDR=123.123.123.1/24\n - --setLBMode=2\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n
- \"--setRoles=0.0.0.0\" - This option will enable kube-loxilb to choose active-backup amongst the loxilb instance.
- \"--loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\" - loxilb URLs to connect with.
- \"--externalCIDR=123.123.123.1/24\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are all in the different subnet.
- \"--setLBMode=2\" - This option will enable kube-loxilb to configure svc in fullnat mode towards the endpoints.
- \"--setBGP=65100\" - This option will let kube-loxilb to configure local AS number in the bgp instance.
- \"--extBGPPeers=50.50.50.1:65101\" - This option will configure the bgp instance's external neighbors
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb_3","title":"Roles and Responsiblities for loxilb:","text":" - Advertises SVC IP as per the state(active/backup).
- Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
- Syncs the long-lived connections to all other configured loxilb peers.
"},{"location":"ha-deploy/#running-options","title":"Running options","text":"#llb1\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb2IP --self=0 -b\n\n#llb2\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb1IP --self=1 -b\n
- \"--cluster=\\<llb-peer-IP>\" - option configures the peer loxilb IP for syncing.
- \"--self=0/1\" - option to identify the instance.
- \"-b\" - option enables loxilb to run with goBGP instance which will advertise the routes with appropriate preference as per active/backup state.
"},{"location":"ha-deploy/#failover_3","title":"Failover","text":"This diagram describes the failover scenario:
In case of failure, kube-loxilb will detect the failure. It will select a new loxilb from the pool of available healthy loxilbs and update its state to the new master. The new master loxilb will advertise the svcIPs with higher preference, which will force the BGP running on the client to send the traffic towards the new master loxilb. Since the connections are all synced up, the new master loxilb will start sending the traffic to the designated endpoints.
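Whether the long-lived connections have actually been synced can be spot-checked on the peer loxilb before triggering a failover; a sketch, assuming loxicmd's conntrack listing subcommand (loxicmd get conntrack) is available in the image:
# on the backup loxilb instance\ndocker exec -it loxilb loxicmd get conntrack\n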
Please read this detailed blog about \"Hitless HA\" to know about this feature.
"},{"location":"ha-deploy/#scenario-5-active-backup-with-fast-failover-detection","title":"Scenario 5 - ACTIVE-BACKUP with Fast Failover Detection","text":""},{"location":"ha-deploy/#setup_4","title":"Setup","text":"For this deployment scenario, kubernetes and loxilb are setup as follows:
This feature is only supported when loxilb runs externally outside the Kubernetes cluster. Kubernetes uses a cluster with 2 Master Nodes and 2 Worker Nodes, all the nodes use the same 192.168.80.0/24 subnet. SVCs will have an external IP.
There are a few possible scenarios depending upon the connectivity between the external client, loxilb and the Kubernetes cluster. For this scenario, we are considering L2 connectivity with connection sync.
"},{"location":"ha-deploy/#ideal-for-use-when_4","title":"Ideal for use when","text":" - Need fast failover detection and service continuity.
- This feature works in both L2 or L3 network settings.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-kube-loxilb_4","title":"Roles and Responsiblities for kube-loxilb:","text":" - Choose CIDR as required.
- Disable the SetRoles option so that it does not choose an active loxilb.
- Automates provisioning of bgp-peering between loxilb containers (if required).
- Sets up loxilb in configured svc mode towards end-points.
"},{"location":"ha-deploy/#configuration-options_7","title":"Configuration options","text":" containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n ....\n ....\n # Disable setRoles option\n #- --setRoles=0.0.0.0\n - --loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\n - --externalCIDR=192.168.80.5/32\n - --setLBMode=2\n
- \"--setRoles=0.0.0.0\" - We have to make sure to disable this option as it will enable kube-loxilb to choose active-backup amongst the loxilb instance.
- \"--loxiURL=http://192.168.80.1:11111,http://192.168.80.2:11111\" - loxilb URLs to connect with.
- \"--externalCIDR=192.168.80.5/32\" - The external service IP for a svc is chosen from the externalCIDR range. In this scenario, the Client, svc and cluster are all in the same subnet.
- \"--setLBMode=2\" - This option will enable kube-loxilb to configure svc in fullnat mode towards the endpoints.
Sample kube-loxilb.yaml can be found here.
"},{"location":"ha-deploy/#roles-and-responsiblities-for-loxilb_4","title":"Roles and Responsiblities for loxilb:","text":" - Advertises SVC IP as per the state(active/backup).
- Tracks and directs the external traffic destined to svc to the endpoints.
- Monitors endpoint's health and chooses active endpoints, if configured.
- Syncs the long-lived connections to all other configured loxilb peers.
"},{"location":"ha-deploy/#running-options_1","title":"Running options","text":"#llb1\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.80.2 --self=0 --ka=192.168.80.2:192.168.80.1\n\n#llb2\n docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=192.168.80.1 --self=1 --ka=192.168.80.1:192.168.80.2\n
- \"--ka=\\<llb-peer-IP>:\\<llb-self-IP>\" - option configures the peer loxilb IP and source IP for BFD.
- \"--cluster=\\<llb-peer-IP>\" - option configures the peer loxilb IP for syncing.
- \"--self=0/1\" - option to identify the instance.
"},{"location":"ha-deploy/#failover_4","title":"Failover","text":"This diagram describes the failover scenario:
In case of failure, BFD will detect the failure. The BACKUP loxilb will update its state to the new master. The new master loxilb will advertise the svcIPs through gARP, or with higher preference if running with BGP, which will force the client to send the traffic towards the new master loxilb. Since the connections are all synced up, the new master loxilb will start sending the traffic to the designated endpoints.
Please read this detailed blog about \"Fast Failover Detection with BFD\" to know about this feature.
"},{"location":"ha-deploy/#note","title":"Note :","text":"There are ways to use loxilb in DSR mode, DNS etc which is still not covered in details in this doc. We will keep updating the scenarios.
"},{"location":"https/","title":"HTTPS guide","text":"Key and Cert files are required for HTTPS, and they are not detailed, but explain how to generate them and where LoxiLB can read and use user-generated Key and Cert files.
--tls enable TLS [$TLS]\n --tls-host= the IP to listen on for tls, when not specified it's the same as --host [$TLS_HOST]\n --tls-port= the port to listen on for secure connections (default: 8091) [$TLS_PORT]\n --tls-certificate= the certificate to use for secure connections (default:\n /opt/loxilb/cert/server.crt) [$TLS_CERTIFICATE]\n --tls-key= the private key to use for secure connections (default:\n /opt/loxilb/cert/server.key) [$TLS_PRIVATE_KEY]\n
HTTPS on LoxiLB is enabled with the --tls option. tls-host and tls-port decide which IP address and port to listen on. The default tls-host is 0.0.0.0, which listens on all addresses, but for better security we recommend restricting it to specific values. The default port is 8091; you can change it to any value that does not overlap with other services you use.
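For example, to restrict the listener to a specific address and a non-default port, the flags from the help output above can be combined (the address and port here are only examples):
./loxilb --tls --tls-host=192.168.80.9 --tls-port=9443\n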
By default, LoxiLB reads the key as server.key from the /opt/loxilb/cert/ path and the Cert file as server.crt from the same path. In this article, we will learn how to create the server.key and server.crt files.
You can enable and run HTTPS (TLS) with the following command.
./loxilb --tls\n
"},{"location":"https/#preparation","title":"Preparation","text":"First of all, the simplest way is to create it using openssl. To install openssl, you can install it using the command below.
apt install openssl\n
The LoxiLB team confirmed that it works with openssl version 1.1.1f. openssl version\nOpenSSL 1.1.1f  31 Mar 2020\n
"},{"location":"https/#1-create-serverkey","title":"1. Create server.key","text":"openssl genrsa -out server.key 2048\n
Generating server.key is simple: run the command above to create a new key. When you run it, the generation progress is printed and server.key is written to the current directory.
openssl genrsa -out server.key 2048\nGenerating RSA private key, 2048 bit long modulus (2 primes)\n..............................................+++++\n...........................................+++++\ne is 65537 (0x010001)\n
"},{"location":"https/#2-create-servercsr","title":"2. Create server.csr","text":"openssl req -new -key server.key -out server.csr\n
Create a CSR file by filling in the desired value for each item. This file is not used directly for HTTPS, but it is needed to create the Cert file later. When you type in the command above, you are prompted to enter some information; fill in each value according to your situation.
openssl req -new -key server.key -out server.csr\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:\nState or Province Name (full name) [Some-State]:\nLocality Name (eg, city) []:\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:\nOrganizational Unit Name (eg, section) []:\nCommon Name (e.g. server FQDN or YOUR name) []:\nEmail Address []:\n\nPlease enter the following 'extra' attributes\nto be sent with your certificate request\nA challenge password []:\nAn optional company name []:\n
"},{"location":"https/#3-create-servercrt","title":"3. Create server.crt","text":"openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt\n
This is the process of creating server.crt using the server.key and server.csr generated above. You can issue a certificate with a limited validity period by putting the desired number of days after the -days option. The server.crt file is created with the following output. openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt\nSignature ok\nsubject=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd\nGetting Private key\n
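If you prefer, the key and a self-signed certificate can also be produced in a single step, skipping the separate CSR; a sketch of an equivalent one-liner (the subject value is only an example):
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt -days 365 -subj \"/CN=loxilb\"\n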
"},{"location":"https/#4-validation","title":"4. Validation","text":"You can enable https with the server.key and server.cert files generated through the above process.
If you copy these files to the /opt/loxilb/cert/ path and run loxilb again, you can see that they work well.
sudo cp server.key /opt/loxilb/cert/.\nsudo cp server.crt /opt/loxilb/cert/.\n./loxilb --tls\n
curl http://0.0.0.0:11111/netlox/v1/config/loadbalancer/all\n{\"lbAttr\":[]}\n\n curl -k https://0.0.0.0:8091/netlox/v1/config/loadbalancer/all\n{\"lbAttr\":[]}\n
It should appear in the log as follows.
2024/04/12 16:19:48 Serving loxilb rest API at http://[::]:11111\n2024/04/12 16:19:48 Serving loxilb rest API at https://[::]:8091\n
"},{"location":"integrate_bgp/","title":"loxilb & calico BGP \uc5f0\ub3d9","text":"\uc774 \ubb38\uc11c\uc5d0\uc11c\ub294 calico CNI\ub97c \uc0ac\uc6a9\ud558\ub294 kubernetes\uc640 loxilb\ub97c \uc5f0\ub3d9\ud558\ub294 \ubc29\ubc95\uc744 \uc124\uba85\ud569\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#_1","title":"\ud658\uacbd","text":"\uc774 \uc608\uc81c\uc5d0\uc11c\ub294 kubernetes\uc640 loxilb\uac00 \ub2e4\uc74c\uacfc \uac19\uc774 \uc5f0\uacb0\ub418\uc5b4 \uc788\ub2e4\uace0 \uac00\uc815\ud569\ub2c8\ub2e4. kubernetes\ub294 \ub2e8\uc21c\ud568\uc744 \uc704\ud574\uc11c \ub2e8\uc77c \ub9c8\uc2a4\ud130 \ud074\ub7ec\uc2a4\ud130\ub97c \uc0ac\uc6a9\ud558\uba70 \ubaa8\ub4e0 \ud074\ub7ec\uc2a4\ud130\ub294 192.168.57.0/24 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc744 \uc0ac\uc6a9\ud569\ub2c8\ub2e4. loxilb \ucee8\ud14c\uc774\ub108\uac00 \uc2e4\ud589\uc911\uc778 \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc \uc5ed\uc2dc kubernetes\uc640 \ub3d9\uc77c\ud55c \uc11c\ube0c\ub137\uc5d0 \uc5f0\uacb0\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4. \uc678\ubd80\uc5d0\uc11c kubernetes\uc811\uc18d\uc740 \ubaa8\ub450 \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc\uc640 loxilb\ub97c \uac70\uce58\ub3c4\ub85d \uc124\uc815\ud588\uc2b5\ub2c8\ub2e4.
\ud574\ub2f9 \uc608\uc81c\uc5d0\uc11c\ub294 docker\ub97c \uc0ac\uc6a9\ud574 loxilb \ucee8\ud14c\uc774\ub108\ub97c \uc2e4\ud589\ud569\ub2c8\ub2e4. \ud574\ub2f9 \uc608\uc81c\uc5d0\uc11c\ub294 kubernetes & calico\ub294 \uc774\ubbf8 \uc124\uce58\ub418\uc5b4 \uc788\ub2e4\uace0 \uac00\uc815\ud558\uace0 \uc124\uba85\ud569\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#1-loxilb-container","title":"1. loxilb container \uc0dd\uc131","text":""},{"location":"integrate_bgp/#11-docker-network","title":"1.1 docker network \uc0dd\uc131","text":"\uc6b0\uc120 loxilb\uc640 kubernetes \uc5f0\ub3d9\uc744 \uc704\ud574\uc11c\ub294 \uc11c\ub85c \ud1b5\uc2e0\ud560 \uc218 \uc788\uc5b4\uc57c \ud569\ub2c8\ub2e4. kubernetes & \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc\uac00 \uc5f0\uacb0\ub418\uc5b4 \uc788\ub294 \ub124\ud2b8\uc6cc\ud06c\uc5d0 loxilb \ucee8\ud14c\uc774\ub108\ub3c4 \uc5f0\uacb0\ub418\ub3c4\ub85d docker network\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4. \ud604\uc7ac \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc\ub294 eno6 \uc778\ud130\ud398\uc774\uc2a4\ub97c \ud1b5\ud574 kubernetes\uc640 \uc5f0\uacb0\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4. \ub530\ub77c\uc11c eno6 \uc778\ud130\ud398\uc774\uc2a4\ub97c parent\ub85c \uc0ac\uc6a9\ud558\ub294 macvlan \ud0c0\uc785 docker network \ub9cc\ub4e4\uc5b4\uc11c loxilb \ucee8\ud14c\uc774\ub108\uc5d0 \uc81c\uacf5\ud558\ub3c4\ub85d \ud558\uaca0\uc2b5\ub2c8\ub2e4. \ub2e4\uc74c \uba85\ub839\uc5b4\ub85c docker network\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4.
sudo docker network create -d macvlan -o parent=eno6 \\\n --subnet 192.168.57.0/24 \\\n --gateway 192.168.57.1 \\\n --aux-address 'cp1=192.168.57.101' \\\n --aux-address 'cp2=192.168.57.102' \\\n --aux-address 'cp3=192.168.57.103' k8snet\n
|\uc635\uc158|\uc124\uba85| |----|----| |-d macvlan|\ub124\ud2b8\uc6cc\ud06c \ud0c0\uc785\uc744 macvlan\uc73c\ub85c \uc9c0\uc815| |-o parent=eno6|eno6 \uc778\ud130\ud398\uc774\uc2a4\ub97c parent\ub85c \uc0ac\uc6a9\ud574\uc11c macvlan type \ub124\ud2b8\uc6cc\ud06c \uc0dd\uc131| |--subnet 192.168.57.0/24|\ub124\ud2b8\uc6cc\ud06c \uc11c\ube0c\ub137 \uc9c0\uc815| |--gateway 192.168.57.1|\uac8c\uc774\ud2b8\uc6e8\uc774 \uc124\uc815(\uc0dd\ub7b5 \uac00\ub2a5)| |--aux-address 'serverName=serverIP'|\ud574\ub2f9 \ub124\ud2b8\uc6cc\ud06c\uc5d0\uc11c \uc774\ubbf8 \uc0ac\uc6a9\uc911\uc778 IP \uc8fc\uc18c\ub4e4\uc774 \uc911\ubcf5\uc73c\ub85c \ucee8\ud14c\uc774\ub108\uc5d0 \ud560\ub2f9\ub418\uc9c0 \uc54a\ub3c4\ub85d \ubbf8\ub9ac \ub4f1\ub85d\ud558\ub294 \uc635\uc158| |k8snet|\ub124\ud2b8\uc6cc\ud06c \uc774\ub984\uc744 k8snet\uc73c\ub85c \uc9c0\uc815| \uc678\ubd80\uc5d0\uc11c kubernetes \uc11c\ube44\uc2a4\ub85c \uc811\uadfc\ud558\ub294 \ud2b8\ub798\ud53d \uc5ed\uc2dc loxilb\ub97c \uac70\uce58\ub3c4\ub85d, \uc678\ubd80\uc640 \ud1b5\uc2e0\uc774 \uac00\ub2a5\ud55c docker network \uc5ed\uc2dc \uc0dd\uc131\ud569\ub2c8\ub2e4. \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc\ub294 eno8\uc744 \ud1b5\ud574 \uc678\ubd80\uc640 \uc5f0\uacb0\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4.
sudo docker network create -d macvlan -o parent=eno8 \\\n --subnet 192.168.20.0/24 \\\n --gateway 192.168.20.1 llbnet\n
docker network list \uba85\ub839\uc5b4\ub85c \uc0dd\uc131\ud55c \ub124\ud2b8\uc6cc\ud06c\ub97c \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
netlox@nd8:~$ sudo docker network list\nNETWORK ID NAME DRIVER SCOPE\n5c97ae74fc32 bridge bridge local\n6142f53e8be6 host host local\n24ee7dbd7707 k8snet macvlan local\n81c96ceda375 llbnet macvlan local\n7bcd1738501b none null local\n
"},{"location":"integrate_bgp/#12-loxilb-container","title":"1.2 loxilb container \uc0dd\uc131","text":"loxilb container \uc774\ubbf8\uc9c0\ub294 github\uc5d0\uc11c \uc81c\uacf5\ub418\uace0 \uc788\uc2b5\ub2c8\ub2e4. \ucee8\ud14c\uc774\ub108 \uc774\ubbf8\uc9c0\ub9cc \uba3c\uc800 \ub2e4\uc6b4\ub85c\ub4dc\ud558\uace0 \uc2f6\uc744 \uacbd\uc6b0 \ub2e4\uc74c \uba85\ub839\uc5b4\ub97c \uc0ac\uc6a9\ud569\ub2c8\ub2e4.
docker pull ghcr.io/loxilb-io/loxilb:latest\n
\ub2e4\uc74c \uba85\ub839\uc5b4\ub85c loxilb \ucee8\ud14c\uc774\ub108\ub97c \uc0dd\uc131\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
sudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \\\n --privileged -dit -v /dev/log:/dev/log \\\n --net=k8snet --ip=192.168.57.4 --name loxilb ghcr.io/loxilb-io/loxilb:latest \\\n --host=0.0.0.0\n
\uc0ac\uc6a9\uc790\uac00 \uc9c0\uc815\ud574\uc57c \ud558\ub294 \uc635\uc158\uc740 \ub2e4\uc74c\uacfc \uac19\uc2b5\ub2c8\ub2e4. |\uc635\uc158|\uc124\uba85| |----|----| |--net=k8snet|\ucee8\ud14c\uc774\ub108\ub97c \uc5f0\uacb0\ud560 \ub124\ud2b8\uc6cc\ud06c| |--ip=192.168.57.4|\ucee8\ud14c\uc774\ub108\uac00 \uc0ac\uc6a9\ud560 IP address \uc9c0\uc815. \uc9c0\uc815\ud558\uc9c0 \uc54a\uc744 \uacbd\uc6b0 network \uc11c\ube0c\ub0c7 \ubc94\uc704 \ub0b4\uc5d0\uc11c \uc784\uc758\uc758 IP \uc0ac\uc6a9| |--name loxilb|\ucee8\ud14c\uc774\ub108 \uc774\ub984 \uc124\uc815| docker ps \uba85\ub839\uc5b4\ub85c \uc0dd\uc131\ud55c \ucee8\ud14c\uc774\ub108\ub97c \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
netlox@nd8:~$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\neae349a283ae loxilbio/loxilb:beta \"/root/loxilb-io/lox\u2026\" 11 days ago Up 11 days loxilb\n
\uc704\uc5d0\uc11c \ucee8\ud14c\uc774\ub108 \uc0dd\uc131\ud560 \ub54c kubernetes \ub124\ud2b8\uc6cc\ud06c\ub9cc \uc5f0\uacb0\ud588\uae30 \ub54c\ubb38\uc5d0, \uc678\ubd80 \ud1b5\uc2e0\uc6a9 docker network\uc640\ub3c4 \uc5f0\uacb0\ud574\uc57c \ud569\ub2c8\ub2e4. \ub2e4\uc74c \uba85\ub839\uc5b4\ub85c \ucee8\ud14c\uc774\ub108\uc5d0 \ub124\ud2b8\uc6cc\ud06c\ub97c \uc8fc\uac00\ub85c \uc5f0\uacb0\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
sudo docker network connect llbnet loxilb\n
\uc5f0\uacb0\uc774 \uc644\ub8cc\ub418\uba74 \ub2e4\uc74c\uacfc \uac19\uc774 \ucee8\ud14c\uc774\ub108\uc758 \uc778\ud130\ud398\uc774\uc2a4 2\uac1c\ub97c \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4
netlox@netlox:~$ sudo docker exec -ti loxilb ip route\ndefault via 192.168.20.1 dev eth0\n192.168.20.0/24 dev eth0 proto kernel scope link src 192.168.20.4\n192.168.30.0/24 dev eth1 proto kernel scope link src 192.168.30.2\n
"},{"location":"integrate_bgp/#2-kubernetes-loxi-ccm","title":"2. kubernetes\uc5d0 loxi-ccm \uc124\uce58","text":"loxi-ccm\uc740 loxilb \ub85c\ub4dc\ubc38\ub7f0\uc11c\ub97c kubernetes\uc5d0\uac8c \uc81c\uacf5\ud558\uae30 \uc704\ud55c cloud-controller-manager \ub85c\uc11c, kubernetes\uc640 loxilb \uc5f0\ub3d9\uc5d0 \ubc18\ub4dc\uc2dc \ud544\uc694\ud569\ub2c8\ub2e4. \ud574\ub2f9 \ubb38\uc11c\ub97c \ucc38\uace0\ud574\uc11c, configMap\uc758 apiServerURL\uc744 \uc704\uc5d0\uc11c \uc0dd\uc131\ud55c loxilb\uc758 IP \uc8fc\uc18c\ub85c \ubcc0\uacbd\ud55c \ud6c4 kubernetes\uc5d0 \uc124\uce58\ud558\uc2dc\uba74 \ub429\ub2c8\ub2e4. loxi-ccm\uae4c\uc9c0 \uc815\uc0c1\uc801\uc73c\ub85c \uc124\uce58\ub418\uc5c8\ub2e4\uba74 \uc5f0\ub3d9 \uc791\uc5c5\uc774 \uc644\ub8cc\ub429\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#3","title":"3. \uae30\ubcf8 \uc5f0\ub3d9 \ud655\uc778","text":"2\ubc88 \ud56d\ubaa9\uae4c\uc9c0 \uc644\ub8cc\ub418\uc5c8\ub2e4\uba74, \uc774\uc81c kubernetes\uc5d0\uc11c LoadBalancer \ud0c0\uc785 \uc11c\ube44\uc2a4\ub97c \uc0dd\uc131\ud558\uba74 External IP\uac00 \ubd80\uc5ec\ub429\ub2c8\ub2e4. \ub2e4\uc74c\uacfc \uac19\uc774 \ud14c\uc2a4\ud2b8\uc6a9\uc73c\ub85c test-nginx-svc.yaml \ud30c\uc77c\uc744 \uc0dd\uc131\ud569\ub2c8\ub2e4.
apiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n name: http-web-svc\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 8888\n targetPort: http-web-svc\n
\ud30c\uc77c\uc744 \uc0dd\uc131\ud55c \ub2e4\uc74c, \uc544\ub798 \uba85\ub839\uc73c\ub85c nginx pod\uc640 LoadBalancer \uc11c\ube44\uc2a4\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4.
kubectl apply -f test-nginx-svc.yaml\n
\uc11c\ube44\uc2a4 nginx-service\uac00 LoadBalancer \ud0c0\uc785\uc73c\ub85c \uc0dd\uc131\ub418\uc5c8\uace0 External IP\ub97c \ud560\ub2f9\ubc1b\uc558\uc74c\uc744 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc774\uc81c IP 123.123.123.15\uc640 port 8888\uc744 \uc0ac\uc6a9\ud574 \uc678\ubd80\uc5d0\uc11c kubernetes \uc11c\ube44\uc2a4\ub85c \uc811\uadfc\uc774 \uac00\ub2a5\ud569\ub2c8\ub2e4.
vagrant@node1:~$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.233.0.1 <none> 443/TCP 28d\nnginx-service LoadBalancer 10.233.21.235 123.123.123.15 8888:31655/TCP 3s\n
LoadBalancer \ub8f0\uc740 loxilb \ucee8\ud14c\uc774\ub108\uc5d0\ub3c4 \uc0dd\uc131\ub429\ub2c8\ub2e4. \ub85c\ub4dc\ubc38\ub7f0\uc11c \ub178\ub4dc\uc5d0\uc11c \ub2e4\uc74c\uacfc \uac19\uc774 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
netlox@nd8:~$ sudo docker exec -ti loxilb loxicmd get lb\n| EXTERNAL IP | PORT | PROTOCOL | SELECT | # OF ENDPOINTS |\n|----------------|------|----------|--------|----------------|\n| 123.123.123.15 | 8888 | tcp | 0 | 2 |\n
"},{"location":"integrate_bgp/#4-calico-bgp-loxilb","title":"4. calico BGP & loxilb \uc5f0\ub3d9","text":"calico\uc5d0\uc11c BGP \ubaa8\ub4dc\ub85c \ub124\ud2b8\uc6cc\ud06c\ub97c \uad6c\uc131\ud560 \uacbd\uc6b0, loxilb \uc5ed\uc2dc BGP \ubaa8\ub4dc\ub85c \ub3d9\uc791\ud574\uc57c \ud569\ub2c8\ub2e4. loxilb\ub294 goBGP \uae30\ubc18\uc73c\ub85c BGP \ub124\ud2b8\uc6cc\ud06c \uae30\ub2a5\uc744 \uc9c0\uc6d0\ud569\ub2c8\ub2e4. \uc774\ud558 \ub0b4\uc6a9\uc740 calico\uac00 BGP mode\ub85c \uc124\uc815\ub418\uc5b4 \uc788\ub2e4\uace0 \uac00\uc815\ud558\uace0 \uc124\uba85\ud569\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#41-loxilb-bgp","title":"4.1 loxilb BGP \ubaa8\ub4dc\ub85c \uc2e4\ud589","text":"\ub2e4\uc74c \uba85\ub839\uc5b4\ub85c loxilb \ucee8\ud14c\uc774\ub108\ub97c \uc0dd\uc131\ud558\uba74 BGP \ubaa8\ub4dc\ub85c \uc2e4\ud589\ub429\ub2c8\ub2e4. \uba85\ub839\uc5b4 \ub9c8\uc9c0\ub9c9\uc758 -b \uc635\uc158\uc774 BGP \ubaa8\ub4dc \uc635\uc158\uc785\ub2c8\ub2e4.
sudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \\\n --privileged -dit -v /dev/log:/dev/log \\\n --net=k8snet --ip=192.168.57.4 --name loxilb ghcr.io/loxilb-io/loxilb:latest \\\n --host=0.0.0.0 -b\n
"},{"location":"integrate_bgp/#42-gobgp_loxilbyaml","title":"4.2 gobgp_loxilb.yaml \ud30c\uc77c \uc0dd\uc131","text":"loxilb \ucee8\ud14c\uc774\ub108\uc758 /opt/loxilb/ \ub514\ub809\ud1a0\ub9ac\uc5d0 gobgp_loxilb.yaml \ud30c\uc77c\uc744 \uc0dd\uc131\ud569\ub2c8\ub2e4.
global:\n config:\n as: 65002\n router-id: 172.1.0.2\nneighbors:\n - config:\n neighbor-address: 192.168.57.101\n peer-as: 64512\n - config:\n neighbor-address: 192.168.20.55\n peer-as: 64001\n
global \ud56d\ubaa9\uc5d0\ub294 loxilb \ucee8\ud14c\uc774\ub108\uc758 as-id\uc640 router-id \ub4f1 BGP \uc815\ubcf4\ub97c \ub4f1\ub85d\ud574\uc57c \ud569\ub2c8\ub2e4. neighbors\ub294 loxilb\uc640 Peering\ub418\ub294 BGP \ub77c\uc6b0\ud130\uc758 IP \uc8fc\uc18c \ubc0f as-id \uc815\ubcf4\ub97c \ub4f1\ub85d\ud569\ub2c8\ub2e4. \ud574\ub2f9 \uc608\uc81c\uc5d0\uc11c\ub294 calico\uc758 BGP \uc815\ubcf4(192.168.57.101)\uc640 \uc678\ubd80 BGP \uc815\ubcf4(129.168.20.55) \ub97c \ub4f1\ub85d\ud588\uc2b5\ub2c8\ub2e4.
"},{"location":"integrate_bgp/#43-loxilb-lo-router-id","title":"4.3 loxilb \ucee8\ud14c\uc774\ub108\uc758 lo \uc778\ud130\ud398\uc774\uc2a4\uc5d0 router-id \ucd94\uac00","text":"gobgp_loxilb.yaml \ud30c\uc77c\uc5d0\uc11c router-id\ub85c \ub4f1\ub85d\ud55c IP\ub97c lo \uc778\ud130\ud398\uc774\uc2a4\uc5d0 \ucd94\uac00\ud574\uc57c \ud569\ub2c8\ub2e4.
sudo docker exec -ti loxilb ip addr add 172.1.0.2/32 dev lo\n
"},{"location":"integrate_bgp/#44-loxilb","title":"4.4 loxilb \ucee8\ud14c\uc774\ub108 \uc7ac\uc2dc\uc791","text":"gobgp_loxilb.yaml\uc5d0 \uc791\uc131\ud55c \uc124\uc815\uc774 \uc801\uc6a9\ub418\ub3c4\ub85d \ucee8\ud14c\uc774\ub108\ub97c \uc7ac\uc2dc\uc791\ud569\ub2c8\ub2e4.
sudo docker stop loxilb\nsudo docker start loxilb\n
"},{"location":"integrate_bgp/#45-calico-bgp-peer","title":"4.5 calico\uc5d0 BGP Peer \uc815\ubcf4 \ucd94\uac00","text":"calico\uc5d0\ub3c4 loxilb\uc758 BGP Peer \uc815\ubcf4\ub97c \ucd94\uac00\ud574\uc57c \ud569\ub2c8\ub2e4. \ub2e4\uc74c\uacfc \uac19\uc774 calico-bgp-config.yaml \ud30c\uc77c\uc744 \uc0dd\uc131\ud569\ub2c8\ub2e4.
apiVersion: projectcalico.org/v3\nkind: BGPPeer\nmetadata:\n name: my-global-peers2\nspec:\n peerIP: 192.168.57.4\n asNumber: 65002\n
peerIP\uc5d0 loxilb\uc758 IP \uc8fc\uc18c\ub97c \uc785\ub825\ud569\ub2c8\ub2e4. asNumber\uc5d0\ub294 \uc704\uc5d0\uc11c \uc124\uc815\ud55c loxilb BGP\uc758 as-ID\ub97c \uc785\ub825\ud569\ub2c8\ub2e4. \ud30c\uc77c\uc744 \uc0dd\uc131\ud55c \ub2e4\uc74c, \uc544\ub798 \uba85\ub839\uc5b4\ub85c calico\uc5d0 BGP Peer \uc815\ubcf4\ub97c \ucd94\uac00\ud569\ub2c8\ub2e4.
sudo calicoctl apply -f calico-bgp-config.yaml\n
"},{"location":"integrate_bgp/#46-bgp","title":"4.6 BGP \uc124\uc815 \ud655\uc778","text":"\uc774\uc81c \ub2e4\uc74c\uacfc \uac19\uc774 loxilb \ucee8\ud14c\uc774\ub108\uc5d0\uc11c BGP \uc5f0\uacb0\uc744 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
netlox@nd8:~$ sudo docker exec -ti loxilb3 gobgp neigh\nPeer AS Up/Down State |#Received Accepted\n192.168.57.101 64512 00:00:59 Establ | 4 4\n
\uc815\uc0c1\uc801\uc73c\ub85c \uc5f0\uacb0\ub418\uc5c8\ub2e4\uba74 State\uac00 Establish\ub85c \ud45c\uc2dc\ub429\ub2c8\ub2e4. gobgp global rib \uba85\ub839\uc73c\ub85c calico\uc758 route \uc815\ubcf4\ub97c \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
netlox@nd8:~$ sudo docker exec -ti loxilb3 gobgp global rib\n Network Next Hop AS_PATH Age Attrs\n*> 10.233.71.0/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.74.64/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.75.0/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.102.128/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n
"},{"location":"integrate_bgp_eng/","title":"How to run loxilb with calico CNI in BGP mode","text":"This article describes how to integrate loxilb using calico CNI in Kubernetes.
"},{"location":"integrate_bgp_eng/#setup","title":"Setup","text":"For this example, kubernetes and loxilb are setup as follows:
Kubernetes uses a single master cluster for simplicity, all clusters use the same 192.168.57.0/24 subnet. The load balancer node where the loxilb container is running is also connected to the same subnet as kubernetes. Externally, all kubernetes connections are configured to go through the \"loxilb\" load balancer node.
This example uses docker to run the loxilb container. These examples assume that kubernetes & calico are already installed.
"},{"location":"integrate_bgp_eng/#1-loxilb-setup","title":"1. loxilb setup","text":""},{"location":"integrate_bgp_eng/#11-docker-network-setup","title":"1.1 docker network setup","text":"In order to integrate loxilb and kubernetes, loxilb needs to be able to communicate with Kubernetes. First, we create a docker network so that the loxilb container is also connected to the same network which is used by kubernetes nodes. Currently, the load balancer node is connected to kubernetes through the eno6 interface. Therefore, we will create a macvlan-type docker network that uses the eno6 interface as a parent and provide it to the loxilb docker. Create a docker network with the following command:
sudo docker network create -d macvlan -o parent=eno6 \\\n --subnet 192.168.57.0/24 \\\n --gateway 192.168.57.1 \\\n --aux-address 'cp1=192.168.57.101' \\\n --aux-address 'cp2=192.168.57.102' \\\n --aux-address 'cp3=192.168.57.103' k8snet\n
|Options|Description| |----|----| |-d macvlan|Specify network type as macvlan| |-o parent=eno6|Create a macvlan type network using the eno6 interface as parent| |--subnet 192.168.57.0/24|Specify network subnet| |--gateway 192.168.57.1|Set gateway (optional)| |--aux-address 'serverName=serverIP'|Option to register in advance so that IP addresses already in use on the network are not duplicated| |k8snet|Name the network k8snet| A docker network that can communicate with the outside is also created so that traffic accessing the kubernetes service from outside also goes through loxilb. The load balancer node is connected to the outside through eno8.
sudo docker network create -d macvlan -o parent=eno8 \\\n --subnet 192.168.20.0/24 \\\n --gateway 192.168.20.1 llbnet\n
We can check the network created with the docker network list command.
netlox@nd8:~$ sudo docker network list\nNETWORK ID NAME DRIVER SCOPE\n5c97ae74fc32 bridge bridge local\n6142f53e8be6 host host local\n24ee7dbd7707 k8snet macvlan local\n81c96ceda375 llbnet macvlan local\n7bcd1738501b none null local\n
"},{"location":"integrate_bgp_eng/#12-loxilb-docker-setup","title":"1.2 loxilb docker setup","text":"The loxilb container image is provided at github. To download the docker image, use the following command.
docker pull ghcr.io/loxilb-io/loxilb:latest\n
To run loxilb docker, we can use the following command.
sudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \\\n --privileged -dit -v /dev/log:/dev/log \\\n --net=k8snet --ip=192.168.57.4 --name loxilb ghcr.io/loxilb-io/loxilb:latest \\\n --host=0.0.0.0\n
The options that need to be specified are: |Options|Description| |----|----| |--net=k8snet|Network to connect the container to| |--ip=192.168.57.4|Specifies the IP address the container will use. If not specified, an arbitrary IP within the network subnet range is used| |--name loxilb|Set the container name| We can check the created container with the docker ps command.
netlox@nd8:~$ sudo docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\neae349a283ae loxilbio/loxilb:beta \"/root/loxilb-io/lox\u2026\" 11 days ago Up 11 days loxilb\n
Since we only connected the kubernetes network (k8snet) when running the docker above, we also need to connect the docker network for external communication. Currently, docker only supports connecting one network when running the \"docker run\" command, but it's easy to connect another network to the docker with the following command: sudo docker network connect llbnet loxilb\n
Once the connection is complete, we can see the docker container's interfaces as follows:
netlox@netlox:~$ sudo docker exec -ti loxilb ip route\ndefault via 192.168.20.1 dev eth0\n192.168.20.0/24 dev eth0 proto kernel scope link src 192.168.20.4\n192.168.30.0/24 dev eth1 proto kernel scope link src 192.168.30.2\n
"},{"location":"integrate_bgp_eng/#2-loxi-ccm-setup-in-kubernetes","title":"2. loxi-ccm setup in kubernetes","text":"loxi-ccm is a ccm provider to provide a loxilb load balancer to kubernetes, and it is essential for interworking with kubernetes and loxilb. Refer to the relevant document, change the apiServerURL of configMap to the IP address of loxilb created above and install it in kubernetes. If loxi-ccm is installed properly, the setup is complete.
"},{"location":"integrate_bgp_eng/#3basic-load-balancer-test","title":"3.Basic load-balancer test","text":"We can now give an External IP when you create a LoadBalancer type service in kubernetes. Create a test-nginx-svc.yaml file for testing as follows:
apiVersion: v1\nkind: Pod\nmetadata:\n name: nginx\n labels:\n app.kubernetes.io/name: proxy\nspec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n name: http-web-svc\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\nspec:\n type: LoadBalancer\n selector:\n app.kubernetes.io/name: proxy\n ports:\n - name: name-of-service-port\n protocol: TCP\n port: 8888\n targetPort: http-web-svc\n
The above steps create the nginx pod and then associates a LoadBalancer service to it. The command to apply is as follows
kubectl apply -f test-nginx-svc.yaml\n
We can verify that the service nginx-service has been created as a LoadBalancer type and has been assigned an External IP. Now you can access the kubernetes service from outside using IP 123.123.123.15 and port 8888.
vagrant@node1:~$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.233.0.1 <none> 443/TCP 28d\nnginx-service LoadBalancer 10.233.21.235 123.123.123.15 8888:31655/TCP 3s\n
The LoadBalancer rule is also created in the loxilb container. We can check in the loxilb load-balancer node as follows
netlox@nd8:~$ sudo docker exec -ti loxilb loxicmd get lb\n| EXTERNAL IP | PORT | PROTOCOL | SELECT | # OF ENDPOINTS |\n|----------------|------|----------|--------|----------------|\n| 123.123.123.15 | 8888 | tcp | 0 | 2 |\n
"},{"location":"integrate_bgp_eng/#4-calico-bgp-loxilb-setup","title":"4. calico BGP & loxilb setup","text":"If calico configures the network in BGP mode, loxilb must also operate in BGP mode. loxilb supports BGP functions based on goBGP. The following description assumes that calico is already set to use BGP mode.
"},{"location":"integrate_bgp_eng/#41-loxilb-bgp-mode-setup","title":"4.1 loxilb BGP mode setup","text":"If we create a loxilb container with the following command, it will run in BGP mode. The -b option at the end of the command is to enable BGP mode in loxilb.
sudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \\\n --privileged -dit -v /dev/log:/dev/log \\\n --net=k8snet --ip=192.168.57.4 --name loxilb ghcr.io/loxilb-io/loxilb:latest \\\n --host=0.0.0.0 -b\n
"},{"location":"integrate_bgp_eng/#42-gobgp_loxilbyaml-file-setup","title":"4.2 gobgp_loxilb.yaml file setup","text":"Create a gobgp_loxilb.yaml file in the /etc/gobgp/ directory of the loxilb container.
global:\n config:\n as: 65002\n router-id: 172.1.0.2\nneighbors:\n - config:\n neighbor-address: 192.168.57.101\n peer-as: 64512\n - config:\n neighbor-address: 192.168.20.55\n peer-as: 64001\n
BGP information such as as-id and router-id of the loxilb container must be registered as global items. The neighbors item need info about the IP address and as-id information of the BGP router peering with loxilb. In this example, calico's BGP information (192.168.57.101) and external BGP information (129.168.20.55) were registered.
"},{"location":"integrate_bgp_eng/#43-add-router-id-to-the-lo-interface-of-the-loxilb-container","title":"4.3 Add router-id to the lo interface of the loxilb container","text":"We need to add the IP registered as loxilb router-id in the gobgp_loxilb.yaml file to the lo interface of loxilb docker.
sudo docker exec -ti loxilb ip addr add 172.1.0.2/32 dev lo\n
"},{"location":"integrate_bgp_eng/#44-loxilb-docker-restart","title":"4.4 loxilb docker restart","text":"Restart the loxilb docker for the settings in gobgp_loxilb.yaml to take effect.
sudo docker stop loxilb\nsudo docker start loxilb\n
"},{"location":"integrate_bgp_eng/#45-setup-bgp-peer-information-in-calico","title":"4.5 Setup BGP Peer information in Calico","text":"We also need to add loxilb's BGP peer information to calico. Create the calico-bgp-config.yaml file as follows:
apiVersion: projectcalico.org/v3\nkind: BGPPeer\nmetadata:\n name: my-global-peers2\nspec:\n peerIP: 192.168.57.4\n asNumber: 65002\n
In peerIP, enter the IP address of loxilb. In asNumber, enter the as-ID of the loxilb BGP set above. After creating the file, add BGP peer information to calico with the command below.
sudo calicoctl apply -f calico-bgp-config.yaml\n
"},{"location":"integrate_bgp_eng/#46-check-bgp-status","title":"4.6 Check BGP status","text":"We now check the BGP connectivity in the loxilb docker like this:
netlox@nd8:~$ sudo docker exec -ti loxilb3 gobgp neigh\nPeer AS Up/Down State |#Received Accepted\n192.168.57.101 64512 00:00:59 Establ | 4 4\n
If the connection is successful, the State will be shown as \"Established\". We can check the route information of calico with the gobgp global rib command.
netlox@nd8:~$ sudo docker exec -ti loxilb3 gobgp global rib\n Network Next Hop AS_PATH Age Attrs\n*> 10.233.71.0/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.74.64/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.75.0/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n*> 10.233.102.128/26 192.168.57.101 64512 01:02:03 [{Origin: i}]\n
"},{"location":"k0s_quick_start/","title":"K0s/loxilb with kube-router","text":""},{"location":"k0s_quick_start/#loxilb-quick-start-guide-with-k0skube-router","title":"LoxiLB Quick Start Guide with k0s/kube-router","text":"This guide will explain how to:
- Deploy a single-node K0s cluster with kube-router networking
- Expose services with loxilb as an external load balancer
"},{"location":"k0s_quick_start/#prerequisites","title":"Prerequisite(s)","text":" - Single node with Linux
- Install docker runtime to manage loxilb
"},{"location":"k0s_quick_start/#topology","title":"Topology","text":"For quickly bringing up loxilb with K0s/kube-router, we will be deploying all components in a single node :
loxilb is run as a docker and will use macvlan for the incoming traffic. This is to mimic a topology close to cloud-hosted k8s where LB nodes run outside a cluster. loxilb can be used in more complex in-cluster mode as well, but not used here for simplicity.
"},{"location":"k0s_quick_start/#install-loxilb-docker","title":"Install loxilb docker","text":"## Set underlying interface of the VM/cluster-node to promisc mode for mac-vlan to work\nsudo ifconfig eth1 promisc\n\n## Run loxilb\nsudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged --entrypoint /root/loxilb-io/loxilb/loxilb -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# Create mac-vlan on top of underlying eth1 interface\nsudo docker network create -d macvlan -o parent=eth1 --subnet 192.168.82.0/24 --gateway 192.168.82.1 --aux-address 'host=192.168.82.252' llbnet\n\n# Assign mac-vlan to loxilb docker with specified IP (which will be used as LB VIP)\n# Please note that this node should already have an IP assigned belonging to the same subnet on underlying interface\nsudo docker network connect llbnet loxilb --ip=192.168.82.100\n\n# Add iptables rule to allow traffic from source/host IP(192.168.82.1) to loxilb\nsudo iptables -A DOCKER -s 192.168.82.1 -j ACCEPT\n
All the above steps related to docker setup can be further automated using docker-compose.
"},{"location":"k0s_quick_start/#setup-k0skube-router-in-single-node","title":"Setup k0s/kube-router in single-node","text":"#K0s installation steps\ncurl -sSLf https://get.k0s.sh | sudo sh\nsudo k0s install controller --single\nsudo k0s start\n
"},{"location":"k0s_quick_start/#check-k0s-status","title":"Check k0s status","text":"sudo k0s status\n
"},{"location":"k0s_quick_start/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used to deploy loxilb with Kubernetes.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
Change args in kube-loxilb.yaml
args:\n - --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setMode=1\n
In the above snippet, loxiURL uses docker interface IP of loxilb, which can be different for each setup. Apply in k0s:
$ sudo k0s kubectl apply -f kube-loxilb.yaml\n
"},{"location":"k0s_quick_start/#create-the-service","title":"Create the service","text":"$ sudo k0s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/docker-k0s-lb/tcp-svc-lb.yml\n
"},{"location":"k0s_quick_start/#check-the-status","title":"Check the status","text":"In k0s:
$ sudo k0s kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 80m\ntcp-lb-onearm LoadBalancer 10.43.183.123 llb-192.168.82.100 56002:30001/TCP 6m50s\n
In loxilb docker: $ sudo docker exec -it loxilb loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 12:880 |\n
"},{"location":"k0s_quick_start/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
All of the above steps are also available as part of loxilb CICD workflow. Follow the steps below to replicate the above (please note that you will need vagrant tool installed to run:
$ git clone https://github.com/loxilb-io/loxilb.git\n$ cd cicd/docker-k0s-lb/\n\n# To setup the single node k0s setup with kube-router networking and loxilb as external load balancer\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"k0s_quick_start_incluster/","title":"K0s/loxilb in-cluster mode","text":""},{"location":"k0s_quick_start_incluster/#quick-start-guide-with-k0s-and-loxilb-in-cluster-mode","title":"Quick Start Guide with K0s and LoxiLB in-cluster mode","text":"This document will explain how to install a K0s cluster with loxilb as a serviceLB provider running in-cluster mode.
"},{"location":"k0s_quick_start_incluster/#prerequisites","title":"Prerequisite(s)","text":" - Single node with Linux
"},{"location":"k0s_quick_start_incluster/#topology","title":"Topology","text":"For quickly bringing up loxilb in-cluster and K0s, we will be deploying all components in a single node :
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"k0s_quick_start_incluster/#setup-k0s-in-a-single-node","title":"Setup k0s in a single-node","text":"# k0s installation steps\ncurl -sSLf https://get.k0s.sh | sudo sh\nsudo k0s install controller --single\nsudo k0s start\n
"},{"location":"k0s_quick_start_incluster/#check-k0s-status","title":"Check k0s status","text":"$ sudo k0s status\nVersion: v1.29.2+k0s.0\nProcess ID: 2631\nWorkloads: true\nSingleNode: true\nKube-api probing successful: true\nKube-api probing last error: \n
"},{"location":"k0s_quick_start_incluster/#how-to-deploy-loxilb","title":"How to deploy loxilb ?","text":"loxilb can be deloyed by using the following command in the K3s node
sudo k0s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k0s-incluster/loxilb.yml\n
"},{"location":"k0s_quick_start_incluster/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb.
wget https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k0s-incluster/kube-loxilb.yml\n
kube-loxilb.yaml args:\n #- --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setRoles=0.0.0.0\n #- --monitor\n #- --setBGP\n
In the above snippet, loxiURL is commented out which denotes to utilize in-cluster mode to discover loxilb pods automatically. External CIDR represents the IP pool from where serviceLB VIP will be allocated. Apply after making changes (if any) :
sudo k0s kubectl apply -f kube-loxilb.yaml\n
"},{"location":"k0s_quick_start_incluster/#create-the-service","title":"Create the service","text":"sudo k0s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k0s-incluster/tcp-svc-lb.yml\n
"},{"location":"k0s_quick_start_incluster/#check-status-of-various-components-in-k0s-node","title":"Check status of various components in k0s node","text":"In k0s node:
## Check the pods created\n$ sudo k0s kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system kube-proxy-vczxm 1/1 Running 0 4m48s\nkube-system kube-router-gjp7g 1/1 Running 0 4m48s\nkube-system metrics-server-7556957bb7-25hsk 1/1 Running 0 4m50s\nkube-system coredns-6cd46fb86c-xllg2 1/1 Running 0 4m50s\nkube-system loxilb-lb-4fmdp 1/1 Running 0 3m43s\nkube-system kube-loxilb-6f44cdcdf5-ffdcv 1/1 Running 0 2m22s\ndefault tcp-onearm-test 1/1 Running 0 92s\n\n\n## Check the services created\n$ sudo k0s kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m28s\ntcp-lb-onearm LoadBalancer 10.96.108.109 llb-192.168.82.100 56002:32033/TCP 111s\n
In loxilb pod, we can check internal LB rules: $ sudo k0s kubectl exec -it -n kube-system loxilb-lb-4fmdp -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 32033 | 1 | active | 25:1842 |\n
"},{"location":"k0s_quick_start_incluster/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
For more detailed information on incluster deployment of loxilb with bgp in a full-blown cluster, kindly follow this blog. All of the above steps are also available as part of loxilb CICD workflow. Follow the steps below to replicate the above (please note that you will need vagrant tool installed to run:
$ git clone https://github.com/loxilb-io/loxilb.git\n$ cd cicd/k0s-incluster/\n\n# To setup the single node k0s setup with kube-router networking and loxilb as external load balancer\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# To login to the node and check the installation\n$ vagrant ssh k0s-node1\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"k3s-multi-master/","title":"How-To - Deploy multi-server K3s HA with loxilb","text":""},{"location":"k3s-multi-master/#guide-to-deploy-multi-master-ha-k3s-with-loxilb","title":"Guide to deploy multi-master HA K3s with loxilb","text":"This document will explain how to install a multi-master HA K3s cluster with loxilb as a serviceLB provider running in-cluster mode. K3s is a lightweight Kubernetes distribution and is increasingly used for prototyping as well as for production workloads. K3s nodes are deployed as: 1) k3s-server nodes for k3s control plane components like apiserver and etcd. 2) k3s-agent nodes hosting user workloads/apps. When we deploy multi-master nodes, it is necessary that they be accessed from the k3s-agents in HA configuration and behind a load-balancer. Usually deploying such a load-balancer is outside the scope of kubernetes.
In this guide, we will see how to deploy loxilb not only as cluster's serviceLB provider but also as a VIP-LB for accessing server/master node(s) services.
"},{"location":"k3s-multi-master/#topology","title":"Topology","text":"For multi-master setup we need an odd number of server nodes to maintain quorum. So, we will have 3 k3s-server nodes for this setup. Overall, we will be deploying the components as per the following topology :
"},{"location":"k3s-multi-master/#k3s-installation-and-setup","title":"K3s installation and Setup","text":""},{"location":"k3s-multi-master/#in-k3s-server1-node-","title":"In k3s-server1 node -","text":"$ curl -fL https://get.k3s.io | sh -s - server --node-ip=192.168.80.10 \\\n --disable servicelb --disable traefik --cluster-init external-hostname=192.168.80.10 \\\n --node-external-ip=192.168.80.80 --disable-cloud-controller\n
It is to be noted that --node-external-ip=192.168.80.80
is used since we will utilize 192.168.80.80 as the VIP to access the multi-master setup from k3s-agents and other clients."},{"location":"k3s-multi-master/#setup-the-node-for-loxilb","title":"Setup the node for loxilb :","text":"sudo mkdir -p /etc/loxilb\n
Create the following files in /etc/loxilb
- lbconfig.txt with following contents (change as per your requirement)
{\n \"lbAttr\":[\n {\n \"serviceArguments\":{\n \"externalIP\":\"192.168.80.80\",\n \"port\":6443,\n \"protocol\":\"tcp\",\n \"sel\":0,\n \"mode\":2,\n \"BGP\":false,\n \"Monitor\":true,\n \"inactiveTimeOut\":240,\n \"block\":0\n },\n \"secondaryIPs\":null,\n \"endpoints\":[\n {\n \"endpointIP\":\"192.168.80.10\",\n \"targetPort\":6443,\n \"weight\":1,\n \"state\":\"active\",\n \"counter\":\"\"\n },\n {\n \"endpointIP\":\"192.168.80.11\",\n \"targetPort\":6443,\n \"weight\":1,\n \"state\":\"active\",\n \"counter\":\"\"\n },\n {\n \"endpointIP\":\"192.168.80.12\",\n \"targetPort\":6443,\n \"weight\":1,\n \"state\":\"active\",\n \"counter\":\"\"\n }\n ]\n }\n ]\n}\n
2. EPconfig.txt with the following contents (change as per your requirement) {\n \"Attr\":[\n {\n \"hostName\":\"192.168.80.10\",\n \"name\":\"192.168.80.10_tcp_6443\",\n \"inactiveReTries\":2,\n \"probeType\":\"tcp\",\n \"probeReq\":\"\",\n \"probeResp\":\"\",\n \"probeDuration\":10,\n \"probePort\":6443\n },\n {\n \"hostName\":\"192.168.80.11\",\n \"name\":\"192.168.80.11_tcp_6443\",\n \"inactiveReTries\":2,\n \"probeType\":\"tcp\",\n \"probeReq\":\"\",\n \"probeResp\":\"\",\n \"probeDuration\":10,\n \"probePort\":6443\n },\n {\n \"hostName\":\"192.168.80.12\",\n \"name\":\"192.168.80.12_tcp_6443\",\n \"inactiveReTries\":2,\n \"probeType\":\"tcp\",\n \"probeReq\":\"\",\n \"probeResp\":\"\",\n \"probeDuration\":10,\n \"probePort\":6443\n }\n ]\n}\n
The above files serve as bootstrap LB rules for load-balancing towards the k3s-server nodes, as we will see later.
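Once loxilb is deployed later in this guide, these bootstrap rules can be verified from inside any loxilb pod. A minimal sketch, assuming a hypothetical pod name loxilb-lb-xxxxx (list the actual names with kubectl get pods -n kube-system first): sudo kubectl exec -it -n kube-system loxilb-lb-xxxxx -- loxicmd get lb -o wide\n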
"},{"location":"k3s-multi-master/#in-k3s-server2-node-","title":"In k3s-server2 node -","text":"$ curl -fL https://get.k3s.io | K3S_TOKEN=${NODE_TOKEN} sh -s - server --server https://192.168.80.10:6443 \\\n --disable traefik --disable servicelb --node-ip=192.168.80.11 \\\n external-hostname=192.168.80.11 --node-external-ip=192.168.80.80 -t ${NODE_TOKEN}\n
where NODE_TOKEN simply contains the contents of /var/lib/rancher/k3s/server/node-token from server1. For example, it can be set using a command equivalent to the following : export NODE_TOKEN=$(cat node-token)\n
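If the token has not been copied over beforehand, one possible way (an assumption, not part of the original guide) is to read it from server1 over ssh: export NODE_TOKEN=$(ssh <user>@192.168.80.10 sudo cat /var/lib/rancher/k3s/server/node-token)\n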
"},{"location":"k3s-multi-master/#setup-the-node-for-loxilb_1","title":"Setup the node for loxilb:","text":"Simply follow the steps as outlined for server1.
"},{"location":"k3s-multi-master/#in-k3s-server3-node-","title":"In k3s-server3 node -","text":"$ curl -fL https://get.k3s.io | K3S_TOKEN=${NODE_TOKEN} sh -s - server --server https://192.168.80.10:6443 \\\n --disable traefik --disable servicelb --node-ip=192.168.80.12 \\\n external-hostname=192.168.80.12 --node-external-ip=192.168.80.80 -t ${NODE_TOKEN}\n
where NODE_TOKEN simply contains the contents of /var/lib/rancher/k3s/server/node-token from server1. For example, it can be set using a command equivalent to the following : export NODE_TOKEN=$(cat node-token)\n
"},{"location":"k3s-multi-master/#setup-the-node-for-loxilb_2","title":"Setup the node for loxilb:","text":"First, follow the steps as outlined for server1. Additionally, we will have to start loxilb pod instances as follows :
$ sudo kubectl apply -f - <<EOF\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: loxilb-lb\n namespace: kube-system\nspec:\n selector:\n matchLabels:\n app: loxilb-app\n template:\n metadata:\n name: loxilb-lb\n labels:\n app: loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n - key: \"node-role.kubernetes.io/master\"\n operator: Exists\n - key: \"node-role.kubernetes.io/control-plane\"\n operator: Exists\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: \"node-role.kubernetes.io/master\"\n operator: Exists\n - key: \"node-role.kubernetes.io/control-plane\"\n operator: Exists\n volumes:\n - name: hllb\n hostPath:\n path: /etc/loxilb\n type: DirectoryOrCreate\n containers:\n - name: loxilb-app\n image: \"ghcr.io/loxilb-io/loxilb:latest\"\n imagePullPolicy: Always\n command:\n - /root/loxilb-io/loxilb/loxilb\n args:\n - --egr-hooks\n - --blacklist=cni[0-9a-z]|veth.|flannel.\n volumeMounts:\n - name: hllb\n mountPath: /etc/loxilb\n ports:\n - containerPort: 11111\n - containerPort: 179\n securityContext:\n privileged: true\n capabilities:\n add:\n - SYS_ADMIN\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: loxilb-lb-service\n namespace: kube-system\nspec:\n clusterIP: None\n selector:\n app: loxilb-app\n ports:\n - name: loxilb-app\n port: 11111\n targetPort: 11111\n protocol: TCP\nEOF\n
Kindly note that the args for loxilb might change depending on the scenario. This scenario considers loxilb running in-cluster mode. For service-proxy mode, please follow this yaml for exact args. Next, we will install loxilb's operator kube-loxilb as follows :
sudo kubectl apply -f - <<EOF\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - services/status\n verbs:\n - get\n - watch\n - list\n - patch\n - update\n - apiGroups:\n - gateway.networking.k8s.io\n resources:\n - gatewayclasses\n - gatewayclasses/status\n - gateways\n - gateways/status\n - tcproutes\n - udproutes\n verbs: [\"get\", \"watch\", \"list\", \"patch\", \"update\"]\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - watch\n - list\n - apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n - apiGroups:\n - authorization.k8s.io\n resources:\n - subjectaccessreviews\n verbs:\n - create\n - apiGroups:\n - bgppeer.loxilb.io\n resources:\n - bgppeerservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n - apiGroups:\n - bgppolicydefinedsets.loxilb.io\n resources:\n - bgppolicydefinedsetsservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n - apiGroups:\n - bgppolicydefinition.loxilb.io\n resources:\n - bgppolicydefinitionservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n - apiGroups:\n - bgppolicyapply.loxilb.io\n resources:\n - bgppolicyapplyservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kube-loxilb\nsubjects:\n - kind: ServiceAccount\n name: kube-loxilb\n namespace: kube-system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:latest\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n #- --loxiURL=http://192.168.80.10:11111\n - --externalCIDR=192.168.80.200/32\n #- --externalSecondaryCIDRs=124.124.124.1/24,125.125.125.1/24\n #- --setBGP=64512\n #- --listenBGPPort=1791\n - --setRoles=0.0.0.0\n #- --monitor\n #- --extBGPPeers=50.50.50.1:65101,51.51.51.1:65102\n #- --setLBMode=1\n #- --config=/opt/loxilb/agent/kube-loxilb.conf\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\n add: [\"NET_ADMIN\", \"NET_RAW\"]\nEOF\n
At this point, we can check the pods running in our kubernetes cluster (spanning server1, server2 & server3):
$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-7jhcx 1/1 Running 0 3h15m\nkube-system kube-loxilb-5d99c445f7-j4x6k 1/1 Running 0 3h6m\nkube-system local-path-provisioner-6c86858495-pjn9j 1/1 Running 0 3h15m\nkube-system loxilb-lb-8bddf 1/1 Running 0 3h6m\nkube-system loxilb-lb-nsrr9 1/1 Running 0 3h6m\nkube-system loxilb-lb-fp2z6 1/1 Running 0 3h6m\nkube-system metrics-server-54fd9b65b-g5lfn 1/1 Running 0 3h15m\n
"},{"location":"k3s-multi-master/#in-k3s-agent1-node-","title":"In k3s-agent1 node -","text":"The following steps need to be followed to install k3s in the agent nodes:
$ curl -sfL https://get.k3s.io | K3S_TOKEN=${NODE_TOKEN} sh -s - agent --server https://192.168.80.80:6443 --node-ip=${WORKER_ADDR} --node-external-ip=${WORKER_ADDR} -t ${NODE_TOKEN}\n
where WORKER_ADDR is the IP address of the agent node itself (in this case 192.168.80.101) and NODE_TOKEN has the contents of /var/lib/rancher/k3s/server/node-token from server1. It is also to be noted that we use the VIP (192.168.80.80) provided by loxilb to access the server(master) K3s nodes, and not the actual private node addresses.
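For example, on k3s-agent1 the variables used in the command above could be set as follows (a sketch; the token value is a placeholder to be copied from server1): export WORKER_ADDR=192.168.80.101\nexport NODE_TOKEN=<contents of /var/lib/rancher/k3s/server/node-token on server1>\n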
For the rest of the agent nodes, we can follow the same set of steps as outlined above for k3s-agent1.
"},{"location":"k3s-multi-master/#validation","title":"Validation","text":"After setting up all the k3s-server and k3s-agents, we should be able to see all nodes up and running
$ sudo kubectl get nodes -A\nNAME STATUS ROLES AGE VERSION\nmaster1 Ready control-plane,etcd,master 4h v1.29.3+k3s1\nmaster2 Ready control-plane,etcd,master 4h v1.29.3+k3s1\nmaster3 Ready control-plane,etcd,master 4h v1.29.3+k3s1 \nworker1 Ready <none> 4h v1.29.3+k3s1\nworker2 Ready <none> 4h v1.29.3+k3s1\nworker3 Ready <none> 4h v1.29.3+k3s1\n
To verify failover, let's shut down the master1 k3s-server.
## Shut down the master1 node\n$ sudo shutdown -h now\n
And try to access cluster information from other master nodes or worker nodes :
$ sudo kubectl get nodes -A\nNAME STATUS ROLES AGE VERSION\nmaster1 NotReady control-plane,etcd,master 4h10m v1.29.3+k3s1\nmaster2 Ready control-plane,etcd,master 4h10m v1.29.3+k3s1\nmaster3 Ready control-plane,etcd,master 4h10m v1.29.3+k3s1\nworker1 Ready <none> 4h10m v1.29.3+k3s1\nworker2 Ready <none> 4h10m v1.29.3+k3s1\n
Also, we can confirm pods getting rescheduled to other \"ready\" nodes :
$ sudo kubectl get pods -A -o wide\nNAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\nkube-system coredns-6799fbcd5-6dvm7 1/1 Running 0 27m 10.42.2.2 master3 <none> <none>\nkube-system coredns-6799fbcd5-mrjgt 1/1 Terminating 0 3h58m 10.42.0.4 master1 <none> <none>\nkube-system kube-loxilb-5d99c445f7-x7qd6 1/1 Running 0 3h58m 192.168.80.11 master2 <none> <none>\nkube-system local-path-provisioner-6c86858495-6f8rz 1/1 Terminating 0 3h58m 10.42.0.2 master1 <none> <none>\nkube-system local-path-provisioner-6c86858495-z2p6m 1/1 Running 0 27m 10.42.3.2 worker1 <none> <none>\nkube-system loxilb-lb-65jnz 1/1 Running 0 3h58m 192.168.80.10 master1 <none> <none>\nkube-system loxilb-lb-pfkf8 1/1 Running 0 3h58m 192.168.80.12 master3 <none> <none>\nkube-system loxilb-lb-xhr95 1/1 Running 0 3h58m 192.168.80.11 master2 <none> <none>\nkube-system metrics-server-54fd9b65b-l5pqz 1/1 Running 0 27m 10.42.4.2 worker2 <none> <none>\nkube-system metrics-server-54fd9b65b-x9bd7 1/1 Terminating 0 3h58m 10.42.0.3 master1 <none> <none>\n
If the above set of commands works fine in any of the \"ready\" nodes, it indicates that the api-server is available even when one of the k3s server (master) nodes goes down. The same approach can be followed, if need be, for any other services apart from the K8s/K3s apiserver as well.
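As an additional sanity check, the VIP itself can be probed from any agent or client node. A small sketch, assuming curl is available (the apiserver typically answers unauthenticated requests with an HTTP error status, which is still enough to confirm that the VIP is reachable): curl -k https://192.168.80.80:6443/version\n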
"},{"location":"k3s-rmq/","title":"K3s rmq","text":""},{"location":"k3s-rmq/#quick-start-guide-with-k3s-and-loxilb-in-cluster-service-proxy-mode","title":"Quick Start Guide with K3s and LoxiLB in-cluster \"service-proxy\" mode","text":"This document will explain how to install a K3s cluster with loxilb as a serviceLB provider running in-cluster \"service-proxy\" mode. \u00a0 \u00a0
"},{"location":"k3s-rmq/#what-is-service-proxy-mode","title":"What is service-proxy mode?","text":"service-proxy mode is where kubernetes cluster networking is entirely streamlined by loxilb for better performance.
Looking at the left side of the image, you will notice the traffic flow of a packet as it enters the Kubernetes cluster. Kube-proxy, the de-facto networking agent in Kubernetes, runs on each node of the cluster, monitors the services and translates them into either iptables or IPVS rules. For basic functionality or a cluster with low-volume traffic, kube-proxy is fine, but when it comes to scalability or high-volume traffic, it becomes a bottleneck. loxilb \"service-proxy\" mode works with Flannel and kube-proxy in IPVS mode. It picks up all the IPVS rules and injects them into its in-kernel eBPF data-path. Traffic arriving at the interface is processed by eBPF and sent directly to the pod or to the other node, bypassing the layers of Linux networking. This way, all the services, be it External, NodePort or ClusterIP, can be managed through LoxiLB.
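Since this mode relies on kube-proxy running in IPVS mode, once the cluster is set up as described below it can be useful to confirm that IPVS rules are actually programmed on a node. A quick sketch, assuming the ipvsadm utility is installed on the node: sudo apt-get -y install ipset ipvsadm\nsudo ipvsadm -Ln\n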
"},{"location":"k3s-rmq/#topology","title":"Topology","text":"For quickly bringing up loxilb in-cluster and K3s, we will be deploying a 4 nodes k3s cluster : \u00a0
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"k3s-rmq/#setup-k3s","title":"Setup K3s","text":""},{"location":"k3s-rmq/#configure-master-node","title":"Configure Master node","text":"$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik \\\n--disable servicelb \\\n--disable-cloud-controller \\\n--kube-proxy-arg proxy-mode=ipvs \\\n--flannel-iface=eth1 \\\n--disable-network-policy \\\n--node-ip=${MASTER_IP} \\\n--node-external-ip=${MASTER_IP} \\\n--bind-address=${MASTER_IP}\" sh -\n
"},{"location":"k3s-rmq/#configure-worker-nodes","title":"Configure Worker nodes","text":"$ curl -sfL https://get.k3s.io | K3S_URL=\"https://${MASTER_IP}:6443\"\\\n\u00a0K3S_TOKEN=\"${NODE_TOKEN}\" \\\n\u00a0INSTALL_K3S_EXEC=\"--node-ip=${WORKER_ADDR} \\\n--node-external-ip=${WORKER_IP} \\\n--kube-proxy-arg proxy-mode=ipvs \\\n--flannel-iface=eth1\" sh -\n
"},{"location":"k3s-rmq/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/mesh/kube-loxilb.yml\n
kube-loxilb.yaml
        args:\n          #- --loxiURL=http://172.17.0.2:11111\n          - --externalCIDR=192.168.82.100/32\n          - --setRoles=0.0.0.0\n          #- --monitor\n          #- --setBGP\n
In the above snippet, loxiURL is commented out, which means in-cluster mode will be used to discover loxilb pods automatically. externalCIDR represents the IP pool from which the serviceLB VIP will be allocated. Apply after making changes (if any) :
sudo kubectl apply -f kube-loxilb.yaml\n
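Once the loxilb pods are also deployed (next section), kube-loxilb's logs can be checked to confirm that it discovered them. A minimal sketch, assuming the deployment name kube-loxilb in kube-system as in the manifest above: sudo kubectl logs -n kube-system deploy/kube-loxilb\n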
"},{"location":"k3s-rmq/#how-to-deploy-loxilb","title":"How to deploy loxilb ?","text":"loxilb can be deloyed by using the following command in the K3s node
sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/mesh/loxilb-mesh.yml\n
"},{"location":"k3s-rmq/#seup-rabbitmq-operator","title":"Seup RabbitMQ Operator","text":"sudo kubectl apply -f \"https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml\"\n
"},{"location":"k3s-rmq/#setup-rabbitmq-application-with-loxilb","title":"Setup RabbitMQ application with loxilb","text":"wget https://raw.githubusercontent.com/rabbitmq/cluster-operator/main/docs/examples/hello-world/rabbitmq.yaml\n
Change the following: apiVersion: rabbitmq.com/v1beta1\nkind: RabbitmqCluster\nmetadata:\n  name: hello-world\nspec:\n  replicas: 3\n  service:\n    type: LoadBalancer\n  override:\n    service:\n      spec:\n        loadBalancerClass: loxilb.io/loxilb\n        externalTrafficPolicy: Local\n        ports:\n        - port: 5672\n
"},{"location":"k3s-rmq/#create-the-service","title":"Create the service","text":"sudo kubectl apply -f rabbitmq.yaml\n
"},{"location":"k3s-rmq/#check-the-status","title":"Check the status","text":"In k3s node:
## Check the pods created\n$ sudo kubectl get pods -A\nNAMESPACE \u00a0 \u00a0 \u00a0 \u00a0 NAME \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0READY \u00a0 STATUS \u00a0 \u00a0RESTARTS \u00a0 AGE\nkube-system \u00a0 \u00a0 \u00a0 local-path-provisioner-6c86858495-65tbc \u00a0 \u00a0 1/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0137m\nkube-system \u00a0 \u00a0 \u00a0 coredns-6799fbcd5-5h2dw \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0137m\nkube-system \u00a0 \u00a0 \u00a0 metrics-server-67c658944b-mtv9q \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0137m\nrabbitmq-system \u00a0 rabbitmq-cluster-operator-ccf488f4c-sphfm \u00a0 1/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a08m12s\nkube-system \u00a0 \u00a0 \u00a0 kube-loxilb-5fb5566999-4dj2v \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a01/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a04m18s\nkube-system \u00a0 \u00a0 \u00a0 loxilb-lb-txtfm \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a03m57s\nkube-system \u00a0 \u00a0 \u00a0 loxilb-lb-fnv97 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a03m57s\nkube-system \u00a0 \u00a0 \u00a0 loxilb-lb-r7mks \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a03m57s\nkube-system \u00a0 \u00a0 \u00a0 loxilb-lb-xxn29 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a03m57s\ndefault \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 hello-world-server-0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a01/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a072s\ndefault \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 hello-world-server-1 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a01/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a072s\ndefault \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 hello-world-server-2 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a01/1 \u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a072s\n\n\n## Check the services created\n$ sudo kubectl get svc\nsudo kubectl get svc\nNAME \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0TYPE \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 CLUSTER-IP \u00a0 \u00a0 \u00a0EXTERNAL-IP \u00a0 \u00a0 \u00a0 \u00a0 PORT(S) \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 AGE\nkubernetes \u00a0 \u00a0 \u00a0 \u00a0 \u00a0ClusterIP \u00a0 \u00a0 \u00a010.43.0.1 \u00a0 \u00a0 \u00a0 <none> \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0443/TCP \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 136m\nhello-world-nodes \u00a0 ClusterIP \u00a0 \u00a0 \u00a0None \u00a0 \u00a0 \u00a0 \u00a0 
\u00a0 \u00a0<none> \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a04369/TCP,25672/TCP \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a07s\nhello-world \u00a0 \u00a0 \u00a0 \u00a0 LoadBalancer \u00a0 10.43.190.199 \u00a0 llb-192.168.82.100 \u00a0 15692:31224/TCP,5672:30817/TCP,15672:30698/TCP \u00a0 7s\n
In loxilb pod, we can check internal LB rules: $ sudo kubectl exec -it -n kube-system loxilb-lb-8l85d -- loxicmd get lb -o wide\nsudo kubectl exec -it loxilb-lb-txtfm -n kube-system -- loxicmd get lb -o wide\n| \u00a0 \u00a0EXT IP \u00a0 \u00a0 | SEC IPS | PORT \u00a0| PROTO | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 NAME \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | MARK | SEL | \u00a0MODE \u00a0 | \u00a0 \u00a0ENDPOINT \u00a0 \u00a0| EPORT | WEIGHT | STATE \u00a0| COUNTERS |\n|---------------|---------|-------|-------|------------------------------|------|-----|---------|----------------|-------|--------|--------|----------|\n| 10.0.2.15 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30698 | tcp \u00a0 | ipvs_10.0.2.15:30698-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.0.2.15 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_10.0.2.15:30817-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.0.2.15 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_10.0.2.15:31224-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 
\u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30698 | tcp \u00a0 | ipvs_10.42.0.0:30698-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_10.42.0.0:30817-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_10.42.0.0:31224-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.1 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 
30698 | tcp \u00a0 | ipvs_10.42.0.1:30698-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.1 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_10.42.0.1:30817-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.42.0.1 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_10.42.0.1:31224-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.0.10 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a053 | tcp \u00a0 | ipvs_10.43.0.10:53-tcp \u00a0 \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| default | 10.42.0.3 \u00a0 \u00a0 \u00a0| \u00a0 \u00a053 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.0.10 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a053 | udp \u00a0 | 
ipvs_10.43.0.10:53-udp \u00a0 \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| default | 10.42.0.3 \u00a0 \u00a0 \u00a0| \u00a0 \u00a053 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.0.10 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0 \u00a0 | \u00a09153 | tcp \u00a0 | ipvs_10.43.0.10:9153-tcp \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| default | 10.42.0.3 \u00a0 \u00a0 \u00a0| \u00a09153 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.0.1 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 443 | tcp \u00a0 | ipvs_10.43.0.1:443-tcp \u00a0 \u00a0 \u00a0 | \u00a0 \u00a00 | rr \u00a0| default | 192.168.80.10 \u00a0| \u00a06443 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.190.199 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a05672 | tcp \u00a0 | ipvs_10.43.190.199:5672-tcp \u00a0| \u00a0 \u00a00 | rr \u00a0| default | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.190.199 | \u00a0 \u00a0 \u00a0 \u00a0 | 15672 | tcp \u00a0 | ipvs_10.43.190.199:15672-tcp | \u00a0 \u00a00 | rr \u00a0| default | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.190.199 | \u00a0 \u00a0 \u00a0 \u00a0 | 15692 | tcp \u00a0 | ipvs_10.43.190.199:15692-tcp | \u00a0 \u00a00 | rr \u00a0| default | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 
\u00a0| 15692 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 10.43.5.58 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 443 | tcp \u00a0 | ipvs_10.43.5.58:443-tcp \u00a0 \u00a0 \u00a0| \u00a0 \u00a00 | rr \u00a0| default | 10.42.0.4 \u00a0 \u00a0 \u00a0| 10250 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.10 | \u00a0 \u00a0 \u00a0 \u00a0 | 30698 | tcp \u00a0 | ipvs_192.168.80.10:30698-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.10 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_192.168.80.10:30817-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.10 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_192.168.80.10:31224-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 
\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a05672 | tcp \u00a0 | default_hello-world \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a00 | rr \u00a0| onearm \u00a0| 192.168.80.101 | 30817 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.102 | 30817 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.103 | 30817 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 15672 | tcp \u00a0 | default_hello-world \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a00 | rr \u00a0| onearm \u00a0| 192.168.80.101 | 30698 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.102 | 30698 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.103 | 30698 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 15692 | tcp \u00a0 | default_hello-world \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a00 | rr \u00a0| onearm \u00a0| 192.168.80.101 | 31224 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.102 | 31224 | \u00a0 \u00a0 \u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 192.168.80.103 | 31224 | \u00a0 \u00a0 
\u00a01 | - \u00a0 \u00a0 \u00a0| 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 30698 | tcp \u00a0 | ipvs_192.168.80.20:30698-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 30817 | tcp \u00a0 | ipvs_192.168.80.20:30817-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| \u00a05672 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| 192.168.80.20 | \u00a0 \u00a0 \u00a0 \u00a0 | 31224 | tcp \u00a0 | ipvs_192.168.80.20:31224-tcp | \u00a0 \u00a00 | rr \u00a0| fullnat | 10.42.1.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.2.6 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n| \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 \u00a0| \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 | 10.42.3.7 \u00a0 \u00a0 \u00a0| 15692 | \u00a0 \u00a0 \u00a01 | active | 0:0 \u00a0 \u00a0 \u00a0|\n
"},{"location":"k3s-rmq/#get-rabbitmq-credentials","title":"Get RabbitMQ Credentials","text":"username=\"$(sudo kubectl get secret hello-world-default-user -o jsonpath='{.data.username}' | base64 --decode)\"\npassword=\"$(sudo kubectl get secret hello-world-default-user -o jsonpath='{.data.password}' | base64 --decode)\"\n
"},{"location":"k3s-rmq/#test-rabbitmq-from-hostclient","title":"Test RabbitMQ from host/client","text":"sudo docker run -it --rm --net=host pivotalrabbitmq/perf-test:latest --uri amqp://$username:$password@192.168.80.20:5672 -x 10 \u00a0-y 10 -u \"throughput-test-4\" -a --id \"test 4\" \u00a0-z100\n
For a more detailed performance comparison with other solutions, kindly follow this blog. For more detailed information on in-cluster deployment of loxilb with BGP in a full-blown cluster, kindly follow this blog."},{"location":"k3s_quick_start_calico/","title":"K3s/loxilb with calico","text":""},{"location":"k3s_quick_start_calico/#loxilb-quick-start-guide-with-calico","title":"LoxiLB Quick Start Guide with Calico","text":"This guide will explain how to:
- Deploy a single-node K3s cluster with calico networking
- Expose services with loxilb as an external load balancer
"},{"location":"k3s_quick_start_calico/#pre-requisite","title":"Pre-requisite","text":" - Single node with Linux
- Install docker runtime to manage loxilb
"},{"location":"k3s_quick_start_calico/#topology","title":"Topology","text":"For quickly bringing up loxilb with K3s and calico, we will be deploying all components in a single node :
loxilb runs as a docker container and will use macvlan for the incoming traffic. This mimics a topology close to cloud-hosted k8s, where LB nodes run outside the cluster. loxilb can be used in the more complex in-cluster mode as well, but it is not used here for simplicity.
"},{"location":"k3s_quick_start_calico/#install-loxilb-docker","title":"Install loxilb docker","text":"## Set underlying interface of the VM/cluster-node to promisc mode for mac-vlan to work\nsudo ifconfig eth1 promisc\n\n## Run loxilb\nsudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged --entrypoint /root/loxilb-io/loxilb/loxilb -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# Create mac-vlan on top of underlying eth1 interface\nsudo docker network create -d macvlan -o parent=eth1 --subnet 192.168.82.0/24 --gateway 192.168.82.1 --aux-address 'host=192.168.82.252' llbnet\n\n# Assign mac-vlan to loxilb docker with specified IP (which will be used as LB VIP)\n# Please note that this node should already have an IP assigned belonging to the same subnet on underlying interface\nsudo docker network connect llbnet loxilb --ip=192.168.82.100\n\n# Add iptables rule to allow traffic from source/host IP(192.168.82.1) to loxilb\nsudo iptables -A DOCKER -s 192.168.82.1 -j ACCEPT\n
All the above steps related to docker setup can be further automated using docker-compose.
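Before moving on, it can be worth confirming that the loxilb container is attached to both the default bridge network and the llbnet macvlan network with the expected addresses; the bridge IP (typically 172.17.0.x) is also the one used for loxiURL later. A quick sketch using docker inspect: sudo docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}}: {{$v.IPAddress}}  {{end}}' loxilb\n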
"},{"location":"k3s_quick_start_calico/#setup-k3s-with-calico","title":"Setup K3s with Calico","text":"# Install IPVS\nsudo apt-get -y install ipset ipvsadm\n\n# Install K3s with Calico and kube-proxy in IPVS mode\ncurl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik,metrics-server,servicelb --disable-cloud-controller --kubelet-arg cloud-provider=external --flannel-backend=none --disable-network-policy\" K3S_KUBECONFIG_MODE=\"644\" sh -s - server --kube-proxy-arg proxy-mode=ipvs\n\n# Install Calico\nkubectl $KUBECONFIG create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/tigera-operator.yaml\n\nkubectl $KUBECONFIG create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/custom-resources.yaml\n\n# Remove taints in k3s if any (usually happens if started without cloud-manager)\nsudo kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized=false:NoSchedule-\n
"},{"location":"k3s_quick_start_calico/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used to deploy loxilb with Kubernetes.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
kube-loxilb.yaml
args:\n - --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setMode=1\n
In the above snippet, loxiURL uses the docker interface IP of loxilb, which can be different for each setup. Apply in k8s:
kubectl apply -f kube-loxilb.yaml\n
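After applying, you can optionally wait for the kube-loxilb deployment to become ready before creating services. A small sketch: kubectl -n kube-system rollout status deployment/kube-loxilb\n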
"},{"location":"k3s_quick_start_calico/#create-the-service","title":"Create the service","text":"kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/docker-k3s-cilium/tcp-svc-lb.yml\n
"},{"location":"k3s_quick_start_calico/#check-the-status","title":"Check the status","text":"In k3s:
kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2m48s\ntcp-lb-onearm LoadBalancer 10.43.183.123 llb-192.168.82.100 56002:30001/TCP 30s\n
In loxilb docker: $ sudo docker exec -it loxilb loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 0:0 |\n
"},{"location":"k3s_quick_start_calico/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
All of the above steps are also available as part of the loxilb CICD workflow. Follow the steps below to replicate the above:
$ cd cicd/docker-k3s-calico/\n\n# To setup the single node k3s setup with calico as CNI and loxilb as external load balancer\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"k3s_quick_start_flannel/","title":"K3s/loxilb with default flannel","text":""},{"location":"k3s_quick_start_flannel/#loxilb-quick-start-guide-with-k3sflannel","title":"LoxiLB Quick Start Guide with K3s/Flannel","text":"This guide will explain how to:
- Deploy a single-node K3s cluster with flannel networking
- Expose services with loxilb as an external load balancer
"},{"location":"k3s_quick_start_flannel/#pre-requisite","title":"Pre-requisite","text":" - Single node with Linux
- Install docker runtime to manage loxilb
"},{"location":"k3s_quick_start_flannel/#topology","title":"Topology","text":"For quickly bringing up loxilb with K3s/Flannel, we will be deploying all components in a single node :
loxilb runs as a docker container and will use macvlan for the incoming traffic. This mimics a topology close to cloud-hosted k8s, where LB nodes run outside the cluster. loxilb can be used in the more complex in-cluster mode as well, but it is not used here for simplicity.
"},{"location":"k3s_quick_start_flannel/#install-loxilb-docker","title":"Install loxilb docker","text":"## Set underlying interface of the VM/cluster-node to promisc mode for mac-vlan to work\nsudo ifconfig eth1 promisc\n\n## Run loxilb\nsudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged --entrypoint /root/loxilb-io/loxilb/loxilb -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# Create mac-vlan on top of underlying eth1 interface\nsudo docker network create -d macvlan -o parent=eth1 --subnet 192.168.82.0/24 --gateway 192.168.82.1 --aux-address 'host=192.168.82.252' llbnet\n\n# Assign mac-vlan to loxilb docker with specified IP (which will be used as LB VIP)\n# Please note that this node should already have an IP assigned belonging to the same subnet on underlying interface\nsudo docker network connect llbnet loxilb --ip=192.168.82.100\n\n# Add iptables rule to allow traffic from source/host IP(192.168.82.1) to loxilb\nsudo iptables -A DOCKER -s 192.168.82.1 -j ACCEPT\n
All the above steps related to docker setup can be further automated using docker-compose.
"},{"location":"k3s_quick_start_flannel/#setup-k3sflannel","title":"Setup K3s/Flannel","text":"#K3s installation\ncurl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"server --disable traefik --disable servicelb --disable-cloud-controller --kube-proxy-arg metrics-bind-address=0.0.0.0 --kubelet-arg cloud-provider=external\" K3S_KUBECONFIG_MODE=\"644\" sh -\n\n# Remove taints in k3s if any (usually happens if started without cloud-manager)\nsudo kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized=false:NoSchedule-\n
"},{"location":"k3s_quick_start_flannel/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used to deploy loxilb with Kubernetes.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
kube-loxilb.yaml
args:\n - --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setMode=1\n
In the above snippet, loxiURL uses the docker interface IP of loxilb, which can be different for each setup. Apply in k8s:
kubectl apply -f kube-loxilb.yaml\n
"},{"location":"k3s_quick_start_flannel/#create-the-service","title":"Create the service","text":"kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/docker-k3s-cilium/tcp-svc-lb.yml\n
"},{"location":"k3s_quick_start_flannel/#check-the-status","title":"Check the status","text":"In k3s:
kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 80m\ntcp-lb-onearm LoadBalancer 10.43.183.123 llb-192.168.82.100 56002:30001/TCP 6m50s\n
In loxilb docker: $ sudo docker exec -it loxilb loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 12:880 |\n
"},{"location":"k3s_quick_start_flannel/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"k3s_quick_start_incluster/","title":"K3s/loxilb in-cluster mode","text":""},{"location":"k3s_quick_start_incluster/#quick-start-guide-with-k3s-and-loxilb-in-cluster-mode","title":"Quick Start Guide with K3s and LoxiLB in-cluster mode","text":"This document will explain how to install a K3s cluster with loxilb as a serviceLB provider running in-cluster mode.
"},{"location":"k3s_quick_start_incluster/#topology","title":"Topology","text":"For quickly bringing up loxilb in-cluster and K3s, we will be deploying all components in a single node :
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"k3s_quick_start_incluster/#setup-k3s","title":"Setup K3s","text":"# K3s installation\n$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"server --disable traefik --disable servicelb --disable-cloud-controller --kube-proxy-arg metrics-bind-address=0.0.0.0 --kubelet-arg cloud-provider=external\" K3S_KUBECONFIG_MODE=\"644\" sh -\n\n# Remove taints in k3s if any (usually happens if started without cloud-manager)\n$ sudo kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized=false:NoSchedule-\n
"},{"location":"k3s_quick_start_incluster/#how-to-deploy-loxilb","title":"How to deploy loxilb ?","text":"loxilb can be deloyed by using the following command in the K3s node
sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k3s-incluster/loxilb.yml\n
"},{"location":"k3s_quick_start_incluster/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb.
wget https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k3s-incluster/kube-loxilb.yml\n
kube-loxilb.yml
args:\n #- --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setRoles=0.0.0.0\n #- --monitor\n #- --setBGP\n
In the above snippet, loxiURL is commented out, which tells kube-loxilb to use in-cluster mode and discover the loxilb pods automatically. externalCIDR represents the IP pool from which serviceLB VIPs will be allocated. Apply after making changes (if any):
sudo kubectl apply -f kube-loxilb.yml\n
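To confirm that kube-loxilb came up and discovered the in-cluster loxilb pods, the operator's rollout and logs can be checked (a sketch, assuming the Deployment keeps its default name kube-loxilb from the manifest above):
sudo kubectl -n kube-system rollout status deployment/kube-loxilb\nsudo kubectl -n kube-system logs deployment/kube-loxilb | tail\n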
"},{"location":"k3s_quick_start_incluster/#create-the-service","title":"Create the service","text":"sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/k3s-incluster/tcp-svc-lb.yml\n
"},{"location":"k3s_quick_start_incluster/#check-the-status","title":"Check the status","text":"In k3s node:
## Check the pods created\n$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-6c86858495-snvcm 1/1 Running 0 4m37s\nkube-system coredns-6799fbcd5-cpj6x 1/1 Running 0 4m37s\nkube-system metrics-server-67c658944b-42ptz 1/1 Running 0 4m37s\nkube-system loxilb-lb-8l85d 1/1 Running 0 3m40s\nkube-system kube-loxilb-6f44cdcdf5-5fdtl 1/1 Running 0 2m19s\ndefault tcp-onearm-test 1/1 Running 0 88s\n\n\n## Check the services created\n$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5m12s\ntcp-lb-onearm LoadBalancer 10.43.47.60 llb-192.168.82.100 56002:30001/TCP 108s\n
In loxilb pod, we can check internal LB rules: $ sudo kubectl exec -it -n kube-system loxilb-lb-8l85d -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 39:2874 |\n
"},{"location":"k3s_quick_start_incluster/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
For more detailed information on in-cluster deployment of loxilb with BGP in a full-blown cluster, kindly follow this blog. "},{"location":"k8s_bgp_policy_crd/","title":"License","text":"This document is based on the original work by GOBGP. Changes have been made to the original document.
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0\n
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
"},{"location":"k8s_bgp_policy_crd/#policy-configuration","title":"Policy Configuration","text":"This page explains LoxiLB with GoBGP policy feature for controlling the route advertisement. It might be called Route Map in other BGP implementations.
And This document was written with reference to this goBGP official document.
We explain the overview firstly, then the details.
"},{"location":"k8s_bgp_policy_crd/#prerequisites","title":"Prerequisites","text":"Assumed that you run loxilb with -b
option. Or If you control loxilb through kube-loxilb, be sure to set the --set-bgp
option in the kube-loxilb.yaml file.
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest -b\n
Or, when managing loxilb through kube-loxilb, add the --enableBGPCRDs
option to the args in kube-loxilb.yaml: args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n - --setBGP=65100\n - --enableBGPCRDs\n
Then, as a first step, apply the CRD yamls:
kubectl apply -f manifest/crds/bgp-policy-apply-service.yaml\nkubectl apply -f manifest/crds/bgp-policy-defined-sets-service.yaml\nkubectl apply -f manifest/crds/bgp-policy-definition-service.yaml\n
(Note) Currently, gobgp does not support the policy command in the global state. Therefore, only per-neighbor policies are applied; applying a global policy is planned as additional development. To apply a policy to a neighbor, you must form the peer with the route-server-client
option when using gobgp in loxilb. A separate API for this is not provided yet and will be added in the future. For examples in gobgp, please refer to the following documents.
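For reference only, a route-server-client peer in plain gobgp is configured as in the following TOML snippet (a hedged sketch of gobgp's own configuration format, not a loxilb API; the neighbor address and AS number are placeholders):
[[neighbors]]\n [neighbors.config]\n neighbor-address = \"10.0.255.1\"\n peer-as = 65001\n [neighbors.route-server.config]\n route-server-client = true\n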
"},{"location":"k8s_bgp_policy_crd/#contents","title":"Contents","text":" - Policy Configuration
- Prerequisites
- Contents
- Overview
- Route Server Policy Model
- Policy Structure
- Configure Policies
- 1. Defining defined-sets
- prefix-sets
- Examples
- neighbor-sets
- Examples
- 2. Defining bgp-defined-sets
- community-sets
- Examples
- ext-community-sets
- Examples
- as-path-sets
- Examples
- 3. Defining policy-definitions
- Execution condition of Action
- Examples
- 4. Attaching policy
- 4.1. Attach policy to route-server-client
- Policy and Soft Reset
"},{"location":"k8s_bgp_policy_crd/#overview","title":"Overview","text":"Policy is a way to control how BGP routes inserted to RIB or advertised to peers. Policy has two parts, Condition and Action. When a policy is configured, Action is applied to routes which meet Condition before routes proceed to next step.
GoBGP supports Conditions such as prefix, neighbor (source/destination of the route), aspath, etc., and Actions such as accept, reject, MED/aspath/community manipulation, etc.
You can configure policies via a configuration file, CLI or gRPC API. On this page, policies are configured via the Kubernetes CRDs shown below.
"},{"location":"k8s_bgp_policy_crd/#route-server-policy-model","title":"Route Server Policy Model","text":"The following figure shows how policy works in route server BGP configuration.
In route server mode, Import and Export policies are defined with respect to a peer. The Import policy defines what routes will be imported into the master RIB. The Export policy defines what routes will be exported from the master RIB.
You can check each policy with the following commands in loxilb.
$ gobgp neighbor <neighbor-addr> policy import\n$ gobgp neighbor <neighbor-addr> policy export\n
"},{"location":"k8s_bgp_policy_crd/#policy-structure","title":"Policy Structure","text":"A policy consists of statements. Each statement has condition(s) and action(s).
Conditions are categorized into attributes below:
- prefix
- neighbor
- aspath
- aspath length
- community
- extended community
- rpki validation result
- route type (internal/external/local)
- large community
- afi-safi in
As shown in the figure above, some of the conditions point to defined sets, which are containers for each condition item (e.g. prefixes).
Actions are categorized into attributes below:
- accept or reject
- add/replace/remove community or remove all communities
- add/subtract or replace MED value
- set next-hop (specific address/own local address/don't modify)
- set local-pref
- prepend AS number in the AS_PATH attribute
When ALL conditions in a statement are true, the action(s) in the statement are executed.
You can check policy configuration by the following commands.
$ kubectl get bgppolicydefinedsetsservice\n\n$ kubectl get bgppolicydefinitionservice\n\n$ kubectl get bgppolicyapplyservice\n
"},{"location":"k8s_bgp_policy_crd/#configure-policies","title":"Configure Policies","text":"Policy Configuration comes from two parts, definition and attachment. For definition, we have defined-sets and policy-definition. defined-sets defines condition item for some of the condition type. policy-definitions defines policies based on actions and conditions.
-
defined-sets: A single defined-sets entry has a prefix match part named prefix-sets and a neighbor match part named neighbor-sets. It also has bgp-defined-sets, a subset of defined-sets that defines conditions referring to BGP attributes such as aspath. Each defined-set has a name, which is used to refer to it from elsewhere.
-
policy-definitions: policy-definitions is a list of policies. A single element has a statements part that combines conditions with actions.
Below are the steps for policy configuration (a compact end-to-end sketch follows this list):
- define defined-sets
- define prefix-sets
- define neighbor-sets
- define bgp-defined-sets
- define community-sets
- define ext-community-sets
- define as-path-setList
- define large-community-sets
- define policy-definitions
- attach neighbor
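Putting these steps together, a compact end-to-end sketch could look like the following (it reuses the CRD kinds and fields shown in the sections below; the set and policy names such as ps1 and policy1 are illustrative, and each part is covered in detail in steps 1 to 4):
apiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n---\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy1\nspec:\n name: policy1\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n---\napiVersion: bgppolicyapply.loxilb.io/v1\nkind: BGPPolicyApplyService\nmetadata:\n name: policy-apply\nspec:\n ipAddress: \"10.0.255.2\"\n policyType: \"import\"\n polices:\n - \"policy1\"\n routeAction: \"accept\"\n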
"},{"location":"k8s_bgp_policy_crd/#1-defining-defined-sets","title":"1. Defining defined-sets","text":"defined-sets has prefix information and neighbor information in prefix-sets and neighbor-sets section, and GoBGP uses these information to evaluate routes. Defining defined-sets is needed at first. prefix-sets and neighbor-sets section are prefix match part and neighbor match part.
- defined-sets example
# prefix match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n---\n# neighbor match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns1\"\n definedType: \"neighbor\"\n List:\n - \"10.0.255.1/32\"\n
"},{"location":"k8s_bgp_policy_crd/#prefix-sets","title":"prefix-sets","text":"prefix-sets has prefix-set-list, and prefix-set-list has prefix-set-name and prefix-list as its element. prefix-set-list is used as a condition. Note that prefix-sets has either v4 or v6 addresses.
prefix has 1 element and a list of sub-elements:
- name: name of the prefix-set (e.g. \"ps1\")
- prefixList: list of prefixes and mask length ranges

PrefixList has 2 elements:
- ipPrefix: prefix value (e.g. \"10.33.0.0/16\")
- masklengthRange: range of prefix length (e.g. \"21..24\"; optional)
"},{"location":"k8s_bgp_policy_crd/#examples","title":"Examples","text":" - example 1
- Match routes whose high-order 2 octets of the NLRI are 10.33 and whose prefix length is between 21 and 24.
- If you define a prefix-list that doesn't have MasklengthRange, it matches routes that have just 10.33.0.0/16 as NLRI.
# example 1\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n
- example 2
- If you want to evaluate multiple routes with a single prefix-set-list, you can do this by adding another prefix-list, like this:
- This prefix-set-list match checks whether a route falls within 10.33.0.0/21 to /24 or 10.50.0.0/21 to /24.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n - ipPrefix: \"10.50.0.0/16\"\n masklengthRange: \"21..24\"\n
- example 3
- The prefix-set-name under prefix-set-list is a reference to a single prefix-set.
- If you want to add more prefix-sets, you can add further blocks with the same structure as example 1.
apiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps1\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.33.0.0/16\"\n masklengthRange: \"21..24\"\n---\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-prefix\nspec:\n name: \"ps2\"\n definedType: \"prefix\"\n prefixList:\n - ipPrefix: \"10.50.0.0/16\"\n masklengthRange: \"21..24\"\n
"},{"location":"k8s_bgp_policy_crd/#neighbor-sets","title":"neighbor-sets","text":"neighbor-sets has neighbor-set-list, and neighbor-set-list has neighbor-set-name and neighbor-info-list as its element. It is necessary to specify a neighbor address in neighbor-info-list. neighbor-set-list is used as a condition. Attention: an empty neighbor-set will match against ANYTHING and not invert based on the match option
neighbor has 1 element and a list of sub-elements:
- name: name of the neighbor-set (e.g. \"ns1\")
- List: list of neighbor addresses

neighbor-info-list has 1 element:
- (entry): neighbor address (e.g. \"10.0.255.1\")
"},{"location":"k8s_bgp_policy_crd/#examples_1","title":"Examples","text":" - example 1
apiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns1\"\n definedType: \"neighbor\"\n List:\n - \"10.0.255.1/32\"\n---\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns2\"\n definedType: \"neighbor\"\n List:\n - \"10.0.0.0/24\"\n
- example 2
- As with prefix-set-list, neighbor-set-list can have multiple neighbor-info-list like this.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns1\"\n definedType: \"neighbor\"\n List:\n - \"10.0.255.1/32\"\n - \"10.0.255.2/32\"\n
- example 3
- As with prefix-set-list, multiple neighbor-set-lists can be defined.
apiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns1\"\n definedType: \"neighbor\"\n List:\n - \"10.0.255.1/32\"\n---\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-neighbor\nspec:\n name: \"ns2\"\n definedType: \"neighbor\"\n List:\n - \"10.0.254.1/32\"\n
"},{"location":"k8s_bgp_policy_crd/#2-defining-bgp-defined-sets","title":"2. Defining bgp-defined-sets","text":"bgp-defined-sets has Community information, Extended Community information and AS_PATH information in each Sets section respectively. And it is a child element of defined-sets. community-sets, ext-community-sets and as-path-sets section are each match part. Like prefix-sets and neighbor-sets, each can have multiple sets and each set can have multiple values.
- bgp-defined-sets example
# Community match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-community\nspec:\n name: \"community1\"\n definedType: \"community\"\n List:\n - \"65100:10\"\n\n# Extended Community match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-extcommunity\nspec:\n name: \"ecommunity1\"\n definedType: \"extcommunity\"\n List:\n - \"RT:65100:100\"\n\n# AS_PATH match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-aspath\nspec:\n name: \"aspath1\"\n definedType: \"asPath\"\n List:\n - \"^65100\"\n\n# Large Community match part\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-largecommunity\nspec:\n name: \"lcommunity1\"\n definedType: \"largecommunity\"\n List:\n - \"65100:100:100\"\n
"},{"location":"k8s_bgp_policy_crd/#community-sets","title":"community-sets","text":"community-sets has community-set-name and community-list as its element. The Community value are used to evaluate communities held by the destination.
Element Description Example Optional name name of CommunitySet \"community1\" List list of community value community-list has 1 element.
Element Description Example Optional - community value \"65100:10\" You can use regular expressions to specify community in community-list.
"},{"location":"k8s_bgp_policy_crd/#examples_2","title":"Examples","text":" - example 1
- Match routes which have \"65100:10\" as a community value.
# example 1\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-community\nspec:\n name: \"community1\"\n definedType: \"community\"\n List:\n - \"65100:10\"\n
- example 2
- Specifying community by regular expression
- You can use regular expressions based on POSIX 1003.2 regular expressions.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-community\nspec:\n name: \"community2\"\n definedType: \"community\"\n List:\n - \"6[0-9]+:[0-9]+\"\n
"},{"location":"k8s_bgp_policy_crd/#ext-community-sets","title":"ext-community-sets","text":"ext-community-sets has ext-community-set-name and ext-community-list as its element. The values are used to evaluate extended communities held by the destination.
Element Description Example Optional name name of ExtCommunitySet \"ecommunity1\" List list of extended community value List has 1 element.
Element Description Example Optional - extended community value \"RT:65001:200\" You can use regular expressions to specify extended community in ext-community-list. However, the first one element separated by (part of \"RT\") does not support to the regular expression. The part of \"RT\" indicates a subtype of extended community and subtypes that can be used are as follows:
- RT: means the route target.
- SoO: means the site of origin (route origin).
- encap: means the encapsulation tunnel type; currently gobgp supports the following encap tunnels: l2tp3 gre ip-in-ip vxlan nvgre mpls mpls-in-gre vxlan-gre mpls-in-udp sr-policy geneve
- LB: means the link-bandwidth (in bytes).
"},{"location":"k8s_bgp_policy_crd/#examples_3","title":"Examples","text":" - example 1
- Match routes which have \"RT:65100:100\" as an extended community value.
# example 1\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-extcommunity\nspec:\n name: \"ecommunity1\"\n definedType: \"extcommunity\"\n List:\n - \"RT:65100:100\"\n
- example 2
- Specifying extended community by regular expression
- You can use the regular expressions available in Golang.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-extcommunity\nspec:\n name: \"ecommunity2\"\n definedType: \"extcommunity\"\n List:\n - \"RT:6[0-9]+:[0-9]+\"\n
"},{"location":"k8s_bgp_policy_crd/#as-path-sets","title":"as-path-sets","text":"as-path-sets has as-path-set-name and as-path-list as its element. The numbers are used to evaluate AS numbers in the destination's AS_PATH attribute.
Element Description Example Optional name name of as-path-set \"aspath1\" List list of as path value List has 1 elements.
Element Description Example Optional - as path value \"^65100\" The AS path regular expression is compatible with Quagga and Cisco. Note Character _
has special meaning. It is abbreviation for (^|[,{}() ]|$)
.
Some examples follow:
- From: ^65100_ means the route is passed from AS 65100 directly.
- Any: _65100_ means the route comes through AS 65100.
- Origin: _65100$ means the route is originated by AS 65100.
- Only: ^65100$ means the route is originated by AS 65100 and comes from it directly.

Further example patterns: ^65100_65001, 65100_[0-9]+_.*$, ^6[0-9]_5.*_65.?00$
"},{"location":"k8s_bgp_policy_crd/#examples_4","title":"Examples","text":" - example 1
- Match routes which come from AS 65100.
# example 1\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-aspath\nspec:\n name: \"aspath1\"\n definedType: \"asPath\"\n List:\n - \"^65100\"\n
- example 2
- Match routes whose origin AS is 65100, using a regular expression for the other ASes in the path.
# example 2\napiVersion: bgppolicydefinedsets.loxilb.io/v1\nkind: BGPPolicyDefinedSetsService\nmetadata:\n name: policy-aspath\nspec:\n name: \"aspath1\"\n definedType: \"asPath\"\n List:\n - \"[0-9]+_65[0-9]+_65100$\"\n
"},{"location":"k8s_bgp_policy_crd/#3-defining-policy-definitions","title":"3. Defining policy-definitions","text":"policy-definitions consists of condition and action. Condition part is used to evaluate routes from neighbors, if matched, action will be applied.
- an example of policy-definitions
apiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: example-policy\nspec:\n name: example-policy\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: invert\n bgpConditions:\n matchCommunitySet:\n communitySet: community1\n matchSetOptions: any\n matchExtCommunitySet:\n communitySet: ecommunity1\n matchSetOptions: any\n matchAsPathSet:\n asPathSet: aspath1\n matchSetOptions: any\n asPathLength:\n operator: eq\n value: 2\n afiSafiIn:\n - l3vpn-ipv4-unicast\n - ipv4-unicast\n actions:\n routeDisposition: accept-route\n bgpActions:\n setMed: \"-200\"\n setAsPathPrepend:\n as: \"65005\"\n repeatN: 5\n setCommunity:\n options: add\n setCommunityMethod:\n communitiesList:\n - 65100:20 \n
The elements of policy-definitions are as follows:
- policy-definitions
  - name: policy's name (e.g. \"example-policy\")
- statements
  - name: statement's name (e.g. \"statement1\")
- conditions - match-prefix-set
  - prefixSet: name of the defined-sets.prefix-sets.prefix-set-list used in this policy (e.g. \"ps1\")
  - matchSetOptions: option for the check: \"any\" or \"invert\"; default is \"any\" (e.g. \"any\")
- conditions - match-neighbor-set
  - neighborSet: name of the defined-sets.neighbor-sets.neighbor-set-list used in this policy (e.g. \"ns1\")
  - matchSetOptions: option for the check: \"any\" or \"invert\"; default is \"any\" (e.g. \"any\")
- conditions - bgp-conditions - match-community-set
  - communitySet: name of the defined-sets.bgp-defined-sets.community-sets.CommunitySetList used in this policy (e.g. \"community1\")
  - matchSetOptions: option for the check: \"any\", \"all\" or \"invert\"; default is \"any\" (e.g. \"invert\")
- conditions - bgp-conditions - match-ext-community-set
  - communitySet: name of the defined-sets.bgp-defined-sets.ext-community-sets used in this policy (e.g. \"ecommunity1\")
  - matchSetOptions: option for the check: \"any\", \"all\" or \"invert\"; default is \"any\" (e.g. \"invert\")
- conditions - bgp-conditions - match-as-path-set
  - asPathSet: name of the defined-sets.bgp-defined-sets.as-path-sets used in this policy (e.g. \"aspath1\")
  - matchSetOptions: option for the check: \"any\", \"all\" or \"invert\"; default is \"any\" (e.g. \"invert\")
- conditions - bgp-conditions - match-as-path-length
  - operator: operator used to compare the length of the AS_PATH attribute: \"eq\", \"ge\" or \"le\". \"eq\" means the AS_PATH length is equal to the value element, \"ge\" means equal or greater than the value element, \"le\" means equal or smaller than the value element (e.g. \"eq\")
  - value: value compared with the length of the AS_PATH attribute (e.g. 2)
- statements - actions
  - routeDisposition: stop further policy/statement evaluation and accept or reject the route: \"accept-route\" or \"reject-route\" (e.g. \"accept-route\")
- statements - actions - bgp-actions
  - setMed: changes the MED value of the route. If only a number is specified, it replaces the MED of the route; if a number with an operator (+ or -) is specified, it adds to or subtracts from the MED of the route (e.g. \"-200\")
- statements - actions - bgp-actions - set-community
  - options: operator used to manipulate the Community attribute of the route (e.g. \"ADD\")
  - communities: communities used to manipulate the route's community according to the options element (e.g. \"65100:20\")
- statements - actions - bgp-actions - set-as-path-prepend
  - as: AS number to prepend. You can use \"last-as\" to prepend the leftmost AS number of the AS_PATH attribute (e.g. \"65100\")
  - repeatN: number of times to prepend the AS (e.g. 5)
"},{"location":"k8s_bgp_policy_crd/#execution-condition-of-action","title":"Execution condition of Action","text":"Action statement is executed when the result of each Condition, including match-set-options is all true. match-set-options is defined how to determine the match result, in the condition with multiple evaluation set as follows:
- any: match is true if the given value matches any member of the defined set
- all: match is true if the given value matches all members of the defined set
- invert: match is true if the given value does not match any member of the defined set
"},{"location":"k8s_bgp_policy_crd/#examples_5","title":"Examples","text":" - example 1
- This policy definition has prefix-set ps1 and neighbor-set ns1 as its conditions, and routes matching the conditions are rejected.
# example 1\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy1\nspec:\n name: policy1\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n
- example 2
- This policy-definition has two statements.
- If a route matches the conditions inside the first statement, GoBGP applies its action and stops the policy evaluation.
# example 2\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy1\nspec:\n name: policy1\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n - name: statement2\n conditions:\n matchPrefixSet:\n prefixSet: ps2\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns2\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n
- example 3
- If you want to add other policies, just add another policy-definitions block following the first one, like this:
apiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy1\nspec:\n name: policy1\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n---\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: policy2\nspec:\n name: policy2\n statements:\n - name: statement2\n conditions:\n matchPrefixSet:\n prefixSet: ps2\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns2\n matchSetOptions: any\n actions:\n routeDisposition: reject-route\n
- example 4
- This PolicyDefinition has multiple conditions including BgpConditions as follows:
- prefix-set: ps1
- neighbor-set: ns1
- community-set: community1
- ext-community-set: ecommunity1
- as-path-set: aspath1
- as-path length: equal 2
- If a route matches all of these conditions, it will be accepted with community \"65100:20\" added, the MED reduced by 200, and AS 65005 prepended to the AS_PATH five times.
# example 4\napiVersion: bgppolicydefinition.loxilb.io/v1\nkind: BGPPolicyDefinitionService\nmetadata:\n name: example-policy\nspec:\n name: example-policy\n statements:\n - name: statement1\n conditions:\n matchPrefixSet:\n prefixSet: ps1\n matchSetOptions: any\n matchNeighborSet:\n neighborSet: ns1\n matchSetOptions: invert\n bgpConditions:\n matchCommunitySet:\n communitySet: community1\n matchSetOptions: any\n matchExtCommunitySet:\n communitySet: ecommunity1\n matchSetOptions: any\n matchAsPathSet:\n asPathSet: aspath1\n matchSetOptions: any\n asPathLength:\n operator: eq\n value: 2\n afiSafiIn:\n - l3vpn-ipv4-unicast\n - ipv4-unicast\n actions:\n routeDisposition: accept-route\n bgpActions:\n setMed: \"-200\"\n setAsPathPrepend:\n as: \"65005\"\n repeatN: 5\n setCommunity:\n options: add\n setCommunityMethod:\n communitiesList:\n - 65100:20 \n
"},{"location":"k8s_bgp_policy_crd/#4-attaching-policy","title":"4. Attaching policy","text":"Here we explain how to attach defined policies to neighbor local rib.
"},{"location":"k8s_bgp_policy_crd/#41-attach-policy-to-route-server-client","title":"4.1. Attach policy to route-server-client","text":"You can use policies defined above as Import or Export or In policy by attaching them to neighbors which is configured to be route-server client.
To attach policies to neighbors, you need to add policy's name to neighbors.apply-policy
in the neighbor's setting. This example attaches policy1 to Import policy and policy2 to Export policy and policy3 is used as the In policy.
apiVersion: bgppolicyapply.loxilb.io/v1\nkind: BGPPolicyApplyService\nmetadata:\n name: policy-apply\nspec:\n ipAddress: \"10.0.255.2\"\n policyType: \"import\"\n polices:\n - \"policy1\"\n routeAction: \"accept\"\n
Each neighbor has a section to specify policies, and the section's name is apply-policy. The apply-policy section has 4 elements:
- ipAddress: the neighbor IP address (e.g. \"10.0.255.2\")
- policyType: option for the policy type: \"import\" or \"export\" (e.g. \"import\")
- polices: the list of policies (e.g. - \"policy1\")
- routeAction: action taken when the route does not match any policy, or when none of the matched policies specifies a route-disposition: \"accept\" or \"reject\" (e.g. \"accept\")
"},{"location":"k8s_bgp_policy_crd/#policy-and-soft-reset","title":"Policy and Soft Reset","text":"When you change an import policy and reset the inbound routing table (aka soft reset in), a withdraw for a route rejected by the latest import policies will be sent to peers. However, when you change an export policy and reset the outbound routing table (aka soft reset out), even if a route is rejected by the latest export policies, a withdraw for the route will not be sent.
Because the outbound routing table is not kept (to save memory), it is impossible to know whether the route was actually sent to the peer, or was already rejected by the previous export policies and never sent. Rather than possibly leak unwanted information, GoBGP does not send such withdraws.
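For reference, the soft reset itself can be triggered with gobgp inside the loxilb instance (a sketch using the gobgp CLI; the neighbor address is a placeholder):
gobgp neighbor 10.0.255.2 softresetin\ngobgp neighbor 10.0.255.2 softresetout\n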
"},{"location":"kube-loxilb-KOR/","title":"kube loxilb KOR","text":""},{"location":"kube-loxilb-KOR/#kube-loxilb","title":"kube-loxilb\ub780 \ubb34\uc5c7\uc778\uac00?","text":"kube-loxilb\ub294 loxilb\uc758 Kubernetes Operator \ub85c\uc368, Kubernetes \uc11c\ube44\uc2a4 \ub85c\ub4dc \ubc38\ub7f0\uc11c \uc0ac\uc591\uc744 \ud3ec\ud568\ud558\uace0 \uc788\uc73c\uba70, \ub85c\ub4dc \ubc38\ub7f0\uc11c \ud074\ub798\uc2a4, \uace0\uae09 IPAM(\uacf5\uc720 \ub610\ub294 \ub2e8\ub3c5 \ubaa8\ub4dc) \ub4f1\uc744 \uc9c0\uc6d0\ud569\ub2c8\ub2e4. kube-loxilb\ub294 kube-system \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c Deployment \ud615\ud0dc\ub85c \uc2e4\ud589\ub429\ub2c8\ub2e4. \uc774\ub294 \ud56d\uc0c1 k8s \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c \uc2e4\ud589\ub418\uba70 \ub178\ub4dc/\uc5d4\ub4dc\ud3ec\uc778\ud2b8/\ub3c4\ub2ec \uac00\ub2a5\uc131/LB \uc11c\ube44\uc2a4 \ub4f1\uc758 \ubcc0\uacbd \uc0ac\ud56d\uc744 \ubaa8\ub2c8\ud130\ub9c1\ud558\ub294 \uc81c\uc5b4 \ud50c\ub808\uc778 \uc5ed\ud560\uc744 \uc218\ud589\ud569\ub2c8\ub2e4. \uc774\ub294 K8s \uc624\ud37c\ub808\uc774\ud130\ub85c\uc11c loxilb\ub97c \uad00\ub9ac\ud569\ub2c8\ub2e4. loxilb \uad6c\uc131 \uc694\uc18c\ub294 \uc2e4\uc81c \uc11c\ube44\uc2a4 \uc5f0\uacb0 \ubc0f \ub85c\ub4dc \ubc38\ub7f0\uc2f1 \uc791\uc5c5\uc744 \uc218\ud589\ud569\ub2c8\ub2e4. \ub530\ub77c\uc11c \ubc30\ud3ec \uad00\uc810\uc5d0\uc11c kube-loxilb\ub294 K8s \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c \uc2e4\ud589\ub418\uc5b4\uc57c \ud558\uc9c0\ub9cc, loxilb\ub294 \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ub610\ub294 \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \ubc30\ud3ec\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
\uad8c\uc7a5 \ubc29\ubc95\uc740 kube-loxilb \uad6c\uc131 \uc694\uc18c\ub97c \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\uc11c \uc2e4\ud589\ud558\uace0 \uc774 \uac00\uc774\ub4dc\uc5d0 \uc124\uba85\ub41c \ub300\ub85c \uc678\ubd80 \ub178\ub4dc/VM\uc5d0 loxilb \ub3c4\ucee4\ub97c \ud504\ub85c\ube44\uc800\ub2dd\ud558\ub294 \uac83\uc785\ub2c8\ub2e4. \uc774\ub294 \uc0ac\uc6a9\uc790\uac00 \uc628-\ud504\ub808\ubbf8\uc2a4 \ub610\ub294 \ud37c\ube14\ub9ad \ud074\ub77c\uc6b0\ub4dc \ud658\uacbd\uc5d0\uc11c loxilb\ub97c \uc2e4\ud589\ud560 \ub54c \uc720\uc0ac\ud55c \ud615\ud0dc\ub85c \uc81c\uacf5\ud558\uae30 \uc704\ud568\uc785\ub2c8\ub2e4. \ud37c\ube14\ub9ad \ud074\ub77c\uc6b0\ub4dc \ud658\uacbd\uc5d0\uc11c\ub294 \ubcf4\ud1b5 \ub85c\ub4dc \ubc38\ub7f0\uc11c/\ubc29\ud654\ubcbd\uc744 \uc2e4\uc81c \uc6cc\ud06c\ub85c\ub4dc \uc678\ubd80\uc5d0 \uc788\ub294 \ubcf4\uc548/DMZ \uc601\uc5ed\uc5d0\uc11c \uc2e4\ud589\ud569\ub2c8\ub2e4. \ud558\uc9c0\ub9cc \uc0ac\uc6a9\uc790\ub294 \ud3b8\uc758\uc640 \uc2dc\uc2a4\ud15c \uc544\ud0a4\ud14d\ucc98\uc5d0 \ub530\ub77c \uc678\ubd80 \ub178\ub4dc \ubaa8\ub4dc\uc640 \uc778-\ud074\ub7ec\uc2a4\ud130 \ubaa8\ub4dc\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ub2e4\uc74c \ube14\ub85c\uadf8\ub4e4\uc740 \uc774\ub7ec\ud55c \ubaa8\ub4dc\ub4e4\uc5d0 \ub300\ud574 \uc124\uba85\ud569\ub2c8\ub2e4:
- AWS EKS\uc5d0\uc11c \uc678\ubd80 \ub178\ub4dc\ub85c loxilb \uc2e4\ud589
- \uc628-\ud504\ub808\ubbf8\uc2a4\uc5d0\uc11c \uc778-\ud074\ub7ec\uc2a4\ud130\ub85c loxilb \uc2e4\ud589
\uc0ac\uc6a9\uc790\ub4e4\uc740 \uc678\ubd80 \ubaa8\ub4dc\uc5d0\uc11c \ud574\ub2f9 loxilb \ub97c \ub204\uac00 \uad00\ub9ac\ud560\uc9c0\uc5d0 \ub300\ud55c \uc9c8\ubb38\uc774 \uc0dd\uae38 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ud37c\ube14\ub9ad \ud074\ub77c\uc6b0\ub4dc\uc5d0\uc11c\ub294 VPC\uc5d0\uc11c \uc0c8\ub85c\uc6b4 \uc778\uc2a4\ud134\uc2a4\ub97c \uc0dd\uc131\ud558\uace0 loxilb \ub3c4\ucee4\ub97c \uc2e4\ud589\ud558\ub294 \uac83\uc73c\ub85c \uac04\ub2e8\ud788 \uc0ac\uc6a9 \uac00\ub2a5\ud569\ub2c8\ub2e4. \uc628-\ud504\ub808\ubbf8\uc2a4\uc758 \uacbd\uc6b0, \uc5ec\ubd84\uc758 \ub178\ub4dc/VM\uc5d0\uc11c loxilb \ub3c4\ucee4\ub97c \uc2e4\ud589\ud574\uc57c \ud569\ub2c8\ub2e4. loxilb \ub3c4\ucee4\ub294 \uc790\uccb4 \ud3ec\ud568 \uc5d4\ud130\ud2f0\ub85c, docker, containerd, podman \ub4f1 \uc798 \uc54c\ub824\uc9c4 \ub3c4\uad6c\ub85c \uc27d\uac8c \uad00\ub9ac\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ub3c5\ub9bd\uc801\uc73c\ub85c \uc7ac\uc2dc\uc791/\uc5c5\uadf8\ub808\uc774\ub4dc\ud560 \uc218 \uc788\uc73c\uba70, kube-loxilb\ub294 Kubernetes \ub85c\ub4dc\ubc38\ub7f0\uc11c \uc11c\ube44\uc2a4\uac00 \ub9e4\ubc88 \uc801\uc808\ud788 \uad6c\uc131\ub418\ub3c4\ub85d \ubcf4\uc7a5\ud569\ub2c8\ub2e4. \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\ub85c \ubc30\ud3ec\ud560 \ub54c\ub294 \ubaa8\ub4e0 \uac83\uc774 Kubernetes\uc5d0 \uc758\ud574 \uad00\ub9ac\ub418\uba70 \uc218\ub3d9 \uac1c\uc785\uc774 \uac70\uc758 \ud544\uc694\ud558\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.
"},{"location":"kube-loxilb-KOR/#_1","title":"\uc804\uccb4 \ud1a0\ud3f4\ub85c\uc9c0","text":" - \uc678\ubd80 \ubaa8\ub4dc\uc758 \uacbd\uc6b0, \ubaa8\ub4e0 \uad6c\uc131 \uc694\uc18c\ub97c \ud3ec\ud568\ud55c \uc804\uccb4 \ud1a0\ud3f4\ub85c\uc9c0\ub294 \ub2e4\uc74c\uacfc \uc720\uc0ac\ud574\uc57c \ud569\ub2c8\ub2e4:
- \uc778-\ud074\ub7ec\uc2a4\ud130 \ubaa8\ub4dc\uc758 \uacbd\uc6b0, \ubaa8\ub4e0 \uad6c\uc131 \uc694\uc18c\ub97c \ud3ec\ud568\ud55c \uc804\uccb4 \ud1a0\ud3f4\ub85c\uc9c0\ub294 \ub2e4\uc74c\uacfc \uc720\uc0ac\ud574\uc57c \ud569\ub2c8\ub2e4:
"},{"location":"kube-loxilb-KOR/#kube-loxilb_1","title":"kube-loxilb \ubc30\ud3ec \ubc29\ubc95","text":" -
\uc678\ubd80 \ubaa8\ub4dc\ub97c \uc120\ud0dd\ud55c \uacbd\uc6b0, loxilb \ub3c4\ucee4\uac00 \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc758 \ub178\ub4dc\uc5d0 \uc801\uc808\ud788 \ub2e4\uc6b4\ub85c\ub4dc \ubc0f \uc124\uce58\ub418\uc5c8\ub294\uc9c0 \ud655\uc778\ud558\uc138\uc694. \uc5ec\uae30\uc758 \uac00\uc774\ub4dc\ub97c \ub530\ub974\uac70 \ub2e4\uc74c \ubb38\uc11c\ub97c \ucc38\uc870\ud558\uc138\uc694. \uc774 \ub178\ub4dc\uc5d0\uc11c k8s \ud074\ub7ec\uc2a4\ud130 \ub178\ub4dc(\ub098\uc911\uc5d0 kube-loxilb\uac00 \uc2e4\ud589\ub420)\ub85c\uc758 \ub124\ud2b8\uc6cc\ud06c \uc5f0\uacb0\uc774 \ud544\uc694\ud569\ub2c8\ub2e4. (PS - \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\ub85c \uc2e4\ud589 \uc911\uc778 \uacbd\uc6b0 \uc774 \ub2e8\uacc4\ub294 \uac74\ub108\ub6f8 \uc218 \uc788\uc2b5\ub2c8\ub2e4)
-
kube-loxilb \uc124\uc815 yaml\uc744 \ub2e4\uc6b4\ub85c\ub4dc\ud558\uc138\uc694:
wget https://github.com/loxilb-io/kube-loxilb/raw/main/manifest/ext-cluster/kube-loxilb.yaml\n
- \uc0ac\uc6a9\uc790\uc758 \ud544\uc694\uc5d0 \ub9de\uac8c \ubcc0\uc218\ub97c \uc218\uc815\ud558\uc138\uc694:
args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n #- --externalSecondaryCIDRs=124.124.124.1/24,125.125.125.1/24\n #- --externalCIDR6=3ffe::1/96\n #- --monitor\n #- --setBGP=65100\n #- --extBGPPeers=50.50.50.1:65101,51.51.51.1:65102\n #- --setRoles=0.0.0.0\n #- --setLBMode=1\n #- --setUniqueIP=false\n
\ubcc0\uc218\uc758 \uc758\ubbf8\ub294 \ub2e4\uc74c\uacfc \uac19\uc2b5\ub2c8\ub2e4:
\uc774\ub984 \uc124\uba85 loxiURL loxilb\uc758 API \uc11c\ubc84 \uc8fc\uc18c\uc785\ub2c8\ub2e4. \uc774\ub294 1\ub2e8\uacc4\uc758 loxilb \ub3c4\ucee4\uc758 \ub3c4\ucee4 IP \uc8fc\uc18c\uc785\ub2c8\ub2e4. \uc9c0\uc815\ub418\uc9c0 \uc54a\uc73c\uba74 kube-loxilb\ub294 loxilb\uac00 \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\ub85c \uc2e4\ud589 \uc911\uc774\ub77c\uace0 \uac00\uc815\ud558\uace0 \uc790\ub3d9\uc73c\ub85c \uad6c\uc131\ud569\ub2c8\ub2e4. externalCIDR \uc8fc\uc18c\ub97c \ud560\ub2f9\ud560 CIDR \ub610\ub294 IP \uc8fc\uc18c \ubc94\uc704\uc785\ub2c8\ub2e4. \uae30\ubcf8\uc801\uc73c\ub85c \ud560\ub2f9\ub41c \uc8fc\uc18c\ub294 \uc11c\ub85c \ub2e4\ub978 \uc11c\ube44\uc2a4\uc5d0 \uacf5\uc720\ub429\ub2c8\ub2e4(\uacf5\uc720 \ubaa8\ub4dc). externalCIDR6 \uc8fc\uc18c\ub97c \ud560\ub2f9\ud560 IPv6 CIDR \ub610\ub294 IP \uc8fc\uc18c \ubc94\uc704\uc785\ub2c8\ub2e4. \uae30\ubcf8\uc801\uc73c\ub85c \ud560\ub2f9\ub41c \uc8fc\uc18c\ub294 \uc11c\ub85c \ub2e4\ub978 \uc11c\ube44\uc2a4\uc5d0 \uacf5\uc720\ub429\ub2c8\ub2e4(\uacf5\uc720 \ubaa8\ub4dc). monitor LB \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \ub77c\uc774\ube0c\ub2c8\uc2a4 \ud504\ub85c\ube0c\ub97c \ud65c\uc131\ud654\ud569\ub2c8\ub2e4(\uae30\ubcf8\uac12: \ube44\ud65c\uc131\ud654). setBGP \uc774 \uc11c\ube44\uc2a4\ub97c \uad11\uace0\ud560 BGP AS-ID\ub97c \uc0ac\uc6a9\ud569\ub2c8\ub2e4. \uc9c0\uc815\ub418\uc9c0 \uc54a\uc73c\uba74 BGP\uac00 \ube44\ud65c\uc131\ud654\ub429\ub2c8\ub2e4. \uc791\ub3d9 \ubc29\uc2dd\uc740 \uc5ec\uae30\ub97c \ucc38\uc870\ud558\uc138\uc694. extBGPPeers \uc801\uc808\ud55c \uc6d0\uaca9 AS\uc640 \ud568\uaed8 \uc678\ubd80 BGP \ud53c\uc5b4\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. setRoles \uc874\uc7ac\ud558\ub294 \uacbd\uc6b0, kube-loxilb\ub294 \ud074\ub7ec\uc2a4\ud130 \ubaa8\ub4dc\uc5d0\uc11c loxilb \uc5ed\ud560\uc744 \uc870\uc815\ud569\ub2c8\ub2e4. \ub610\ud55c \ud2b9\ubcc4\ud55c VIP(\uc18c\uc2a4 IP\ub85c \uc120\ud0dd\ub428)\ub97c \uc124\uc815\ud558\uc5ec \ud480 NAT \ubaa8\ub4dc\uc5d0\uc11c \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc640 \ud1b5\uc2e0\ud569\ub2c8\ub2e4. setLBMode 0, 1, 2 0 - \uae30\ubcf8\uac12 (DNAT\ub9cc \uc218\ud589, \uc18c\uc2a4 IP \uc720\uc9c0) 1 - OneARM(\uc18c\uc2a4 IP\ub97c \ub85c\ub4dc \ubc38\ub7f0\uc11c\uc758 \uc778\ud130\ud398\uc774\uc2a4 IP\ub85c \ubcc0\uacbd) 2 - Full NAT(\uc18c\uc2a4 IP\ub97c \uac00\uc0c1 IP\ub85c \ubcc0\uacbd) setUniqueIP LB \uc11c\ube44\uc2a4\ub2f9 \uace0\uc720\ud55c \uc11c\ube44\uc2a4 IP\ub97c \ud560\ub2f9\ud569\ub2c8\ub2e4(\uae30\ubcf8\uac12: false). externalSecondaryCIDRs \uba40\ud2f0\ud638\ubc0d \uc9c0\uc6d0\uc758 \uacbd\uc6b0, \uc8fc\uc18c\ub97c \ud560\ub2f9\ud560 \ubcf4\uc870 CIDR \ub610\ub294 IP \uc8fc\uc18c \ubc94\uc704\uc785\ub2c8\ub2e4. \uc704\uc758 \ub9ce\uc740 \ud50c\ub798\uadf8\uc640 \uc778\uc218\ub294 loxilb \ud2b9\uc815 \ubcc0\uc218\uc744 \uae30\ubc18\uc73c\ub85c \uc11c\ube44\uc2a4\ubcc4\ub85c \uc7ac\uc815\uc758\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
- kube-loxilb \uc9c0\uc6d0 \ubcc0\uc218:
\ubcc0\uc218 \uc124\uba85 loxilb.io/multus-nets Multus \ub97c \uc0ac\uc6a9\ud560 \ub54c, Multus \ub124\ud2b8\uc6cc\ud06c\ub3c4 \uc11c\ube44\uc2a4 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub85c \uc0ac\uc6a9\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc0ac\uc6a9\ud560 Multus \ub124\ud2b8\uc6cc\ud06c \uc774\ub984\uc744 \ub4f1\ub85d\ud569\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: multus-service\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/multus-nets: macvlan1,macvlan2spec:\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0app: pod-01\u00a0\u00a0ports:\u00a0\u00a0- port: 55002\u00a0\u00a0\u00a0\u00a0targetPort: 5002\u00a0\u00a0type: LoadBalancer loxilb.io/num-secondary-networks SCTP \uba40\ud2f0\ud638\ubc0d \uae30\ub2a5\uc744 \uc0ac\uc6a9\ud560 \ub54c, \uc11c\ube44\uc2a4\uc5d0 \ud560\ub2f9\ud560 \ubcf4\uc870 IP\uc758 \uc218\ub97c \uc9c0\uc815\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4(\ucd5c\ub300 3\uac1c). loxilb.io/secondaryIPs \uc8fc\uc11d\uacfc \ud568\uaed8 \uc0ac\uc6a9\ud560 \ub54c\ub294 loxilb.io/num-secondary-networks\uc5d0 \uc124\uc815\ub41c \uac12\uc774 \ubb34\uc2dc\ub429\ub2c8\ub2e4. (loxilb.io/secondaryIPs \uc8fc\uc11d\uc774 \uc6b0\uc120\uc21c\uc704\ub97c \uac00\uc9d1\ub2c8\ub2e4)\uc608:metadata:\u00a0\u00a0name: sctp-lb1\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/num-secondary-networks: \u201c2\u201dspec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 55002\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/secondaryIPs SCTP \uba40\ud2f0\ud638\ubc0d \uae30\ub2a5\uc744 \uc0ac\uc6a9\ud560 \ub54c, \uc11c\ube44\uc2a4\uc5d0 \ud560\ub2f9\ud560 \ubcf4\uc870 IP\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. \uc5ec\ub7ec IP(\ucd5c\ub300 3\uac1c)\ub97c \ucf64\ub9c8(,)\ub97c \uc0ac\uc6a9\ud558\uc5ec \ub3d9\uc2dc\uc5d0 \uc9c0\uc815\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. loxilb.io/num-secondary-networks \uc8fc\uc11d\uacfc \ud568\uaed8 \uc0ac\uc6a9\ud560 \ub54c\ub294 loxilb.io/secondaryIPs\uac00 \uc6b0\uc120\uc21c\uc704\ub97c \uac00\uc9d1\ub2c8\ub2e4.\uc608:metadata:name: sctp-lb-secips\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"loxilb.io/secondaryIPs: \"1.1.1.1,2.2.2.2\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb-secips\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0type: LoadBalancer loxilb.io/staticIP \ub85c\ub4dc \ubc38\ub7f0\uc11c \uc11c\ube44\uc2a4\uc5d0 \ud560\ub2f9\ud560 \uc678\ubd80 IP\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. 
\uae30\ubcf8\uc801\uc73c\ub85c \uc678\ubd80 IP\ub294 kube-loxilb\uc5d0 \uc124\uc815\ub41c externalCIDR \ubc94\uc704 \ub0b4\uc5d0\uc11c \ud560\ub2f9\ub418\uc9c0\ub9cc, \uc8fc\uc11d\uc744 \uc0ac\uc6a9\ud558\uc5ec \ubc94\uc704 \uc678\ubd80\uc758 IP\ub97c \uc815\uc801\uc73c\ub85c \uc9c0\uc815\ud560 \uc218\ub3c4 \uc788\uc2b5\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"\u00a0\u00a0\u00a0\u00a0loxilb.io/staticIP: \"192.168.255.254\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/liveness \uc5d4\ub4dc\ud3ec\uc778\ud2b8 \uc120\ud0dd\uc5d0 \uae30\ubc18\ud55c loxilb\uac00 \uc0c1\ud0dc \ud655\uc778(\ud504\ub85c\ube0c)\uc744 \uc218\ud589\ud558\ub3c4\ub85d \uc124\uc815\ud569\ub2c8\ub2e4(\ud50c\ub798\uadf8\uac00 \uc124\uc815\ub41c \uacbd\uc6b0, \ud65c\uc131 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub9cc \uc120\ud0dd\ub429\ub2c8\ub2e4). \uae30\ubcf8\uac12\uc740 \ube44\ud65c\uc131\ud654\uc774\uba70, \uac12\uc774 yes\ub85c \uc124\uc815\ub418\uba74 \ud574\ub2f9 \uc11c\ube44\uc2a4\uc758 \ud504\ub85c\ube0c \uae30\ub2a5\uc774 \ud65c\uc131\ud654\ub429\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/lbmode \uac01 \uc11c\ube44\uc2a4\uc5d0 \ub300\ud574 \uac1c\ubcc4\uc801\uc73c\ub85c LB \ubaa8\ub4dc\ub97c \uc124\uc815\ud569\ub2c8\ub2e4. \uc9c0\uc815\ud560 \uc218 \uc788\ub294 \uac12 \uc911 \ud558\ub098\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4: \u201cdefault\u201d, \u201conearm\u201d, \u201cfullnat\u201d \ub610\ub294 \"dsr\". \uc790\uc138\ud55c \ub0b4\uc6a9\uc740 \uc774 \ubb38\uc11c\ub97c \ucc38\uc870\ud558\uc138\uc694.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"\u00a0\u00a0\u00a0\u00a0spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/ipam \uc11c\ube44\uc2a4\uac00 \uc0ac\uc6a9\ud560 IPAM \ubaa8\ub4dc\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. \"ipv4\", \"ipv6\", \ub610\ub294 \"ipv6to4\" \uc911 \ud558\ub098\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4. 
\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/ipam : \"ipv4\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/timeout \uc11c\ube44\uc2a4\uc758 \uc138\uc158 \uc720\uc9c0 \uc2dc\uac04\uc744 \uc124\uc815\ud569\ub2c8\ub2e4. \uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/timeout : \"60\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probetype \uc5d4\ub4dc\ud3ec\uc778\ud2b8 \ud504\ub85c\ube0c \uc791\uc5c5\uc5d0 \uc0ac\uc6a9\ud560 \ud504\ub85c\ud1a0\ucf5c \uc720\ud615\uc744 \uc9c0\uc815\ud569\ub2c8\ub2e4. \"udp\", \"tcp\", \"https\", \"http\", \"sctp\", \"ping\", \ub610\ub294 \"none\" \uc911 \ud558\ub098\ub97c \uc120\ud0dd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. lbMode\ub97c \"fullnat\" \ub610\ub294 \"onearm\"\uc73c\ub85c \uc0ac\uc6a9\ud558\ub294 \uacbd\uc6b0, probetype\uc744 \ud504\ub85c\ud1a0\ucf5c \uc720\ud615\uc73c\ub85c \uc124\uc815\ud569\ub2c8\ub2e4. \ud574\uc81c\ud558\ub824\uba74 probetype : \"none\"\uc744 \uc0ac\uc6a9\ud558\uc138\uc694.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"ping\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probeport \ud504\ub85c\ube0c \uc791\uc5c5\uc5d0 \uc0ac\uc6a9\ud560 \ud3ec\ud2b8\ub97c \uc124\uc815\ud569\ub2c8\ub2e4. loxilb.io/probetype \uc8fc\uc11d\uc774 \uc0ac\uc6a9\ub418\uc9c0 \uc54a\uac70\ub098 \uc720\ud615\uc774 icmp \ub610\ub294 none\uc778 \uacbd\uc6b0 \uc801\uc6a9\ub418\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probereq \ud504\ub85c\ube0c \uc694\uccad\uc744 \uc704\ud55c API\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. 
loxilb.io/probetype \uc8fc\uc11d\uc774 \uc0ac\uc6a9\ub418\uc9c0 \uc54a\uac70\ub098 \uc720\ud615\uc774 icmp \ub610\ub294 none\uc778 \uacbd\uc6b0 \uc801\uc6a9\ub418\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probereq : \"health\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/proberesp \ud504\ub85c\ube0c \uc694\uccad\uc5d0 \ub300\ud55c \uc751\ub2f5\uc744 \uc9c0\uc815\ud569\ub2c8\ub2e4. loxilb.io/probetype \uc8fc\uc11d\uc774 \uc0ac\uc6a9\ub418\uc9c0 \uc54a\uac70\ub098 \uc720\ud615\uc774 icmp \ub610\ub294 none\uc778 \uacbd\uc6b0 \uc801\uc6a9\ub418\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probereq : \"health\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberesp : \"ok\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probetimeout \ud504\ub85c\ube0c \uc694\uccad\uc758 \ud0c0\uc784\uc544\uc6c3 \uc2dc\uac04(\ucd08)\uc744 \uc9c0\uc815\ud569\ub2c8\ub2e4. \uae30\ubcf8\uac12\uc740 60\ucd08\uc785\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/proberetries \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ube44\ud65c\uc131\uc73c\ub85c \uac04\uc8fc\ud558\uae30 \uc804\uc5d0 \ud504\ub85c\ube0c \uc694\uccad\uc744 \ub2e4\uc2dc \uc2dc\ub3c4\ud558\ub294 \ud69f\uc218\ub97c \uc9c0\uc815\ud569\ub2c8\ub2e4. \uae30\ubcf8\uac12\uc740 2\uc785\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberetries : \"3\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/epselect \uc5d4\ub4dc\ud3ec\uc778\ud2b8 \uc120\ud0dd \uc54c\uace0\ub9ac\uc998\uc744 \uc9c0\uc815\ud569\ub2c8\ub2e4(e.g \"rr\", \"hash\", \"persist\", \"lc\" \ub4f1). 
\uae30\ubcf8\uac12\uc740 \ub77c\uc6b4\ub4dc \ub85c\ube48\uc785\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberetries : \"3\"\u00a0\u00a0\u00a0\u00a0loxilb.io/epselect : \"hash\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/prefLocalPod \ud074\ub7ec\uc2a4\ud130 \ub0b4 \ubaa8\ub4dc\uc5d0\uc11c \ud56d\uc0c1 \ub85c\uceec \ud30c\ub4dc\ub97c \uc120\ud0dd\ud558\ub3c4\ub85d \uc124\uc815\ud569\ub2c8\ub2e4. \uae30\ubcf8\uac12\uc740 false\uc785\ub2c8\ub2e4.\uc608:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/prefLocalPod : \"yes\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer - \ud544\uc694\ud55c \ubcc0\uacbd\uc744 \uc644\ub8cc\ud55c \ud6c4 yaml\uc744 \uc801\uc6a9\ud558\uc138\uc694:
kubectl apply -f kube-loxilb.yaml\n
- \uc704 \uba85\ub839\uc5b4\ub294 kube-loxilb\uac00 \uc131\uacf5\uc801\uc73c\ub85c \uc2e4\ud589\ub418\ub3c4\ub85d \ubcf4\uc7a5\ud569\ub2c8\ub2e4. kube-loxilb\uac00 \uc2e4\ud589 \uc911\uc778\uc9c0 \ud655\uc778\ud558\uc138\uc694:
k8s@master:~$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-84db5d44d9-pczhz 1/1 Running 0 16h\nkube-system coredns-6799fbcd5-44qpx 1/1 Running 0 16h\nkube-system metrics-server-67c658944b-t4x5d 1/1 Running 0 16h\nkube-system kube-loxilb-5fb5566999-ll4gs 1/1 Running 0 14h\n
- \ub9c8\uc9c0\ub9c9\uc73c\ub85c \uc6cc\ud06c\ub85c\ub4dc\ub97c \uc704\ud55c \uc11c\ube44\uc2a4 LB\ub97c \uc0dd\uc131\ud558\ub824\uba74 \ub2e4\uc74c \ud15c\ud50c\ub9bf yaml\uc744 \uc0ac\uc6a9\ud558\uc5ec \uc801\uc6a9\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.
(\ucc38\uace0 - loadBalancerClass \ubc0f \uae30\ud0c0 loxilb \ud2b9\uc815 \uc8fc\uc11d\uc744 \ud655\uc778\ud558\uc138\uc694):
apiVersion: v1\n kind: Service\n metadata:\n name: iperf-service\n annotations:\n # If there is a need to do liveness check from loxilb\n loxilb.io/liveness: \"yes\"\n # Specify LB mode - one of default, onearm or fullnat \n loxilb.io/lbmode: \"default\"\n # Specify loxilb IPAM mode - one of ipv4, ipv6 or ipv6to4 \n loxilb.io/ipam: \"ipv4\"\n # Specify number of secondary networks for multi-homing\n # Only valid for SCTP currently\n # loxilb.io/num-secondary-networks: \"2\n # Specify a static externalIP for this service\n # loxilb.io/staticIP: \"123.123.123.2\"\n spec:\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: perf-test\n ports:\n - port: 55001\n targetPort: 5001\n type: LoadBalancer\n ---\n apiVersion: v1\n kind: Pod\n metadata:\n name: iperf1\n labels:\n what: perf-test\n spec:\n containers:\n - name: iperf\n image: eyes852/ubuntu-iperf-test:0.5\n command:\n - iperf\n - \"-s\"\n ports:\n - containerPort: 5001\n
Users can change the above as per their needs.
- Verify that the LB service has been created:
k8s@master:~$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 13h\niperf1 LoadBalancer 10.43.8.156 llb-192.168.80.20 55001:5001/TCP 8m20s\n
- For more example yaml templates, please refer to kube-loxilb's manifest directory.
"},{"location":"kube-loxilb-KOR/#loxilb","title":"Additional steps: deploy loxilb in (in-cluster) mode","text":"To run loxilb in in-cluster mode, the URL argument in kube-loxilb.yaml needs to be commented out:
args:\n #- --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n
This enables kube-loxilb's self-discovery mode, allowing it to find and reach the loxilb pods running inside the cluster. Finally, the loxilb pods need to be created in the cluster:
sudo kubectl apply -f https://github.com/loxilb-io/kube-loxilb/raw/main/manifest/in-cluster/loxilb.yaml\n
Once all the pods are created, this can be verified as follows (you can see both the kube-loxilb and loxilb components running in the cluster):
k8s@master:~$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-84db5d44d9-pczhz 1/1 Running 0 16h\nkube-system coredns-6799fbcd5-44qpx 1/1 Running 0 16h\nkube-system metrics-server-67c658944b-t4x5d 1/1 Running 0 16h\nkube-system kube-loxilb-5fb5566999-ll4gs 1/1 Running 0 14h\nkube-system loxilb-lb-mklj2 1/1 Running 0 13h\nkube-system loxilb-lb-stp5k 1/1 Running 0 13h\nkube-system loxilb-lb-j8fc6 1/1 Running 0 13h\nkube-system loxilb-lb-5m85p 1/1 Running 0 13h\n
Thereafter, the process of service creation is the same as explained in the previous sections.
"},{"location":"kube-loxilb-KOR/#kube-loxilb-crd","title":"How to use kube-loxilb CRDs","text":"kube-loxilb provides Custom Resource Definitions (CRDs). The currently supported operations are as follows (this list will be continually updated): - Add a BGP peer - Delete a BGP peer
CRD examples are stored in manifest/crds. An example of setting up a BGP peer is as follows:
- Pre-processing (register the kube-loxilb CRDs with K8s). As the first step, apply lbpeercrd.yaml:
kubectl apply -f manifest/crds/lbpeercrd.yaml\n
- CRD definition
You need to create a yaml file that adds a BGP peer. The example below creates a peer with peer IP address 123.123.123.2 and remote AS number 65123. Create a file named bgp-peer.yaml and add the contents below:
apiVersion: \"bgppeer.loxilb.io/v1\"\nkind: BGPPeerService\nmetadata:\n name: bgp-peer-test\nspec:\n ipAddress: 123.123.123.2\n remoteAs: 65123\n remotePort: 179\n
- Apply the CRD to add the new BGP peer:
kubectl apply -f bgp-peer.yaml\n
- Verify the applied CRD
This can be checked in two ways. The first is through loxicmd (inside the loxilb container), and the second is through kubectl.
# loxicmd\nkubectl exec -it {loxilb} -n kube-system -- loxicmd get bgpneigh \n| PEER | AS | UP/DOWN | STATE | \n|----------------|-------|-------------|-------------|\n| 123.123.123.2 | 65123 | never | ACTIVE |\n\n# kubectl\nkubectl get bgppeerservice\nNAME PEER AS \nbgp-peer-test 123.123.123.2 65123 \n
"},{"location":"kube-loxilb/","title":"Understanding loxilb deployment in K8s with kube-loxilb","text":""},{"location":"kube-loxilb/#what-is-kube-loxilb","title":"What is kube-loxilb ?","text":"kube-loxilb is loxilb's implementation of kubernetes service load-balancer spec which includes support for load-balancer class, advanced IPAM (shared or exclusive) etc. kube-loxilb runs as a deloyment set in kube-system namespace. It is a control-plane component that always runs inside k8s cluster and watches k8s system for changes to nodes/end-points/reachability/LB services etc. It acts as a K8s Operator of loxilb. The loxilb component takes care of doing actual job of providing service connectivity and load-balancing. So, from deployment perspective we need to run kube-loxilb inside K8s cluster but we have option to deploy loxilb in-cluster or external to the cluster.
The preferred way is to run kube-loxilb component inside the cluster and provision loxilb docker in any external node/vm as mentioned in this guide. The rationale is to provide users a similar look and feel whether running loxilb in an on-prem or public cloud environment. Public-cloud environments usually run load-balancers/firewalls externally in order to provide a secure/dmz perimeter layer outside actual workloads. But users are free to choose any mode (in-cluster mode or external mode) as per convenience and their system architecture. The following blogs give detailed steps for :
- Running loxilb in external node with AWS EKS
- Running in-cluster LB with K3s for on-prem use-cases
This usually leads to another query - In external mode, who will be responsible for managing this entity ? On public cloud(s), it is as simple as spawning a new instance in your VPC and launching loxilb docker in it. For on-prem cases, you need to run loxilb docker in a spare node/vm as applicable. loxilb docker is a self-contained entity and is easily managed with well-known tools like docker, containerd, podman etc. It can be independently restarted/upgraded anytime, and kube-loxilb will make sure all the k8s LB services are properly configured each time. When deploying in-cluster mode, everything is managed by Kubernetes itself with little-to-no manual intervention.
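For reference, a typical way to bring up loxilb docker on such an external node might look like the following (a sketch only; the exact image tag and flags should be taken from the loxilb getting-started guides):
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n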
"},{"location":"kube-loxilb/#overall-topology","title":"Overall topology","text":" - For external mode, the overall topology including all components should be similar to the following :
- For in-cluster mode, the overall topology including all components should be similar to the following :
"},{"location":"kube-loxilb/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":" -
If you have chosen external-mode, please make sure loxilb docker is downloaded and installed properly in a node external to your cluster. One can follow the guides here or refer to various other documentation. It is important to have network connectivity from this node to the k8s cluster nodes (where kube-loxilb will eventually run) as seen in the above figure. (PS - This step can be skipped if running in-cluster mode)
-
Download the kube-loxilb config yaml :
wget https://github.com/loxilb-io/kube-loxilb/raw/main/manifest/ext-cluster/kube-loxilb.yaml\n
- Modify arguments as per user's needs :
args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n #- --externalSecondaryCIDRs=124.124.124.1/24,125.125.125.1/24\n #- --externalCIDR6=3ffe::1/96\n #- --monitor\n #- --setBGP=65100\n #- --extBGPPeers=50.50.50.1:65101,51.51.51.1:65102\n #- --setRoles=0.0.0.0\n #- --setLBMode=1\n #- --setUniqueIP=false\n
The arguments have the following meaning :
Name Description loxiURL API server address of loxilb. This is the docker IP address loxilb docker of Step 1. If unspecified, kube-loxilb assumes loxilb is running in-cluster mode and autoconfigures this. externalCIDR CIDR or IPAddress range to allocate addresses from. By default address allocated are shared for different services(shared Mode) externalCIDR6 Ipv6 CIDR or IPAddress range to allocate addresses from. By default address allocated are shared for different services(shared Mode) monitor Enable liveness probe for the LB end-points (default : unset) setBGP Use specified BGP AS-ID to advertise this service. If not specified BGP will be disabled. Please check here how it works. extBGPPeers Specifies external BGP peers with appropriate remote AS setRoles If present, kube-loxilb arbitrates loxilb role(s) in cluster-mode. Further, it sets a special VIP (selected as sourceIP) to communicate with end-points in full-nat mode. setLBMode 0, 1, 2 0 - default (only DNAT, preserves source-IP) 1 - onearm (source IP is changed to load balancer\u2019s interface IP) 2 - fullNAT (sourceIP is changed to virtual IP) setUniqueIP Allocate unique service-IP per LB service (default : false) externalSecondaryCIDRs Secondary CIDR or IPAddress ranges to allocate addresses from in case of multi-homing support Many of the above flags and arguments can be overriden on a per-service basis based on loxilb specific annotation as mentioned below.
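As an illustration only (the addresses and AS numbers below are placeholders taken from the commented defaults above), an external-mode deployment that also advertises service IPs over BGP might combine the flags like this:
args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n - --setBGP=65100\n - --extBGPPeers=50.50.50.1:65101\n - --setLBMode=2  # fullNAT, as per the table above\n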
- kube-loxilb supported annotations:
Annotations Description loxilb.io/multus-nets When using multus, the multus network can also be used as a service endpoint.Register the multus network name to be used.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: multus-service\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/multus-nets: macvlan1,macvlan2spec:\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0app: pod-01\u00a0\u00a0ports:\u00a0\u00a0- port: 55002\u00a0\u00a0\u00a0\u00a0targetPort: 5002\u00a0\u00a0type: LoadBalancer loxilb.io/num-secondary-networks When using the SCTP multi-homing function, you can specify the number of secondary IPs(upto 3) to be assigned to the service. When used with the loxilb.io/secondaryIPs annotation, the value set in loxilb.io/num-secondary-networks is ignored. (loxilb.io/secondaryIPs annotation takes precedence)Example:metadata:\u00a0\u00a0name: sctp-lb1\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/num-secondary-networks: \u201c2\u201dspec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 55002\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/secondaryIPs When using the SCTP multi-homing function, specify the secondary IP to be assigned to the service. Multiple IPs(upto 3) can be specified at the same time using a comma(,). When used with the loxilb.io/num-secondary-networks annotation, loxilb.io/secondaryIPs takes priority.)Example:metadata:name: sctp-lb-secips\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"loxilb.io/secondaryIPs: \"1.1.1.1,2.2.2.2\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb-secips\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0type: LoadBalancer loxilb.io/staticIP Specifies the External IP to assign to the LoadBalancer service. By default, an external IP is assigned within the externalCIDR range set in kube-loxilb, but using the annotation, IPs outside the range can also be statically specified. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"\u00a0\u00a0\u00a0\u00a0loxilb.io/staticIP: \"192.168.255.254\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/liveness Set LoxiLB to perform a health check (probe) based endpoint selection(If flag is set, only active endpoints will be selected). 
The default value is no, and when the value is set to yes, the probe function of the corresponding service is activated.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/lbmode Set LB mode individually for each service. Select one among types of values \u200b\u200bthat can be specified: \u201cdefault\u201d, \u201conearm\u201d, \u201cfullnat\u201d or \"dsr\". Please refer to this document for more details.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb-fullnat\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/lbmode: \"fullnat\"\u00a0\u00a0\u00a0\u00a0spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-fullnat-test\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/ipam Specify which IPAM mode the service will use. Select one of three options: \u201cipv4\u201d, \u201cipv6\u201d, or \u201cipv6to4\u201d. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/ipam : \"ipv4\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/timeout Set the session retention time for the service. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/timeout : \"60\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probetype Specifies the protocol type to use for endpoint probe operations. You can select one of \u201cudp\u201d, \u201ctcp\u201d, \u201chttps\u201d, \u201chttp\u201d, \u201csctp\u201d, \u201cping\u201d, or \u201cnone\u201d. Probetype is set to protocol type, if you are using lbMode as \"fullnat\" or \"onearm\". To set it off, use probetype : \"none\" Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"ping\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probeport Set the port to use for probe operation. 
It is not applied if the loxilb.io/probetype annotation is not used or if it is of type icmp or none.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probereq Specifies API for the probe request. It is not applied if the loxilb.io/probetype annotation is not used or if it is of type icmp or none.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probereq : \"health\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/proberesp Specifies the response to the probe request. It is not applied if the loxilb.io/probetype annotation is not used or if it is of type icmp or none.Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/probetype : \"tcp\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probeport : \"3000\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probereq : \"health\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberesp : \"ok\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/probetimeout Specifies the timeout for starting a probe request (in seconds). The default value is 60 seconds Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/proberetries Specifies the number of probe request retries before considering an endpoint as inoperative. 
The default value is 2 Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberetries : \"3\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/epselect Specifies the algorithm for end-point slection e.g \"rr\", \"hash\", \"persist\", \"lc\" etc. The default value is roundrobin. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/liveness : \"yes\"\u00a0\u00a0\u00a0\u00a0loxilb.io/probetimeout : \"10\"\u00a0\u00a0\u00a0\u00a0loxilb.io/proberetries : \"3\"\u00a0\u00a0\u00a0\u00a0loxilb.io/epselect : \"hash\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer loxilb.io/prefLocalPod Specifies whether to always prefer to select a local pod in in-cluster mode. The default value is false. Example:apiVersion: v1kind: Servicemetadata:\u00a0\u00a0name: sctp-lb\u00a0\u00a0annotations:\u00a0\u00a0\u00a0\u00a0loxilb.io/prefLocalPod : \"yes\"spec:\u00a0\u00a0loadBalancerClass: loxilb.io/loxilb\u00a0\u00a0externalTrafficPolicy: Local\u00a0\u00a0selector:\u00a0\u00a0\u00a0\u00a0what: sctp-lb\u00a0\u00a0ports:\u00a0\u00a0\u00a0- port: 56004\u00a0\u00a0\u00a0\u00a0\u00a0protocol: SCTP\u00a0\u00a0\u00a0\u00a0\u00a0targetPort: 9999\u00a0\u00a0type: LoadBalancer - Apply the yaml after making necessary changes :
kubectl apply -f kube-loxilb.yaml\n
* The above should make sure kube-loxilb is successfully running. Check kube-loxilb is running : k8s@master:~$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-84db5d44d9-pczhz 1/1 Running 0 16h\nkube-system coredns-6799fbcd5-44qpx 1/1 Running 0 16h\nkube-system metrics-server-67c658944b-t4x5d 1/1 Running 0 16h\nkube-system kube-loxilb-5fb5566999-ll4gs 1/1 Running 0 14h\n
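If the pod is not in the Running state, the kube-loxilb logs are the first place to look (assuming the default deployment name used by the manifest):
kubectl logs -n kube-system deployment/kube-loxilb\n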
- Finally, to create a service LB for a workload, we can use and apply the following template yaml
(Note - Check loadBalancerClass and other loxilb-specific annotations) :
apiVersion: v1\n kind: Service\n metadata:\n name: iperf-service\n annotations:\n # If there is a need to do liveness check from loxilb\n loxilb.io/liveness: \"yes\"\n # Specify LB mode - one of default, onearm or fullnat \n loxilb.io/lbmode: \"default\"\n # Specify loxilb IPAM mode - one of ipv4, ipv6 or ipv6to4 \n loxilb.io/ipam: \"ipv4\"\n # Specify number of secondary networks for multi-homing\n # Only valid for SCTP currently\n # loxilb.io/num-secondary-networks: \"2\n # Specify a static externalIP for this service\n # loxilb.io/staticIP: \"123.123.123.2\"\n spec:\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: perf-test\n ports:\n - port: 55001\n targetPort: 5001\n type: LoadBalancer\n ---\n apiVersion: v1\n kind: Pod\n metadata:\n name: iperf1\n labels:\n what: perf-test\n spec:\n containers:\n - name: iperf\n image: eyes852/ubuntu-iperf-test:0.5\n command:\n - iperf\n - \"-s\"\n ports:\n - containerPort: 5001\n
Users can change the above as per their needs.
- Verify LB service is created
k8s@master:~$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 13h\niperf1 LoadBalancer 10.43.8.156 llb-192.168.80.20 55001:5001/TCP 8m20s\n
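At this point, the service can also be tested from any external host that can reach the allocated external IP (the address and port below are taken from the example output above):
iperf -c 192.168.80.20 -p 55001 -t 5\n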
- For more example yaml templates, kindly refer to kube-loxilb's manifest directory
"},{"location":"kube-loxilb/#additional-steps-to-deploy-loxilb-in-cluster-mode","title":"Additional steps to deploy loxilb (in-cluster) mode","text":"To run loxilb in-cluster mode, the URL argument in kube-loxilb.yaml needs to be commented out:
args:\n #- --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n
This enables a self-discovery mode of kube-loxilb where it can find and reach loxilb pods running inside the cluster. Last but not the least we need to create the loxilb pods in cluster :
sudo kubectl apply -f https://github.com/loxilb-io/kube-loxilb/raw/main/manifest/in-cluster/loxilb.yaml\n
Once all the pods are created, the same can be verified as follows (you can see both kube-loxilb and loxilb components running):
k8s@master:~$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system local-path-provisioner-84db5d44d9-pczhz 1/1 Running 0 16h\nkube-system coredns-6799fbcd5-44qpx 1/1 Running 0 16h\nkube-system metrics-server-67c658944b-t4x5d 1/1 Running 0 16h\nkube-system kube-loxilb-5fb5566999-ll4gs 1/1 Running 0 14h\nkube-system loxilb-lb-mklj2 1/1 Running 0 13h\nkube-system loxilb-lb-stp5k 1/1 Running 0 13h\nkube-system loxilb-lb-j8fc6 1/1 Running 0 13h\nkube-system loxilb-lb-5m85p 1/1 Running 0 13h\n
Thereafter, the process of service creation remains the same as explained in previous sections.
"},{"location":"kube-loxilb/#how-to-use-kube-loxilb-crds","title":"How to use kube-loxilb CRDs ?","text":"Kube-loxilb provides Custom Resource Definition (CRD). Current the following operations are supported (which would be continually updated): - Add a BGP Peer - Delete a BGP Peer
Example CRDs are stored in manifest/crds. Setting up a BGP Peer, as an example, works as follows:
- Pre-Processing (Register kube-loxilb CRDs with K8s). Apply lbpeercrd.yaml as the first step:
kubectl apply -f manifest/crds/lbpeercrd.yaml\n
- CRD definition
You need to create a yaml file that adds a BGP peer. The example below creates a peer with IP address 123.123.123.2 and remote AS number 65123. Create a file named bgp-peer.yaml and add the contents below.
apiVersion: \"bgppeer.loxilb.io/v1\"\nkind: BGPPeerService\nmetadata:\n name: bgp-peer-test\nspec:\n ipAddress: 123.123.123.2\n remoteAs: 65123\n remotePort: 179\n
3. Apply CRD to add a new BGP Peer kubectl apply -f bgp-peer.yaml\n
4. Verify the applied CRD You can check it in two ways. The first is through loxicmd (inside the loxilb container), and the second is through kubectl.
# loxicmd\nkubectl exec -it {loxilb} -n kube-system -- loxicmd get bgpneigh \n| PEER | AS | UP/DOWN | STATE | \n|----------------|-------|-------------|-------------|\n| 123.123.123.2 | 65123 | never | ACTIVE |\n\n# kubectl\nkubectl get bgppeerservice\nNAME PEER AS \nbgp-peer-test 123.123.123.2 65123 \n
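Since deleting a peer is also a supported operation, the same peer can later be removed either by deleting the applied yaml or the custom resource directly:
kubectl delete -f bgp-peer.yaml\n# or, equivalently\nkubectl delete bgppeerservice bgp-peer-test\n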
"},{"location":"lb-algo/","title":"loxilb load-balancer algorithms","text":""},{"location":"lb-algo/#load-balancer-algorithms-in-loxilb","title":"Load-balancer algorithms in loxilb","text":"loxilb implements a variety of algortihms to achieve load-balancing and distribute incoming traffic to the server end-points
"},{"location":"lb-algo/#1-round-robin-rr","title":"1. Round-Robin (rr)","text":"This is default algo used by loxilb. In this mode, loxilb selects the end-points configured for a service in simple round-robin fashion for each new incoming connection
"},{"location":"lb-algo/#2-weighted-round-robin-wrr","title":"2. Weighted round-robin (wrr)","text":"In this mode, loxilb selects the end-points as per weight(in terms of percentage of overall traffic connections) associated with the end-points of a service. For example, if we have three end-points, we can have 70%, 10% and 20% distribution.
"},{"location":"lb-algo/#3-persistence-persist","title":"3. Persistence (persist)","text":"In this mode, every client (sourceIP) will always get connected to a particular end-point. In essence there is no real load-balancing involved but it can be useful for applications which require client session-affinity e.g FTP which requires two connections with the end-point.
"},{"location":"lb-algo/#4-flow-hash-hash","title":"4. Flow-hash (hash)","text":"In this mode, loxilb will select the end-point based on 5-tuple hash on incoming traffic. This 5-tuple consists of SourceIP, SourcePort, DestinationIP, DestinationPort and IP protocol number. Please note that in this mode connections from same client can also get mapped to different end-points since SourcePort is usually selected randomly by operating systems resulting in a different hash value.
"},{"location":"lb-algo/#5-least-connections-lc","title":"5. Least-Connections (lc)","text":"In this mode, loxilb will select end-point which has the least active connections (or least loaded) at a given point of time.
"},{"location":"lb/","title":"What is service type external load-balancer in Kubernetes ?","text":"There are many different types of Kubernetes services like NodePort, ClusterIP etc. However, service type external load-balancer provides a way of exposing your application internally and/or externally in the perspective of the k8s cluster. Usually, Kubernetes CCM provider ensures that a load balancer of some sort is created, deleted and updated in your cloud. For on-prem or edge deployments however, organizations need to provide their own CCM load-balancer functions. MetalLB (initially developed at Google) has been the choice for such cases for long.
But edge services need to support so many exotic protocols in play like GTP, SCTP, SRv6 etc and integrating everything into a seamlessly working solution has been quite difficult. This is an area where loxilb aims to play a pivotal role.
The following is a simple config file (shown here in JSON form) which needs to be applied to create a service of type load-balancer :
\"type\": \"LoadBalancer\"\n {\n \"kind\": \"Service\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"name\": \"sample-service\"\n },\n \"spec\": {\n \"ports\": [{\n \"port\": 9001,\n \"targetPort\": 5001\n }],\n \"selector\": {\n \"app\": \"sample\"\n },\n \"type\": \"LoadBalancer\"\n }\n }\n
However, if there is no K8s CCM plugin implementing the external service load-balancer, no load-balancer is ever provisioned for such services and they remain in pending state forever.
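In that case, the EXTERNAL-IP column of the service stays pending, for example (the output below is illustrative):
$ kubectl get svc sample-service\nNAME             TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE\nsample-service   LoadBalancer   10.43.10.11   <pending>     9001:31234/TCP   5m\n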
"},{"location":"loxilb-ingress/","title":"k3s/Run loxilb-ingress","text":""},{"location":"loxilb-ingress/#how-to-run-loxilb-ingress","title":"How to run loxilb-ingress","text":"In Kubernetes, there is usually a lot of overlap between network load-balancer and an Ingress functionality. This creates a lot of confusion. Overall, the differences between an Ingress and a load-balancer service can be categorized as follows:
Feature Ingress Load-balancer Protocol HTTP(s) level - Layer7 Network Layer4 Additional Features Ingress Rules, Resource-Backends, Based on L4 Session Params Path-Types, HostName Yaml Manifest apiVersion: networking.k8s.io/v1 type: LoadBalancer When using Ingress, the clients connect to one of the pods through Ingress. The clients first perform a DNS lookup which returns the IP address of the ingress. This IP address is usually funnelled through an L4 Load-balancer. The client sends an HTTP(s) request to Ingress specifying URL, hostname and other HTTP headers. Based on the HTTP payload, the ingress finds an associated Service and its EndPoint Objects. The Ingress then forwards the client's request to the appropriate pod. It can also serve as an HTTPS termination point or as an mTLS hub.
With Kubernetes ingress, we can expose multiple paths with the same service IP. This might be helpful if one is using a public cloud, where one has to pay for managed LB services. Hence, creating a single service and exposing multiple URL paths might be optimal in such use-cases.
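For example, a single host can fan out different URL paths to different backend services with an Ingress like the following minimal sketch (the service names are hypothetical):
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: multi-path-ingress\nspec:\n  rules:\n  - host: domain1.loxilb.io\n    http:\n      paths:\n      - path: /app1\n        pathType: Prefix\n        backend:\n          service:\n            name: app1-service\n            port:\n              number: 80\n      - path: /app2\n        pathType: Prefix\n        backend:\n          service:\n            name: app2-service\n            port:\n              number: 80\n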
loxilb-ingress is optimized for cases which require long-lived connections and https termination with eBPF.
"},{"location":"loxilb-ingress/#getting-started","title":"Getting Started","text":"The following getting started example is based on K3s as the kubernetes platform but should work on any kubernetes implementation or distribution like EKS, GKE etc but should work well with any. We will use K3s as the kubernetes platform.
"},{"location":"loxilb-ingress/#install-k3s","title":"Install K3s","text":"curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik,servicelb\" K3S_KUBECONFIG_MODE=\"644\" sh -\n
"},{"location":"loxilb-ingress/#install-loxilb-as-a-l4-service-lb","title":"Install loxilb as a L4 service LB","text":"Follow any of the loxilb getting started guides as per requirement. In this example, we will run loxilb-lb in external mode. Check all the pods are up and running as expected :
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-cp5lv 1/1 Running 0 3h26m\nkube-system kube-loxilb-755f6fb85-gbg7f 1/1 Running 0 3h26m\nkube-system local-path-provisioner-6f5d79df6-47n2b 1/1 Running 0 3h26m\nkube-system metrics-server-54fd9b65b-b6c6x 1/1 Running 0 3h26m\n
"},{"location":"loxilb-ingress/#prepare-tlsssl-certificates-for-ingress","title":"Prepare TLS/SSL certificates for Ingress","text":"Self-signed TLS/SSL certificates and private keys can be built using various tools like OpenSSL or Minica. Basically, one will need to have two files - server.crt and server.key for loxilb-ingress usage. Once these files are in place, a Kubernetes secret can be created using the following yaml:
apiVersion: v1\ndata:\n server.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUI3RENDQVhPZ0F3SUJBZ0lJU.....\n server.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JRzJBZ0VBTUJBR0J5cUdTTTQ5Q.....\nkind: Secret\nmetadata:\n creationTimestamp: null\n name: loxilb-ssl\n namespace: kube-system\ntype: Opaque\n
The above values are just dummy values, but it is important to note that they need to be in base64 format, not in pem format. How do we get the base64 values from the server.crt and server.key files ?
$ base64 server.crt\nLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUI3RENDQVhPZ0F3SUJBZ0lJU.....\n$ base64 server.key\nLS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JRzJBZ0VBTUJBR0J5cUdTTTQ5Q.....\n
Now, after applying the yaml, we can check the created secret :
$ kubectl get secret -n kube-system loxilb-ssl\nNAME TYPE DATA AGE\nloxilb-ssl Opaque 2 106m\n
In the subsequent steps, this secret loxilb-ssl
will be used throughout.
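For reference, a self-signed server.crt/server.key pair like the one referenced above can be generated with OpenSSL (the CN used here is just an example):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout server.key -out server.crt -subj \"/CN=domain1.loxilb.io\"\n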
"},{"location":"loxilb-ingress/#install-loxilb-ingress","title":"Install loxilb-ingress","text":"kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb-ingress/main/manifests/loxilb-ingress-deploy.yml\n
Check status of running pods :
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-cp5lv 1/1 Running 0 3h26m\nkube-system kube-loxilb-755f6fb85-gbg7f 1/1 Running 0 3h26m\nkube-system local-path-provisioner-6f5d79df6-47n2b 1/1 Running 0 3h26m\nkube-system loxilb-ingress-hn5ld 1/1 Running 0 61m\nkube-system metrics-server-54fd9b65b-b6c6x 1/1 Running 0 3h26m\n
"},{"location":"loxilb-ingress/#install-service-backend-app-and-ingress-rules","title":"Install service, backend app and ingress rules","text":" - Create a LB service for exposing ingress ports
apiVersion: v1\nkind: Service\nmetadata:\n name: loxilb-ingress-manager\n namespace: kube-system\n annotations:\n loxilb.io/lbmode: \"onearm\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n app.kubernetes.io/instance: loxilb-ingress\n app.kubernetes.io/name: loxilb-ingress\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n - name: https\n port: 443\n protocol: TCP\n targetPort: 443\n type: LoadBalancer\n
Check the services created :
$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3h28m\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h28m\nkube-system loxilb-ingress-manager LoadBalancer 10.43.136.1 llb-192.168.80.9 80:31686/TCP,443:31994/TCP 62m\nkube-system metrics-server ClusterIP 10.43.236.60 <none> 443/TCP 3h28m\n
At this point of time, all services exposed via ingress can be accessed via \"192.168.80.9\". This IP could be different as per use-case and scenario. This IP can then be associated with DNS for name based access.
- Create backend apps for
domain1.loxilb.io
and configure ingress rules with the following yaml : apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: site\nspec:\n replicas: 1\n selector:\n matchLabels:\n name: site-handler\n template:\n metadata:\n labels:\n name: site-handler\n spec:\n containers:\n - name: blog\n image: ghcr.io/loxilb-io/nginx:stable\n imagePullPolicy: Always\n ports:\n - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: site-handler-service\nspec:\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n selector:\n name: site-handler\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: site-loxilb-ingress\nspec:\n #ingressClassName: loxilb\n tls:\n - hosts:\n - domain1.loxilb.io\n secretName: loxilb-ssl\n rules:\n - host: domain1.loxilb.io\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: site-handler-service\n port:\n number: 80\n
Double check status of pods, services and ingress:
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault site-869fd54548-t82bq 1/1 Running 0 64m\nkube-system coredns-6799fbcd5-cp5lv 1/1 Running 0 3h31m\nkube-system kube-loxilb-755f6fb85-gbg7f 1/1 Running 0 3h31m\nkube-system local-path-provisioner-6f5d79df6-47n2b 1/1 Running 0 3h31m\nkube-system loxilb-ingress-hn5ld 1/1 Running 0 66m\nkube-system metrics-server-54fd9b65b-b6c6x 1/1 Running 0 3h31m\n\n$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3h31m\ndefault site-handler-service ClusterIP 10.43.101.77 <none> 80/TCP 64m\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h31m\nkube-system loxilb-ingress-manager LoadBalancer 10.43.136.1 llb-192.168.80.9 80:31686/TCP,443:31994/TCP 65m\nkube-system metrics-server ClusterIP 10.43.236.60 <none> 443/TCP 3h31m\n\n\n$ kubectl get ingress -A\nNAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE\ndefault site-loxilb-ingress <none> domain1.loxilb.io 80, 443 65m\n
We can follow the above example and create backend apps for other hostnames e.g. domain2.loxilb.io
and configure ingress rules.
"},{"location":"loxilb-ingress/#testing-loxilb-ingress","title":"Testing loxilb ingress","text":"If you are testing locally you can simply add the following for dns resolution in your bastion/host :
$ tail -n 2 /etc/hosts\n192.168.80.9 domain1.loxilb.io\n
The above step is similar to adding A records in a DNS like route53. - Finally, try to access the service \"domain1.loxilb.io\" :
$ curl -H \"HOST: domain1.loxilb.io\" https://domain1.loxilb.io\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"loxilb-nginx-ingress/","title":"How-To - Deploy loxilb with ingress-nginx","text":""},{"location":"loxilb-nginx-ingress/#how-to-run-loxilb-with-ingress-nginx","title":"How to run loxilb with ingress-nginx","text":"In Kubernetes, there is usually a lot of overlap between network load-balancer and an Ingress functionality. This creates a lot of confusion. Overall, the differences between an Ingress and a load-balancer service can be categorized as follows:
Feature Ingress Load-balancer Protocol HTTP(s) level - Layer7 Network Layer4 Additional Features Ingress Rules, Resource-Backends Based on L4 Session Params Yaml Manifest apiVersion: networking.k8s.io/v1 type: LoadBalancer With Kubernetes ingress, we can expose multiple paths with the same service IP. This might be helpful if one is using a public cloud, where one has to pay for managed LB services. Hence, creating a single service and exposing multiple URL paths might be optimal in such use-cases.
For this example, we will use ingress-nginx, which is a Kubernetes community-driven ingress. loxilb has its own ingress implementation, which is optimized (with eBPF helpers) for cases which require long-lived connections, https termination etc. However, if someone needs to use any other ingress implementation, they can follow this guide, which uses ingress-nginx as the ingress implementation.
"},{"location":"loxilb-nginx-ingress/#considerations","title":"Considerations","text":"This example is not specific to any particular managed kubernetes implementation like EKS, GKE etc but should work well with any. We will simply use K3s as a based kubernetes platform.
"},{"location":"loxilb-nginx-ingress/#install-k3s","title":"Install K3s","text":"curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik,servicelb\" K3S_KUBECONFIG_MODE=\"644\" sh -\n
"},{"location":"loxilb-nginx-ingress/#install-loxilb","title":"Install loxilb","text":"Follow any of the getting started guides as per requirement. Check all the pods are up and running as expected :
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-4n4kl 1/1 Running 0 56m\nkube-system kube-loxilb-b466c99bb-fpgll 1/1 Running 0 56m\nkube-system local-path-provisioner-6f5d79df6-f52sw 1/1 Running 0 56m\nkube-system loxilb-lb-gbkw7 1/1 Running 0 30s\nkube-system metrics-server-54fd9b65b-dchv2 1/1 Running 0 56m\n
"},{"location":"loxilb-nginx-ingress/#install-ingress-nginx","title":"Install ingress-nginx","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/baremetal/deploy.yaml\n
Double confirm :
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ningress-nginx ingress-nginx-admission-create-9vq66 0/1 Completed 0 113s\ningress-nginx ingress-nginx-admission-patch-k4d74 0/1 Completed 1 113s\ningress-nginx ingress-nginx-controller-845698f4f6-xq6hm 1/1 Running 0 113s\nkube-system coredns-6799fbcd5-4n4kl 1/1 Running 0 59m\nkube-system kube-loxilb-b466c99bb-fpgll 1/1 Running 0 59m\nkube-system local-path-provisioner-6f5d79df6-f52sw 1/1 Running 0 59m\nkube-system loxilb-lb-gbkw7 1/1 Running 0 3m33s\nkube-system metrics-server-54fd9b65b-dchv2 1/1 Running 0 59m\n
"},{"location":"loxilb-nginx-ingress/#install-service-backend-app-and-ingress-rules","title":"Install service, backend app and ingress rules","text":" - Create a LB service for exposing ingress ports
apiVersion: v1\nkind: Service\nmetadata:\n name: ingress-nginx-controller-loadbalancer\n namespace: ingress-nginx\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n app.kubernetes.io/component: controller\n app.kubernetes.io/instance: ingress-nginx\n app.kubernetes.io/name: ingress-nginx\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n - name: https\n port: 443\n protocol: TCP\n targetPort: 443\n type: LoadBalancer\n
Check the services created :
$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 61m\ningress-nginx ingress-nginx-controller NodePort 10.43.114.138 <none> 80:30958/TCP,443:31794/TCP 3m22s\ningress-nginx ingress-nginx-controller-admission ClusterIP 10.43.107.66 <none> 443/TCP 3m22s\ningress-nginx ingress-nginx-controller-loadbalancer LoadBalancer 10.43.27.248 llb-192.168.80.10 80:32218/TCP,443:32617/TCP 9s\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 61m\nkube-system loxilb-lb-service ClusterIP None <none> 11111/TCP,179/TCP,50051/TCP 5m2s\nkube-system metrics-server ClusterIP 10.43.20.55 <none> 443/TCP 61m\n
At this point of time, all services exposed via ingress can be accessed via \"192.168.80.10\". This IP could be different as per use-case and scenario. This IP can then be associated with DNS for name based access.
- Create backend apps for
domain1.loxilb.io
and configure ingress rules with the following yaml apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: site\nspec:\n replicas: 1\n selector:\n matchLabels:\n name: site-nginx-frontend\n template:\n metadata:\n labels:\n name: site-nginx-frontend\n spec:\n containers:\n - name: blog\n image: ghcr.io/loxilb-io/nginx:stable\n imagePullPolicy: Always\n ports:\n - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: site-nginx-service\nspec:\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n selector:\n name: site-nginx-frontend\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: site-nginx-ingress\n annotations:\n #app.kubernetes.io/ingress.class: \"nginx\"\n nginx.ingress.kubernetes.io/ssl-redirect: \"false\"\nspec:\n ingressClassName: nginx\n rules:\n - host: domain1.loxilb.io\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: site-nginx-service\n port:\n number: 80\n
Double check status of pods, services and ingress:
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault site-69d64fcd49-j4qhj 1/1 Running 0 46s\ningress-nginx ingress-nginx-admission-create-9vq66 0/1 Completed 0 8m21s\ningress-nginx ingress-nginx-admission-patch-k4d74 0/1 Completed 1 8m21s\ningress-nginx ingress-nginx-controller-845698f4f6-xq6hm 1/1 Running 0 8m21s\nkube-system coredns-6799fbcd5-4n4kl 1/1 Running 0 66m\nkube-system kube-loxilb-b466c99bb-fpgll 1/1 Running 0 66m\nkube-system local-path-provisioner-6f5d79df6-f52sw 1/1 Running 0 66m\nkube-system loxilb-lb-gbkw7 1/1 Running 0 10m\nkube-system metrics-server-54fd9b65b-dchv2 1/1 Running 0 66m\n\n$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 67m\ndefault site-nginx-service ClusterIP 10.43.16.35 <none> 80/TCP 108s\ningress-nginx ingress-nginx-controller NodePort 10.43.114.138 <none> 80:30958/TCP,443:31794/TCP 9m23s\ningress-nginx ingress-nginx-controller-admission ClusterIP 10.43.107.66 <none> 443/TCP 9m23s\ningress-nginx ingress-nginx-controller-loadbalancer LoadBalancer 10.43.27.248 llb-192.168.80.10 80:32218/TCP,443:32617/TCP 6m10s\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 67m\nkube-system loxilb-lb-service ClusterIP None <none> 11111/TCP,179/TCP,50051/TCP 11m\nkube-system metrics-server ClusterIP 10.43.20.55 <none> 443/TCP 67m\n\n\n$ kubectl get ingress -A\nNAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE\ndefault site-nginx-ingress nginx domain1.loxilb.io 10.0.2.15 80 2m10s\n
- Now, let's create backend apps for
domain2.loxilb.io
and configure ingress rules with the following yaml apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: site2\nspec:\n replicas: 1\n selector:\n matchLabels:\n name: site-nginx-frontend2\n template:\n metadata:\n labels:\n name: site-nginx-frontend2\n spec:\n containers:\n - name: blog\n image: ghcr.io/loxilb-io/nginx:stable\n imagePullPolicy: Always\n ports:\n - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: site-nginx-service2\nspec:\n ports:\n - name: http\n port: 80\n protocol: TCP\n targetPort: 80\n selector:\n name: site-nginx-frontend2\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: site-nginx-ingress2\n annotations:\n #app.kubernetes.io/ingress.class: \"nginx\"\n nginx.ingress.kubernetes.io/ssl-redirect: \"false\"\nspec:\n ingressClassName: nginx\n rules:\n - host: domain2.loxilb.io\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service:\n name: site-nginx-service2\n port:\n number: 80\n
Again, we can check the status of pods, service and ingress:
$ kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ndefault site-69d64fcd49-j4qhj 1/1 Running 0 9m12s\ndefault site2-7fff6cfbbf-8d6rp 1/1 Running 0 2m34s\ningress-nginx ingress-nginx-admission-create-9vq66 0/1 Completed 0 16m\ningress-nginx ingress-nginx-admission-patch-k4d74 0/1 Completed 1 16m\ningress-nginx ingress-nginx-controller-845698f4f6-xq6hm 1/1 Running 0 16m\nkube-system coredns-6799fbcd5-4n4kl 1/1 Running 0 74m\nkube-system kube-loxilb-b466c99bb-fpgll 1/1 Running 0 74m\nkube-system local-path-provisioner-6f5d79df6-f52sw 1/1 Running 0 74m\nkube-system loxilb-lb-gbkw7 1/1 Running 0 18m\nkube-system metrics-server-54fd9b65b-dchv2 1/1 Running 0 74m\n\n$ kubectl get svc -A\nNAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ndefault kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 75m\ndefault site-nginx-service ClusterIP 10.43.16.35 <none> 80/TCP 9m32s\ndefault site-nginx-service2 ClusterIP 10.43.107.99 <none> 80/TCP 2m54s\ningress-nginx ingress-nginx-controller NodePort 10.43.114.138 <none> 80:30958/TCP,443:31794/TCP 17m\ningress-nginx ingress-nginx-controller-admission ClusterIP 10.43.107.66 <none> 443/TCP 17m\ningress-nginx ingress-nginx-controller-loadbalancer LoadBalancer 10.43.27.248 llb-192.168.80.10 80:32218/TCP,443:32617/TCP 13m\nkube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 75m\nkube-system loxilb-lb-service ClusterIP None <none> 11111/TCP,179/TCP,50051/TCP 18m\nkube-system metrics-server ClusterIP 10.43.20.55 <none> 443/TCP 75m\n\n$ kubectl get ingress -A\nNAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE\ndefault site-nginx-ingress nginx domain1.loxilb.io 10.0.2.15 80 9m49s\ndefault site-nginx-ingress2 nginx domain2.loxilb.io 10.0.2.15 80 3m11s\n
"},{"location":"loxilb-nginx-ingress/#test","title":"Test","text":"If you are testing locally you can simply add the following for dns resolution in your bastion/host :
$ tail -n 2 /etc/hosts\n192.168.80.10 domain1.loxilb.io\n192.168.80.10 domain2.loxilb.io\n
The above step is similar to adding A records in a DNS like route53. -
Finally, try to access the service \"domain1.loxilb.io\" :
$ curl -H \"HOST: domain1.loxilb.io\" domain1.loxilb.io\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
-
And then try to access the service domain2.loxilb.io:
$ curl -H \"HOST: domain2.loxilb.io\" domain2.loxilb.io\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
"},{"location":"loxilbebpf/","title":"loxilb eBPF implementation details","text":"In this section, we will look into details of loxilb ebpf implementation in little details and try to check what goes on under the hood. When loxilb is build, it builds two object files as follows :
llb@nd2:~/loxilb$ ls -l /opt/loxilb/\ntotal 396\ndrwxrwxrwt 3 root root 0 6?? 20 11:17 dp\n-rw-rw-r-- 1 llb llb 305536 6?? 29 09:39 llb_ebpf_main.o\n-rw-rw-r-- 1 llb llb 95192 6?? 29 09:39 llb_xdp_main.o\n
As the names suggest, and based on the hook point, the xdp version does XDP packet processing while the ebpf version is used at the TC layer for TC eBPF processing. Interestingly enough, the packet-forwarding code is largely agnostic of its final hook point due to the usage of a light abstraction layer which hides the differences between the eBPF and XDP layers.
Now this begs the question - why separate hook points and how does it all work together ? loxilb does the bulk of its processing at the TC eBPF layer, as this layer is most optimized for doing the L4+ processing needed for loxilb operation. XDP's frame format is different from what is used by the skb (the linux kernel's generic socket buffer). This makes it very difficult (if not impossible) to do tcp checksum offload and other such features that have been used by the linux networking stack for quite some time now. In short, if we need to do such operations, XDP performance will be inherently slow. XDP as such is perfect for quick operations at the l2 layer. loxilb uses XDP to do certain operations like mirroring. Due to how TC eBPF works, it is difficult to work with multiple packet copies, so loxilb's TC eBPF offloads some functionality to the XDP layer in such special cases.
"},{"location":"loxilbebpf/#loading-of-loxilb-ebpf-program","title":"Loading of loxilb eBPF program","text":"loxilb's goLang based agent by default loads the loxilb ebpf programs in all the interfaces(only physical/real/bond/wireguard) available in the system. As loxilb is designed to run in its own docker/container, this is convenient for users who dont want to have to manually load/unload eBPF programs. However, it is still possible to do so manually if need arises :
To load :
ntc filter add dev eth1 ingress bpf da obj /opt/loxilb/llb_ebpf_main.o sec tc_packet_hook0\n
To unload:
ntc filter del dev eth1 ingress\n
To check:
root@nd2:/home/llb# ntc filter show dev eth1 ingress\nfilter protocol all pref 49152 bpf chain 0 \nfilter protocol all pref 49152 bpf chain 0 handle 0x1 llb_ebpf_main.o:[tc_packet_hook0] direct-action not_in_hw id 8715 tag 43a829222e969bce jited \n
Please note that ntc is the customized tc tool from the iproute2 package which can be found in loxilb's repository.
"},{"location":"loxilbebpf/#entry-points-of-loxilb-ebpf","title":"Entry points of loxilb eBPF","text":"loxilb's eBPF code is usually divided into two program sections with the following entry functions :
- tc_packet_func
This, along with the subsequent code, does the majority of the packet processing. If conntrack entries are in the established state, this is also responsible for packet tx. However, if the conntrack entry for a particular packet flow is not established, it makes a bpf tail call to tc_packet_func_slow.
- tc_packet_func_slow
This is mainly responsible for doing the NAT lookup and the stateful conntrack implementation. Once the conntrack entry transitions to the established state, forwarding can then happen directly from tc_packet_func.
loxilb's XDP code is contained in the following section :
- xdp_packet_func
This is the entry point for packet processing when hook point is XDP instead of TC eBPF
"},{"location":"loxilbebpf/#pinned-maps-of-loxilb-ebpf","title":"Pinned Maps of loxilb eBPF","text":"All maps used by loxilb eBPF are mounted in the file-system as below :
root@nd2:/home/llb/loxilb# ls -lart /opt/loxilb/dp/\ntotal 4\ndrwxrwxrwt 3 root root 0 6?? 20 11:17 .\ndrwxr-xr-x 3 root root 4096 6?? 29 10:19 ..\ndrwx------ 3 root root 0 6?? 29 10:19 bpf\nroot@nd2:/home/llb/loxilb# mount | grep bpf\nnone on /opt/netlox/loxilb type bpf (rw,relatime)\n\nroot@nd2:/home/llb/loxilb# ls -lart /opt/loxilb/dp/bpf/\ntotal 0\ndrwxrwxrwt 3 root root 0 6?? 20 11:17 ..\nlrwxrwxrwx 1 root root 0 6?? 20 11:17 xdp -> /opt/loxilb/dp/bpf//tc/\ndrwx------ 3 root root 0 6?? 20 11:17 tc\nlrwxrwxrwx 1 root root 0 6?? 20 11:17 ip -> /opt/loxilb/dp/bpf//tc/\n-rw------- 1 root root 0 6?? 29 10:19 xfis\n-rw------- 1 root root 0 6?? 29 10:19 xfck\n-rw------- 1 root root 0 6?? 29 10:19 xctk\n-rw------- 1 root root 0 6?? 29 10:19 tx_intf_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 tx_intf_map\n-rw------- 1 root root 0 6?? 29 10:19 tx_bd_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 tmac_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 tmac_map\n-rw------- 1 root root 0 6?? 29 10:19 smac_map\n-rw------- 1 root root 0 6?? 29 10:19 rt_v6_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 rt_v4_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 rt_v4_map\n-rw------- 1 root root 0 6?? 29 10:19 polx_map\n-rw------- 1 root root 0 6?? 29 10:19 pkts\n-rw------- 1 root root 0 6?? 29 10:19 pkt_ring\n-rw------- 1 root root 0 6?? 29 10:19 pgm_tbl\n-rw------- 1 root root 0 6?? 29 10:19 nh_map\n-rw------- 1 root root 0 6?? 29 10:19 nat_v4_map\n-rw------- 1 root root 0 6?? 29 10:19 mirr_map\n-rw------- 1 root root 0 6?? 29 10:19 intf_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 intf_map\n-rw------- 1 root root 0 6?? 29 10:19 fc_v4_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 fc_v4_map\n-rw------- 1 root root 0 6?? 29 10:19 fcas\n-rw------- 1 root root 0 6?? 29 10:19 dmac_map\n-rw------- 1 root root 0 6?? 29 10:19 ct_v4_map\n-rw------- 1 root root 0 6?? 29 10:19 bd_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 acl_v6_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 acl_v4_stats_map\n-rw------- 1 root root 0 6?? 29 10:19 acl_v4_map\n
Using bpftool, it is easy to check state of these maps as follows :
root@nd2:/home/llb# bpftool map dump pinned /opt/loxilb/dp/bpf/intf_map \n[{\n \"key\": {\n \"ifindex\": 2,\n \"ing_vid\": 0,\n \"pad\": 0\n },\n \"value\": {\n \"ca\": {\n \"act_type\": 11,\n \"ftrap\": 0,\n \"oif\": 0,\n \"cidx\": 0\n },\n \"\": {\n \"set_ifi\": {\n \"xdp_ifidx\": 1,\n \"zone\": 0,\n \"bd\": 3801,\n \"mirr\": 0,\n \"polid\": 0,\n \"r\": [0,0,0,0,0,0\n ]\n }\n }\n }\n },{\n \"key\": {\n \"ifindex\": 3,\n \"ing_vid\": 0,\n \"pad\": 0\n },\n \"value\": {\n \"ca\": {\n \"act_type\": 11,\n \"ftrap\": 0,\n \"oif\": 0,\n \"cidx\": 0\n },\n \"\": {\n \"set_ifi\": {\n \"xdp_ifidx\": 3,\n \"zone\": 0,\n \"bd\": 3803,\n \"mirr\": 0,\n \"polid\": 0,\n \"r\": [0,0,0,0,0,0\n ]\n }\n }\n }\n }\n]\n
As development progresses, we will keep updating details about these maps' internals.
"},{"location":"loxilbebpf/#loxilb-ebpf-pipeline-at-a-glance","title":"loxilb eBPF pipeline at a glance","text":"The following figure shows a very high-level diagram of packet flow through loxilb eBPF pipeline :
We use eBPF tail calls to jump from one section to another, mainly because there is a clear separation between CT (conntrack) functionality and packet-forwarding logic. At the same time, since the kernel's built-in eBPF verifier imposes a maximum code-size limit on a single program/section, tail calls also help circumvent that limit.
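The jump targets for these tail calls live in a pinned BPF program-array map (presumably the pgm_tbl entry seen in the pinned-map listing above), so it can be inspected with bpftool like any other pinned map (a hedged sketch, assuming the default pin path shown earlier):
sudo bpftool map dump pinned /opt/loxilb/dp/bpf/pgm_tbl\n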
"},{"location":"microk8s_quick_start_incluster/","title":"MicroK8s/loxilb in-cluster mode","text":""},{"location":"microk8s_quick_start_incluster/#quick-start-guide-with-microk8s-and-loxilb-in-cluster-mode","title":"Quick Start Guide with MicroK8s and LoxiLB in-cluster mode","text":"This document will explain how to install a MicroK8s cluster with loxilb as a serviceLB provider running in-cluster mode.
"},{"location":"microk8s_quick_start_incluster/#prerequisites","title":"Prerequisite(s)","text":" - Single node with Linux
"},{"location":"microk8s_quick_start_incluster/#topology","title":"Topology","text":"For quickly bringing up loxilb in-cluster and MicroK8s, we will be deploying all components in a single node :
loxilb and kube-loxilb components run as pods managed by kubernetes(MicroK8s) in this scenario.
"},{"location":"microk8s_quick_start_incluster/#setup-microk8s-in-a-single-node","title":"Setup MicroK8s in a single-node","text":"# MicroK8s installation steps\nsudo apt-get update\nsudo apt install -y snapd\nsudo snap install microk8s --classic --channel=1.28/stable\n
"},{"location":"microk8s_quick_start_incluster/#check-microk8s-status","title":"Check MicroK8s status","text":"$ sudo microk8s status --wait-ready\nmicrok8s is running\nhigh-availability: no\n datastore master nodes: 127.0.0.1:19001\n datastore standby nodes: none\naddons:\n enabled:\n dns # (core) CoreDNS\n ha-cluster # (core) Configure high availability on the current node\n helm # (core) Helm - the package manager for Kubernetes\n helm3 # (core) Helm 3 - the package manager for Kubernetes\n disabled:\n cert-manager # (core) Cloud native certificate management\n cis-hardening # (core) Apply CIS K8s hardening\n community # (core) The community addons repository\n dashboard # (core) The Kubernetes dashboard\n gpu # (core) Automatic enablement of Nvidia CUDA\n host-access # (core) Allow Pods connecting to Host services smoothly\n hostpath-storage # (core) Storage class; allocates storage from host directory\n ingress # (core) Ingress controller for external access\n kube-ovn # (core) An advanced network fabric for Kubernetes\n mayastor # (core) OpenEBS MayaStor\n metrics-server # (core) K8s Metrics Server for API access to service metrics\n minio # (core) MinIO object storage\n observability # (core) A lightweight observability stack for logs, traces and metrics\n prometheus # (core) Prometheus operator for monitoring and logging\n rbac # (core) Role-Based Access Control for authorisation\n registry # (core) Private image registry exposed on localhost:32000\n rook-ceph # (core) Distributed Ceph storage using Rook\n storage # (core) Alias to hostpath-storage add-on, deprecated\n
"},{"location":"microk8s_quick_start_incluster/#how-to-deploy-loxilb","title":"How to deploy loxilb ?","text":"loxilb can be deloyed by using the following command in the MicroK8s node
sudo microk8s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/microk8s-incluster/loxilb.yml\n
"},{"location":"microk8s_quick_start_incluster/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb.
wget https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/microk8s-incluster/kube-loxilb.yml\n
kube-loxilb.yaml args:\n #- --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setRoles=0.0.0.0\n #- --monitor\n #- --setBGP\n
In the above snippet, loxiURL is commented out, which means in-cluster mode is used to discover loxilb pods automatically. externalCIDR represents the IP pool from which the serviceLB VIP will be allocated. Apply after making changes (if any) :
sudo microk8s kubectl apply -f kube-loxilb.yaml\n
"},{"location":"microk8s_quick_start_incluster/#create-the-service","title":"Create the service","text":"sudo microk8s kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/microk8s-incluster/tcp-svc-lb.yml\n
"},{"location":"microk8s_quick_start_incluster/#check-status-of-various-components-in-microk8s-node","title":"Check status of various components in MicroK8s node","text":"In MicroK8s node:
## Check the pods created\n$ sudo microk8s kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system calico-node-fjfvz 1/1 Running 0 10m\nkube-system coredns-864597b5fd-xtmt4 1/1 Running 0 10m\nkube-system calico-kube-controllers-77bd7c5b-4kldr 1/1 Running 0 10m\nkube-system loxilb-lb-7xctp 1/1 Running 0 9m11s\nkube-system kube-loxilb-6f44cdcdf5-4864j 1/1 Running 0 7m40s\ndefault tcp-onearm-test 1/1 Running 0 6m49s\n\n## Check the services created\n$ sudo microk8s kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.152.183.1 <none> 443/TCP 18m\ntcp-lb-onearm LoadBalancer 10.152.183.216 llb-192.168.82.100 56002:32186/TCP 14m\n
In loxilb pod, we can check internal LB rules: $ sudo microk8s kubectl exec -it -n kube-system loxilb-lb-7xctp -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 32186 | 1 | active | 25:1842 |\n
"},{"location":"microk8s_quick_start_incluster/#connect-from-hostclient","title":"Connect from host/client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
For more detailed information on in-cluster deployment of loxilb with BGP in a full-blown cluster, kindly follow this blog. All of the above steps are also available as part of the loxilb CICD workflow. Follow the steps below to replicate the above (please note that you will need the vagrant tool installed to run them):
$ git clone https://github.com/loxilb-io/loxilb.git\n$ cd cicd/microk8s-incluster/\n\n# To setup the single node microk8s setup and loxilb in-cluster\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# To login to the node and check the installation\n$ vagrant ssh k8s-node1\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"multi-cloud-ha/","title":"How-To - Deploy loxilb with multi-cloud HA support","text":""},{"location":"multi-cloud-ha/#deploy-loxilb-with-multi-cloud-ha-support","title":"Deploy LoxiLB with multi-cloud HA support","text":"LoxiLB supports stateful HA configuration in various cloud environments such as AWS. Especially for AWS, one can configure HA using the Floating IP pattern, together with LoxiLB.
"},{"location":"multi-cloud-ha/#overall-scenario","title":"Overall Scenario","text":"Overall scenario will look like this:
The setup configuration for Multi-Cloud/Multi-Region will be similar to the Multi-AZ HA configuration.
"},{"location":"multi-cloud-ha/#important-considerations","title":"Important considerations","text":" - The steps mentioned in the above documentation are for a single AWS region. For cross-region, similar configuration needs to be done in other AWS regions.
- Two LoxiLB instances - loxilb1 and loxilb2 will be deployed in different AZs per region. These two loxilbs form a HA pair and operate in active-backup roles.
- One instance of kube-loxilb will be deployed per region.
- Every region\u2019s private CIDR will be different, and one region\u2019s privateCIDR should be reachable from the others through VPC peering.
- As elastic IP is bound to a particular region, it is impossible to provide connection synchronization for cross-region HA. Only warm stand-by cross-region HA is supported.
- Full support for elastic IP in GCP is not available yet. For testing HA with GCP, run a single loxilb and kube-loxilb with the standard configuration. There will not be any privateCIDR in kube-loxilb.yaml; mention the loxilb IP as externalCIDR.
To summarize, when a failover occurs within a region, the public ElasticIP address is always associated with the active LoxiLB instance, so users who were previously accessing EKS using the same ElasticIP address can continue to do so without being affected by any node failure or other issues. When a region-wise failover occurs, DNS will redirect the requests to the other region.
"},{"location":"multi-cloud-ha/#an-example-configuration","title":"An example configuration","text":"Please follow the steps to create cluster and prepare VM instances mentioned here.
"},{"location":"multi-cloud-ha/#configuring-loxilb-ec2-instances","title":"Configuring LoxiLB EC2 Instances","text":""},{"location":"multi-cloud-ha/#kube-loxilb-deployment","title":"kube-loxilb deployment","text":"kube-loxilb is a K8s operator for LoxiLB. Download the manifest file required for your deployment in EKS. Create the ServiceAccount and other necessary settings for the cluster before start deploying kube-loxilb per cluster.
---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: kube-loxilb\n namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nrules:\n - apiGroups:\n - \"\"\n resources:\n - nodes\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - pods\n verbs:\n - get\n - watch\n - list\n - patch\n - apiGroups:\n - \"\"\n resources:\n - endpoints\n - services\n - services/status\n verbs:\n - get\n - watch\n - list\n - patch\n - update\n - apiGroups:\n - gateway.networking.k8s.io\n resources:\n - gatewayclasses\n - gatewayclasses/status\n - gateways\n - gateways/status\n - tcproutes\n - udproutes\n verbs: [\"get\", \"watch\", \"list\", \"patch\", \"update\"]\n - apiGroups:\n - discovery.k8s.io\n resources:\n - endpointslices\n verbs:\n - get\n - watch\n - list\n - apiGroups:\n - authentication.k8s.io\n resources:\n - tokenreviews\n verbs:\n - create\n - apiGroups:\n - authorization.k8s.io\n resources:\n - subjectaccessreviews\n verbs:\n - create\n - apiGroups:\n - bgppeer.loxilb.io\n resources:\n - bgppeerservices\n verbs:\n - get\n - watch\n - list\n - create\n - update\n - delete\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: kube-loxilb\nroleRef:\n apiGroup: rbac.authorization.k8s.io\n kind: ClusterRole\n name: kube-loxilb\nsubjects:\n - kind: ServiceAccount\n name: kube-loxilb\n namespace: kube-system\n
"},{"location":"multi-cloud-ha/#change-the-args-inside-the-yaml-belowas-applicable-and-install-it-for-every-region","title":"Change the args inside the yaml below(as applicable) and install it for every region.","text":"kube-loxilb-osaka-deployment.yaml
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb-osaka\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:aws-support\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --loxiURL=http://192.168.218.60:11111,192.168.218.61:11111\n - --externalCIDR=13.208.X.X/32\n - --privateCIDR=192.168.248.254/32\n - --setLBMode=2\n - --zone=osaka\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\nadd: [\"NET_ADMIN\", \"NET_RAW\"]\n
kube-loxilb-seoul-deployment.yaml
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: kube-loxilb-seoul\n namespace: kube-system\n labels:\n app: kube-loxilb-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: kube-loxilb-app\n template:\n metadata:\n labels:\n app: kube-loxilb-app\n spec:\n hostNetwork: true\n dnsPolicy: ClusterFirstWithHostNet\n tolerations:\n # Mark the pod as a critical add-on for rescheduling.\n - key: CriticalAddonsOnly\n operator: Exists\n priorityClassName: system-node-critical\n serviceAccountName: kube-loxilb\n terminationGracePeriodSeconds: 0\n containers:\n - name: kube-loxilb\n image: ghcr.io/loxilb-io/kube-loxilb:aws-support\n imagePullPolicy: Always\n command:\n - /bin/kube-loxilb\n args:\n - --loxiURL=http://192.168.119.11:11111,192.168.119.12:11111\n - --externalCIDR=14.112.X.X/32\n - --privateCIDR=192.168.150.254/32\n - --setLBMode=2\n - --zone=seoul\n resources:\n requests:\n cpu: \"100m\"\n memory: \"50Mi\"\n limits:\n cpu: \"100m\"\n memory: \"50Mi\"\n securityContext:\n privileged: true\n capabilities:\nadd: [\"NET_ADMIN\", \"NET_RAW\"]\n
For every region, edit kube-loxilb-region.yaml : * Modify loxiURL with the IPs of the LoxiLB EC2 instances created in the region above. * For externalCIDR, specify the Elastic IP created above. * PrivateCIDR specifies the VIP that will be associated with the Elastic IP."},{"location":"multi-cloud-ha/#run-loxilb-pods","title":"Run LoxiLB Pods","text":""},{"location":"multi-cloud-ha/#install-docker-on-loxilb-instances","title":"Install docker on LoxiLB instance(s)","text":"LoxiLB is deployed as a container on each instance. To use containers, docker must first be installed on the instance. The Docker installation guide can be found here
"},{"location":"multi-cloud-ha/#running-loxilb-container","title":"Running LoxiLB container","text":"The following command is for a LoxiLB instance (loxilb1) using subnet-a.
sudo docker run -u root --cap-add SYS_ADMIN \\\n --restart unless-stopped \\\n --net=host \\\n --privileged \\\n -dit \\\n -v /dev/log:/dev/log -e AWS_REGION=ap-northeast-3 \\\n --name loxilb \\\n ghcr.io/loxilb-io/loxilb:aws-support \\\n --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.218.61 --self=0\n
- In the cloudcidrblock option, specify the IP range that includes the VIP set in kube-loxilb's privateCIDR. The master LoxiLB uses the value set here to create a new subnet in the AZ where it is located and uses it for HA operation.
- The cluster option specifies the IP of the partner instance (LoxiLB instance using subnet-b) for which HA is configured.
- The self option is set to 0. It is just an identifier used internally to distinguish each instance.
Similarly, we can run the loxilb2 instance on the second EC2 instance using subnet-b:
sudo docker run -u root --cap-add SYS_ADMIN \\\n --restart unless-stopped \\\n --net=host \\\n --privileged \\\n -dit \\\n -v /dev/log:/dev/log -e AWS_REGION=ap-northeast-3 \\\n --name loxilb \\\n ghcr.io/loxilb-io/loxilb:aws-support \\\n --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.218.60 --self=1\n
Once the containers are running, the HA status of each instance can be checked as follows:
ubuntu@ip-192-168-218-60:~$ sudo docker exec -ti loxilb bash\nroot@ip-192-168-228-108:/# loxicmd get ha\n| INSTANCE | HASTATE |\n|----------|---------|\n| default | MASTER |\nroot@ip-192-168-218-60:/#\n
"},{"location":"multi-cloud-ha/#creating-a-service","title":"Creating a service","text":"Let's create a test service to test HA functionality. Below are the manifest files for the nginx pod and service that we will use for testing.
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
After creating an nginx service with the above, we can see that the ElasticIP has been designated as the externalIP of the service. LEIS6N3:~/workspace/aws-demo$ kubectl apply -f nginx.yaml\nservice/nginx-lb1 created\npod/nginx-test created\nLEIS6N3:~/workspace/aws-demo$ kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.100.0.1 <none> 443/TCP 22h\nnginx-lb1 LoadBalancer 10.100.178.3 llb-13.208.X.X 55002:32403/TCP 15s\n
"},{"location":"nat/","title":"NAT Modes of loxilb","text":""},{"location":"nat/#nat-modes-in-loxilb","title":"NAT modes in loxilb","text":"loxilb implements a variety of NAT modes to achieve load-balancing for different scenarios as far as L4 load-balancing is concerned. These NAT modes have subtle differences and this guide will shed light on these details
"},{"location":"nat/#1-normal-nat","title":"1. Normal NAT","text":"This is basic NAT mode used by loxilb. In this mode, loxilb employs simple DNAT for incoming requests i.e destination IP (which is also the service IP) is changed to the chosen end-point IP. For the outgoing responses it does the exactly opposite(SNAT). Since loxilb relies on statefulness for this mode, it is necessary that return packets also traverse through loxilb. The following figure illustrates this operation -
In this mode, the original source IP is preserved all the way to the end-point, which provides the best visibility for anyone needing it. This also means the end-points should know how to reach the source.
"},{"location":"nat/#2-one-arm","title":"2. One-ARM","text":"Traditionally one-arm NAT mode meant that the LB node used to have a single arm (or connection) to the LAN instead of separate ingress and egress networks. loxilb's one-arm NAT mode is a little extended version of the traditional one-arm mode. In one-arm mode, loxilb chooses its LAN IP as source-IP when sending incoming requests towards end-points nodes. Even if the originating source is not on the same LAN, this is loxilb's default behaviour for one arm mode.
"},{"location":"nat/#3-full-nat","title":"3. Full-NAT","text":"In the full-NAT mode, loxilb replaces the source-IP of an incoming request to a special instance IP. This instance IP is associated with each instance in a cluster deployment and maintained internally by loxilb. In this mode, various instances of loxilb cluster will have unique instance IPs and each of them will be advertised by BGP towards the end-point to set the return PATH accordingly. This helps in optimal distribution and spread of traffic in case an active-active clustering mode is desired.
"},{"location":"nat/#4-l2-dsr-mode","title":"4. L2-DSR mode","text":"In L2-DSR (direct server return) mode, loxilb performs load-balancing operation but without changing any IP addresses. It just updates the layer2 header as per selected end-point. Also in DSR mode, loxilb does not need statefulness and end-point can choose a different return path not involving loxilb. This maybe advantageous for certain scenarios where there is a need to reduce load in LB nodes by allowing return traffic to bypass the LB.
"},{"location":"nat/#5-l3-dsr-mode","title":"5. L3-DSR mode","text":"In L3-DSR (direct server return) mode, loxilb performs load-balancing operation but encapsulates the original payload with an IPinIP tunnel towards the end-points. Also like L2-DSR mode, loxilb does not need statefulness and end-point can choose a different/direct return path not involving loxilb.
"},{"location":"perf-multi/","title":"Perf multi","text":""},{"location":"perf-multi/#bare-metal-performance","title":"Bare-Metal Performance","text":"The topology for this test is as follows :
In this test, all the hosts, end-points and the load-balancer run on separate dedicated servers/nodes. Server specs used - Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, 40-core, 125GB RAM, Kernel 5.15.0-52-generic. The following command can be used to configure loxilb for the given topology:
# loxicmd create lb 20.20.20.1 --tcp=2020:5001 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
The default mode of LoxiLB is RR (round-robin), while other popular distribution modes such as consistent hash (Maglev), WRR etc. are also supported. We run the popular netperf tool for the above topology. A quick explanation of the terminologies used :
RPS - requests per second. Given a fixed number of connections, this denotes how many requests/messages per second can be supported. CPS - connections per second. This denotes how many new TCP connection setups/teardowns can be supported per second and is hence one of the most important indicators of load-balancer performance. CRR - connect/request/response. This is the same as CPS, but the netperf tool uses this term to refer to CPS as part of its test scenario. RR - request/response. This is another netperf test option. We used it to measure min and avg latency.
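Assuming a netserver instance is reachable from the client, the CRR and RR figures can be reproduced with plain netperf invocations such as the ones below (a hedged sketch; routing netperf's control and data connections through the LB VIP may need extra port options, and the output selectors can vary across netperf versions):
# Connections per second (CPS/CRR)\nnetperf -H 20.20.20.1 -t TCP_CRR -l 30\n# Request/response latency (min/mean)\nnetperf -H 20.20.20.1 -t TCP_RR -l 30 -- -o MIN_LATENCY,MEAN_LATENCY\n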
We are comparing loxilb with ipvs and haproxy.
The results are as follows :
"},{"location":"perf-multi/#connections-per-second-tcp_crr","title":"Connections per second (TCP_CRR)","text":""},{"location":"perf-multi/#requests-per-second-tcp_rr","title":"Requests per second (TCP_RR)","text":""},{"location":"perf-multi/#minimum-latency","title":"Minimum Latency","text":""},{"location":"perf-multi/#average-latency","title":"Average Latency","text":""},{"location":"perf-multi/#conclusionnotes-","title":"Conclusion/Notes -","text":" - loxilb provides enhanced performance across the spectrum of tests. There is a noticeable gain in CPS.
- loxilb's CPS scales linearly with number of cores
- haproxy version used - 2.0.29
- netperf test scripts can be found here
"},{"location":"perf-single/","title":"Perf single","text":""},{"location":"perf-single/#single-node-performance","title":"Single-node performance","text":"The hosts/LB/end-points are run as docker pods inside a single server/node. The topology is as follows :
The following command can be used to configure lb for the given topology:
# loxicmd create lb 20.20.20.1 --tcp=2020:5001 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
The testing is done with full stateful connection tracking enabled (non-DSR mode). To create the above topology for testing loxilb, users can follow this guide. A Go webserver with an empty response is used for benchmarking purposes. The code is as follows : package main\n\nimport (\n \"log\"\n \"net/http\"\n)\n\nfunc main() {\n http.HandleFunc(\"/\", func(w http.ResponseWriter, r *http.Request) {\n\n })\n if err := http.ListenAndServe(\":5001\", nil); err != nil {\n log.Fatal(\"ListenAndServe: \", err)\n }\n}\n
The above code runs in each of the load-balancer end-points as follows : go run ./webserver.go\n
wrk based HTTP benchmarking is one of the tools used in this test. This tool is run with the following parameters:
root@loxilb:/home/loxilb # wrk -t8 -c400 -d30s http://20.20.20.1:2020/\n
- where t: No. of threads, c: No. of connections, d: Duration of test. We also run other popular performance testing tools like netperf and iperf along with wrk for the above topology. A quick explanation of the terminologies used :
RPS - requests per second. Given a fixed number of connections, this denotes how many requests/messages per second can be supported. CPS - connections per second. This denotes how many new TCP connection setups/teardowns can be supported per second and is hence one of the most important indicators of load-balancer performance. CRR - connect/request/response. This is the same as CPS, but the netperf tool uses this term to refer to CPS as part of its test scenario. RR - request/response. This is another netperf test option. We used it to measure min and avg latency.
The results are as follows :
"},{"location":"perf-single/#case-1-system-configuration-intelr-coretm-i7-4770hq-cpu-220ghz-3-core-6gb-ram-kernel-5150-52-generic","title":"Case 1. System Configuration - Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz , 3-Core, 6GB RAM, Kernel 5.15.0-52-generic","text":"Tool loopback loxilb ipvs wrk(RPS) 38040 44833 40012 wrk(CPS) n/a 7020 6048 netperf(CRR) n/a 11674 9901 netperf(RR min) 12.31 us 15.2us 19.75us netperf(RR avg) 61.27 us 78.1us 131us iperf 43.5Gbps 41.2Gbps 34.4Gbps"},{"location":"perf-single/#case-2-system-configuration-intelr-xeonr-silver-4210r-cpu-240ghz-40-core-124gb-ram-kernel-5150-52-generic","title":"Case 2. System Configuration - Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, 40-core, 124GB RAM, Kernel 5.15.0-52-generic","text":"Tool loopback loxilb ipvs haproxy wrk(RPS) 406953 421746 388021 217004 wrk(CPS) n/a 45064 24400 22000 netperf(CRR) n/a 375k 174k 21k netperf(RR min) n/a 12 us 15us 27us netperf(RR avg) n/a 15.78 us 18.25us 35.76us iperf 456Gbps 402Gbps 374Gbps 91Gbps"},{"location":"perf-single/#conclusionnotes-","title":"Conclusion/Notes -","text":" - loxilb provides enhanced performance across the spectrum of tests. There is a noticeable gain in CPS
- loxilb's CPS is limited only by the fact that this is a single node scenario with shared resources
- \"loopback\" here refers to client and server running in the same host/pod. This is supposed to be the best case scenario but since there is only a single end-point for lo compared to 3 for LB testing , hence the RPS measurements are on the lower side.
- iperf is run with 100 threads ( iperf X.X.X.X -P 100 )
- haproxy version used - 2.0.29
- netperf test scripts can be found here
"},{"location":"perf-single/#watch-the-video","title":"Watch the video","text":"https://github.com/loxilb-io/loxilbdocs/assets/106566094/6cf85c4e-7cb4-4d23-b5f6-a7854e07cd7b
"},{"location":"perf/","title":"loxilb Performance","text":" - Single-node (cnf) performance report
- Bare-metal performance report
"},{"location":"quick_start_with_cilium/","title":"K3s/loxilb with cilium","text":""},{"location":"quick_start_with_cilium/#loxilb-quick-start-guide-with-cilium","title":"LoxiLB Quick Start Guide with Cilium","text":"This guide will explain how to:
- Deploy a single-node K3s cluster with cilium networking
- Expose services with loxilb as an external load balancer
"},{"location":"quick_start_with_cilium/#pre-requisite","title":"Pre-requisite","text":" - Single node with Linux
- Install docker runtime to manage loxilb
"},{"location":"quick_start_with_cilium/#topology","title":"Topology","text":"For quickly bringing up loxilb with cilium CNI, we will be deploying all components in a single node :
loxilb and cilium both use eBPF technology for load balancing and implementing policies. So, to avoid conflict, we have to run them in separate network spaces. This is the reason we are going to run loxilb in a docker container and use macvlan for the incoming traffic. Also, this mimics a topology close to cloud-hosted k8s where LB nodes run outside the cluster.
"},{"location":"quick_start_with_cilium/#install-loxilb-docker","title":"Install loxilb docker","text":"## Set promisc mode for mac-vlan to work\nsudo ifconfig eth1 promisc\n\nsudo docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged --entrypoint /root/loxilb-io/loxilb/loxilb -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# Create mac-vlan on top of underlying eth1 interface\nsudo docker network create -d macvlan -o parent=eth1 --subnet 192.168.82.0/24 --gateway 192.168.82.1 --aux-address 'host=192.168.82.252' llbnet\n\n# Assign mac-vlan to loxilb docker with specified IP (which will be used as LB VIP)\nsudo docker network connect llbnet loxilb --ip=192.168.82.100\n\n# Add iptables rule to allow traffic from source IP(192.168.82.1) to loxilb\nsudo iptables -A DOCKER -s 192.168.82.1 -j ACCEPT\n
"},{"location":"quick_start_with_cilium/#setup-k3s-with-cilium","title":"Setup K3s with cilium","text":"#K3s installation\ncurl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik --disable servicelb --disable-cloud-controller \\\n--flannel-backend=none \\\n--disable-network-policy\" sh -\n\n#Install Cilium\nCILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)\nCLI_ARCH=amd64\nif [ \"$(uname -m)\" = \"aarch64\" ]; then CLI_ARCH=arm64; fi\ncurl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}\nsha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum\nsudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin\nrm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}\nmkdir -p ~/.kube/\nsudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config\ncilium install\n\necho $MASTER_IP > /vagrant/master-ip\nsudo cp /var/lib/rancher/k3s/server/node-token /vagrant/node-token\nsudo cp /etc/rancher/k3s/k3s.yaml /vagrant/k3s.yaml\nsudo sed -i -e \"s/127.0.0.1/${MASTER_IP}/g\" /vagrant/k3s.yaml\n
"},{"location":"quick_start_with_cilium/#how-to-deploy-kube-loxilb","title":"How to deploy kube-loxilb ?","text":"kube-loxilb is used to deploy loxilb with Kubernetes.
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml\n
kube-loxilb.yaml
args:\n - --loxiURL=http://172.17.0.2:11111\n - --externalCIDR=192.168.82.100/32\n - --setMode=1\n
In the above snippet, loxiURL uses the docker interface IP of loxilb, which can be different for each setup. Apply in k8s:
kubectl apply -f kube-loxilb.yaml\n
"},{"location":"quick_start_with_cilium/#create-the-service","title":"Create the service","text":"kubectl apply -f https://raw.githubusercontent.com/loxilb-io/loxilb/main/cicd/docker-k3s-cilium/tcp-svc-lb.yml\n
"},{"location":"quick_start_with_cilium/#check-the-status","title":"Check the status","text":"In k3s:
kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 80m\ntcp-lb-onearm LoadBalancer 10.43.183.123 llb-192.168.82.100 56002:30001/TCP 6m50s\n
In loxilb docker: $ sudo docker exec -it loxilb loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-----------------------|------|-----|--------|-----------|-------|--------|--------|----------|\n| 192.168.82.100 | | 56002 | tcp | default_tcp-lb-onearm | 0 | rr | onearm | 10.0.2.15 | 30001 | 1 | active | 12:880 |\n
"},{"location":"quick_start_with_cilium/#connect-from-client","title":"Connect from client","text":"$ curl http://192.168.82.100:56002\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href=\"http://nginx.org/\">nginx.org</a>.<br/>\nCommercial support is available at\n<a href=\"http://nginx.com/\">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n
All of the above steps are also available as part of loxilb CICD workflow. Follow the steps below to replicate the above:
$ cd cicd/docker-k3s-cilium/\n\n# To setup the single node k3s setup with cilium as CNI and loxilb as external load balancer\n$ ./config.sh\n\n# To validate the results\n$ ./validation.sh\n\n# Cleanup\n$ ./rmconfig.sh\n
"},{"location":"requirements/","title":"System Requirements","text":""},{"location":"requirements/#loxilb-system-requirements","title":"LoxiLB system requirements","text":"To run loxilb, we need to have the following -
"},{"location":"requirements/#host-os-requirements","title":"Host OS requirements","text":"To install LoxiLB software packages, you need the 64-bit version of one of these OS versions:
- Ubuntu 20.04(LTS)
- Ubuntu 22.04(LTS)
- Fedora 36
- RockyOS
- Enterprise Redhat (Planned)
- Windows Server(Planned)
"},{"location":"requirements/#kernel-requirements","title":"Kernel Requirements","text":" - Linux Kernel Version >= 5.15.x && < 6.5.x
- Windows (Planned)
"},{"location":"requirements/#compatible-kubernetes-versions","title":"Compatible Kubernetes Versions","text":" - Kubernetes 1.19 ~ 1.29 (k0s, k3s, k8s, eks, openshift, kind etc)
"},{"location":"requirements/#hardware-requirements","title":"Hardware Requirements","text":" - None as long as above criteria are met (2vcpu/2GB should be enough for starters)
"},{"location":"roadmap/","title":"Release Notes (For major release milestones)","text":""},{"location":"roadmap/#070-beta-aug-2022","title":"0.7.0 beta (Aug, 2022)","text":"Initial release of loxilb
-
Functional Features:
- Two-Arm Load-Balancer (NAT+Routed mode)
- Upto 16 end-points support
- Load-balancer selection policy
- Round-robin, traffic-hash (fallback to RR if hash fails)
- Conntrack support in eBPF - TCP/UDP/ICMP/SCTP profiles
- GTP with QFI extension support
- ULCL classifier support
- Native QoS-Policer support (SRTCM/TRTCM)
- GoBGP Integration
- Extended visibility and statistics
-
LB Spec Support:
- IP allocation policy
- Kubernetes 1.20 base support
- Support for Calico Networking
-
Utilities:
- loxicmd support : Configuration utlity with the look and feel of kubectl
"},{"location":"roadmap/#080-dec-2022","title":"0.8.0 (Dec, 2022)","text":" -
Functional Features:
- Enhanced load-balancer support including SCTP statefulness, WRR distribution
- Integrated Firewall support
- Integrated end-point health-checks
- One-ARM, FullNAT, DSR LB mode support
- NAT66/NAT64 support
- Clustering support
- Integration with Linux egress TC hooks
-
LB Spec:
- Stand-alone mode to support LB Spec kube-loxilb
- Load-balancer class support
- Advanced IPAM for ipv4/ipv6 with shared/exclusive mode
- Kubernetes 1.25 Integration
-
Utilities:
- loxicmd support : Data-Store support, more commands
"},{"location":"roadmap/#090-nov-2023","title":"0.9.0 (Nov, 2023)","text":" -
Functional Features:
- Hardened NAT Support - CGNAT'ish
- L3 DSR mode Support
- Https end-point liveness probes
- Maglev clustering
- SCTP multihoming support
- Integration with Linux native QoS
- Support for Cilium, Weave Networking
- Grafana based dashboard
- IPSEC Support (with VTI)
- Initial support for in-cluster mode
-
kube-loxilb/LB Spec Support:
- OpenShift Integration
- Support for per-service liveness-checks, IPAM type, multi-homing annotations
- Kubernetes 1.26 (k0s, k3s, k8s )
- Operator support
- AWS EKS support
"},{"location":"roadmap/#093-may-2024","title":"0.9.3 (May, 2024)","text":" -
Functional Features:
- Kube-proxy replacement support
- IPVS compatibility mode
- Master-plane HA support
- BFD and GARP support for Hitless HA
- Enhancements for Multus support
- SCTP multi-homing end-to-end support
- Cloud Availability zone(s) support
- Redhat9 and Ubuntu24 support
- Support for upto Linux Kernel 6.8
- Full Support for Oracle OCI
- SockAddr eBPF for LocalVIP access
- Container size enhancements
- HA enhancements for multiple cloud-providers and various scenarios (active-active, active-standby, clustered etc)
- CICD infra enhancements
- Robust secret management for HTTPS apis
- Performance enhancements with CT scaling
- Enhanced exception handling
- GoLang Profiling Support
- Full support for in-cluster mode
- Better support for virtio environments
- Enhanced RSS distribution mode via XDP (especially for SCTP workloads)
- Loadbalancer algorithms - LeastConnections and SessionAffinity added
-
kube-loxilb Support:
- Kubernetes 1.29
- BGP (auto) Mesh support
- CRD for BGP peers
- Kubernetes GWAPI support
-
Utilities:
- N4 pfcp test-tool added
- Seagull test tool integrated
- Massive updates to documentation
"},{"location":"roadmap/#095-jul-2024","title":"0.9.5 (Jul, 2024)","text":" -
Functional Features:
- L7 (Transparent) proxy
- HTTPS termination
- Native eBPF implementation for Policy based IP Masquerade/SNAT
- Kubernetes vCluster support
- E2E SCTP multi-homing support with Multus
- Multi-AZ/Region hitless HA support for AWS/EKS
- Service communication proxy support for Telco deployments
-
Kubernetes Support:
- Kubernetes 1.30
- CRD for BGP policies
"},{"location":"roadmap/#096-aug-2024","title":"0.9.6 (Aug, 2024)","text":" -
Functional Features:
- Support for any host onearm LB rule
- HTTP 2.0 parser
- NGAP protocol parser
- ECMP Load-balancing support
- Non-privileged Container support
- AWS Local-Zone support
- Multi-Cloud HA support (AWS+GCP)
- Updated CICD workflows
-
Kubernetes Support:
- Ingress Manager support
- Enhanced GW API support
"},{"location":"roadmap/#097-oct-2024-planned","title":"0.9.7 (Oct, 2024) Planned","text":" -
Functional Features:
- SRv6 implementation
- Rolling upgrades
- URL Filtering
- Wireguard support (ingress + egress)
- SIP protocol support
- Sockmap support for SCTP
- Support for proxy protocol v2
- SYNProxy support
- IPSec service mesh for Telco workloads (ingress + egress)
-
Kubernetes Support:
- Kubernetes 1.31
- Multi-cluster support
- Support for Cilium and LoxiLB in-cluster support
- Kubernetes network policy support
"},{"location":"run/","title":"loxilb - How to build/run","text":""},{"location":"run/#1-build-from-code-and-run-difficult","title":"1. Build from code and run (difficult)","text":" - Install GoLang > v1.17
wget https://go.dev/dl/go1.22.0.linux-amd64.tar.gz && sudo tar -xzf go1.22.0.linux-amd64.tar.gz --directory /usr/local/\nexport PATH=\"${PATH}:/usr/local/go/bin\"\n
- Install standard packages
sudo apt install -y clang llvm libelf-dev gcc-multilib libpcap-dev vim net-tools linux-tools-$(uname -r) elfutils dwarves git libbsd-dev bridge-utils wget unzip build-essential bison flex iproute2 curl\n
- Install loxilb eBPF loader tools
curl -sfL https://github.com/loxilb-io/tools/raw/main/loader/install.sh | sh -\n
- Build and run loxilb
git clone --recurse-submodules https://github.com/loxilb-io/loxilb.git\ncd loxilb\n./loxilb-ebpf/utils/mkllb_bpffs.sh\nmake\nsudo ./loxilb \n
- Build and use loxicmd
git clone https://github.com/loxilb-io/loxicmd.git\ncd loxicmd\ngo get .\nmake\nsudo cp -f loxicmd /usr/local/sbin/\n
loxicmd usage guide can be found here"},{"location":"run/#2-build-and-run-using-docker-easy","title":"2. Build and run using docker (easy)","text":"Build the docker image
git clone --recurse-submodules https://github.com/loxilb-io/loxilb.git\ncd loxilb\nmake docker\n
This would create the docker image ghcr.io/loxilb-io/loxilb:latest
locally. One can then run loxilb in standalone mode by following guide here
"},{"location":"run/#3-running-in-kubernetes","title":"3. Running in Kubernetes","text":" - For running in K8s environment, kindly follow kube-loxilb guide
"},{"location":"service-proxy-calico/","title":"K3s/loxilb service-proxy with calico","text":""},{"location":"service-proxy-calico/#quick-start-guide-k3s-loxilb-service-proxy-and-calico","title":"Quick Start Guide - K3s, LoxiLB \"service-proxy\" and Calico","text":"This document will explain how to install a K3s cluster with loxilb in \"service-proxy\" mode alongside calico networking.
"},{"location":"service-proxy-calico/#what-is-service-proxy-mode","title":"What is service-proxy mode?","text":"service-proxy mode is where kubernetes kube-proxy services are entirely replaced by loxilb for better performance. Users can continue to use their existing networking providers while enjoying streamlined performance and superior feature-set provided by loxilb.
Looking at the left side of the image, you will notice the traffic flow of the packet as it enters the Kubernetes cluster. Kube-proxy, the de-facto networking agent in the Kubernetes which runs on each node of the cluster which monitors the services and translates them to either iptables or IPVS tangible rules. If we talk about the functionality or a cluster with low volume traffic then kube-proxy is fine but when it comes to scalability or a high volume traffic then it acts as a bottle-neck. loxilb \"service-proxy\" mode works with Flannel/Calico and kube-proxy in IPVS mode only as of now. It inherits the IPVS rules and imports these in it's in-kernel eBPF implementation. Traffic will reach at the interface, will be processed by eBPF and sent directly to the pod or to the other node, bypassing all the layers of Linux networking. This way, all the services, be it External, NodePort or ClusterIP, can be managed through LoxiLB and provide optimal performance for the users. The added benefit for the user's is the fact that there is no need to rip and replace their current networking provider (e.g flannel or calico). Kindly note that Kubernetes network policies can't be supported in this miode currently.
"},{"location":"service-proxy-calico/#topology","title":"Topology","text":"For quickly bringing up loxilb \"service-proxy\" in K3s with Calico, we will be deploying a single node k3s cluster (v1.29.3+k3s1) : \u00a0
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"service-proxy-calico/#setup-k3s","title":"Setup K3s","text":""},{"location":"service-proxy-calico/#configure-k3s-node","title":"Configure K3s node","text":"$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik \\\n --disable servicelb --disable-cloud-controller --kube-proxy-arg proxy-mode=ipvs \\\n cloud-provider=external --flannel-backend=none --disable-network-policy --cluster-cidr=10.42.0.0/16 \\\n --node-ip=${MASTER_IP} --node-external-ip=${MASTER_IP} \\\n --bind-address=${MASTER_IP}\" sh -\n
"},{"location":"service-proxy-calico/#deploy-calico","title":"Deploy calico","text":"K3s uses by default flannel for networking but here we are using calico to provide the same:
sudo kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml\nsudo kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml\n
"},{"location":"service-proxy-calico/#deploy-kube-loxilb-and-loxilb","title":"Deploy kube-loxilb and loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb. We need to deploy both kube-loxilb and loxilb components in your kubernetes cluster
sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/service-proxy/kube-loxilb.yml\nsudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/service-proxy/loxilb-service-proxy.yml\n
"},{"location":"service-proxy-calico/#check-the-status","title":"Check the status","text":"In k3s node:
## Check the pods created\n$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\ntigera-operator tigera-operator-689d868448-wwvts 1/1 Running 0 2d23h\ncalico-system calico-typha-67d4484996-2cmzs 1/1 Running 0 2d23h\ncalico-system calico-node-l8r8b 1/1 Running 0 2d23h\nkube-system local-path-provisioner-6c86858495-mrtzv 1/1 Running 0 2d23h\ncalico-system csi-node-driver-ssbnf 2/2 Running 0 2d23h\ncalico-apiserver calico-apiserver-7dccc79b59-txnl5 1/1 Running 0 2d10h\ncalico-apiserver calico-apiserver-7dccc79b59-vk68t 1/1 Running 0 2d10h\ncalico-system calico-node-glm64 1/1 Running 0 2d23h\ncalico-system calico-node-hs7pw 1/1 Running 0 2d23h\ncalico-system csi-node-driver-xqjcd 2/2 Running 0 2d23h\ncalico-system calico-typha-67d4484996-wctwv 1/1 Running 0 2d23h\nkube-system kube-loxilb-5fb5566999-4vvls 1/1 Running 0 38h\ncalico-system csi-node-driver-hz87c 2/2 Running 0 2d23h\nkube-system coredns-6799fbcd5-mhgwg 1/1 Running 0 2d8h\ncalico-system calico-kube-controllers-f5c6cdbdc-vztls 1/1 Running 0 32h\ncalico-system calico-node-mjjs5 1/1 Running 0 2d23h\ncalico-system csi-node-driver-l5r75 2/2 Running 0 2d23h\ndefault iperf1 1/1 Running 0 32h\nkube-system metrics-server-54fd9b65b-78mwr 1/1 Running 0 2d23h\nkube-system loxilb-lb-px6th 1/1 Running 0 20h\n
In loxilb pod, we can check internal LB rules: $ sudo kubectl exec -it -n kube-system loxilb-lb-px6th -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-------------------------------|------|-----|---------|-----------------|-------|--------|--------|----------|\n| 10.0.2.15 | | 32598 | tcp | ipvs_10.0.2.15:32598-tcp | 0 | rr | fullnat | 192.168.235.161 | 5001 | 1 | active | 0:0 |\n| 10.43.0.10 | | 53 | tcp | ipvs_10.43.0.10:53-tcp | 0 | rr | default | 192.168.182.39 | 53 | 1 | - | 0:0 |\n| 10.43.0.10 | | 53 | udp | ipvs_10.43.0.10:53-udp | 0 | rr | default | 192.168.182.39 | 53 | 1 | - | 6:1149 |\n| 10.43.0.10 | | 9153 | tcp | ipvs_10.43.0.10:9153-tcp | 0 | rr | default | 192.168.182.39 | 9153 | 1 | - | 0:0 |\n| 10.43.0.1 | | 443 | tcp | ipvs_10.43.0.1:443-tcp | 0 | rr | default | 192.168.80.10 | 6443 | 1 | - | 0:0 |\n| 10.43.182.250 | | 443 | tcp | ipvs_10.43.182.250:443-tcp | 0 | rr | default | 192.168.182.14 | 5443 | 1 | - | 0:0 |\n| | | | | | | | | 192.168.189.75 | 5443 | 1 | - | 0:0 |\n| 10.43.184.155 | | 55001 | tcp | ipvs_10.43.184.155:55001-tcp | 0 | rr | default | 192.168.235.161 | 5001 | 1 | - | 0:0 |\n| 10.43.78.171 | | 5473 | tcp | ipvs_10.43.78.171:5473-tcp | 0 | rr | default | 192.168.80.10 | 5473 | 1 | - | 0:0 |\n| | | | | | | | | 192.168.80.102 | 5473 | 1 | - | 0:0 |\n| 10.43.89.40 | | 443 | tcp | ipvs_10.43.89.40:443-tcp | 0 | rr | default | 192.168.219.68 | 10250 | 1 | - | 0:0 |\n| 192.168.219.64 | | 32598 | tcp | ipvs_192.168.219.64:32598-tcp | 0 | rr | fullnat | 192.168.235.161 | 5001 | 1 | active | 0:0 |\n| 192.168.80.10 | | 32598 | tcp | ipvs_192.168.80.10:32598-tcp | 0 | rr | fullnat | 192.168.235.161 | 5001 | 1 | active | 0:0 |\n| 192.168.80.20 | | 32598 | tcp | ipvs_192.168.80.20:32598-tcp | 0 | rr | fullnat | 192.168.235.161 | 5001 | 1 | active | 0:0 |\n| 192.168.80.20 | | 55001 | tcp | default_iperf-service | 0 | rr | onearm | 192.168.80.101 | 32598 | 1 | - | 0:0 |\n
"},{"location":"service-proxy-calico/#deploy-a-sample-service","title":"Deploy a sample service","text":"To deploy a sample service, we can create service as usual in Kubernetes with few extra annotations as follows :
sudo kubectl apply -f - <<EOF\napiVersion: v1\nkind: Service\nmetadata:\n name: iperf-service\n annotations:\n loxilb.io/lbmode: \"onearm\" \n loxilb.io/staticIP: \"192.168.80.20\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: perf-test\n ports:\n - port: 55001\n targetPort: 5001\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: iperf1\n labels:\n what: perf-test\nspec:\n containers:\n - name: iperf\n image: ghcr.io/nicolaka/netshoot:latest\n command:\n - iperf\n - \"-s\"\n ports:\n - containerPort: 5001\nEOF\n
Check the service created :
$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3d1h\niperf-service LoadBalancer 10.43.131.107 llb-192.168.80.20 55001:31181/TCP 9m14s\n
Test the service created (from a host outside the cluster) :
## Using service VIP\n$ iperf -c 192.168.80.20 -p 55001 -i1 -t3\n------------------------------------------------------------\nClient connecting to 192.168.80.20, TCP port 55001\nTCP window size: 85.0 KByte (default)\n------------------------------------------------------------\n[ 1] local 192.168.80.80 port 58936 connected with 192.168.80.20 port 55001\n[ ID] Interval Transfer Bandwidth\n[ 1] 0.0000-1.0000 sec 282 MBytes 2.36 Gbits/sec\n[ 1] 1.0000-2.0000 sec 276 MBytes 2.31 Gbits/sec\n[ 1] 2.0000-3.0000 sec 279 MBytes 2.34 Gbits/sec\n\n## Using node-port\n$ iperf -c 192.168.80.100 -p 31181 -i1 -t10\n------------------------------------------------------------\nClient connecting to 192.168.80.100, TCP port 31181\nTCP window size: 85.0 KByte (default)\n------------------------------------------------------------\n[ 1] local 192.168.80.80 port 43208 connected with 192.168.80.100 port 31181\n[ ID] Interval Transfer Bandwidth\n[ 1] 0.0000-1.0000 sec 612 MBytes 5.14 Gbits/sec\n[ 1] 1.0000-2.0000 sec 598 MBytes 5.02 Gbits/sec\n[ 1] 2.0000-3.0000 sec 617 MBytes 5.17 Gbits/sec\n[ 1] 3.0000-4.0000 sec 600 MBytes 5.04 Gbits/sec\n[ 1] 4.0000-5.0000 sec 630 MBytes 5.28 Gbits/sec\n[ 1] 5.0000-6.0000 sec 699 MBytes 5.86 Gbits/sec\n[ 1] 6.0000-7.0000 sec 682 MBytes 5.72 Gbits/sec\n
For more detailed performance comparison with other solutions, kindly follow this blog and for more detailed information on incluster deployment of loxilb with bgp in a full-blown cluster, kindly follow this blog.\u00a0 \u00a0
"},{"location":"service-proxy-flannel/","title":"K3s/loxilb service-proxy with flannel","text":""},{"location":"service-proxy-flannel/#quick-start-guide-k3s-with-loxilb-service-proxy","title":"Quick Start Guide - K3s with LoxiLB \"service-proxy\"","text":"This document will explain how to install a K3s cluster with loxilb in \"service-proxy\" mode alongside flannel networking (default for k3s).
"},{"location":"service-proxy-flannel/#what-is-service-proxy-mode","title":"What is service-proxy mode?","text":"service-proxy mode is where kubernetes kube-proxy services are entirely replaced by loxilb for better performance. Users can continue to use their existing networking providers while enjoying streamlined performance and superior feature-set provided by loxilb.
Looking at the left side of the image, you will notice the traffic flow of the packet as it enters the Kubernetes cluster. Kube-proxy, the de-facto networking agent in the Kubernetes which runs on each node of the cluster which monitors the services and translates them to either iptables or IPVS tangible rules. If we talk about the functionality or a cluster with low volume traffic then kube-proxy is fine but when it comes to scalability or a high volume traffic then it acts as a bottle-neck. loxilb \"service-proxy\" mode works with Flannel/Calico and kube-proxy in IPVS mode only as of now. It inherits the IPVS rules and imports these in it's in-kernel eBPF implementation. Traffic will reach at the interface, will be processed by eBPF and sent directly to the pod or to the other node, bypassing all the layers of Linux networking. This way, all the services, be it External, NodePort or ClusterIP, can be managed through LoxiLB and provide optimal performance for the users. The added benefit for the user's is the fact that there is no need to rip and replace their current networking provider (e.g flannel or calico).
"},{"location":"service-proxy-flannel/#topology","title":"Topology","text":"For quickly bringing up loxilb \"service-proxy\" in K3s, we will be deploying a single node k3s cluster (v1.29.3+k3s1) : \u00a0
loxilb and kube-loxilb components run as pods managed by kubernetes in this scenario.
"},{"location":"service-proxy-flannel/#setup-k3s","title":"Setup K3s","text":""},{"location":"service-proxy-flannel/#configure-k3s-node","title":"Configure K3s node","text":"$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"--disable traefik \\\n --disable servicelb --disable-cloud-controller --kube-proxy-arg proxy-mode=ipvs \\\n cloud-provider=external --node-ip=${MASTER_IP} --node-external-ip=${MASTER_IP} \\\n --bind-address=${MASTER_IP}\" sh -\n
"},{"location":"service-proxy-flannel/#deploy-kube-loxilb-and-loxilb","title":"Deploy kube-loxilb and loxilb ?","text":"kube-loxilb is used as an operator to manage loxilb. We need to deploy both kube-loxilb and loxilb components in your kubernetes cluster
sudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/service-proxy/kube-loxilb.yml\nsudo kubectl apply -f https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/service-proxy/loxilb-service-proxy.yml\n
"},{"location":"service-proxy-flannel/#check-the-status","title":"Check the status","text":"In k3s node:
## Check the pods created\n$ sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-c68ws 1/1 Running 0 15m\nkube-system local-path-provisioner-6c86858495-rxk2w 1/1 Running 0 15m\nkube-system metrics-server-54fd9b65b-xtgk2 1/1 Running 0 15m\nkube-system loxilb-lb-5p6pg 1/1 Running 0 6m58s\nkube-system kube-loxilb-5fb5566999-7xdkk 1/1 Running 0 6m59s\n
In loxilb pod, we can check internal LB rules: $ udo kubectl exec -it -n kube-system loxilb-lb-5p6pg -- loxicmd get lb -o wide\n| EXT IP | SEC IPS | PORT | PROTO | NAME | MARK | SEL | MODE | ENDPOINT | EPORT | WEIGHT | STATE | COUNTERS |\n|----------------|---------|-------|-------|-------------------------------|------|-----|---------|----------------|-------|--------|--------|----------|\n| 10.0.2.15 | | 31377 | tcp | ipvs_10.0.2.15:31377-tcp | 0 | rr | fullnat | 10.42.1.2 | 5001 | 1 | active | 0:0 |\n| 10.42.1.0 | | 31377 | tcp | ipvs_10.42.1.0:31377-tcp | 0 | rr | fullnat | 10.42.1.2 | 5001 | 1 | active | 0:0 |\n| 10.42.1.1 | | 31377 | tcp | ipvs_10.42.1.1:31377-tcp | 0 | rr | fullnat | 10.42.1.2 | 5001 | 1 | active | 0:0 |\n| 10.43.0.10 | | 53 | tcp | ipvs_10.43.0.10:53-tcp | 0 | rr | default | 10.42.0.3 | 53 | 1 | - | 0:0 |\n| 10.43.0.10 | | 53 | udp | ipvs_10.43.0.10:53-udp | 0 | rr | default | 10.42.0.3 | 53 | 1 | - | 0:0 |\n| 10.43.0.10 | | 9153 | tcp | ipvs_10.43.0.10:9153-tcp | 0 | rr | default | 10.42.0.3 | 9153 | 1 | - | 0:0 |\n| 10.43.0.1 | | 443 | tcp | ipvs_10.43.0.1:443-tcp | 0 | rr | default | 192.168.80.10 | 6443 | 1 | - | 0:0 |\n| 10.43.202.90 | | 55001 | tcp | ipvs_10.43.202.90:55001-tcp | 0 | rr | default | 10.42.1.2 | 5001 | 1 | - | 0:0 |\n| 10.43.30.93 | | 443 | tcp | ipvs_10.43.30.93:443-tcp | 0 | rr | default | 10.42.0.4 | 10250 | 1 | - | 0:0 |\n| 192.168.80.101 | | 31377 | tcp | ipvs_192.168.80.101:31377-tcp | 0 | rr | fullnat | 10.42.1.2 | 5001 | 1 | active | 15:1014 |\n| 192.168.80.20 | | 55001 | tcp | default_iperf-service | 0 | rr | onearm | 192.168.80.101 | 31377 | 1 | - | 0:0 |\n
"},{"location":"service-proxy-flannel/#deploy-a-sample-service","title":"Deploy a sample service","text":"To deploy a sample service, we can create service as usual in Kubernetes with few extra annotations as follows :
sudo kubectl apply -f - <<EOF\napiVersion: v1\nkind: Service\nmetadata:\n name: iperf-service\n annotations:\n loxilb.io/lbmode: \"onearm\" \n loxilb.io/staticIP: \"192.168.80.20\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: perf-test\n ports:\n - port: 55001\n targetPort: 5001\n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: iperf1\n labels:\n what: perf-test\nspec:\n containers:\n - name: iperf\n image: ghcr.io/nicolaka/netshoot:latest\n command:\n - iperf\n - \"-s\"\n ports:\n - containerPort: 5001\nEOF\n
Check the service created :
$ sudo kubectl get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nkubernetes ClusterIP 10.43.0.1 <none> 443/TCP 17m\niperf-service LoadBalancer 10.43.202.90 llb-192.168.80.20 55001:31377/TCP 2m34s\n
Test the service created (from a host outside the cluster) :
## Using service VIP\n$ iperf -c 192.168.80.20 -p 55001 -i1 -t3\n------------------------------------------------------------\nClient connecting to 192.168.80.20, TCP port 55001\nTCP window size: 85.0 KByte (default)\n------------------------------------------------------------\n[ 1] local 192.168.80.80 port 55686 connected with 192.168.80.20 port 55001\n[ ID] Interval Transfer Bandwidth\n[ 1] 0.0000-1.0000 sec 311 MBytes 2.61 Gbits/sec\n[ 1] 1.0000-2.0000 sec 309 MBytes 2.59 Gbits/sec\n[ 1] 2.0000-3.0000 sec 305 MBytes 2.56 Gbits/sec\n[ 1] 0.0000-3.0109 sec 926 MBytes 2.58 Gbits/sec\n\n## Using node-port\n$ iperf -c 192.168.80.101 -p 31377 -i1 -t10\n------------------------------------------------------------\nClient connecting to 192.168.80.101, TCP port 31377\nTCP window size: 85.0 KByte (default)\n------------------------------------------------------------\n[ 1] local 192.168.80.80 port 34066 connected with 192.168.80.101 port 31377\n[ ID] Interval Transfer Bandwidth\n[ 1] 0.0000-1.0000 sec 792 MBytes 6.64 Gbits/sec\n[ 1] 1.0000-2.0000 sec 727 MBytes 6.10 Gbits/sec\n[ 1] 2.0000-3.0000 sec 784 MBytes 6.57 Gbits/sec\n[ 1] 3.0000-4.0000 sec 814 MBytes 6.83 Gbits/sec\n[ 1] 4.0000-5.0000 sec 1.01 GBytes 8.64 Gbits/sec\n[ 1] 5.0000-6.0000 sec 1.02 GBytes 8.79 Gbits/sec\n[ 1] 6.0000-7.0000 sec 1.03 GBytes 8.84 Gbits/sec\n[ 1] 7.0000-8.0000 sec 814 MBytes 6.83 Gbits/sec\n[ 1] 8.0000-9.0000 sec 965 MBytes 8.09 Gbits/sec\n[ 1] 9.0000-10.0000 sec 946 MBytes 7.93 Gbits/sec\n[ 1] 0.0000-10.0170 sec 8.76 GBytes 7.51 Gbits/sec\n
If you are wondering why there is a performance difference between serviceLB and node-port, there is an interesting blog about it here by one of our users. For a more detailed performance comparison with other solutions, kindly follow this blog, and for more detailed information on in-cluster deployment of loxilb with bgp in a full-blown cluster, kindly follow this blog."},{"location":"service-zones/","title":"How-To - service-group zones","text":""},{"location":"service-zones/#service-group-zoning-in-loxilb","title":"Service-Group zoning in loxilb","text":"kube-loxilb is used to deploy loxilb with Kubernetes. By default, a kube-loxilb instance does not differentiate the services in any way and uses a set of loxilb instances to set up rules related to these services. But there are potential scenarios where grouping of services is necessary. It might be beneficial for increasing capacity, uptime and security of the cluster services.
"},{"location":"service-zones/#overall-topology","title":"Overall topology","text":"For implementing service-groups with zones, the overall topology including all components should be similar to the following :
The overall concept is to run multiple sets of kube-loxilb, one for each separate zone. Each set of kube-loxilb communicates with a particular set of designated loxilb instances dedicated to that zone. Finally, when the services are created, we need to specify which zone we want to place them in using a special loxilb annotation.
"},{"location":"service-zones/#how-to-deploy-kube-loxilb-for-zones","title":"How to deploy kube-loxilb for zones ?","text":" - The manifest files for deploying kube-loxilb for zones need to mention the zone they cater to. For example:
kube-loxilb-south.yml
args:\n - --loxiURL=http://12.12.12.1:11111\n - --externalCIDR=123.123.123.1/24\n - --zone=south\n
kube-loxilb-north.yml
args:\n - --loxiURL=http://12.12.12.2:11111\n - --externalCIDR=124.124.124.1/24\n - --zone=north\n
- Complete kube-loxilb manifests for zones can be found here, which can be further modified as per user need.
- After deployment, you can find multiple sets of kube-loxilb running as follows :
# sudo kubectl get pods -A\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system coredns-6799fbcd5-6w52r 1/1 Running 0 11h\nkube-system local-path-provisioner-6c86858495-gkqgc 1/1 Running 0 11h\nkube-system metrics-server-67c658944b-vgjqd 1/1 Running 0 11h\ndefault udp-test 1/1 Running 0 11h\nkube-system kube-loxilb-south-596fb8957b-7xg2k 1/1 Running 0 11h\nkube-system kube-loxilb-north-5887f5d848-f86jv 1/1 Running 0 10h\n
"},{"location":"service-zones/#how-to-deploy-services-for-zones","title":"How to deploy services for zones ?","text":" - The manifest files for services need to have annotation related to zone they will be served by. For example, we need to specify \"loxilb.io/zoneselect\" annotation :
apiVersion: v1\nkind: Service\nmetadata:\n name: nginx-lb1\n annotations:\n loxilb.io/lbmode: \"fullnat\"\n loxilb.io/probetimeout: \"10\"\n loxilb.io/proberetries: \"2\"\n loxilb.io/zoneselect: \"north\"\nspec:\n externalTrafficPolicy: Local\n loadBalancerClass: loxilb.io/loxilb\n selector:\n what: nginx-test\n ports:\n - port: 55002\n targetPort: 80 \n type: LoadBalancer\n---\napiVersion: v1\nkind: Pod\nmetadata:\n name: nginx-test\n labels:\n what: nginx-test\nspec:\n containers:\n - name: nginx-test\n image: nginx:stable\n ports:\n - containerPort: 80\n
- Example service manifests for zones can be found here, which can be further modified as per user need.
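As a quick sanity check (the command below assumes the north-zone loxilb runs as a docker instance named loxilb on its designated host), the rule for this service should show up only on the loxilb instance(s) serving the \"north\" zone :
docker exec -it loxilb loxicmd get lb -o wide\n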
"},{"location":"simple_topo/","title":"Creating a simple test topology for loxilb","text":"To test loxilb in a single node cloud-native environment, it is possible to quickly create a test topology. We will explain the steps required to create a very simple topology (more complex topologies can be built using this example) :
Prerequisites :
- Docker should be preinstalled
- Pull and run loxilb docker
# docker pull ghcr.io/loxilb-io/loxilb:latest\n# docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n
The next step is to run the following script to create and configure the above topology :
#!/bin/bash\n\ndocker=$1\nHADD=\"sudo ip netns add \"\nLBHCMD=\"sudo ip netns exec loxilb \"\nHCMD=\"sudo ip netns exec \"\n\nid=`docker ps -f name=loxilb | cut -d \" \" -f 1 | grep -iv \"CONTAINER\"`\necho $id\npid=`docker inspect -f '{{.State.Pid}}' $id`\nif [ ! -f /var/run/netns/loxilb ]; then\n sudo touch /var/run/netns/loxilb\n sudo mount -o bind /proc/$pid/ns/net /var/run/netns/loxilb\nfi\n\n$HADD ep1\n$HADD ep2\n$HADD ep3\n$HADD h1\n\n## Configure load-balancer end-point ep1\nsudo ip -n loxilb link add ellb1ep1 type veth peer name eep1llb1 netns ep1\nsudo ip -n loxilb link set ellb1ep1 mtu 9000 up\nsudo ip -n ep1 link set eep1llb1 mtu 7000 up\n$LBHCMD ip addr add 31.31.31.254/24 dev ellb1ep1\n$HCMD ep1 ifconfig eep1llb1 31.31.31.1/24 up\n$HCMD ep1 ip route add default via 31.31.31.254\n$HCMD ep1 ifconfig lo up\n\n## Configure load-balancer end-point ep2\nsudo ip -n loxilb link add ellb1ep2 type veth peer name eep2llb1 netns ep2\nsudo ip -n loxilb link set ellb1ep2 mtu 9000 up\nsudo ip -n ep2 link set eep2llb1 mtu 7000 up\n$LBHCMD ip addr add 32.32.32.254/24 dev ellb1ep2\n$HCMD ep2 ifconfig eep2llb1 32.32.32.1/24 up\n$HCMD ep2 ip route add default via 32.32.32.254\n$HCMD ep2 ifconfig lo up\n\n## Configure load-balancer end-point ep3\nsudo ip -n loxilb link add ellb1ep3 type veth peer name eep3llb1 netns ep3\nsudo ip -n loxilb link set ellb1ep3 mtu 9000 up\nsudo ip -n ep3 link set eep3llb1 mtu 7000 up\n$LBHCMD ip addr add 33.33.33.254/24 dev ellb1ep3\n$HCMD ep3 ifconfig eep3llb1 33.33.33.1/24 up\n$HCMD ep3 ip route add default via 33.33.33.254\n$HCMD ep3 ifconfig lo up\n\n## Configure load-balancer end-point h1\nsudo ip -n loxilb link add ellb1h1 type veth peer name eh1llb1 netns h1\nsudo ip -n loxilb link set ellb1h1 mtu 9000 up\nsudo ip -n h1 link set eh1llb1 mtu 7000 up\n$LBHCMD ip addr add 10.10.10.254/24 dev ellb1h1\n$HCMD h1 ifconfig eh1llb1 10.10.10.1/24 up\n$HCMD h1 ip route add default via 10.10.10.254\n$HCMD h1 ifconfig lo up\n
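Before creating any rules, it may be worth sanity-checking the wiring from the newly created namespaces (a minimal check, assuming the addressing used in the script above):
sudo ip netns exec h1 ping -c 2 10.10.10.254\nsudo ip netns exec ep1 ping -c 2 31.31.31.254\n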
Finally, we need to configure a load-balancer rule inside the loxilb docker as follows :
docker exec -it loxilb bash\nroot@8b74b5ddc4d2:/# loxicmd create lb 20.20.20.1 --tcp=2020:5001 --endpoints=31.31.31.1:1,32.32.32.1:1,33.33.33.1:1\n
So, we now have loxilb running as a docker container with 4 hosts connected to it. 3 of the hosts act as load-balancer end-points and 1 of them acts as a client. We can run any workloads we wish inside the host pods and start testing loxilb.
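For example (assuming iperf is installed on the host), one could start iperf servers in the end-point namespaces and drive traffic towards the service VIP from the client namespace:
## Start servers on the three end-points\nsudo ip netns exec ep1 iperf -s -p 5001 &\nsudo ip netns exec ep2 iperf -s -p 5001 &\nsudo ip netns exec ep3 iperf -s -p 5001 &\n\n## Exercise the LB rule created above from the client namespace\nsudo ip netns exec h1 iperf -c 20.20.20.1 -p 2020 -i1 -t5\n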
"},{"location":"standalone/","title":"Standalone Mode","text":""},{"location":"standalone/#how-to-run-loxilb-in-standalone-mode","title":"How to run loxilb in standalone mode","text":"This guide will help users to run loxilb in a standalone mode decoupled from kubernetes
"},{"location":"standalone/#pre-requisites","title":"Pre-requisites","text":"This guide uses Ubuntu 20.04.5 LTS as the base operating system
"},{"location":"standalone/#install-docker","title":"Install docker","text":"One can follow the guide here to install latest docker engine or use snap to install docker.
sudo apt update\nsudo apt install snapd\nsudo snap install docker\n
"},{"location":"standalone/#enable-ipv6-if-running-nat64nat66","title":"Enable IPv6 (if running NAT64/NAT66)","text":"sysctl net.ipv6.conf.all.disable_ipv6=0\nsysctl net.ipv6.conf.default.disable_ipv6=0\n
"},{"location":"standalone/#run-loxilb","title":"Run loxilb","text":"Get the loxilb official docker image
-
Latest build image (multi-arch amd64/arm64)
docker pull ghcr.io/loxilb-io/loxilb:latest\n
-
Release build image
docker pull ghcr.io/loxilb-io/loxilb:v0.9.5\n
-
To run loxilb docker, we can use the following commands :
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest\n
- To drop into a shell of the loxilb docker :
docker exec -it loxilb bash\n
- For load-balancing to effectively work in a bare-metal environment, we need multiple interfaces assigned to the docker (external and internal connectivity). loxilb docker relies on docker's macvlan driver for achieving this. The following is an example of creating a macvlan network and using it with loxilb:
# Create a mac-vlan (on an underlying interface e.g. enp0s3).\n# Subnet used for mac-vlan is usually the same as underlying interface\ndocker network create -d macvlan -o parent=enp0s3 --subnet 172.30.1.0/24 --gateway 172.30.1.254 --aux-address 'host=172.30.1.193' llbnet\n\n# Run loxilb docker with the created macvlan\ndocker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --net=llbnet --ip=172.30.1.195 --name loxilb ghcr.io/loxilb-io/loxilb:latest\n\n# If we still want to connect loxilb docker additionally to docker's default \"bridge\" network or more macvlan networks\ndocker network connect bridge loxilb\ndocker network connect llbnet2 loxilb --ip=172.30.2.195\n
Note:
- While working with macvlan interfaces, the parent/underlying interface should be put in promiscuous mode (see the example after these notes)
- One can further use docker-compose to automate attaching multiple networks to loxilb docker or use --net=host as per requirement.
- To use local socket policy or eBPF sockmap related features, we need to use --pid=host --cgroupns=host as additional arguments to docker run.
- To create a simple and self-contained topology for testing loxilb, users can follow this guide
- If loxilb docker is on the same node as the app/workload docker, it is advised that \"tx checksum offload\" inside the app/workload docker is turned off for SCTP load-balancing to work properly
docker exec -dt <app-docker-name> ethtool -K <app-docker-interface> tx off\n
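For the macvlan note above, the parent/underlying interface can be put in promiscuous mode with a command like the following (enp0s3 is just the example parent interface used earlier):
sudo ip link set enp0s3 promisc on\n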
"},{"location":"standalone/#configuration","title":"Configuration","text":"loxicmd command line tool can be used to configure loxilb in standalone mode. A simple example of configuration using loxilb is as follows:
- Drop into loxilb shell
sudo docker exec -it loxilb bash\n
- Create an LB rule inside the loxilb docker. Various other options for LB manipulation can be found here
loxicmd create lb 2001::1 --tcp=2020:8080 --endpoints=33.33.33.1:1\n
- Validate entry is created using the command:
loxicmd get lb -o wide\n
The detailed usage guide of loxicmd can be found here.
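Assuming a client host can reach the configured VIP and an HTTP workload is actually listening on the endpoint, the rule above can then be exercised end to end, for example:
curl -g --max-time 5 \"http://[2001::1]:2020/\"\n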
"},{"location":"standalone/#working-with-gobgp","title":"Working with gobgp","text":"loxilb works in tandem with gobgp when bgp services are required. As a first step, create a file gobgp.conf in host where loxilb docker will run and add the basic necessary fields :
[global.config]\n as = 64512\n router-id = \"10.10.10.1\"\n\n[[neighbors]]\n [neighbors.config]\n neighbor-address = \"10.10.10.254\"\n peer-as = 64512\n
Run loxilb docker with the following arguments:
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v $(pwd)/gobgp.conf:/etc/gobgp/gobgp.conf -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest -b\n
The gobgp daemon should pick up the configuration. The neighbors can be verified by :
sudo docker exec -it loxilb gobgp neighbor\n
At run time, there are two ways to change the gobgp configuration. Ephemeral changes can simply be made using the \u201cgobgp\u201d command as detailed here. If persistence is required, one can change the gobgp config file /etc/gobgp/gobgp.conf and send SIGHUP to the gobgpd process to load the edited configuration.
sudo docker exec -it loxilb pkill -1 gobgpd\n
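As an example of the ephemeral approach mentioned above, a new peer can be added at run time with the gobgp CLI (the neighbor address and AS number below are illustrative only):
sudo docker exec -it loxilb gobgp neighbor add 10.10.10.253 as 64513\n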
"},{"location":"standalone/#persistent-lb-entries","title":"Persistent LB entries","text":"To save the created rules across reboots, one can use the following command:
sudo mkdir -p /etc/loxilb/\nsudo loxicmd save --lb\n
"}]}
\ No newline at end of file