bblfsh-driver-k8s

An example of running bblfsh drivers with a gRPC load balancer

Summary

  • There is a need to parse code with bblfsh drivers at large scale
  • The current bblfshd architecture does not allow us to achieve this goal

Solution

Use the k8s toolkit:

  • group drivers into scalable deployments
  • apply load balancing
  • go-client: add the ability to specify endpoints for each particular language (WIP)

Roadmap

  • deployment generation for different languages; maybe include a template in bblfsh/sdk
  • if autoscaling is applied one day, we need to consider using this approach

Requirements

The steps below assume minikube, kubectl, and Docker are installed; the linkerd CLI is installed as part of the steps.

Steps

Run a k8s cluster using minikube

I was testing locally, so --vm-driver=none is used:

sudo minikube start --vm-driver=none
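
If you want to make sure the node is ready before moving on, a quick check (not part of the original steps; with --vm-driver=none you may need sudo here as well):

kubectl get nodes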

Optionally, you can launch the dashboard:

sudo minikube dashboard

The output of this command gives you the URL of the dashboard:

🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
http://127.0.0.1:43397/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

Download and deploy CoreDNS

git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes/
./deploy.sh | kubectl apply -f -
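
To verify that CoreDNS came up, you can check its pods; this assumes the default k8s-app=kube-dns label used by these manifests:

kubectl -n kube-system get pods -l k8s-app=kube-dns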

Deploy a Namespace, Service and Deployment for a bunch of go-drivers

The Deployment contains several replicas of go-driver containers.

From the root folder:

kubectl create -f configs/service.yml
kubectl create -f configs/deployment.yml
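
To double-check that everything started, list the created resources (assuming the manifests create the go-driver namespace used later in this guide):

kubectl -n go-driver get deploy,svc,pods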

Prepare and deploy client

The client is a simple go-driver gRPC client that bombards a given endpoint with simple parse requests.

First, let's build the image:

cd client/
./build.sh

Now let's deploy the image.

From the root folder:

kubectl create -f configs/client.yml
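
To confirm the client is running and actually sending parse requests, something along these lines should work; the deployment name client and the go-driver namespace are assumptions here, take the real values from configs/client.yml:

kubectl -n go-driver get pods
kubectl -n go-driver logs deploy/client --tail=20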

Install and deploy linkerd

We use linkerd as a client-side gRPC load balancer. Some of the steps are taken and adapted from the Getting Started guide; please refer to it for a detailed explanation of each step.

Install linkerd:

curl -sL https://run.linkerd.io/install | sh

Optionally add it to PATH:

export PATH=$PATH:$HOME/.linkerd2/bin

Deploy linkerd to k8s:

linkerd install | kubectl apply -f -

Note: some proxy startup issues may appear; to avoid them, DO NOT run the pre-installation check linkerd check --pre
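
While the control plane is starting, you can watch its pods come up (linkerd is the default namespace created by linkerd install):

kubectl -n linkerd get pods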

Check the installation status:

linkerd check

This may take a while; here is an example of successful output:

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ no invalid service profiles

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √

Inject linkerd into existing deployments

The command below retrieves all of the deployments running in the go-driver namespace, runs the manifest through linkerd inject, and then reapplies it to the cluster. The linkerd inject command adds annotations to the pod spec instructing Linkerd to add ("inject") the data plane's proxy as a container.

kubectl get -n go-driver deploy -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
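
To confirm the proxy was actually injected, each go-driver pod should now report an extra container, and linkerd should show traffic stats for the deployments (on newer linkerd releases stat lives under the viz extension):

kubectl -n go-driver get pods
linkerd -n go-driver stat deploy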

Check the result

First, let's open the linkerd dashboard:

linkerd dashboard

Then choose our namespace go-driver. As we can see in the dashboard, the load is spread across all existing pods.

Additionally, there are links to Grafana if you need to monitor the metrics in more detail.
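
The spread can also be confirmed from the CLI: per-pod request rates should be roughly even across the go-driver pods (again, on newer linkerd releases this is linkerd viz stat):

linkerd -n go-driver stat pods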