feat(exporter): support quickstart by vagrant #91

Merged · 2 commits · Aug 14, 2023
2 changes: 1 addition & 1 deletion .github/workflows/check.yml
@@ -56,7 +56,7 @@ jobs:
       - name: Run ShellCheck
         uses: ludeeus/[email protected]
         env:
-          SHELLCHECK_OPTS: -e SC2236,SC2162,SC2268
+          SHELLCHECK_OPTS: -e SC2236,SC2162,SC2268,SC1091
         with:
           ignore: tests hack
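The newly excluded SC1091 silences warnings about sourced files that ShellCheck cannot follow. To reproduce this check locally, ShellCheck's `-e` flag takes the same rule codes (the path below is illustrative, not the CI's exact file list):

```shell
# run ShellCheck with the same exclusions as CI
shellcheck -e SC2236,SC2162,SC2268,SC1091 ./*.sh
```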

40 changes: 40 additions & 0 deletions deploy/vagrant-exporter/README.md
@@ -0,0 +1,40 @@
# Vagrantfile for KubeSkoop exporter

This `Vagrantfile` sets up a 3-node Kubernetes cluster (1 master, 2 workers) with the `flannel` CNI plugin and the KubeSkoop exporter for your tests.

Before you start, install [Vagrant](https://developer.hashicorp.com/vagrant/docs/installation) and [VirtualBox](https://www.virtualbox.org/wiki/Downloads).

## Run KubeSkoop exporter with Vagrant

Once Vagrant and VirtualBox are installed, clone the `kubeskoop` repo, change into this folder, and run `vagrant up`.

```shell
git clone [email protected]:alibaba/kubeskoop.git
cd kubeskoop/deploy/vagrant-exporter
vagrant up
```

It may take a while to set up the 3 virtual machines and build the Kubernetes cluster.
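While provisioning runs, you can watch the machine states from another terminal:

```shell
# from the host, inside this folder
vagrant status
```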

## Manage the cluster with `kubectl`

When the cluster is ready, you can SSH into the master node to take a look at it.

```shell
vagrant ssh master
# on master node
kubectl get pod -n kube-system
```
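You can also verify that all 3 nodes have joined the cluster and are `Ready`:

```shell
# on master node
kubectl get nodes -o wide
```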

KubeSkoop is installed in the `kubeskoop` namespace.

```shell
# on master node
kubectl get pod -n kubeskoop
```
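Assuming the exporter is deployed as a DaemonSet (one pod per node), you can also check that it is scheduled everywhere:

```shell
# on master node
kubectl get daemonset -n kubeskoop
```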

## Access Grafana on the host machine

When all pods in the `kubeskoop` namespace are ready, you can access Grafana at [http://127.0.0.1:8080](http://127.0.0.1:8080) on your host machine.

The default user is `admin`, and the password is `kubeskoop`.
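To quickly verify from the host that the forwarded port is answering before opening a browser:

```shell
# on the host machine
curl -sI http://127.0.0.1:8080
```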
35 changes: 35 additions & 0 deletions deploy/vagrant-exporter/Vagrantfile
@@ -0,0 +1,35 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

MASTER_IP = "192.168.56.10"
WORKER1_IP = "192.168.56.11"
WORKER2_IP = "192.168.56.12"

Vagrant.configure("2") do |config|
  config.vm.box = "debian/bullseye64"

  boxes = [
    { :name => "master", :ip => MASTER_IP, :cpus => 2, :memory => 2048 },
    { :name => "worker1", :ip => WORKER1_IP, :cpus => 2, :memory => 2048 },
    { :name => "worker2", :ip => WORKER2_IP, :cpus => 2, :memory => 2048 }
  ]

  boxes.each do |box|
    config.vm.define box[:name] do |b|
      b.vm.hostname = box[:name]
      b.vm.network :private_network, ip: box[:ip]
      b.vm.provider "virtualbox" do |vbox|
        vbox.cpus = box[:cpus]
        vbox.memory = box[:memory]
      end
      # Shared provisioning for every node (see scripts/common.sh).
      b.vm.provision "shell", path: "./scripts/common.sh", env: {:ALIYUN_MIRROR => "1", :NODE_IP => box[:ip]}
      if /^master.*/.match? b.vm.hostname
        b.vm.provision "shell", path: "./scripts/init-master.sh", env: {:MASTER_IP => box[:ip]}
        # Forward guest port 30157 (Grafana NodePort) to port 8080 on the host.
        b.vm.network :forwarded_port, guest: 30157, host: 8080
      end
      if /^worker.*/.match? b.vm.hostname
        b.vm.provision "shell", path: "./scripts/init-worker.sh"
      end
    end
  end
end
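Vagrant commands can also target a single machine by name, which is handy when one node needs to be re-provisioned or rebuilt:

```shell
# re-run the provisioning scripts on a single node
vagrant provision worker1

# tear down and recreate a single node
vagrant destroy -f worker2 && vagrant up worker2
```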
206 changes: 206 additions & 0 deletions deploy/vagrant-exporter/deploy/kube-flannel.yaml
@@ -0,0 +1,206 @@
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
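Note that `flanneld` is started with `--iface=eth1`: in a typical Vagrant/VirtualBox setup, `eth0` is the NAT adapter Vagrant adds to every box, while the `private_network` defined in the `Vagrantfile` becomes `eth1`, so inter-node traffic must use that interface. Once the cluster is up, you can confirm the flannel pods are running on all 3 nodes:

```shell
# on master node
kubectl get pod -n kube-flannel -o wide
```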