Cherry-pick elastic#5349 to 6.0: Add kubernetes manifests (elastic#5407)
* Add kubernetes manifests (elastic#5349)

* Add kubernetes manifests

Import https://github.com/elastic/beats-kubernetes; these manifests
deploy Filebeat and Metricbeat to Kubernetes

* Add travis tests

* Add packaging

* Add docs

* Update CHANGELOG

* Fix Makefile

* Check k8s health every second

(cherry picked from commit d8f2092)

* Fix chown timing issues on k8s deploy (elastic#5410)

It seems to be failing for v1.7.7; this change should fix it

* Publish kubernetes manifests in the code (elastic#5418)

They will be available for download from GitHub

* Update YAML files to current version

* Update CHANGELOG.asciidoc
exekias authored and tsg committed Oct 30, 2017
1 parent 48dcec5 commit d877967
Showing 29 changed files with 1,271 additions and 1 deletion.
21 changes: 21 additions & 0 deletions .travis.yml
@@ -14,6 +14,7 @@ env:
- GOX_FLAGS="-arch amd64"
- DOCKER_COMPOSE_VERSION=1.11.1
- GO_VERSION="$(cat .go-version)"
- TRAVIS_ETCD_VERSION=v3.2.8

jobs:
include:
@@ -95,6 +96,26 @@ jobs:
go: $GO_VERSION
stage: test

# Kubernetes
- os: linux
install: deploy/kubernetes/.travis/setup.sh
env:
- TARGETS="-C deploy/kubernetes test"
- TRAVIS_KUBE_VERSION=v1.6.11
stage: test
- os: linux
install: deploy/kubernetes/.travis/setup.sh
env:
- TARGETS="-C deploy/kubernetes test"
- TRAVIS_KUBE_VERSION=v1.7.7
stage: test
- os: linux
install: deploy/kubernetes/.travis/setup.sh
env:
- TARGETS="-C deploy/kubernetes test"
- TRAVIS_KUBE_VERSION=v1.8.0
stage: test

addons:
apt:
packages:
4 changes: 4 additions & 0 deletions CHANGELOG.asciidoc
@@ -50,10 +50,14 @@ https://github.com/elastic/beats/compare/v6.0.0-rc2...master[Check the HEAD diff

*Filebeat*

- Add Kubernetes manifests to deploy Filebeat. {pull}5349[5349]

*Heartbeat*

*Metricbeat*

- Add Kubernetes manifests to deploy Metricbeat. {pull}5349[5349]

*Packetbeat*

*Winlogbeat*
16 changes: 15 additions & 1 deletion Makefile
@@ -44,6 +44,7 @@ coverage-report:
.PHONY: update
update: notice
@$(foreach var,$(PROJECTS),$(MAKE) -C $(var) update || exit 1;)
@$(MAKE) -C deploy/kubernetes all

.PHONY: clean
clean:
@@ -97,18 +98,24 @@ docs:
@sh libbeat/scripts/build_docs.sh $(PROJECTS)

.PHONY: package
package: update beats-dashboards
package: update beats-dashboards kubernetes-manifests
@$(foreach var,$(BEATS),SNAPSHOT=$(SNAPSHOT) $(MAKE) -C $(var) package || exit 1;)

@echo "Start building the dashboards package"
@mkdir -p build/upload/
@BUILD_DIR=$(CURDIR)/build SNAPSHOT=$(SNAPSHOT) $(MAKE) -C dev-tools/packer package-dashboards $(CURDIR)/build/upload/build_id.txt
@mv build/upload build/dashboards-upload

@echo "Start building kubernetes manifests"
@mkdir -p build/upload/
@BUILD_DIR=${BUILD_DIR} SNAPSHOT=$(SNAPSHOT) $(MAKE) -C dev-tools/packer package-kubernetes ${BUILD_DIR}/upload/build_id.txt
@mv build/upload build/kubernetes-upload

@# Copy build files over to top build directory
@mkdir -p build/upload/
@$(foreach var,$(BEATS),cp -r $(var)/build/upload/ build/upload/$(var) || exit 1;)
@cp -r build/dashboards-upload build/upload/dashboards
@cp -r build/kubernetes-upload build/upload/kubernetes
@# Run tests on the generated packages.
@go test ./dev-tools/package_test.go -files "$(CURDIR)/build/upload/*/*"

@@ -139,3 +146,10 @@ notice: python-env
python-env:
@test -d $(PYTHON_ENV) || virtualenv $(VIRTUALENV_PARAMS) $(PYTHON_ENV)
@$(PYTHON_ENV)/bin/pip install -q --upgrade pip autopep8 six

# Build kubernetes manifests
.PHONY: kubernetes-manifests
kubernetes-manifests:
@mkdir -p build/kubernetes
$(MAKE) -C deploy/kubernetes all
cp deploy/kubernetes/*.yaml build/kubernetes
61 changes: 61 additions & 0 deletions deploy/kubernetes/.travis/setup.sh
@@ -0,0 +1,61 @@
#!/bin/bash
# This script assumes Docker is already installed

set -x

# set docker0 to promiscuous mode
sudo ip link set docker0 promisc on

# install etcd
wget https://github.com/coreos/etcd/releases/download/$TRAVIS_ETCD_VERSION/etcd-$TRAVIS_ETCD_VERSION-linux-amd64.tar.gz
tar xzf etcd-$TRAVIS_ETCD_VERSION-linux-amd64.tar.gz
sudo mv etcd-$TRAVIS_ETCD_VERSION-linux-amd64/etcd /usr/local/bin/etcd
rm etcd-$TRAVIS_ETCD_VERSION-linux-amd64.tar.gz
rm -rf etcd-$TRAVIS_ETCD_VERSION-linux-amd64

# download kubectl
wget https://storage.googleapis.com/kubernetes-release/release/$TRAVIS_KUBE_VERSION/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl

# download kubernetes
git clone https://github.com/kubernetes/kubernetes $HOME/kubernetes

# install cfssl
go get -u github.com/cloudflare/cfssl/cmd/...

pushd $HOME/kubernetes
git checkout $TRAVIS_KUBE_VERSION
kubectl config set-credentials myself --username=admin --password=admin
kubectl config set-context local --cluster=local --user=myself
kubectl config set-cluster local --server=http://localhost:8080
kubectl config use-context local

# start kubernetes in the background
sudo PATH=$PATH:/home/travis/.gimme/versions/go1.7.linux.amd64/bin/go \
KUBE_ENABLE_CLUSTER_DNS=true \
hack/local-up-cluster.sh &
popd

# Wait until kube is up and running
TIMEOUT=0
TIMEOUT_COUNT=800
until $(curl --output /dev/null --silent http://localhost:8080) || [ $TIMEOUT -eq $TIMEOUT_COUNT ]; do
echo "Kube is not up yet"
let TIMEOUT=TIMEOUT+1
sleep 1
done

if [ $TIMEOUT -eq $TIMEOUT_COUNT ]; then
echo "Kubernetes is not up and running"
exit 1
fi

echo "Kubernetes is deployed and reachable"

# Try and sleep before issuing chown. Currently, Kubernetes is started by
# a command that is run in the background. Technically Kubernetes could be
# up and running, but those files might not exist yet as the previous command
# could create them after Kube starts successfully.
sleep 30
sudo chown -R $USER:$USER $HOME/.kube
21 changes: 21 additions & 0 deletions deploy/kubernetes/Makefile
@@ -0,0 +1,21 @@
ALL=filebeat metricbeat
BEAT_VERSION=$(shell head -n 1 ../../libbeat/docs/version.asciidoc | cut -c 17- )

all: ${ALL:=-kubernetes.yaml}

test: all
for FILE in $(shell ls *-kubernetes.yaml); do \
BEAT=$$(echo $$FILE | cut -d \- -f 1); \
kubectl create -f $$FILE; \
done

clean:
@for f in $(ALL); do rm -f "$$f-kubernetes.yaml"; done

%-kubernetes.yaml: %/*.yaml
@echo "Generating $*-kubernetes.yaml"
@rm -f $*-kubernetes.yaml
@for f in $(shell ls $*/*.yaml); do \
sed "s/%VERSION%/${BEAT_VERSION}/g" $$f >> $*-kubernetes.yaml; \
echo --- >> $*-kubernetes.yaml; \
done
11 changes: 11 additions & 0 deletions deploy/kubernetes/README.md
@@ -0,0 +1,11 @@
# Beats Kubernetes manifest examples

## Getting started

This is the list of officially supported Beats, with example manifests to run
them in Kubernetes:

Beat | Description
---- | ----
[filebeat](filebeat) | Tails and ships logs
[metricbeat](metricbeat) | Fetches sets of metrics from the operating system and services
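
As a quick start, a minimal sketch (assuming `kubectl` is already configured
for the target cluster):

```sh
# Generate the combined manifests; the Makefile in this folder replaces the
# %VERSION% placeholders with the current Beat version:
make all

# Deploy one of them, e.g. Filebeat:
kubectl create -f filebeat-kubernetes.yaml
```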
160 changes: 160 additions & 0 deletions deploy/kubernetes/filebeat-kubernetes.yaml
@@ -0,0 +1,160 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
data:
filebeat.yml: |-
filebeat.config:
prospectors:
# Mounted `filebeat-prospectors` configmap:
path: ${path.config}/prospectors.d/*.yml
# Reload prospectors configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: kube-system
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
data:
kubernetes.yml: |-
- type: log
paths:
- /var/lib/docker/containers/*/*.log
json.message_key: log
json.keys_under_root: true
processors:
- add_kubernetes_metadata:
in_cluster: true
namespace: ${POD_NAMESPACE}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.0.0-rc1
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: prospectors
mountPath: /usr/share/filebeat/prospectors.d
readOnly: true
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: prospectors
configMap:
defaultMode: 0600
name: filebeat-prospectors
- name: data
emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---
35 changes: 35 additions & 0 deletions deploy/kubernetes/filebeat/README.md
@@ -0,0 +1,35 @@
# Filebeat

## Ship logs from Kubernetes to Elasticsearch

### Kubernetes DaemonSet

By deploying Filebeat as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
we ensure there is a running Filebeat instance on each node of the cluster.

The Docker logs host folder (`/var/lib/docker/containers`) is mounted into the
Filebeat container. Filebeat starts a prospector for these files and begins
harvesting them as soon as they appear.

Everything is deployed under the `kube-system` namespace; you can change that by
updating the YAML manifests under this folder.
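
For example, a minimal sketch of deploying the combined manifest generated one
level up and verifying the DaemonSet (assumes `kubectl` already points at the
target cluster):

```sh
# Deploy the combined manifest from the parent folder:
kubectl create -f ../filebeat-kubernetes.yaml

# One filebeat pod should be scheduled on every node:
kubectl --namespace=kube-system get pods -l k8s-app=filebeat -o wide
```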

### Settings

We use the official [Beats Docker images](https://github.com/elastic/beats-docker),
as they allow configuration via external files. A [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/)
is used for the Kubernetes-specific settings; check [filebeat-configmap.yaml](filebeat-configmap.yaml)
for details.

In addition, [filebeat-daemonset.yaml](filebeat-daemonset.yaml) uses a set of environment
variables to configure the Elasticsearch output:

Variable | Default | Description
-------- | ------- | -----------
ELASTICSEARCH_HOST | elasticsearch | Elasticsearch host
ELASTICSEARCH_PORT | 9200 | Elasticsearch port
ELASTICSEARCH_USERNAME | elastic | Elasticsearch username for HTTP auth
ELASTICSEARCH_PASSWORD | changeme | Elasticsearch password

If there is an existing `elasticsearch` service in the Kubernetes cluster, these
defaults will use it.
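
For illustration, a minimal sketch of pointing the output at an external
Elasticsearch instead (the host name and password below are hypothetical
placeholders, not values shipped with these manifests):

```sh
# Override the default env values in the combined manifest before deploying.
# elasticsearch.example.com and s3cr3t are placeholders for your own values.
sed -i \
  -e 's/value: elasticsearch$/value: elasticsearch.example.com/' \
  -e 's/value: changeme$/value: s3cr3t/' \
  filebeat-kubernetes.yaml
kubectl create -f filebeat-kubernetes.yaml
```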