| Category | |
|---|---|
| Signal types | logs |
| Backend type | custom in-cluster |
| OTLP-native | no |
Learn how to use Loki as a logging backend with Kyma's LogPipeline or with Promtail.
Warning
This guide uses Grafana Loki, which is distributed under AGPL-3.0 only. Using components that have this license might affect the license of your project. Inform yourself about the license used by Grafana Loki at https://grafana.com/licensing/.
- Prerequisites
- Preparation
- Loki Installation
- Log agent installation
- Grafana installation
- Grafana Exposure
- Kyma as the target deployment environment
- The Telemetry module is added
- Kubectl version that is within one minor version (older or newer) of kube-apiserver
- Helm 3.x
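For a quick check of these prerequisites, you can print the relevant versions, for example:

```bash
# Compare the kubectl client version with the kube-apiserver version
kubectl version

# Confirm that Helm 3.x is installed
helm version --short
```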
- Export your namespace as a variable with the following command:
export K8S_NAMESPACE="loki"
- Export the Helm release name that you want to use. It can be any name, but be aware that all resources in the cluster will be prefixed with it. Run the following command:
export HELM_LOKI_RELEASE="loki"
- Update your Helm installation with the required Helm repository:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
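To confirm that the repository was added and the Loki chart is visible, you can run:

```bash
# Search the newly added repository for the Loki chart
helm search repo grafana/loki
```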
Depending on your scalability needs and storage requirements, you can install Loki in different deployment modes. The following instructions install Loki as a lightweight in-cluster solution that does not meet production-grade requirements. Consider using a scalable setup based on an object storage backend instead (see Install the simple scalable Helm chart).
You install the Loki stack with a Helm upgrade command, which installs the chart if it is not yet present.
helm upgrade --install --create-namespace -n ${K8S_NAMESPACE} ${HELM_LOKI_RELEASE} grafana/loki \
-f https://raw.githubusercontent.com/grafana/loki/main/production/helm/loki/single-binary-values.yaml \
--set-string 'loki.podLabels.sidecar\.istio\.io/inject=true' \
--set 'singleBinary.resources.requests.cpu=1' \
--set 'loki.auth_enabled=false'
The previous command uses an example values.yaml from the Loki repository for setting up Loki in the 'SingleBinary' mode. Additionally, it applies:
- Istio sidecar injection for the Loki instance
- a reduced CPU request setting for smaller cluster setups
- disabled multi-tenancy for easier setup
Alternatively, you can create your own values.yaml file and adjust the command.
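As a sketch of that alternative, the overrides passed with --set above could be kept in a custom values file instead (the file name my-loki-values.yaml is just an example):

```bash
# Write an example values file that mirrors the --set flags used above
cat <<EOF > my-loki-values.yaml
loki:
  auth_enabled: false                 # disabled multi-tenancy for easier setup
  podLabels:
    sidecar.istio.io/inject: "true"   # Istio sidecar injection for the Loki instance
singleBinary:
  resources:
    requests:
      cpu: "1"                        # reduced CPU request for smaller cluster setups
EOF
```

You would then pass it with an additional -f my-loki-values.yaml flag after the upstream single-binary-values.yaml in the helm upgrade command.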
Check that the loki Pod has been created in the Namespace and is in the Running state:
kubectl -n ${K8S_NAMESPACE} get pod -l app.kubernetes.io/name=loki
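Optionally, instead of checking repeatedly, you can wait for the Pod to become ready (the timeout value is just an example):

```bash
# Block until the Loki Pod reports the Ready condition, or fail after two minutes
kubectl -n ${K8S_NAMESPACE} wait --for=condition=Ready pod -l app.kubernetes.io/name=loki --timeout=120s
```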
To ingest the application logs from within your cluster into Loki, you can either use an installation based on Promtail, the log collector recommended by Loki, which provides a ready-to-use setup, or use Kyma's LogPipeline feature based on Fluent Bit.
To install Promtail, pointing it to the previously installed Loki instance, run:
helm upgrade --install --create-namespace -n ${K8S_NAMESPACE} promtail grafana/promtail \
  -f https://raw.githubusercontent.com/kyma-project/telemetry-manager/main/docs/user/integration/loki/promtail-values.yaml \
  --set "config.clients[0].url=http://${HELM_LOKI_RELEASE}.${K8S_NAMESPACE}.svc.cluster.local:3100/loki/api/v1/push"
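To verify that Promtail is running (the label selector assumes the chart's default labels), you can run:

```bash
# Check that the Promtail Pods are up on the nodes
kubectl -n ${K8S_NAMESPACE} get pod -l app.kubernetes.io/name=promtail
```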
Warning
This setup uses an unsupported output plugin for the LogPipeline.
Apply the LogPipeline:
cat <<EOF | kubectl apply -f -
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: LogPipeline
metadata:
  name: custom-loki
spec:
  input:
    application:
      namespaces:
        system: true
  output:
    custom: |
      name loki
      host ${HELM_LOKI_RELEASE}.${K8S_NAMESPACE}.svc.cluster.local
      port 3100
      auto_kubernetes_labels off
      labels job=fluentbit, container=\$kubernetes['container_name'], namespace=\$kubernetes['namespace_name'], pod=\$kubernetes['pod_name'], node=\$kubernetes['host'], app=\$kubernetes['labels']['app'],app=\$kubernetes['labels']['app.kubernetes.io/name']
EOF
When the status of the applied LogPipeline resource turns into Running, the underlying Fluent Bit is reconfigured, and log shipment to your Loki instance is active.
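To check the status, you can inspect the LogPipeline resource, for example:

```bash
# Show the LogPipeline; its status conditions indicate when it is running
kubectl get logpipeline custom-loki -o yaml
```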
Note
The output plugin configuration uses a static label map to assign Pod labels to Loki log streams. Activating the auto_kubernetes_labels feature to use all labels of a Pod is not recommended because it lowers performance. Follow Loki's labelling best practices for a tailor-made setup that fits your workload configuration.
- To access the Loki API, use kubectl port forwarding. Run:
kubectl -n ${K8S_NAMESPACE} port-forward svc/$(kubectl get svc -n ${K8S_NAMESPACE} -l app.kubernetes.io/name=loki -ojsonpath='{.items[0].metadata.name}') 3100
- Loki queries need a query parameter time, provided as a Unix timestamp in nanoseconds. To get the current time in nanoseconds on Linux, run the following command (macOS's BSD date does not support %N; there, append nine zeros to the output of date +%s):
date +%s%N
- To get the latest logs from Loki, replace the {NANOSECONDS} placeholder with the result of the previous command, and run:
curl -G -s "http://localhost:3100/loki/api/v1/query" \
  --data-urlencode 'query={job="fluentbit"}' \
  --data-urlencode 'time={NANOSECONDS}'
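On Linux, the two steps can also be combined by substituting the timestamp inline:

```bash
# Query the latest logs for the fluentbit job, using the current time in nanoseconds
curl -G -s "http://localhost:3100/loki/api/v1/query" \
  --data-urlencode 'query={job="fluentbit"}' \
  --data-urlencode "time=$(date +%s%N)"
```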
Because Grafana provides a very good Loki integration, you might want to install it as well.
- To deploy Grafana, run:
helm upgrade --install --create-namespace -n ${K8S_NAMESPACE} grafana grafana/grafana -f https://raw.githubusercontent.com/kyma-project/telemetry-manager/main/docs/user/integration/loki/grafana-values.yaml
- To enable Loki as a Grafana data source, run the following command (see the check after this list if the data source does not show up):
cat <<EOF | kubectl apply -n ${K8S_NAMESPACE} -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-loki-datasource
  labels:
    grafana_datasource: "1"
data:
  loki-datasource.yaml: |-
    apiVersion: 1
    datasources:
    - name: Loki
      type: loki
      access: proxy
      url: "http://${HELM_LOKI_RELEASE}:3100"
      version: 1
      isDefault: false
      jsonData: {}
EOF
- Get the password to access the Grafana UI:
kubectl get secret --namespace ${K8S_NAMESPACE} grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
- To access the Grafana UI with kubectl port forwarding, run:
kubectl -n ${K8S_NAMESPACE} port-forward svc/grafana 3000:80
- In your browser, open Grafana at http://localhost:3000 and log in with user admin and the password you retrieved before.
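If Loki does not show up as a data source in Grafana, you can confirm that the ConfigMap from the earlier step carries the label the chart's datasource sidecar watches (assuming the referenced grafana-values.yaml enables that sidecar):

```bash
# List ConfigMaps carrying the label that the Grafana datasource sidecar discovers
kubectl -n ${K8S_NAMESPACE} get configmap -l grafana_datasource=1
```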
- To expose Grafana using the Kyma API Gateway, create an APIRule:
kubectl -n ${K8S_NAMESPACE} apply -f https://raw.githubusercontent.com/kyma-project/telemetry-manager/main/docs/user/integration/loki/apirule.yaml
- Get the public URL of your Grafana instance:
kubectl -n ${K8S_NAMESPACE} get virtualservice -l apirule.gateway.kyma-project.io/v1beta1=grafana.${K8S_NAMESPACE} -ojsonpath='{.items[*].spec.hosts[*]}'
- Download the kyma-dashboard-configmap.yaml file and change the {GRAFANA_LINK} placeholder to the public URL of your Grafana instance:
curl https://raw.githubusercontent.com/kyma-project/telemetry-manager/main/docs/user/integration/loki/kyma-dashboard-configmap.yaml -o kyma-dashboard-configmap.yaml
- Optionally, adjust the ConfigMap: You can change the label field to change the name of the tab. If you want to move it to another category, change the category field.
- Apply the ConfigMap, and go to Kyma dashboard. Under the Observability section, you should see a link to the newly exposed Grafana. If you already have a busola-config, merge it with the existing one:
kubectl apply -f kyma-dashboard-configmap.yaml
- To remove the installation from the cluster, run:
helm delete -n ${K8S_NAMESPACE} ${HELM_LOKI_RELEASE}
helm delete -n ${K8S_NAMESPACE} promtail
helm delete -n ${K8S_NAMESPACE} grafana
- To remove the deployed LogPipeline instance from the cluster, run:
kubectl delete LogPipeline custom-loki
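If nothing else runs in the Namespace, you can optionally remove it as well:

```bash
# Delete the namespace used for this setup (only if it is no longer needed)
kubectl delete namespace ${K8S_NAMESPACE}
```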