Loki Operator is the Kubernetes Operator for Loki.
kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
- Install kubectl or the OpenShift CLI for communicating with the cluster. The guide below uses `kubectl`.
- Create a running Kubernetes cluster using kind (an example command follows this list).
- A container registry that you and your Kubernetes cluster can reach. We recommend quay.io.
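For reference, a local cluster can be created with kind roughly as follows; the cluster name is an arbitrary choice, not something this guide requires:

```console
# Create a local Kubernetes cluster for development; the name is illustrative.
kind create cluster --name loki-operator
```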
- Build and push the container image and then deploy the operator with:

  ```console
  make oci-build oci-push deploy REGISTRY_ORG=$YOUR_QUAY_ORG VERSION=latest
  ```

  where `$YOUR_QUAY_ORG` is your personal quay.io account where you can push container images.

  The above command will deploy the operator to your active Kubernetes cluster defined by your local kubeconfig. The operator will be running in the `default` namespace.
- You can confirm that the operator is up and running using:

  ```console
  kubectl get pods
  ```

  You should see `controller-manager-xxxx` and `minio-xxxx` pods running.
- Now create a LokiStack instance to get the various components of Loki up and running:

  ```console
  kubectl apply -f hack/lokistack_dev.yaml
  ```

  This will create `distributor`, `compactor`, `ingester`, `querier` and `query-frontend` components. (A rough sketch of what such a LokiStack custom resource looks like follows this list.)

  Confirm that all components are up and running for `deployments` using:

  ```console
  kubectl rollout status deployment/<DEPLOYMENT_NAME>
  ```

  where `<DEPLOYMENT_NAME>` is the name of the deployment and can be found using:

  ```console
  kubectl get deployments
  ```

  Confirm that all are up and running for `statefulsets` using:

  ```console
  kubectl rollout status statefulset/<STATEFULSET_NAME>
  ```

  where `<STATEFULSET_NAME>` is the name of the statefulset and can be found using:

  ```console
  kubectl get statefulsets
  ```
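For orientation, a LokiStack custom resource is sketched below. This is an assumption of what a minimal manifest such as `hack/lokistack_dev.yaml` roughly looks like, not its exact content; the size, secret name, and storage class values are illustrative.

```yaml
# Illustrative sketch only -- hack/lokistack_dev.yaml in your checkout is the
# authoritative example; field values here are assumptions.
apiVersion: loki.grafana.com/v1   # older operator versions may use v1beta1
kind: LokiStack
metadata:
  name: lokistack-dev
spec:
  size: 1x.extra-small            # t-shirt size of the stack
  storage:
    secret:
      name: test                  # name of the object storage secret
      type: s3
  storageClassName: standard      # storage class used for component PVCs
```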
To clean up deployments of the operator, you can use:

```console
make undeploy
```

It will undeploy the controller from the Kubernetes cluster configured in `~/.kube/config`.
- Install kubectl or the OpenShift CLI for communicating with the cluster. The guide below uses `kubectl`.
- Create a running OpenShift cluster on AWS.
- A container registry that you and your OpenShift cluster can reach. We recommend quay.io.
- Create an S3 bucket in one of the AWS Regions (an example command follows this list).
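For reference, a bucket can be created with the AWS CLI roughly as follows; the bucket name and region are placeholders:

```console
# Bucket name and region are placeholders -- use your own values.
aws s3 mb s3://<BUCKET_NAME> --region <AWS_REGION>
```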
- Build and push the container image [2] and then deploy the operator with:

  ```console
  make olm-deploy REGISTRY_ORG=$YOUR_QUAY_ORG VERSION=$VERSION
  ```

  where `$YOUR_QUAY_ORG` is your personal quay.io account where you can push container images and `$VERSION` can be any arbitrary version number, such as `v0.0.1`.

  The above command will deploy the operator to your active OpenShift cluster defined by your local kubeconfig. The operator will be running in the `openshift-logging` namespace.
- You can confirm that the operator is up and running using:

  ```console
  kubectl -n openshift-logging get pods
  ```
- Now you need to create a storage secret for the operator. This can be done using:

  ```console
  make olm-deploy-example-storage-secret
  ```

  or:

  ```console
  ./hack/deploy-example-secret.sh openshift-logging
  ```

  This secret will be available in the `openshift-logging` namespace. You can inspect the `hack/deploy-example-secret.sh` file to see the exact content of the secret; a rough sketch of its shape also follows this list.
- Now you need to create a gateway secret [3] for the operator. This can be done using:

  ```console
  kubectl -n openshift-logging create secret generic test1 \
    --from-literal=clientID="<CLIENT_ID>" \
    --from-literal=clientSecret="<CLIENT_SECRET>" \
    --from-literal=issuerCAPath="<ISSUER_CA_PATH>"
  ```
- Once the object storage secret is created, you can now create a LokiStack instance to get the various components of Loki up and running:

  ```console
  kubectl -n openshift-logging apply -f hack/lokistack_gateway_dev.yaml
  ```

  This will create `distributor`, `compactor`, `ingester`, `querier`, `query-frontend` and `lokistack-gateway` components.

  Confirm that all are up and running for `deployments` using:

  ```console
  kubectl -n openshift-logging rollout status deployment/<DEPLOYMENT_NAME>
  ```

  where `<DEPLOYMENT_NAME>` is the name of the deployment and can be found using:

  ```console
  kubectl -n openshift-logging get deployments
  ```

  Confirm that all are up and running for `statefulsets` using:

  ```console
  kubectl -n openshift-logging rollout status statefulset/<STATEFULSET_NAME>
  ```

  where `<STATEFULSET_NAME>` is the name of the statefulset and can be found using:

  ```console
  kubectl -n openshift-logging get statefulsets
  ```
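For reference, the object storage secret created by the script is roughly of the following shape. The secret name and key names below are assumptions based on common S3 configuration for the operator; `hack/deploy-example-secret.sh` remains the authoritative source:

```console
# Illustrative only -- the secret name and key names are assumptions;
# hack/deploy-example-secret.sh is the authoritative source.
kubectl -n openshift-logging create secret generic test \
  --from-literal=endpoint="https://s3.<AWS_REGION>.amazonaws.com" \
  --from-literal=region="<AWS_REGION>" \
  --from-literal=bucketnames="<BUCKET_NAME>" \
  --from-literal=access_key_id="<AWS_ACCESS_KEY_ID>" \
  --from-literal=access_key_secret="<AWS_ACCESS_KEY_SECRET>"
```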
If you don't want the `lokistack-gateway` component [1], you can skip it by removing the `--with-lokistack-gateway` argument from the `loki-operator-controller-manager` deployment:

```console
kubectl -n openshift-logging edit deployment/loki-operator-controller-manager
```

Delete the flag `--with-lokistack-gateway` from the `args` section (an illustrative excerpt follows below) and save the file. This will update the deployment, and you can now create a LokiStack instance using:

```console
kubectl -n openshift-logging apply -f hack/lokistack_dev.yaml
```

This will create the `distributor`, `compactor`, `ingester`, `querier` and `query-frontend` components only.
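For orientation, the relevant part of that deployment looks roughly like the excerpt below. Only the `--with-lokistack-gateway` flag comes from this guide; the container name and surrounding fields are assumptions and may differ in your version:

```yaml
# Illustrative excerpt only -- container name and surrounding fields are assumptions.
spec:
  template:
    spec:
      containers:
        - name: manager
          args:
            - --with-lokistack-gateway   # delete this line to skip the gateway
```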
To clean up deployments of the operator, you can use:

```console
make olm-undeploy
```

It will clean up deployments of the operator bundle, and of the operator via OLM, on the OpenShift cluster selected via `~/.kube/config`.
[1] `lokistack-gateway` is an optional component deployed as part of Loki Operator. It provides secure access to Loki's distributor (i.e. for pushing logs) and query-frontend (i.e. for querying logs) by consulting an OAuth/OIDC endpoint for the request subject.
[2] If you get multiple images as options and are required to select one of them, select `docker.io/library/golang:1.16`.
[3] The OIDC configuration expects `clientID`, `clientSecret` and `issuerCAPath`, which should be provided via a Kubernetes secret that the LokiStack admin provides upfront.
Each tenant Secret is required to match:

- `metadata.name` with `TenantsSecretsSpec.Name`.
- `metadata.namespace` with `LokiStack.metadata.namespace`.
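For example, a tenant secret that satisfies both constraints for the `test1` secret created earlier in the `openshift-logging` namespace would look roughly like this; the keys mirror the `--from-literal` values above, and the manifest is a sketch rather than a file shipped with the repository:

```yaml
# Sketch of a tenant secret whose metadata lines up with the LokiStack.
apiVersion: v1
kind: Secret
metadata:
  name: test1                   # must equal TenantsSecretsSpec.Name
  namespace: openshift-logging  # must equal LokiStack.metadata.namespace
stringData:
  clientID: "<CLIENT_ID>"
  clientSecret: "<CLIENT_SECRET>"
  issuerCAPath: "<ISSUER_CA_PATH>"
```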
Suppose you made some changes to the Loki Operator's code and deployed it, but the changes are not visible when it runs. This happens when the deployment pulls an old image of the operator because the `imagePullPolicy` is set to `IfNotPresent`. Therefore, you need to make the following changes so that your deployment always pulls a fresh image:
- Go to the `config/manager/manager.yaml` file.
- Set the `imagePullPolicy` to `Always`, i.e., `imagePullPolicy: Always` (see the illustrative excerpt after this list).
- Deploy the operator again.
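For orientation, the relevant part of `config/manager/manager.yaml` will look roughly like the excerpt below; the image and container name are illustrative assumptions:

```yaml
# Illustrative excerpt -- image and container name are assumptions.
containers:
  - name: manager
    image: controller:latest
    imagePullPolicy: Always   # pull a fresh image on every rollout
```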
When you use two different clusters, for example a kind cluster and an OpenShift cluster, you might need to switch between them to test your changes. Once you switch, kubectl might not change the context automatically, so you may need to do this manually to communicate with the correct cluster.
- List all the available contexts:

  ```console
  kubectl config get-contexts
  ```

  The `*` mark against a context shows the one currently in use.

- Set the context you want to use now:

  ```console
  kubectl config use-context $CONTEXTNAME
  ```

  where `$CONTEXTNAME` is the name of the context you want to use, taken from the previous step.
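To double-check which context is active at any time, you can also print just the current context name:

```console
kubectl config current-context
```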
You have probably forgotten to create the gateway secret, which leaves the operator in a degraded condition. Follow the steps in the guide above to create the gateway secret first. Once that is done, you can create the LokiStack instance.
Verify this by checking the `conditions` field:

```console
kubectl get lokistack lokistack-dev -o yaml
```

For OpenShift, the above command would be:

```console
kubectl -n openshift-logging get lokistack lokistack-dev -o yaml
```
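If you only want to see the conditions rather than the full object, a JSONPath query can help, assuming the conditions are published under `.status.conditions` as is conventional:

```console
kubectl -n openshift-logging get lokistack lokistack-dev -o jsonpath='{.status.conditions}'
```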
This usually happens when the LokiStack CR is wrongly configured for the `lokistack-gateway`. Please read the enhancement proposal to figure out the correct way to configure it.