From 74a5bbc5caa3cea306aa7047b73fb81738d80872 Mon Sep 17 00:00:00 2001 From: J Stickler Date: Tue, 28 May 2024 15:54:56 -0400 Subject: [PATCH] docs: Update Grafana Agent to Grafana Alloy (#12602) --- .../get-started/labels/structured-metadata.md | 4 +-- docs/sources/get-started/overview.md | 2 +- docs/sources/operations/loki-canary/_index.md | 2 +- docs/sources/release-notes/v3.0.md | 4 ++- docs/sources/send-data/_index.md | 26 +++++++++++-------- docs/sources/send-data/k6/log-generation.md | 4 +-- docs/sources/send-data/otel/_index.md | 8 +++--- docs/sources/send-data/promtail/_index.md | 4 +++ .../send-data/promtail/installation.md | 4 +++ docs/sources/setup/install/helm/concepts.md | 2 +- .../migrate/migrate-from-distributed/index.md | 2 +- .../setup/migrate/migrate-to-alloy/_index.md | 25 ++++++++++++++++++ .../setup/migrate/migrate-to-tsdb/_index.md | 2 +- 13 files changed, 64 insertions(+), 25 deletions(-) create mode 100644 docs/sources/setup/migrate/migrate-to-alloy/_index.md diff --git a/docs/sources/get-started/labels/structured-metadata.md b/docs/sources/get-started/labels/structured-metadata.md index 99f46f708792..91fe5d80ab67 100644 --- a/docs/sources/get-started/labels/structured-metadata.md +++ b/docs/sources/get-started/labels/structured-metadata.md @@ -21,7 +21,7 @@ Structured metadata can also be used to query commonly needed metadata from log You should only use structured metadata in the following situations: -- If you are ingesting data in OpenTelemetry format, using the Grafana Agent or an OpenTelemetry Collector. Structured metadata was designed to support native ingestion of OpenTelemetry data. +- If you are ingesting data in OpenTelemetry format, using Grafana Alloy or an OpenTelemetry Collector. Structured metadata was designed to support native ingestion of OpenTelemetry data. - If you have high cardinality metadata that should not be used as a label and does not exist in the log line. Some examples might include `process_id`, `thread_id`, or Kubernetes pod names. It is an antipattern to extract information that already exists in your log lines and put it into structured metadata. @@ -31,7 +31,7 @@ It is an antipattern to extract information that already exists in your log line You have the option to attach structured metadata to log lines in the push payload along with each log line and the timestamp. For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki//reference/api/#ingest-logs). -Alternatively, you can use the Grafana Agent or Promtail to extract and attach structured metadata to your log lines. +Alternatively, you can use Grafana Alloy or Promtail to extract and attach structured metadata to your log lines. See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki//send-data/promtail/stages/structured_metadata/) for more information. With version 1.2.0 of the Logstash output plugin, support for structured metadata has been added. For more information, see [logstash](https://grafana.com/docs/loki//send-data/logstash/).
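To make the Promtail option concrete, the following is a minimal sketch of a scrape config that parses a JSON log line and attaches a `thread_id` field (one of the high-cardinality examples above) as structured metadata. This example is illustrative and not part of the patch: the job name, file path, and field name are placeholders, and the exact stage options are described in the structured metadata stage documentation linked above.

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log       # placeholder path
    pipeline_stages:
      # Parse each log line as JSON and pull thread_id into the extracted map.
      - json:
          expressions:
            thread_id: thread_id
      # Attach the extracted value as structured metadata rather than as an
      # index label, keeping high-cardinality data out of the index.
      - structured_metadata:
          thread_id:
```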
diff --git a/docs/sources/get-started/overview.md b/docs/sources/get-started/overview.md index 4051ba63cc11..1194398c38f0 100644 --- a/docs/sources/get-started/overview.md +++ b/docs/sources/get-started/overview.md @@ -22,7 +22,7 @@ Log data is then compressed and stored in chunks in an object store such as Amaz A typical Loki-based logging stack consists of 3 components: -- **Agent** - An agent or client, for example Promtail, which is distributed with Loki, or the Grafana Agent. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API. +- **Agent** - An agent or client, for example Grafana Alloy, or Promtail, which is distributed with Loki. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API. - **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations; for more information, see [deployment modes]({{< relref "../get-started/deployment-modes" >}}). diff --git a/docs/sources/operations/loki-canary/_index.md b/docs/sources/operations/loki-canary/_index.md index cf2a1075d3c0..f6c1bf23a938 100644 --- a/docs/sources/operations/loki-canary/_index.md +++ b/docs/sources/operations/loki-canary/_index.md @@ -29,7 +29,7 @@ array. The contents look something like this: The relevant part of the log entry is the timestamp; the `p`s are just filler bytes to make the size of the log configurable. -An agent (like Promtail) should be configured to read the log file and ship it +An agent (like Grafana Alloy) should be configured to read the log file and ship it to Loki. Meanwhile, Loki Canary will open a WebSocket connection to Loki and will tail diff --git a/docs/sources/release-notes/v3.0.md b/docs/sources/release-notes/v3.0.md index a44483d57d2f..ea3c7603ff82 100644 --- a/docs/sources/release-notes/v3.0.md +++ b/docs/sources/release-notes/v3.0.md @@ -20,7 +20,7 @@ Key features in Loki 3.0.0 include the following: - **Query acceleration with Bloom filters** (experimental): This is designed to speed up filter queries, with best results for queries that are looking for a specific text string like an error message or UUID. For more information, refer to [Query acceleration with Blooms](https://grafana.com/docs/loki//operations/query-acceleration-blooms/). -- **Native OpenTelemetry Support**: A simplified ingestion pipeline (Loki Exporter no longer needed) and a more intuitive query experience for OTel logs. For more information, refer to the [OTEL documentation](https://grafana.com/docs/loki//send-data/otel/). +- **Native OpenTelemetry Support**: A simplified ingestion pipeline (Loki Exporter no longer needed) and a more intuitive query experience for OTel logs. For more information, refer to the [OTel documentation](https://grafana.com/docs/loki//send-data/otel/). - **Helm charts**: A major upgrade to the Loki helm chart introduces support for `Distributed` mode (also known as [microservices](https://grafana.com/docs/loki//get-started/deployment-modes/#microservices-mode) mode), includes memcached by default, and includes several updates to configurations to improve Loki operations. @@ -46,6 +46,8 @@ One of the focuses of Loki 3.0 was cleaning up unused code and old features that To learn more about breaking changes in this release, refer to the [Upgrade guide](https://grafana.com/docs/loki//setup/upgrade/).
+{{< docs/shared source="alloy" lookup="agent-deprecation.md" version="next" >}} + ## Upgrade Considerations The path from 2.9 to 3.0 includes several breaking changes. For important upgrade guidance, refer to the [Upgrade Guide](https://grafana.com/docs/loki//setup/upgrade/) and the separate [Helm Upgrade Guide](https://grafana.com/docs/loki//setup/upgrade/upgrade-to-6x/). diff --git a/docs/sources/send-data/_index.md b/docs/sources/send-data/_index.md index 2064860dbbcd..0ef9432d3caf 100644 --- a/docs/sources/send-data/_index.md +++ b/docs/sources/send-data/_index.md @@ -18,16 +18,20 @@ While all clients can be used simultaneously to cover multiple use cases, which The following clients are developed and supported (for those customers who have purchased a support contract) by Grafana Labs for sending logs to Loki: -- [Grafana Agent](/docs/agent/latest/) - The Grafana Agent is the recommended client for the Grafana stack. It can collect telemetry data for metrics, logs, traces, and continuous profiles and is fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems. -- [Promtail]({{< relref "./promtail" >}}) - Promtail is the client of choice when you're running Kubernetes, as you can configure it to automatically scrape logs from pods running on the same node that Promtail runs on. Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set. -Promtail is also the client of choice on bare-metal since it can be configured to tail logs from all files given a host path. It is the easiest way to send logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`). -Lastly, Promtail works well if you want to extract metrics from logs such as counting the occurrences of a particular message. -- [xk6-loki extension](https://github.com/grafana/xk6-loki) - The k6-loki extension lets you perform [load testing on Loki]({{< relref "./k6" >}}). +- [Grafana Alloy](https://grafana.com/docs/alloy/latest/) - Grafana Alloy is a vendor-neutral distribution of the OpenTelemetry (OTel) Collector. Alloy offers native pipelines for OTel, Prometheus, Pyroscope, Loki, and many other metrics, logs, traces, and profile tools. In addition, you can use Alloy pipelines to do different tasks, such as configuring alert rules in Loki and Mimir. Alloy is fully compatible with the OTel Collector, Prometheus Agent, and Promtail. You can use Alloy as an alternative to any of these solutions or combine it into a hybrid system of multiple collectors and agents. You can deploy Alloy anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor. + {{< docs/shared source="alloy" lookup="agent-deprecation.md" version="next" >}} +- [Grafana Agent](/docs/agent/latest/) - The Grafana Agent is a client for the Grafana stack. It can collect telemetry data for metrics, logs, traces, and continuous profiles and is fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems. +- [Promtail](https://grafana.com/docs/loki//send-data/promtail/) - Promtail can be configured to automatically scrape logs from Kubernetes pods running on the same node that Promtail runs on.
Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set. Promtail can be configured to tail logs from all files given a host path. It is the easiest way to send logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`). +Promtail works well if you want to extract metrics from logs such as counting the occurrences of a particular message. +{{< admonition type="note" >}} +Promtail is feature complete. All future feature development will occur in Grafana Alloy. +{{< /admonition >}} +- [xk6-loki extension](https://github.com/grafana/xk6-loki) - The xk6-loki extension lets you perform [load testing on Loki](https://grafana.com/docs/loki//send-data/k6/). ## OpenTelemetry Collector Loki natively supports ingesting OpenTelemetry logs over HTTP. -See [Ingesting logs to Loki using OpenTelemetry Collector]({{< relref "./otel" >}}) for more details. +For more information, see [Ingesting logs to Loki using OpenTelemetry Collector](https://grafana.com/docs/loki//send-data/otel/). ## Third-party clients @@ -39,14 +43,14 @@ Grafana Labs cannot provide support for third-party clients. Once an issue has b The following are popular third-party Loki clients: -- [Docker Driver]({{< relref "./docker-driver" >}}) - When using Docker and not Kubernetes, the Docker logging driver for Loki should +- [Docker Driver](https://grafana.com/docs/loki//send-data/docker-driver/) - When using Docker and not Kubernetes, the Docker logging driver for Loki should be used as it automatically adds labels appropriate to the running container. -- [Fluent Bit]({{< relref "./fluentbit" >}}) - The Fluent Bit plugin is ideal when you already have Fluentd deployed +- [Fluent Bit](https://grafana.com/docs/loki//send-data/fluentbit/) - The Fluent Bit plugin is ideal when you already have Fluent Bit deployed and you already have configured `Parser` and `Filter` plugins. -- [Fluentd]({{< relref "./fluentd" >}}) - The Fluentd plugin is ideal when you already have Fluentd deployed +- [Fluentd](https://grafana.com/docs/loki//send-data/fluentd/) - The Fluentd plugin is ideal when you already have Fluentd deployed and you already have configured `Parser` and `Filter` plugins. Fluentd also works well for extracting metrics from logs when using its Prometheus plugin. -- [Lambda Promtail]({{< relref "./lambda-promtail" >}}) - This is a workflow combining the Promtail push-api [scrape config]({{< relref "./promtail/configuration#loki_push_api" >}}) and the [lambda-promtail]({{< relref "./lambda-promtail" >}}) AWS Lambda function which pipes logs from Cloudwatch to Loki. This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki -- [Logstash]({{< relref "./logstash" >}}) - If you are already using logstash and/or beats, this will be the easiest way to start. +- [Lambda Promtail](https://grafana.com/docs/loki//send-data/lambda-promtail/) - This is a workflow combining the Promtail push-api [scrape config](https://grafana.com/docs/loki//send-data/promtail/configuration/#loki_push_api) and the lambda-promtail AWS Lambda function which pipes logs from Cloudwatch to Loki.
This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS Lambda logs in Loki. +- [Logstash](https://grafana.com/docs/loki//send-data/logstash/) - If you are already using Logstash and/or Beats, this will be the easiest way to start. By adding our output plugin, you can quickly try Loki without making big configuration changes. These third-party clients also enable sending logs to Loki: diff --git a/docs/sources/send-data/k6/log-generation.md b/docs/sources/send-data/k6/log-generation.md index 635f042f90b8..8ad79309191b 100644 --- a/docs/sources/send-data/k6/log-generation.md +++ b/docs/sources/send-data/k6/log-generation.md @@ -61,8 +61,8 @@ export default () => { The second and third argument of the method take the lower and upper bound of the batch size. The resulting batch size is a random value between the two -arguments. This mimics the behaviour of a log client, such as Promtail or -the Grafana Agent, where logs are buffered and pushed once a certain batch size +arguments. This mimics the behavior of a log client, such as Grafana Alloy or Promtail, +where logs are buffered and pushed once a certain batch size is reached or after a certain time when no logs have been received. The batch size is not equal to the payload size, as the batch size only counts diff --git a/docs/sources/send-data/otel/_index.md b/docs/sources/send-data/otel/_index.md index 6fa17c317054..4b28cbf16c7c 100644 --- a/docs/sources/send-data/otel/_index.md +++ b/docs/sources/send-data/otel/_index.md @@ -1,6 +1,6 @@ --- title: Ingesting logs to Loki using OpenTelemetry Collector -menuTitle: OTEL Collector +menuTitle: OTel Collector description: Configuring the OpenTelemetry Collector to send logs to Loki. aliases: - ../clients/k6/ @@ -97,7 +97,7 @@ Since the OpenTelemetry protocol differs from the Loki storage model, here is ho - Timestamp: One of `LogRecord.TimeUnixNano` or `LogRecord.ObservedTimestamp`, based on which one is set. If neither is set, the ingestion timestamp will be used. -- LogLine: `LogRecord.Body` holds the body of the log. However, since Loki only supports Log body in string format, we will stringify non-string values using the [AsString method from the OTEL collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353). +- LogLine: `LogRecord.Body` holds the body of the log. However, since Loki only supports Log body in string format, we will stringify non-string values using the [AsString method from the OTel collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353). - [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}): Anything which can’t be stored in Index labels or LogLine is stored as Structured Metadata. Here is a non-exhaustive list of what will be stored in Structured Metadata to give a sense of what it will hold: - Resource Attributes not stored as Index labels are replicated and stored with each log entry. @@ -109,7 +109,7 @@ Things to note before ingesting OpenTelemetry logs to Loki: - Dots (.) are converted to underscores (_). Loki does not support `.` or any other special characters other than `_` in label names. The unsupported characters are replaced with an `_` while converting Attributes to Index Labels or Structured Metadata.
- Also, please note that while writing the queries, you must use the normalized format, i.e. use `_` instead of special characters while querying data using OTEL Attributes. + Also, please note that while writing the queries, you must use the normalized format, i.e. use `_` instead of special characters while querying data using OTel Attributes. For example, `service.name` in OTLP would become `service_name` in Loki. @@ -120,7 +120,7 @@ Things to note before ingesting OpenTelemetry logs to Loki: - Stringification of non-string Attribute values While converting Attribute values in OTLP to Index label values or Structured Metadata, any non-string values are converted to string using [AsString method from the OTEL collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353). + While converting Attribute values in OTLP to Index label values or Structured Metadata, any non-string values are converted to string using [AsString method from the OTel collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353). ### Changing the default mapping of OTLP to Loki Format diff --git a/docs/sources/send-data/promtail/_index.md b/docs/sources/send-data/promtail/_index.md index 7e560e661438..03bbf6487c39 100644 --- a/docs/sources/send-data/promtail/_index.md +++ b/docs/sources/send-data/promtail/_index.md @@ -12,6 +12,10 @@ Promtail is an agent which ships the contents of local logs to a private Grafana instance or [Grafana Cloud](/oss/loki). It is usually deployed to every machine that runs applications which need to be monitored. +{{< admonition type="note" >}} +Promtail is feature complete. All future feature development will occur in Grafana Alloy. +{{< /admonition >}} + It primarily: - Discovers targets diff --git a/docs/sources/send-data/promtail/installation.md b/docs/sources/send-data/promtail/installation.md index 4d2359e94c17..25a818458a80 100644 --- a/docs/sources/send-data/promtail/installation.md +++ b/docs/sources/send-data/promtail/installation.md @@ -9,6 +9,10 @@ weight: 100 # Install Promtail +{{< admonition type="note" >}} +Promtail is feature complete. All future feature development will occur in Grafana Alloy. +{{< /admonition >}} + Promtail is distributed as a binary, as a Docker container image, and as a Helm chart for installation in a Kubernetes cluster. diff --git a/docs/sources/setup/install/helm/concepts.md b/docs/sources/setup/install/helm/concepts.md index fd8f81ebe474..581498af89b2 100644 --- a/docs/sources/setup/install/helm/concepts.md +++ b/docs/sources/setup/install/helm/concepts.md @@ -21,7 +21,7 @@ By default Loki will be installed in the scalable mode. This consists of a read ## Dashboards -This chart includes dashboards for monitoring Loki. These require the scrape configs defined in the `monitoring.serviceMonitor` and `monitoring.selfMonitoring` sections described below. The dashboards are deployed via a config map which can be mounted on a Grafana instance. The Dashboard require an installation of the Grafana Agent and the Prometheus operator. The agent is installed with this chart. +This chart includes dashboards for monitoring Loki. These require the scrape configs defined in the `monitoring.serviceMonitor` and `monitoring.selfMonitoring` sections described below. The dashboards are deployed via a config map which can be mounted on a Grafana instance. The dashboards require an installation of the Grafana Agent and the Prometheus Operator. The agent is installed with this chart.
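As an illustration of the `monitoring` section referenced above, enabling the scrape configs that the dashboards depend on might look like the following sketch of chart values. This is an assumption, not part of the patch: the key names reflect the chart at the time of this change and should be verified against the values reference for your chart version.

```yaml
# values.yaml (sketch): enable the scrape configs the bundled dashboards
# rely on. Key names are assumed and should be checked per chart version.
monitoring:
  dashboards:
    enabled: true        # deploy the dashboard ConfigMaps
  serviceMonitor:
    enabled: true        # scrape Loki components via the Prometheus operator
  selfMonitoring:
    enabled: true        # collect Loki's own logs
    grafanaAgent:
      installOperator: true   # install the agent operator the chart relies on
```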
## Canary diff --git a/docs/sources/setup/migrate/migrate-from-distributed/index.md b/docs/sources/setup/migrate/migrate-from-distributed/index.md index 1618716fd26e..01b016b8a937 100644 --- a/docs/sources/setup/migrate/migrate-from-distributed/index.md +++ b/docs/sources/setup/migrate/migrate-from-distributed/index.md @@ -48,7 +48,7 @@ This leverages the fact that the new deployment adds a `app.kubernetes.io/compon Once the new cluster is up, add the appropriate data source in Grafana for the new cluster. Check that the following queries return results: - Confirm new and old logs are in the new deployment. Using the new deployment's Loki data source in Grafana, look for: - - Logs with a job that is unqiue to your existing Promtail or Grafana Agent, the one we adjusted above to exclude logs from the new deployment which is not yet pushing logs to the new deployment. If you can query those via the new deployment in shows we have not lost historical logs. + - Logs with a job that is unique to your existing Promtail or Grafana Agent, the one we adjusted above to exclude logs from the new deployment, which is not yet pushing logs to the new deployment. If you can query those via the new deployment, it shows we have not lost historical logs. - Logs with the label `job="loki/loki-read"`. The read component does not exist in `loki-distributed`, so this shows the new Loki cluster's self-monitoring is working correctly. - Confirm new logs are in the old deployment. Using the old deployment's Loki data source in Grafana, look for: - Logs with the label `job="loki/loki-read"`. Since you have excluded logs from the new deployment from going to the `loki-distributed` deployment, if you can query them through the `loki-distributed` Loki data source, that shows the ingesters have joined the same ring and are queryable from the `loki-distributed` queriers. diff --git a/docs/sources/setup/migrate/migrate-to-alloy/_index.md b/docs/sources/setup/migrate/migrate-to-alloy/_index.md new file mode 100644 index 000000000000..adb4dac9b3a1 --- /dev/null +++ b/docs/sources/setup/migrate/migrate-to-alloy/_index.md @@ -0,0 +1,25 @@ +--- +title: Migrate to Alloy +description: Provides links to documentation to migrate to Grafana Alloy. +weight: 100 +--- + +# Migrate to Alloy + +Grafana Alloy is the new name for the Grafana Labs distribution of the OpenTelemetry Collector. Grafana Agent Static, Grafana Agent Flow, and Grafana Agent Operator have been deprecated and are in Long-Term Support (LTS) through October 31, 2025. They will reach End-of-Life (EOL) on November 1, 2025. Grafana Labs has provided tools and migration documentation to assist you in migrating to Grafana Alloy. + +Read more about why we recommend migrating to [Grafana Alloy](https://grafana.com/blog/2024/04/09/grafana-alloy-opentelemetry-collector-with-prometheus-pipelines/). + +This section provides links to documentation for how to migrate to Alloy. For Promtail users, a sample input configuration for the migration tooling is sketched at the end of this patch.
+ +- [Migrate from Grafana Agent Static](https://grafana.com/docs/alloy/latest/tasks/migrate/from-static/) + +- [Migrate from Grafana Agent Flow](https://grafana.com/docs/alloy/latest/tasks/migrate/from-flow/) + +- [Migrate from Grafana Agent Operator](https://grafana.com/docs/alloy/latest/tasks/migrate/from-operator/) + +- [Migrate from OpenTelemetry Collector](https://grafana.com/docs/alloy/latest/tasks/migrate/from-otelcol/) + +- [Migrate from Prometheus](https://grafana.com/docs/alloy/latest/tasks/migrate/from-prometheus/) + +- [Migrate from Promtail](https://grafana.com/docs/alloy/latest/tasks/migrate/from-promtail/) diff --git a/docs/sources/setup/migrate/migrate-to-tsdb/_index.md b/docs/sources/setup/migrate/migrate-to-tsdb/_index.md index 963913e21ef9..49ba506dc553 100644 --- a/docs/sources/setup/migrate/migrate-to-tsdb/_index.md +++ b/docs/sources/setup/migrate/migrate-to-tsdb/_index.md @@ -2,7 +2,7 @@ title: Migrate to TSDB menuTitle: Migrate to TSDB description: Migration guide for moving from any of the older indexes to TSDB -weight: 100 +weight: 300 keywords: - migrate - tsdb
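As a complement to the Promtail migration link above, here is a rough sketch of the kind of Promtail configuration that Alloy's converter takes as input; per the migration guide linked above, the conversion is invoked along the lines of `alloy convert --source-format=promtail`. This example is illustrative only and not part of the patch: the endpoint, port, and paths are placeholder values.

```yaml
# promtail.yaml (sketch): a minimal configuration of the kind the
# Promtail-to-Alloy converter accepts. All values are placeholders.
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml              # where Promtail records read offsets

clients:
  - url: http://loki:3100/loki/api/v1/push   # Loki push endpoint

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log           # glob of files to tail
```

The converter emits equivalent components in Alloy's own configuration syntax; treat the generated file as a starting point and review it against the migration guides listed above.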