diff --git a/docs/getting_started.md b/docs/getting_started.md
index 9ae7fe60df..1c2ed4d407 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -35,6 +35,7 @@ You will configure two files:
Depending on what you want to do, we have a few different guides to configuring Data Prepper.
* [Trace Analytics](trace_analytics.md) - Learn how to set up Data Prepper for trace observability
+* [Log Ingestion](log_analytics.md) - Learn how to set up Data Prepper for log observability
* [Simple Pipeline](simple_pipelines.md) - Learn the basics of Data Prepper pipelines with some simple configurations.
## Running
@@ -67,6 +68,8 @@ how to configure the server.
Trace Analytics is an important Data Prepper use case. If you haven't yet configured it,
please visit the [Trace Analytics documentation](trace_analytics.md).
+Log Ingestion is also an important Data Prepper use case. To learn more, visit the [Log Ingestion documentation](log_analytics.md).
+
To monitor Data Prepper, please read the [Monitoring](monitoring.md) page.
## Other Examples
diff --git a/docs/images/Components.jpg b/docs/images/Components.jpg
deleted file mode 100644
index b243e75f03..0000000000
Binary files a/docs/images/Components.jpg and /dev/null differ
diff --git a/docs/images/LogAnalyticsComponents.png b/docs/images/LogAnalyticsComponents.png
new file mode 100644
index 0000000000..fc31fe33cf
Binary files /dev/null and b/docs/images/LogAnalyticsComponents.png differ
diff --git a/docs/images/Log_Ingestion_FluentBit_DataPrepper_OpenSearch.jpg b/docs/images/Log_Ingestion_FluentBit_DataPrepper_OpenSearch.jpg
index bf49c8798c..9c26194104 100644
Binary files a/docs/images/Log_Ingestion_FluentBit_DataPrepper_OpenSearch.jpg and b/docs/images/Log_Ingestion_FluentBit_DataPrepper_OpenSearch.jpg differ
diff --git a/docs/images/TraceAnalyticsComponents.png b/docs/images/TraceAnalyticsComponents.png
new file mode 100644
index 0000000000..62f43cf90e
Binary files /dev/null and b/docs/images/TraceAnalyticsComponents.png differ
diff --git a/docs/log_analytics.md b/docs/log_analytics.md
new file mode 100644
index 0000000000..04e7945256
--- /dev/null
+++ b/docs/log_analytics.md
@@ -0,0 +1,128 @@
+# Log Analytics
+
+## Introduction
+
+Data Prepper is an extensible, configurable, and scalable solution for log ingestion into OpenSearch and Amazon OpenSearch Service.
+Currently, Data Prepper is focused on receiving logs from [FluentBit](https://fluentbit.io/) via the
+[Http Source](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/http-source/README.md), and processing those logs with a [Grok Prepper](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/grok-prepper/README.md) before ingesting them into OpenSearch through the [OpenSearch sink](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/README.md).
+
+Here are all of the components for log analytics with FluentBit, Data Prepper, and OpenSearch:
+
+![Log Analytics Components](images/LogAnalyticsComponents.png)
+
+In your application environment you will have to run FluentBit.
+FluentBit can be containerized through Kubernetes, Docker, or Amazon ECS.
+It can also be run as an agent on EC2.
+You should configure the [FluentBit http output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/http) to export log data to Data Prepper.
+You will then have to deploy Data Prepper as an intermediate component and configure it to send
+the enriched log data to your OpenSearch cluster or Amazon OpenSearch Service domain. From there, you can
+use OpenSearch Dashboards to perform more intensive visualization and analysis.
+
+## Log Analytics Pipeline
+
+Log analytics pipelines in Data Prepper are extremely customizable. A simple pipeline is shown below.
+
+![Simple Log Ingestion Pipeline](images/Log_Ingestion_FluentBit_DataPrepper_OpenSearch.jpg)
+
+## Http Source
+
+The [Http Source](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/http-source/README.md) accepts log data from FluentBit.
+More specifically, this source accepts log data in a JSON array format.
+This source supports industry-standard encryption in the form of TLS/HTTPS and HTTP basic authentication.
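+
+As a rough sketch of what enabling those features can look like, here is a source section with TLS and basic authentication turned on. The option names below follow the Http Source README linked above (verify them against your Data Prepper version), and the file paths and credentials are placeholders:
+
+```yaml
+log-pipeline:
+  source:
+    http:
+      ssl: true
+      # Placeholder paths; point these at your own certificate and key
+      ssl_certificate_file: "/full/path/to/certificate.crt"
+      ssl_key_file: "/full/path/to/certificate.key"
+      authentication:
+        http_basic:
+          # Placeholder credentials
+          username: "my-user"
+          password: "my-password"
+```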
+
+## Preppers
+
+The Data Prepper 1.2 release will come with a [Grok Prepper](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/grok-prepper/README.md).
+The Grok Prepper can be an invaluable tool to structure and extract important fields from your logs in order to make them more queryable.
+
+The Grok Prepper comes with a wide variety of [default patterns](https://github.com/thekrakken/java-grok/blob/master/src/main/resources/patterns/patterns) that match against common log formats like Apache logs or syslog,
+but can easily accept any custom patterns that cater to your specific log format.
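+
+As an illustrative sketch, a custom pattern can be defined inline with the `pattern_definitions` option described in the Grok Prepper README and then referenced in a match. The pattern name and fields below are made up for this example:
+
+```yaml
+  prepper:
+    - grok:
+        pattern_definitions:
+          # Hypothetical pattern for log entries containing "status=OK"
+          MY_STATUS: "status=%{WORD:status}"
+        match:
+          log: [ "%{IP:client} %{MY_STATUS}" ]
+```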
+
+There are many more advanced Grok features that are not discussed here, so please read the [Grok Prepper documentation](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/grok-prepper/README.md) if you are interested.
+
+## OpenSearch sink
+
+We have a generic sink that writes the data to OpenSearch as the destination. The [opensearch sink](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/README.md) has configuration options related to an OpenSearch cluster like endpoint, SSL/Username, index name, index template, index state management, etc.
+
+## Pipeline Configuration
+
+Create a file named `pipeline.yaml` and paste in the following example pipeline.
+
+```yaml
+log-pipeline:
+ source:
+ http:
+ # This is a simple http source with SSL disabled.
+ # However, you can easily enable SSL and http basic authentication for this source
+ ssl: false
+ # The default port that will listen for incoming logs
+ port: 2021
+ prepper:
+ - grok:
+ match:
+ # This will match logs with a "log" key against the COMMONAPACHELOG pattern (ex: { "log": "actual apache log..." } )
+          # You should change this to match what your logs look like. See the Grok documentation to get started.
+ log: [ "%{COMMONAPACHELOG}" ]
+ sink:
+ - opensearch:
+ hosts: [ "https://localhost:9200" ]
+ # Change to your credentials
+ username: "admin"
+ password: "admin"
+ # Add a certificate file if you are accessing an OpenSearch cluster with a self-signed certificate
+ #cert: /path/to/cert
+ # If you are connecting to an Amazon OpenSearch Service domain without
+ # Fine-Grained Access Control, enable these settings. Comment out the
+ # username and password above.
+ #aws_sigv4: true
+ #aws_region: us-east-1
+            # Since we are grok matching for Apache logs, it makes sense to send them to an OpenSearch index named apache_logs.
+ # You should change this to correspond with how your OpenSearch indices are set up.
+ index: apache_logs
+```
+
+This pipeline configuration is an example of Apache log ingestion. Don't forget that you can easily configure the Grok Prepper for your own custom logs.
+
+You will need to modify the configuration above for your OpenSearch cluster.
+
+The main changes you will need to make are:
+
+* `hosts` - Set to the addresses of your OpenSearch hosts
+* `index` - Change this to the OpenSearch index you want to send logs to
+* `username` - Provide your OpenSearch username
+* `password` - Provide your OpenSearch password
+* `aws_sigv4` - If you use Amazon OpenSearch Service with AWS signing, set this to true. It will sign requests with the default AWS credentials provider.
+* `aws_region` - If you use Amazon OpenSearch Service with AWS signing, set this value to your region.
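+
+Putting the last two settings together, here is a minimal sketch of the sink section for an Amazon OpenSearch Service domain without Fine-Grained Access Control, based on the commented-out settings in the example above. The domain endpoint is a placeholder:
+
+```yaml
+  sink:
+    - opensearch:
+        # Placeholder endpoint; replace with your own domain endpoint
+        hosts: [ "https://my-domain.us-east-1.es.amazonaws.com" ]
+        aws_sigv4: true
+        aws_region: us-east-1
+        index: apache_logs
+```
+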
+## FluentBit
+
+You will have to run FluentBit in your service environment. You can find the FluentBit installation guide [here](https://docs.fluentbit.io/manual/installation/getting-started-with-fluent-bit).
+Please ensure that you configure the [FluentBit http output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/http) to point to your Data Prepper Http Source. Below is an example `fluent-bit.conf` that tails a log file named `test.log` and forwards the logs to a locally running Data Prepper http source, which listens
+on port 2021 by default. Note that you should adjust the file `path`, and the output `Host` and `Port`, according to how and where you have FluentBit and Data Prepper running.
+
+```
+[INPUT]
+ name tail
+ refresh_interval 5
+ path test.log
+ read_from_head true
+
+[OUTPUT]
+ Name http
+ Match *
+ Host localhost
+ Port 2021
+ URI /log/ingest
+ Format json
+```
+
+## Next Steps
+
+Follow the [Log Ingestion Demo Guide](../examples/log-ingestion/log_ingestion_demo_guide.md) to get a specific example of Apache log ingestion from `FluentBit -> Data Prepper -> OpenSearch` running through Docker.
+
+In the future, Data Prepper will contain additional sources and preppers which will make more complex log analytics pipelines available. Check out our [Roadmap](https://github.com/opensearch-project/data-prepper/projects/1) to see what is coming.
+
+If there is a specific source, prepper, or sink that you would like to include in your log analytics workflow, and it is not currently on the Roadmap, please bring it to our attention by opening a GitHub issue. Additionally, if you
+are interested in contributing, see our [Contributing Guidelines](../CONTRIBUTING.md) as well as our [Developer Guide](developer_guide.md) and [Plugin Development Guide](plugin_development.md).
\ No newline at end of file
diff --git a/docs/trace_analytics.md b/docs/trace_analytics.md
index fb91ff2947..4a0f2baf52 100644
--- a/docs/trace_analytics.md
+++ b/docs/trace_analytics.md
@@ -10,10 +10,10 @@ The transformed trace data is then visualized using the
[Trace Analytics OpenSearch Dashboards plugin](https://opensearch.org/docs/monitoring-plugins/trace/ta-dashboards/),
which provides at-a-glance visibility into your application performance, along with the ability to drill down on individual traces.
-Here is how all the components work in trace analytics,
+Here is how all the components work in trace analytics:
-![Trace Analytics Pipeline](images/Components.jpg)
+![Trace Analytics Pipeline](images/TraceAnalyticsComponents.png)
diff --git a/examples/log-ingestion/docker-compose.yaml b/examples/log-ingestion/docker-compose.yaml
new file mode 100644
index 0000000000..a7fef69990
--- /dev/null
+++ b/examples/log-ingestion/docker-compose.yaml
@@ -0,0 +1,73 @@
+version: '3'
+services:
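+  # Fluent Bit tails the mounted test.log file and forwards each new line to Data Prepper's http source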
+ fluent-bit:
+ container_name: fluent-bit
+ image: fluent/fluent-bit
+ volumes:
+ - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
+ - ./test.log:/var/log/test.log
+ networks:
+ - opensearch-net
+ opensearch-node1:
+ image: opensearchproject/opensearch:latest
+ container_name: opensearch-node1
+ environment:
+ - cluster.name=opensearch-cluster
+ - node.name=opensearch-node1
+ - discovery.seed_hosts=opensearch-node1,opensearch-node2
+ - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
+ - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
+ - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
+ ulimits:
+ memlock:
+ soft: -1
+ hard: -1
+ nofile:
+ soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
+ hard: 65536
+ volumes:
+ - opensearch-data1:/usr/share/opensearch/data
+ ports:
+ - 9200:9200
+ - 9600:9600 # required for Performance Analyzer
+ networks:
+ - opensearch-net
+ opensearch-node2:
+ image: opensearchproject/opensearch:latest
+ container_name: opensearch-node2
+ environment:
+ - cluster.name=opensearch-cluster
+ - node.name=opensearch-node2
+ - discovery.seed_hosts=opensearch-node1,opensearch-node2
+ - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
+ - bootstrap.memory_lock=true
+ - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
+ ulimits:
+ memlock:
+ soft: -1
+ hard: -1
+ nofile:
+ soft: 65536
+ hard: 65536
+ volumes:
+ - opensearch-data2:/usr/share/opensearch/data
+ networks:
+ - opensearch-net
+ opensearch-dashboards:
+ image: opensearchproject/opensearch-dashboards:latest
+ container_name: opensearch-dashboards
+ ports:
+ - 5601:5601
+ expose:
+ - "5601"
+ environment:
+ OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]'
+ networks:
+ - opensearch-net
+
+volumes:
+ opensearch-data1:
+ opensearch-data2:
+
+networks:
+ opensearch-net:
\ No newline at end of file
diff --git a/examples/log-ingestion/fluent-bit.conf b/examples/log-ingestion/fluent-bit.conf
new file mode 100644
index 0000000000..b7a1a6c5dc
--- /dev/null
+++ b/examples/log-ingestion/fluent-bit.conf
@@ -0,0 +1,13 @@
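+# Tail the mounted test.log file for new log lines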
+[INPUT]
+ name tail
+ refresh_interval 5
+ path /var/log/test.log
+ read_from_head true
+
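+# Forward collected logs as JSON to the Data Prepper container's http source on port 2021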
+[OUTPUT]
+ Name http
+ Match *
+ Host data-prepper
+ Port 2021
+ URI /log/ingest
+ Format json
\ No newline at end of file
diff --git a/examples/log-ingestion/log_ingestion_demo_guide.md b/examples/log-ingestion/log_ingestion_demo_guide.md
index 0a7ee9529e..e06f21af2b 100644
--- a/examples/log-ingestion/log_ingestion_demo_guide.md
+++ b/examples/log-ingestion/log_ingestion_demo_guide.md
@@ -1,6 +1,6 @@
# Data Prepper Log Ingestion Demo Guide
-This is a guide that will walk users through setting up a sample Data Prepper for log ingestion.
+This is a guide that will walk users through setting up a sample Data Prepper pipeline for log ingestion.
This guide will go through the steps required to create a simple log ingestion pipeline from \
Fluent Bit → Data Prepper → OpenSearch. This log ingestion flow is shown in the diagram below.
@@ -8,113 +8,80 @@ Fluent Bit → Data Prepper → OpenSearch. This log ingestion flow is shown in
## List of Components
-- An OpenSearch domain running locally. The steps to get started with OpenSearch can be found [here](https://opensearch.org/downloads.html).
-- Data Prepper, which includes a `pipeline.yaml` and a `data-prepper-config.yaml`
+- An OpenSearch cluster running through Docker
- A FluentBit agent running through Docker
+- Data Prepper, which includes a `log_pipeline.yaml`
- An Apache Log Generator in the form of a Python script
-### Data Prepper Setup
+### FluentBit and OpenSearch Setup
-1. Pull down the Data Prepper repository
+1. Take a look at the [docker-compose.yaml](docker-compose.yaml). This `docker-compose.yaml` will pull the FluentBit and OpenSearch Docker images and run them in the `log-ingestion_opensearch-net` Docker network.
-```
-git clone https://github.com/opensearch-project/data-prepper.git
-```
-2. Build the Data Prepper jar. You must have JDK 14 or 15 in order to build successfully. For more info on building from source, see the
-[Data Prepper Developer Guide](../../docs/developer_guide.md)
-
-```
-./gradlew build`
-```
-
-3. Create the following pipeline.yaml. This configuration will take logs sent to the [http source](../../data-prepper-plugins/http-source),
-process them with the [Grok Prepper](../../data-prepper-plugins/grok-prepper) by matching against the `COMMONAPACHELOG` pattern,
-and send the processed logs to a local [OpenSearch sink](../../data-prepper-plugins/opensearch) to an index named `grok-prepper`.
-
-```yaml
-grok-pipeline:
- source:
- http:
- prepper:
- - grok:
- match:
- log: [ "%{COMMONAPACHELOG}" ]
- sink:
- - opensearch:
- hosts: [ "https://localhost:9200" ]
- username: admin
- password: admin
- index: grok-prepper
-```
-
-4. Create the following `data-prepper-config.yaml`
+2. Now take a look at the [fluent-bit.conf](fluent-bit.conf). This config tells FluentBit to tail the `/var/log/test.log` file for logs and uses the FluentBit http output plugin to forward these logs to the http source of Data Prepper, which runs by default on port 2021. The `fluent-bit.conf` file
+is mounted as a Docker volume through the `docker-compose.yaml`.
-```yaml
-ssl: false
-```
-5. From the root of the data prepper repo,
- run the Data Prepper jar with the `pipeline.yaml` and `data-prepper-config.yaml` as command line arguments.
+3. An empty file named `test.log` has been created. This file is also mounted through the `docker-compose.yaml`, and is the file
+FluentBit will tail to collect logs.
-```
-java -jar data-prepper-core/build/libs/data-prepper-core-1.2.0-SNAPSHOT.jar /full/path/to/pipeline.yaml /full/path/to/data-prepper-config.yaml
-```
-
-If you see an error that looks like this: `Caused by: java.lang.RuntimeException: Connection refused`, then that probably means you don't have OpenSearch running locally.
-Go [here](https://opensearch.org/downloads.html) to do so before moving on to the next step of this guide.
-If Data Prepper is running correctly, you should see something similar to the following line as the latest output in your terminal.
+4. Now that you understand a bit more about how FluentBit and OpenSearch are set up, run them with:
```
-INFO com.amazon.dataprepper.pipeline.ProcessWorker - grok-pipeline Worker: No records received from buffer
+docker-compose up
```
-### FluentBit Setup
+### Data Prepper Setup
+
+1. Build the Data Prepper 1.2 SNAPSHOT Docker image by following the instructions found [here](../../release/docker/README.md).
-The FluentBit setup includes a `docker-compose.yaml`, a `fluent-bit.conf`, and a log file which corresponds to where FluentBit will look for logs.
+
+2. Take a look at [log_pipeline.yaml](log_pipeline.yaml). This configuration will take logs sent to the [http source](../../data-prepper-plugins/http-source),
+process them with the [Grok Prepper](../../data-prepper-plugins/grok-prepper) by matching against the `COMMONAPACHELOG` pattern,
+and send the processed logs through the local [OpenSearch sink](../../data-prepper-plugins/opensearch) to an index named `apache_logs`.
-1. Create the following `docker-compose.yaml`. This `docker-compose.yaml` will pull the FluentBit Docker image, and will mount your `fluent-bit.conf` and log file to this Docker image. In this case, the log file is named `test.log`.
-```yaml
-version: "3.7"
+3. Run the Data Prepper Docker image with the `log_pipeline.yaml` from step 2 passed in. This command attaches the Data Prepper container to the Docker network `log-ingestion_opensearch-net` so that
+FluentBit is able to send logs to the http source of Data Prepper.
-services:
- fluent-bit:
- image: fluent/fluent-bit
- volumes:
- - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
- - ./test.log:/var/log/test.log
+```
+docker run --name data-prepper -v /full/path/to/log_pipeline.yaml:/usr/share/data-prepper/pipelines.yaml --network "log-ingestion_opensearch-net" opensearch-data-prepper:1.2.0-SNAPSHOT
```
-2. Create the following `fluent-bit.conf`. This config will tell FluentBit to tail the `/var/log/test.log` file for logs, and uses the FluentBit http output plugin to forward these logs to the http source of Data Prepper, which runs by default on port 2021.
+If Data Prepper is running correctly, you should see something similar to the following line as the latest output in your terminal.
```
-[INPUT]
- name tail
- refresh_interval 5
- path /var/log/test.log
- read_from_head true
-
-[OUTPUT]
- Name http
- Match *
- Host host.docker.internal
- Port 2021
- URI /log/ingest
- Format json
+INFO com.amazon.dataprepper.pipeline.ProcessWorker - log-pipeline Worker: No records received from buffer
```
-3. Create an empty file named `test.log`. This file can be named whatever you like, but the `docker-compose.yaml` will need to be updated accordingly.
-
+### Apache Log Generator
-4. Now that you have the `docker-compose.yaml`, `fluent-bit.conf`, and the `test.log` files, FluentBit is ready for log ingestion. Start FluentBit with
+Note that if you just want to see the log ingestion workflow in action, you can simply copy and paste some logs into the `test.log` file yourself without using the Python [Fake Apache Log Generator](https://github.com/graytaylor0/Fake-Apache-Log-Generator).
+Here is a sample batch of randomly generated Apache logs if you choose to take this route.
```
-docker-compose up
+63.173.168.120 - - [04/Nov/2021:15:07:25 -0500] "GET /search/tag/list HTTP/1.0" 200 5003
+71.52.186.114 - - [04/Nov/2021:15:07:27 -0500] "GET /search/tag/list HTTP/1.0" 200 5015
+223.195.133.151 - - [04/Nov/2021:15:07:29 -0500] "GET /posts/posts/explore HTTP/1.0" 200 5049
+249.189.38.1 - - [04/Nov/2021:15:07:31 -0500] "GET /app/main/posts HTTP/1.0" 200 5005
+36.155.45.2 - - [04/Nov/2021:15:07:33 -0500] "GET /search/tag/list HTTP/1.0" 200 5001
+4.54.90.166 - - [04/Nov/2021:15:07:35 -0500] "DELETE /wp-content HTTP/1.0" 200 4965
+214.246.93.195 - - [04/Nov/2021:15:07:37 -0500] "GET /apps/cart.jsp?appID=4401 HTTP/1.0" 200 5008
+72.108.181.108 - - [04/Nov/2021:15:07:39 -0500] "GET /wp-content HTTP/1.0" 200 5020
+194.43.128.202 - - [04/Nov/2021:15:07:41 -0500] "GET /app/main/posts HTTP/1.0" 404 4943
+14.169.135.206 - - [04/Nov/2021:15:07:43 -0500] "DELETE /wp-content HTTP/1.0" 200 4985
+208.0.179.237 - - [04/Nov/2021:15:07:45 -0500] "GET /explore HTTP/1.0" 200 4953
+134.29.61.53 - - [04/Nov/2021:15:07:47 -0500] "GET /explore HTTP/1.0" 200 4937
+213.229.161.38 - - [04/Nov/2021:15:07:49 -0500] "PUT /posts/posts/explore HTTP/1.0" 200 5092
+82.41.77.121 - - [04/Nov/2021:15:07:51 -0500] "GET /app/main/posts HTTP/1.0" 200 5016
```
-### Apache Log Generator
+Additionally, if you just want to test a single log, you can send it to `test.log` directly with:
+
+```
+echo '63.173.168.120 - - [04/Nov/2021:15:07:25 -0500] "GET /search/tag/list HTTP/1.0" 200 5003' >> test.log
+```
In order to simulate an application generating logs, a simple Python script will be used. This script only runs with Python 2. You can download this script by running
@@ -140,17 +107,17 @@ You should now be able to check your terminal output for FluentBit and Data Prep
The following FluentBit output means that FluentBit was able to forward logs to the Data Prepper http source
```
-fluent-bit_1 | [2021/10/30 17:16:39] [ info] [output:http:http.0] host.docker.internal:2021, HTTP status=200
+fluent-bit | [2021/10/30 17:16:39] [ info] [output:http:http.0] data-prepper:2021, HTTP status=200
```
The following Data Prepper output indicates that Data Prepper is successfully processing logs from FluentBit
```
-2021-10-30T12:17:17,474 [grok-pipeline-prepper-worker-1-thread-1] INFO com.amazon.dataprepper.pipeline.ProcessWorker - grok-pipeline Worker: Processing 2 records from buffer
+2021-10-30T12:17:17,474 [log-pipeline-prepper-worker-1-thread-1] INFO com.amazon.dataprepper.pipeline.ProcessWorker - log-pipeline Worker: Processing 2 records from buffer
```
Finally, head into OpenSearch Dashboards ([http://localhost:5601](http://localhost:5601)) to view your processed logs.
You will need to create an index pattern for the index provided in your `log_pipeline.yaml` in order to see them. You can do this by going to
-`Stack Management -> Index Pattterns`. Now start typing in the name of the index you sent logs to (in this guide it was `grok-prepper`),
+`Stack Management -> Index Patterns`. Now start typing in the name of the index you sent logs to (in this guide it was `apache_logs`),
and you should see that the index pattern matches 1 source. Click `Create Index Pattern`, and you should then be able to go back to
the `Discover` tab to see your processed logs.
diff --git a/examples/log-ingestion/log_pipeline.yaml b/examples/log-ingestion/log_pipeline.yaml
new file mode 100644
index 0000000000..9ea8d5323a
--- /dev/null
+++ b/examples/log-ingestion/log_pipeline.yaml
@@ -0,0 +1,15 @@
+log-pipeline:
+ source:
+ http:
+ ssl: false
+ prepper:
+ - grok:
+ match:
+ log: [ "%{COMMONAPACHELOG}" ]
+ sink:
+ - opensearch:
+ hosts: [ "https://opensearch-node1:9200" ]
+ insecure: true
+ username: admin
+ password: admin
+ index: apache_logs
\ No newline at end of file
diff --git a/examples/log-ingestion/test.log b/examples/log-ingestion/test.log
new file mode 100644
index 0000000000..8b13789179
--- /dev/null
+++ b/examples/log-ingestion/test.log
@@ -0,0 +1 @@
+