feat: Add datadog destination
fdmsantos committed Oct 14, 2022
1 parent 9511cc3 commit 73e6604
Showing 11 changed files with 230 additions and 31 deletions.
77 changes: 53 additions & 24 deletions README.md
@@ -7,14 +7,17 @@ Supports all destinations and all Kinesis Firehose Features.

* [Features](#features)
* [How to Use](#how-to-use)
* [Kinesis Data Stream as Source](#kinesis-data-stream-as-source)
* [Kinesis Data Stream Encrypted](#kinesis-data-stream-encrypted)
* [Direct Put as Source](#direct-put-as-source)
* [S3 destination](#s3-destination)
* [Redshift Destination](#redshift-destination)
* [Elasticsearch / Opensearch Destination](#elasticsearch--opensearch-destination)
* [Splunk Destination](#splunk-destination)
* [HTTP Endpoint Destination](#http-endpoint-destination)
* [Sources](#sources)
* [Kinesis Data Stream](#kinesis-data-stream)
* [Kinesis Data Stream Encrypted](#kinesis-data-stream-encrypted)
* [Direct Put](#direct-put)
* [Destinations](#destinations)
* [S3](#s3)
* [Redshift](#redshift)
* [Elasticsearch / Opensearch](#elasticsearch--opensearch)
* [Splunk](#splunk)
* [HTTP Endpoint](#http-endpoint)
* [Datadog](#datadog)
* [Server Side Encryption](#server-side-encryption)
* [Data Transformation with Lambda](#data-transformation-with-lambda)
* [Data Format Conversion](#data-format-conversion)
@@ -54,7 +57,9 @@ Supports all destinations and all Kinesis Firehose Features.

## How to Use

### Kinesis Data Stream as Source
### Sources

#### Kinesis Data Stream

**To enable it:** `enable_kinesis_source = true`

@@ -70,15 +75,15 @@ module "firehose" {
}
```

#### Kinesis Data Stream Encrypted
##### Kinesis Data Stream Encrypted

If the Kinesis Data Stream is encrypted, this information must be passed to the module.

**To enable it:** `kinesis_source_is_encrypted = true`

**KMS Key:** use the `kinesis_source_kms_arn` variable to specify the KMS key; the module adds the permissions needed to decrypt the Kinesis Data Stream to the Firehose role policy.
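
A minimal sketch combining an encrypted Kinesis source with an S3 destination (variable names beyond the two above, such as `kinesis_source_stream_arn` and `s3_bucket_arn`, are assumptions based on the module's naming conventions; check the inputs table below):

```hcl
module "firehose" {
  source                      = "fdmsantos/kinesis-firehose/aws"
  version                     = "x.x.x"
  name                        = "firehose-delivery-stream"
  enable_kinesis_source       = true
  kinesis_source_stream_arn   = "<kinesis_stream_arn>" # assumed variable name
  kinesis_source_is_encrypted = true
  kinesis_source_kms_arn      = "<kms_key_arn>"
  destination                 = "s3"
  s3_bucket_arn               = "<bucket_arn>" # assumed variable name
}
```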

### Direct Put as Source
#### Direct Put

```hcl
module "firehose" {
@@ -90,7 +95,9 @@ module "firehose" {
}
```

### S3 destination
### Destinations

#### S3

**To Enabled It:** `destination = "s3" or destination = "extended_s3"`

@@ -110,7 +117,7 @@ module "firehose" {
}
```

### Redshift Destination
#### Redshift

**To Enabled It:** `destination = "redshift"`

@@ -133,7 +140,7 @@ module "firehose" {
}
```

### Elasticsearch / Opensearch Destination
#### Elasticsearch / Opensearch

**To Enabled It:** `destination = "elasticsearch" or destination = "opensearch"`

@@ -150,7 +157,7 @@ module "firehose" {
}
```

### Splunk Destination
#### Splunk

**To Enabled It:** `destination = "splunk"`

@@ -170,7 +177,7 @@ module "firehose" {
}
```

### HTTP Endpoint Destination
#### HTTP Endpoint

**To Enabled It:** `destination = "http_endpoint"`

@@ -206,6 +213,25 @@ module "firehose" {
}
```

#### Datadog

**To Enabled It:** `destination = "datadog"`

**Variables Prefix:** `http_endpoint_` and `datadog_endpoint_type`

**See [HTTP Endpoint](#http-endpoint) for more details, and [Destinations Mapping](#destinations-mapping) for the differences between the http_endpoint and datadog destinations.**

```hcl
module "firehose" {
source = "fdmsantos/kinesis-firehose/aws"
version = "x.x.x"
name = "firehose-delivery-stream"
destination = "datadog"
datadog_endpoint_type = "metrics_eu"
http_endpoint_access_key = "<datadog_access_key>"
}
```

### Server Side Encryption

**Supported By:** Direct Put source only
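
A minimal sketch of enabling server-side encryption on a Direct Put stream, assuming the module exposes `enable_sse`, `sse_kms_key_type`, and `sse_kms_key_arn` variables (these names are not visible in this diff, so treat them as assumptions and check the inputs table below):

```hcl
module "firehose" {
  source           = "fdmsantos/kinesis-firehose/aws"
  version          = "x.x.x"
  name             = "firehose-delivery-stream"
  destination      = "s3"
  s3_bucket_arn    = "<bucket_arn>"         # assumed variable name
  enable_sse       = true                   # assumed variable name
  sse_kms_key_type = "CUSTOMER_MANAGED_CMK" # assumed; AWS_OWNED_CMK is the Firehose default
  sse_kms_key_arn  = "<kms_key_arn>"        # assumed variable name
}
```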
@@ -503,13 +529,14 @@ module "firehose" {

The destination variable configured in the module is mapped to a valid Firehose destination.

| Module Destination | Firehose Destination | Differences |
|------------------------------|----------------------|-------------------------------------------------------------------------|
| s3 and extended_s3 | extended_s3 | There is no difference between s3 or extended_s3 destinations |
| redshift | redshift | |
| splunk | splunk | |
| opensearch and elasticsearch | elasticsearch | There is no difference between opensearch or elasticsearch destinations |
| http_endpoint | http_endpoint | |
| Module Destination | Firehose Destination | Differences |
|------------------------------|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| s3 and extended_s3 | extended_s3 | There is no difference between s3 or extended_s3 destinations |
| redshift | redshift | |
| splunk | splunk | |
| opensearch and elasticsearch | elasticsearch | There is no difference between opensearch or elasticsearch destinations |
| http_endpoint | http_endpoint | |
| datadog                      | http_endpoint        | Unlike http_endpoint, the http_endpoint_url and http_endpoint_name variables are not supported; the datadog_endpoint_type variable must be configured instead |

## Examples

@@ -523,6 +550,7 @@ The destination variable configured in module is mapped to firehose valid destin
- [Public Splunk](https://github.com/fdmsantos/terraform-aws-kinesis-firehose/tree/main/examples/splunk/public-splunk) - Creates a Kinesis Firehose Stream with public splunk as destination.
- [Splunk In VPC](https://github.com/fdmsantos/terraform-aws-kinesis-firehose/tree/main/examples/splunk/splunk-in-vpc) - Creates a Kinesis Firehose Stream with splunk in VPC as destination.
- [Custom Http Endpoint](https://github.com/fdmsantos/terraform-aws-kinesis-firehose/tree/main/examples/http-endpoint/custom-http-endpoint) - Creates a Kinesis Firehose Stream with custom http endpoint as destination.
- [Datadog](https://github.com/fdmsantos/terraform-aws-kinesis-firehose/tree/main/examples/http-endpoint/datadog) - Creates a Kinesis Firehose Stream with Datadog Europe metrics as destination.


<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
@@ -644,6 +672,7 @@ No modules.
| <a name="input_data_format_conversion_parquet_max_padding"></a> [data\_format\_conversion\_parquet\_max\_padding](#input\_data\_format\_conversion\_parquet\_max\_padding) | The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The value is in bytes | `number` | `0` | no |
| <a name="input_data_format_conversion_parquet_page_size"></a> [data\_format\_conversion\_parquet\_page\_size](#input\_data\_format\_conversion\_parquet\_page\_size) | Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The value is in bytes | `number` | `1048576` | no |
| <a name="input_data_format_conversion_parquet_writer_version"></a> [data\_format\_conversion\_parquet\_writer\_version](#input\_data\_format\_conversion\_parquet\_writer\_version) | Indicates the version of row format to output. | `string` | `"V1"` | no |
| <a name="input_datadog_endpoint_type"></a> [datadog\_endpoint\_type](#input\_datadog\_endpoint\_type) | Endpoint type to datadog destination | `string` | `"logs_eu"` | no |
| <a name="input_destination"></a> [destination](#input\_destination) | This is the destination to where the data is delivered | `string` | n/a | yes |
| <a name="input_destination_log_group_name"></a> [destination\_log\_group\_name](#input\_destination\_log\_group\_name) | The CloudWatch group name for destination logs | `string` | `null` | no |
| <a name="input_destination_log_stream_name"></a> [destination\_log\_stream\_name](#input\_destination\_log\_stream\_name) | The CloudWatch log stream name for destination logs | `string` | `null` | no |
@@ -677,7 +706,7 @@ No modules.
| <a name="input_http_endpoint_enable_request_configuration"></a> [http\_endpoint\_enable\_request\_configuration](#input\_http\_endpoint\_enable\_request\_configuration) | The request configuration | `bool` | `false` | no |
| <a name="input_http_endpoint_name"></a> [http\_endpoint\_name](#input\_http\_endpoint\_name) | The HTTP endpoint name | `string` | `null` | no |
| <a name="input_http_endpoint_request_configuration_common_attributes"></a> [http\_endpoint\_request\_configuration\_common\_attributes](#input\_http\_endpoint\_request\_configuration\_common\_attributes) | Describes the metadata sent to the HTTP endpoint destination. The variable is list. Each element is map with two keys , name and value, that corresponds to common attribute name and value | `list(map(string))` | `[]` | no |
| <a name="input_http_endpoint_request_configuration_content_encoding"></a> [http\_endpoint\_request\_configuration\_content\_encoding](#input\_http\_endpoint\_request\_configuration\_content\_encoding) | Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination | `string` | `"NONE"` | no |
| <a name="input_http_endpoint_request_configuration_content_encoding"></a> [http\_endpoint\_request\_configuration\_content\_encoding](#input\_http\_endpoint\_request\_configuration\_content\_encoding) | Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination | `string` | `"GZIP"` | no |
| <a name="input_http_endpoint_retry_duration"></a> [http\_endpoint\_retry\_duration](#input\_http\_endpoint\_retry\_duration) | Total amount of seconds Firehose spends on retries. This duration starts after the initial attempt fails, It does not include the time periods during which Firehose waits for acknowledgment from the specified destination after each attempt | `number` | `300` | no |
| <a name="input_http_endpoint_url"></a> [http\_endpoint\_url](#input\_http\_endpoint\_url) | The HTTP endpoint URL to which Kinesis Firehose sends your data | `string` | `null` | no |
| <a name="input_kinesis_source_is_encrypted"></a> [kinesis\_source\_is\_encrypted](#input\_kinesis\_source\_is\_encrypted) | Indicates if Kinesis data stream source is encrypted | `bool` | `false` | no |
2 changes: 1 addition & 1 deletion examples/http-endpoint/custom-http-endpoint/README.md
@@ -59,7 +59,7 @@ http_endpoint_access_key = "<http_endpoint_access_key>"
| <a name="input_http_endpoint_access_key"></a> [http\_endpoint\_access\_key](#input\_http\_endpoint\_access\_key) | Http Endpoint Access Key | `string` | n/a | yes |
| <a name="input_http_endpoint_name"></a> [http\_endpoint\_name](#input\_http\_endpoint\_name) | Http Endpoint Name | `string` | n/a | yes |
| <a name="input_http_endpoint_url"></a> [http\_endpoint\_url](#input\_http\_endpoint\_url) | Http Endpoint URL | `string` | n/a | yes |
| <a name="input_name_prefix"></a> [name\_prefix](#input\_name\_prefix) | Name prefix to use in resources | `string` | `"firehose-to-splunk"` | no |
| <a name="input_name_prefix"></a> [name\_prefix](#input\_name\_prefix) | Name prefix to use in resources | `string` | `"firehose"` | no |

## Outputs

2 changes: 1 addition & 1 deletion examples/http-endpoint/custom-http-endpoint/variables.tf
@@ -1,7 +1,7 @@
variable "name_prefix" {
description = "Name prefix to use in resources"
type = string
default = "firehose-to-splunk"
default = "firehose"
}

variable "http_endpoint_name" {
65 changes: 65 additions & 0 deletions examples/http-endpoint/datadog/README.md
@@ -0,0 +1,65 @@
# Datadog

Configuration in this directory creates a Kinesis Firehose stream with Direct Put as source and the Datadog Europe metrics URL as destination.

This example can be tested with the Demo Data feature in the Kinesis Firehose console.

## Usage

To run this example you need to execute:

```bash
$ terraform init
$ terraform plan
$ terraform apply
```

It's necessary to configure the following variables:

```hcl
http_endpoint_access_key = "<http_endpoint_access_key>"
```

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 0.13.1 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.4 |
| <a name="requirement_random"></a> [random](#requirement\_random) | >= 2.0 |

## Providers

| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 4.4 |
| <a name="provider_random"></a> [random](#provider\_random) | >= 2.0 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| <a name="module_firehose"></a> [firehose](#module\_firehose) | ../../../ | n/a |

## Resources

| Name | Type |
|------|------|
| [aws_kms_key.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kms_key) | resource |
| [aws_s3_bucket.s3](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket) | resource |
| [random_pet.this](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet) | resource |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_http_endpoint_access_key"></a> [http\_endpoint\_access\_key](#input\_http\_endpoint\_access\_key) | Datadog Access Key | `string` | n/a | yes |
| <a name="input_name_prefix"></a> [name\_prefix](#input\_name\_prefix) | Name prefix to use in resources | `string` | `"firehose-to-datadog"` | no |

## Outputs

| Name | Description |
|------|-------------|
| <a name="output_firehose_role"></a> [firehose\_role](#output\_firehose\_role) | Firehose Role |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
43 changes: 43 additions & 0 deletions examples/http-endpoint/datadog/main.tf
@@ -0,0 +1,43 @@
resource "random_pet" "this" {
length = 2
}

resource "aws_s3_bucket" "s3" {
bucket = "${var.name_prefix}-destination-bucket-${random_pet.this.id}"
force_destroy = true
}

resource "aws_kms_key" "this" {
description = "${var.name_prefix}-kms-key"
deletion_window_in_days = 7
}

module "firehose" {
source = "../../../"
name = "${var.name_prefix}-delivery-stream"
destination = "datadog"
buffer_interval = 60
datadog_endpoint_type = "metrics_eu"
http_endpoint_access_key = var.http_endpoint_access_key
http_endpoint_retry_duration = 400
http_endpoint_enable_request_configuration = true
http_endpoint_request_configuration_content_encoding = "GZIP"
http_endpoint_request_configuration_common_attributes = [
{
name = "testname"
value = "testvalue"
},
{
name = "testname2"
value = "testvalue2"
}
]
s3_backup_mode = "All"
s3_backup_prefix = "backup/"
s3_backup_bucket_arn = aws_s3_bucket.s3.arn
s3_backup_buffer_interval = 100
s3_backup_buffer_size = 100
s3_backup_compression = "GZIP"
s3_backup_enable_encryption = true
s3_backup_kms_key_arn = aws_kms_key.this.arn
}
4 changes: 4 additions & 0 deletions examples/http-endpoint/datadog/outputs.tf
@@ -0,0 +1,4 @@
output "firehose_role" {
description = "Firehose Role"
value = module.firehose.kinesis_firehose_role_arn
}
11 changes: 11 additions & 0 deletions examples/http-endpoint/datadog/variables.tf
@@ -0,0 +1,11 @@
variable "name_prefix" {
description = "Name prefix to use in resources"
type = string
default = "firehose-to-datadog"
}

variable "http_endpoint_access_key" {
description = "Datadog Access Key"
type = string
sensitive = true
}
14 changes: 14 additions & 0 deletions examples/http-endpoint/datadog/versions.tf
@@ -0,0 +1,14 @@
terraform {
required_version = ">= 0.13.1"

required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.4"
}
random = {
source = "hashicorp/random"
version = ">= 2.0"
}
}
}
22 changes: 21 additions & 1 deletion locals.tf
@@ -10,7 +10,8 @@ locals {
elasticsearch : "elasticsearch",
opensearch : "elasticsearch",
splunk : "splunk",
http_endpoint : "http_endpoint"
http_endpoint : "http_endpoint",
datadog : "http_endpoint"
}
destination = local.destinations[var.destination]
s3_destination = local.destination == "extended_s3" ? true : false
@@ -159,6 +160,25 @@ locals {
not_elasticsearch_vpc_configure_existing_destination_sg = contains(["splunk", "redshift"], local.destination) && var.vpc_security_group_destination_configure_existing
vpc_configure_destination_group = local.elasticsearch_vpc_configure_existing_destination_sg || local.not_elasticsearch_vpc_configure_existing_destination_sg

http_endpoint_url = {
http_endpoint : var.http_endpoint_url
datadog : local.datadog_endpoint_url[var.datadog_endpoint_type]
}

http_endpoint_name = {
http_endpoint : var.http_endpoint_name
datadog : "Datadog"
}

  # Datadog endpoint URLs per endpoint type
datadog_endpoint_url = {
logs_us : "https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input"
logs_eu : "https://aws-kinesis-http-intake.logs.datadoghq.eu/v1/input"
logs_gov : "https://aws-kinesis-http-intake.logs.ddog-gov.com/v1/input"
metrics_us : "https://awsmetrics-intake.datadoghq.com/v1/input"
metrics_eu : "https://awsmetrics-intake.datadoghq.eu/v1/input"
}

# Networking
firehose_cidr_blocks = {
redshift : {
4 changes: 2 additions & 2 deletions main.tf
@@ -341,8 +341,8 @@ resource "aws_kinesis_firehose_delivery_stream" "this" {
dynamic "http_endpoint_configuration" {
for_each = local.destination == "http_endpoint" ? [1] : []
content {
url = var.http_endpoint_url
name = var.http_endpoint_name
url = local.http_endpoint_url[var.destination]
name = local.http_endpoint_name[var.destination]
access_key = var.http_endpoint_access_key
buffering_size = var.buffer_size
buffering_interval = var.buffer_interval
