Entry resource #95

Merged 2 commits on Aug 18, 2020
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,21 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]
### Added
- The `Resource` field was added to Entry
- The `Identifier` helper was created to assist with writing to `Resource`

### Removed
- The `Tags` field was removed from Entry

### Changed
- The `host_metadata` operator now writes to an entry's `Resource` field instead of `Labels`
- The `host_labeler` helper has been renamed to `host_identifier`
- The `metadata` operator embeds the `Identifier` helper and supports writing to `Resource`
- Input operators embed the `Identifier` helper and support writing to `Resource`
- The `k8s_event` operator now supports the `write_to`, `labels`, and `resource` configuration options

## [0.9.9] - 2020-08-14
### Added
- Kubernetes events input operator ([PR88](https://github.com/observIQ/carbon/pull/88))
5 changes: 3 additions & 2 deletions docs/operators/file_input.md
@@ -12,13 +12,14 @@ The `file_input` operator reads logs from files. It will place the lines read in
| `exclude` | [] | A list of file glob patterns to exclude from reading |
| `poll_interval` | 200ms | The duration between filesystem polls |
| `multiline` | | A `multiline` configuration block. See below for details |
| `write_to` | $ | A [field](/docs/types/field.md) that will be set to the log message |
| `write_to` | $ | The record [field](/docs/types/field.md) written to when creating a new log entry |
| `encoding` | `nop` | The encoding of the file being read. See the list of supported encodings below for available options |
| `include_file_name` | `true` | Whether to add the file name as the label `file_name` |
| `include_file_path` | `false` | Whether to add the file path as the label `file_path` |
| `start_at` | `end` | At startup, where to start reading logs from the file. Options are `beginning` or `end` |
| `max_log_size` | 1048576 | The maximum size of a log entry to read before failing. Protects against reading large amounts of data into memory |
| `labels` | {} | A map of `key: value` labels to add to the entry |
| `labels` | {} | A map of `key: value` labels to add to the entry's labels |
| `resource` | {} | A map of `key: value` labels to add to the entry's resource |

Note that by default, no logs will be read unless the monitored file is actively being written to because `start_at` defaults to `end`.
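
As a rough illustration, the new `labels` and `resource` options might be combined like this; the `include` option sits above this hunk and the key names are placeholders, not prescribed values:

```yaml
- type: file_input
  include:
    - /var/log/app/*.log   # assumed; the include option is defined above this hunk
  start_at: beginning
  labels:
    log_type: app
  resource:
    host.name: "web-01"
```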

14 changes: 8 additions & 6 deletions docs/operators/host_metadata.md
@@ -1,14 +1,15 @@
## `host_metadata` operator

The `host_metadata` operator adds labels to incoming entries.
The `host_metadata` operator adds the hostname and IP address to the resource of incoming entries.

### Configuration Fields

| Field | Default | Description |
| --- | --- | --- |
| `id` | `metadata` | A unique identifier for the operator |
| `id` | `host_metadata` | A unique identifier for the operator |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `include_hostname` | `true` | Whether to set the `hostname` label on entries |
| `include_hostname` | `true` | Whether to set the `hostname` on the resource of incoming entries |
| `include_ip` | `true` | Whether to set the `ip` on the resource of incoming entries |
| `on_error` | `send` | The behavior of the operator if it encounters an error. See [on_error](/docs/types/on_error.md) |

### Example Configurations
@@ -19,6 +20,7 @@ Configuration:
```yaml
- type: host_metadata
  include_hostname: true
  include_ip: true
```

<table>
@@ -29,7 +31,6 @@ Configuration:
```json
{
"timestamp": "2020-06-15T11:15:50.475364-04:00",
"labels": {},
"record": {
"message": "test"
}
@@ -42,8 +43,9 @@ Configuration:
```json
{
"timestamp": "2020-06-15T11:15:50.475364-04:00",
"labels": {
"hostname": "my_host"
"resource": {
"hostname": "my_host",
"ip": "0.0.0.0"
},
"record": {
"message": "test"
6 changes: 3 additions & 3 deletions docs/operators/journald_input.md
@@ -14,10 +14,10 @@ The `journald_input` operator will use the `__REALTIME_TIMESTAMP` field of the j
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `directory` | | A directory containing journal files to read entries from |
| `files` | | A list of journal files to read entries from |
| `write_to` | $ | A [field](/docs/types/field.md) that will be set to the path of the file the entry was read from |
| `write_to` | $ | The record [field](/docs/types/field.md) written to when creating a new log entry |
| `start_at` | `end` | At startup, where to start reading logs from the file. Options are `beginning` or `end` |
| `labels` | {} | A map of `key: value` labels to add to the entry |

| `labels` | {} | A map of `key: value` labels to add to the entry's labels |
| `resource` | {} | A map of `key: value` labels to add to the entry's resource |
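
Ahead of the full examples below, a hedged sketch of these options together (the directory path and key names are illustrative only):

```yaml
- type: journald_input
  directory: /run/log/journal
  start_at: beginning
  labels:
    log_type: journald
  resource:
    host.name: "node-01"
```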

### Example Configurations

15 changes: 9 additions & 6 deletions docs/operators/k8s_event_input.md
@@ -5,12 +5,15 @@ Kubernetes API, and currently requires that Carbon is running inside a Kubernete

### Configuration Fields

| Field | Default | Description |
| --- | --- | --- |
| `id` | `k8s_event_input` | A unique identifier for the operator |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `namespaces` | All namespaces | An array of namespaces to collect events from. If unset, defaults to all namespaces. |

| Field | Default | Description |
| --- | --- | --- |
| `id` | `k8s_event_input` | A unique identifier for the operator |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `namespaces` | All namespaces | An array of namespaces to collect events from. If unset, defaults to all namespaces. |
| `write_to` | $ | The record [field](/docs/types/field.md) written to when creating a new log entry |
| `labels` | {} | A map of `key: value` labels to add to the entry's labels |
| `resource` | {} | A map of `key: value` labels to add to the entry's resource |
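
A minimal sketch of the newly documented options, with placeholder namespaces and key names:

```yaml
- type: k8s_event_input
  namespaces:
    - default
    - kube-system
  labels:
    event_source: kubernetes
  resource:
    k8s.cluster.name: "blue"
```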

### Example Configurations

#### Mock a file input
10 changes: 8 additions & 2 deletions docs/operators/metadata.md
@@ -8,7 +8,8 @@ The `metadata` operator adds labels to incoming entries.
| --- | --- | --- |
| `id` | `metadata` | A unique identifier for the operator |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `labels` | {} | A map of `key: value` labels to add to the entry |
| `labels` | {} | A map of `key: value` labels to add to the entry's labels |
| `resource` | {} | A map of `key: value` labels to add to the entry's resource |
| `on_error` | `send` | The behavior of the operator if it encounters an error. See [on_error](/docs/types/on_error.md) |

Inside the label values, an [expression](/docs/types/expression.md) surrounded by `EXPR()`
@@ -18,13 +19,15 @@ with the `$` variable in the expression so labels can be added dynamically from
### Example Configurations


#### Add static tags and labels
#### Add static labels and resource

Configuration:
```yaml
- type: metadata
  labels:
    environment: "production"
  resource:
    cluster: "blue"
```

<table>
@@ -51,6 +54,9 @@
"labels": {
"environment": "production"
},
"resource": {
"cluster": "blue"
},
"record": {
"message": "test"
}
15 changes: 8 additions & 7 deletions docs/operators/tcp_input.md
@@ -4,13 +4,14 @@ The `tcp_input` operator listens for logs on one or more TCP connections. The op

### Configuration Fields

| Field | Default | Description |
| --- | --- | --- |
| `id` | `tcp_input` | A unique identifier for the operator |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `listen_address` | required | A listen address of the form `<ip>:<port>` |
| `write_to` | $ | A [field](/docs/types/field.md) that will be set to the log message |
| `labels` | {} | A map of `key: value` labels to add to the entry |
| Field | Default | Description |
| --- | --- | --- |
| `id` | `tcp_input` | A unique identifier for the operator |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `listen_address` | required | A listen address of the form `<ip>:<port>` |
| `write_to` | $ | The record [field](/docs/types/field.md) written to when creating a new log entry |
| `labels` | {} | A map of `key: value` labels to add to the entry's labels |
| `resource` | {} | A map of `key: value` labels to add to the entry's resource |
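
For instance, a hedged sketch with a placeholder address and key names:

```yaml
- type: tcp_input
  listen_address: "0.0.0.0:5142"
  labels:
    log_type: syslog
  resource:
    host.name: "gateway-01"
```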

### Example Configurations

15 changes: 8 additions & 7 deletions docs/operators/udp_input.md
@@ -4,13 +4,14 @@ The `udp_input` operator listens for logs from UDP packets.

### Configuration Fields

| Field | Default | Description |
| --- | --- | --- |
| `id` | `udp_input` | A unique identifier for the operator |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `listen_address` | required | A listen address of the form `<ip>:<port>` |
| `write_to` | $ | A [field](/docs/types/field.md) that will be set to the log message |
| `labels` | {} | A map of `key: value` labels to add to the entry |
| Field | Default | Description |
| --- | --- | --- |
| `id` | `udp_input` | A unique identifier for the operator |
| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries |
| `listen_address` | required | A listen address of the form `<ip>:<port>` |
| `write_to` | $ | The record [field](/docs/types/field.md) written to when creating a new log entry |
| `labels` | {} | A map of `key: value` labels to add to the entry's labels |
| `resource` | {} | A map of `key: value` labels to add to the entry's resource |
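
The new options mirror the TCP sketch; a minimal hedged example with placeholder values:

```yaml
- type: udp_input
  listen_address: "0.0.0.0:5141"
  labels:
    log_type: syslog
  resource:
    service.name: "dns"
```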

### Example Configurations

4 changes: 3 additions & 1 deletion docs/operators/windows_eventlog_input.md
@@ -12,7 +12,9 @@ The `windows_eventlog_input` operator reads logs from the windows event log API.
| `max_reads` | 100 | The maximum number of records read into memory, before beginning a new batch |
| `start_at` | `end` | On first startup, where to start reading logs from the API. Options are `beginning` or `end` |
| `poll_interval` | 1s | The interval at which the channel is checked for new log entries. This check begins again after all new records have been read |
| `labels` | {} | A map of `key: value` labels to add to the entry |
| `write_to` | $ | The record [field](/docs/types/field.md) written to when creating a new log entry |
| `labels` | {} | A map of `key: value` labels to add to the entry's labels |
| `resource` | {} | A map of `key: value` labels to add to the entry's resource |
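
A hedged sketch of the expanded options; the `channel` option and its value are assumptions, since that row sits above this hunk:

```yaml
- type: windows_eventlog_input
  channel: application        # assumed option, defined above this hunk
  max_reads: 100
  start_at: beginning
  labels:
    log_type: windows_event
  resource:
    host.name: "win-host-01"
```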

### Example Configurations

2 changes: 1 addition & 1 deletion docs/types/entry.md
@@ -8,5 +8,5 @@ Entry is the base representation of log data as it moves through a pipeline. All
| `timestamp` | The timestamp associated with the log (RFC 3339). |
| `severity` | The [severity](/docs/types/field.md) of the log. |
| `labels` | A map of key/value pairs that describes the metadata of the log. This value is often used by a consumer to categorize logs. |
| `tags` | An array of values that describes the metadata of the log. This value is often used by a consumer to tag incoming logs. |
| `resource` | A map of key/value pairs that describes the origin of the log. |
| `record` | The contents of the log. This value is often modified and restructured in the pipeline. |
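
Elsewhere the docs render entries as JSON; as a hedged sketch, the same shape in YAML (matching the struct's yaml tags, severity omitted) looks roughly like:

```yaml
timestamp: "2020-08-18T11:15:50.475364-04:00"
labels:
  environment: production
resource:
  hostname: my_host
record:
  message: test
```
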
2 changes: 1 addition & 1 deletion docs/types/expression.md
@@ -9,7 +9,7 @@ For reference documentation of the expression language, see [here](https://githu
Available to the expressions are a few special variables:
- `$record` contains the entry's record
- `$labels` contains the entry's labels
- `$tags` contains the entry's tags
- `$resource` contains the entry's resource
- `$timestamp` contains the entry's timestamp
- `env()` is a function that allows you to read environment variables
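
As a hedged sketch, these variables could appear in a `metadata` operator's label values (the dot accessor on `$resource` and the key names are assumptions):

```yaml
- type: metadata
  labels:
    environment: 'EXPR(env("ENVIRONMENT"))'
    origin_host: 'EXPR($resource.hostname)'
```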

12 changes: 10 additions & 2 deletions entry/entry.go
@@ -9,8 +9,8 @@ import (
type Entry struct {
	Timestamp time.Time         `json:"timestamp" yaml:"timestamp"`
	Severity  Severity          `json:"severity" yaml:"severity"`
	Tags      []string          `json:"tags,omitempty" yaml:"tags,omitempty"`
	Labels    map[string]string `json:"labels,omitempty" yaml:"labels,omitempty"`
	Resource  map[string]string `json:"resource,omitempty" yaml:"resource,omitempty"`
	Record    interface{}       `json:"record" yaml:"record"`
}

@@ -29,6 +29,14 @@ func (entry *Entry) AddLabel(key, value string) {
entry.Labels[key] = value
}

// AddResourceKey will add a key/value pair to the entry's resource.
func (entry *Entry) AddResourceKey(key, value string) {
	if entry.Resource == nil {
		entry.Resource = make(map[string]string)
	}
	entry.Resource[key] = value
}

// Get will return the value of a field on the entry, including a boolean indicating if the field exists.
func (entry *Entry) Get(field FieldInterface) (interface{}, bool) {
return field.Get(entry)
@@ -148,8 +156,8 @@ func (entry *Entry) Copy() *Entry {
	return &Entry{
		Timestamp: entry.Timestamp,
		Severity:  entry.Severity,
		Tags:      copyStringArray(entry.Tags),
		Labels:    copyStringMap(entry.Labels),
		Resource:  copyStringMap(entry.Resource),
		Record:    copyValue(entry.Record),
	}
}
6 changes: 3 additions & 3 deletions entry/entry_test.go
@@ -124,19 +124,19 @@ func TestCopy(t *testing.T) {
	entry.Timestamp = time.Time{}
	entry.Record = "test"
	entry.Labels = map[string]string{"label": "value"}
	entry.Tags = []string{"tag"}
	entry.Resource = map[string]string{"resource": "value"}
	copy := entry.Copy()

	entry.Severity = Severity(1)
	entry.Timestamp = time.Now()
	entry.Record = "new"
	entry.Labels = map[string]string{"label": "new value"}
	entry.Tags = []string{"new tag"}
	entry.Resource = map[string]string{"resource": "new value"}

	require.Equal(t, time.Time{}, copy.Timestamp)
	require.Equal(t, Severity(0), copy.Severity)
	require.Equal(t, []string{"tag"}, copy.Tags)
	require.Equal(t, map[string]string{"label": "value"}, copy.Labels)
	require.Equal(t, map[string]string{"resource": "value"}, copy.Resource)
	require.Equal(t, "test", copy.Record)
}

5 changes: 3 additions & 2 deletions operator/builtin/input/generate_test.go
@@ -81,8 +81,9 @@ pipeline:
{
Builder: &GenerateInputConfig{
InputConfig: helper.InputConfig{
LabelerConfig: helper.NewLabelerConfig(),
WriteTo: entry.NewRecordField(),
LabelerConfig: helper.NewLabelerConfig(),
IdentifierConfig: helper.NewIdentifierConfig(),
WriteTo: entry.NewRecordField(),
WriterConfig: helper.WriterConfig{
BasicConfig: helper.BasicConfig{
OperatorID: "my_generator",