diff --git a/stable/aws-for-fluent-bit/Chart.yaml b/stable/aws-for-fluent-bit/Chart.yaml
index a65047c25..856251d72 100644
--- a/stable/aws-for-fluent-bit/Chart.yaml
+++ b/stable/aws-for-fluent-bit/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
 name: aws-for-fluent-bit
 description: A Helm chart to deploy aws-for-fluent-bit project
 version: 0.1.12
-appVersion: 2.21.5
+appVersion: 2.21.6
 home: https://github.com/aws/eks-charts
 icon: https://raw.githubusercontent.com/aws/eks-charts/master/docs/logo/aws.png
 sources:
diff --git a/stable/aws-for-fluent-bit/README.md b/stable/aws-for-fluent-bit/README.md
index a79bc73ce..9d3d17837 100755
--- a/stable/aws-for-fluent-bit/README.md
+++ b/stable/aws-for-fluent-bit/README.md
@@ -35,8 +35,9 @@ helm delete aws-for-fluent-bit --namespace kube-system
 | `imagePullSecrets` | Docker registry pull secret | `[]` |
 | `serviceAccount.create` | Whether a new service account should be created | `true` |
 | `serviceAccount.name` | Name of the service account | `aws-for-fluent-bit` |
-| `serviceAccount.create` | Whether a new service account should be created | `true` |
+| `serviceAccount.create` | Whether a new service account should be created | `true` |
 | `service.parsersFiles` | List of available parser files | `/fluent-bit/parsers/parsers.conf` |
+| `service.extraKeys` | Adding more configuration keys to the service section | `""` |
 | `service.extraParsers` | Adding more parsers with this value | `""` |
 | `input.*` | Values for Kubernetes input | |
 | `extraInputs` | Adding more inputs with this value | `""` |
@@ -68,9 +69,9 @@ helm delete aws-for-fluent-bit --namespace kube-system
 | `kinesis.match` | The log filter | `"*"` | ✔
 | `kinesis.region` | The region which your Kinesis Data Stream is in. | `"us-east-1"` | ✔
 | `kinesis.stream` | The name of the Kinesis Data Stream that you want log records sent to. | `"my-kinesis-stream-name"` | ✔
-| `kinesis.partitionKey` | A partition key is used to group data by shard within a stream. A Kinesis Data Stream uses the partition key that is associated with each data record to determine which shard a given data record belongs to. For example, if your logs come from Docker containers, you can use container_id as the partition key, and the logs will be grouped and stored on different shards depending upon the id of the container they were generated from. As the data within a shard are coarsely ordered, you will get all your logs from one container in one shard roughly in order. If you don't set a partition key or put an invalid one, a random key will be generated, and the logs will be directed to random shards. If the partition key is invalid, the plugin will print an warning message. | `"container_id"` |
-| `kinesis.appendNewline` | If you set append_newline as true, a newline will be addded after each log record. | |
-| `kinesis.replaceDots` | Replace dot characters in key names with the value of this option. | |
+| `kinesis.partitionKey` | A partition key is used to group data by shard within a stream. A Kinesis Data Stream uses the partition key that is associated with each data record to determine which shard a given data record belongs to. For example, if your logs come from Docker containers, you can use container_id as the partition key, and the logs will be grouped and stored on different shards depending upon the id of the container they were generated from. As the data within a shard are coarsely ordered, you will get all your logs from one container in one shard roughly in order. If you don't set a partition key or put an invalid one, a random key will be generated, and the logs will be directed to random shards. If the partition key is invalid, the plugin will print a warning message. | `"container_id"` |
+| `kinesis.appendNewline` | If you set append_newline as true, a newline will be added after each log record. | |
+| `kinesis.replaceDots` | Replace dot characters in key names with the value of this option. | |
 | `kinesis.dataKeys` | By default, the whole log record will be sent to Kinesis. If you specify key name(s) with this option, then only those keys and values will be sent to Kinesis. For example, if you are using the Fluentd Docker log driver, you can specify data_keys log and only the log message will be sent to Kinesis. If you specify multiple keys, they should be comma delimited. | |
 | `kinesis.roleArn` | ARN of an IAM role to assume (for cross account access). | |
 | `kinesis.endpoint` | Specify a custom endpoint for the Kinesis Streams API. | |
diff --git a/stable/aws-for-fluent-bit/templates/configmap.yaml b/stable/aws-for-fluent-bit/templates/configmap.yaml
index ab0148da6..0ece76062 100755
--- a/stable/aws-for-fluent-bit/templates/configmap.yaml
+++ b/stable/aws-for-fluent-bit/templates/configmap.yaml
@@ -15,6 +15,10 @@ data:
         Parsers_File /fluent-bit/etc/parser_extra.conf
 {{- end }}
 
+{{- if .Values.service.extraKeys }}
+{{ .Values.service.extraKeys | indent 8}}
+{{- end }}
+
     [INPUT]
         Name tail
         Tag {{ .Values.input.tag }}
diff --git a/stable/aws-for-fluent-bit/values.yaml b/stable/aws-for-fluent-bit/values.yaml
index 7d8d528a5..993e3de72 100644
--- a/stable/aws-for-fluent-bit/values.yaml
+++ b/stable/aws-for-fluent-bit/values.yaml
@@ -14,6 +14,8 @@ fullnameOverride: ""
 service:
   parsersFiles:
     - /fluent-bit/parsers/parsers.conf
+  # extraKeys: |
+  #   HTTP_Server On
   # extraParsers: |
   #   [PARSER]
   #     Name logfmt
@@ -139,7 +141,7 @@ affinity: {}
 annotations: {}
   # iam.amazonaws.com/role: arn:aws:iam::123456789012:role/role-for-fluent-bit
-
+
 env: []
 ## To add extra environment variables to the pods, add as below
 # env:
@@ -157,7 +159,7 @@ env: []
 #     valueFrom:
 #       fieldRef:
 #         fieldPath: spec.nodeName
-
+
 volumes:
   - name: varlog
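For context on the `service.extraKeys` value wired in above, here is a minimal sketch of how it would be used and what the configmap template renders. The `HTTP_Port` key is an illustrative assumption; only `HTTP_Server` appears in the chart's commented example, and other keys the template already emits into [SERVICE] are omitted.

# values.yaml override (sketch; HTTP_Port is an assumed extra key)
service:
  extraKeys: |
    HTTP_Server On
    HTTP_Port   2020

# Expected [SERVICE] section of the rendered fluent-bit.conf:
# "indent 8" in the template aligns the injected keys with Parsers_File.
[SERVICE]
    Parsers_File /fluent-bit/parsers/parsers.conf
    HTTP_Server On
    HTTP_Port   2020

Because the value is an opaque multi-line string piped through indent, any valid fluent-bit [SERVICE] key can be injected without further chart changes.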