
[Fargate] [request]: Support kubernetes filter for fluentbit configuration #1197

Closed
lindarr915 opened this issue Dec 17, 2020 · 18 comments

Labels: EKS (Amazon Elastic Kubernetes Service), Fargate (AWS Fargate), Proposed (Community submitted issue)

Comments

@lindarr915

lindarr915 commented Dec 17, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request
What do you want us to build?

Which service(s) is this request for?
Fargate on EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

I am shipping Fargate pod logs to CloudWatch Logs using the Fluent Bit integration provided by AWS EKS.
The docs mention that kubernetes is a supported filter in filters.conf, but the actual behavior differs from my expectation:

Fargate validates against the following supported filters: grep, kubernetes, parser, record_modifier, rewrite_tag, throttle, nest, and modify.

I tried the Fluent Bit filter below on a regular EKS node (v1.6.8, DaemonSet described in [3]) and on Fargate.

[FILTER]
    Name kubernetes
    Match kube.*
    Merge_Log On
    Keep_Log Off
    K8S-Logging.Parser On
    K8S-Logging.Exclude On

On a regular EC2 node I get Kubernetes metadata such as pod_name, namespace_name, pod_id, labels, etc. [1].
On a Fargate node, on the other hand, there is no such information [2].

I would expect a fix for this issue.


[1]

{
    "log": "2020-12-11T09:23:08.725811056Z stderr F [2020/12/11 09:23:08] [ info] [engine] started (pid=1)",
    "kubernetes": {
        "pod_name": "fluent-bit-tzh29",
        "namespace_name": "default",
        "pod_id": "cb6f6edd-cf5e-4501-adc0-4e98540a8119",
        "labels": {
            "app.kubernetes.io/instance": "fluent-bit",
            "app.kubernetes.io/name": "fluent-bit",
            "controller-revision-hash": "697f8948f4",
            "pod-template-generation": "3"
        },
        "annotations": {
            "checksum/config": "0422414bf28edb6abd7b06749cc33f6e8c57c0d7fcf1e5b5518b3e8ba7ef0dc0",
            "kubectl.kubernetes.io/restartedAt": "2020-12-11T17:12:58+08:00",
            "kubernetes.io/psp": "eks.privileged"
        },
        "host": "ip-192-168-80-89.us-west-2.compute.internal",
        "container_name": "fluent-bit",
        "docker_id": "e1c28d4563bfa6db32e7977e2630e32570b26882d5d63071a90fb74d5f0f8a4c",
        "container_hash": "docker.io/fluent/fluent-bit@sha256:49e3fbd3e3a76b8e7088d618dab637d35d711a656d4f2a5e72244d66e88bd3e7",
        "container_image": "docker.io/fluent/fluent-bit:1.6.8"
    }
}

[2]

{
    "kubernetes": {
        "dummy": "Tue Dec 09 02:10:51 2020"
    },
    "log": "2020-12-09T02:10:51.531628054Z stdout F \u001b]0;root@hello: /\u0007root@hello:/# \r\u001b[K\u001b]0;root@test: /\hello@test:/# "
}

[3]

➜  logging kubectl describe ds/fluent-bit
Name:           fluent-bit
Selector:       app.kubernetes.io/instance=fluent-bit,app.kubernetes.io/name=fluent-bit
Node-Selector:  <none>
Labels:         app.kubernetes.io/instance=fluent-bit
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=fluent-bit
                app.kubernetes.io/version=1.6.8
                helm.sh/chart=fluent-bit-0.7.13
...

  Service Account:  fluent-bit
  Containers:
   fluent-bit:
    Image:        fluent/fluent-bit:1.6.8

Are you currently working around this issue?
I don't have a workaround other than giving up Fargate and using EC2 worker nodes instead.

Additional context
None

@lindarr915 lindarr915 added the Proposed Community submitted issue label Dec 17, 2020
@lindarr915 lindarr915 changed the title [EKS] [Request]: Support kubernetes filter for fluentbit configuration [Fargate] [request]: Support kubernetes filter for fluentbit configuration Dec 17, 2020
@mikestef9 mikestef9 added EKS Amazon Elastic Kubernetes Service Fargate AWS Fargate labels Dec 17, 2020
@mohitanchlia

Is there any idea of when this could be made available? Being able to associate logs with the pod ID, name, etc. is extremely important in a large environment.

@vasukiprasad1

Hello team, can someone please provide an ETA for making this available for logging on EKS Fargate? The current EKS Fargate FireLens integration doesn't support environment variables in the [FILTER] section of the aws-logging ConfigMap.
The most essential fields that we need are:
Pod Name
Pod ID
Container Name
Container ID
Namespace
Hostname(Source)
SourceType
ClusterName

@santhoshratala

santhoshratala commented May 17, 2021

This is a major issue for my project right now: almost all of our apps are deployed to Fargate, and we use AWS Elasticsearch for application logging.

Unlike the AWS CloudWatch output plugin, which prefixes the Fluent Bit tag (containing pod metadata details), I can't do the same with the Elasticsearch output plugin. This is frustrating because I get logs from all pods (across all namespaces) running on Fargate and can't tell which pod is sending a given log; all I can see is the log record.

I have reached out to the AWS support team on multiple occasions, but in vain. And I can't find any workaround for this issue anywhere online.

@rkennedy-tpl

I've honestly been moving workloads off Fargate this week and last because of the poor log metadata and the lack of EBS CSI support. It's advertised as the next big thing and the end-all solution for containerized workloads, but Fargate workers are not feature-complete compared to other types of Kubernetes workers.

@Namrata3991

Namrata3991 commented May 21, 2021

@santhoshratala, I am trying to enable Fargate logging and send the logs to Elasticsearch, but no logs are showing up in Elasticsearch. If you have already implemented this, can you help me figure out what might be wrong with this ConfigMap?

The EKS version is 1.19 and the platform version is eks.4.

apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
  labels:
data:
  output.conf: |
    [OUTPUT]
      Name  es
      Match *
      Host  ******-*.es.amazonaws.com
      Port  443
      Index dev-*
      Type _doc
      tls   On
      AWS_Auth On
      AWS_Region us-east-1 

@luzhkovvv

@Namrata3991 First of all, you need strictly four-space indentation under [OUTPUT]. I spent a lot of time figuring that out myself with the cloudwatch output.
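
For reference, a minimal sketch of an aws-logging ConfigMap with that indentation, assuming the es output; the Host, Index, and AWS_Region values are placeholders, not working values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  # NOTE: Host, Index, and AWS_Region below are placeholders.
  output.conf: |
    [OUTPUT]
        Name       es
        Match      *
        Host       my-domain.us-east-1.es.amazonaws.com
        Port       443
        Index      my-index
        Type       _doc
        AWS_Auth   On
        AWS_Region us-east-1
        tls        On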

@Namrata3991

@luzhkovvv The documentation doesn't use four-space indentation in its Elasticsearch example, and although I tried it, it's still not working at all: https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html

@lennartt

Knowing what pod a log record belongs to is fairly crucial when setting up EKS logging. Is there an ETA on this?

@vaibhavkhunger

Amazon EKS on AWS Fargate now Supports the Fluent Bit Kubernetes Filter:
https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-eks-aws-fargate-supports-fluent-bit-kubernetes-filter/

You can find the technical documentation here: https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html#fargate-logging-kubernetes-filter
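
Per those docs, the filter is enabled by adding a filters.conf key to the aws-logging ConfigMap in the aws-observability namespace, alongside your existing output.conf. A minimal sketch, mirroring the working example posted later in this thread:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  filters.conf: |-
    [FILTER]
      Name  kubernetes
      Match  kube.*
      Merge_Log  On
      Buffer_Size  0
      Kube_Meta_Cache_TTL  300s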

@gasRU76

gasRU76 commented Nov 12, 2021

Seems like it was not enabled for my cluster, because I can't get kubernetes metadata and I can't enable fluent-bit logs (flb_log_cw: "true"). I get this error:
admission webhook "0500-amazon-eks-fargate-configmaps-admission.amazonaws.com" denied the request: flb_log_cw is not valid. Please only provide output.conf, filters.conf or parsers.conf in the logging configmap

@midestefanis

Seems like it was not enabled for my cluster, because I can't get kubernetes metadata and i cant enable fluent-bit logs (flb_log_cw: "true"). I get a fault: admission webhook "0500-amazon-eks-fargate-configmaps-admission.amazonaws.com" denied the request: flb_log_cw is not valid. Please only provide output.conf, filters.conf or parsers.conf in the logging configmap

You need EKS to be on these platforms:

[Screenshot of the required EKS platform versions from the Fargate logging documentation]
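
One way to check both your Kubernetes version and platform version, assuming the AWS CLI is configured (my-cluster is a placeholder cluster name):

aws eks describe-cluster --name my-cluster --query "cluster.[version,platformVersion]" --output text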

@gasRU76

gasRU76 commented Nov 12, 2021

Thanks, my cluster really is on platform version eks.2 for EKS 1.21.

@rajeevprasanna

Seems like it was not enabled for my cluster, because I can't get kubernetes metadata and i cant enable fluent-bit logs (flb_log_cw: "true"). I get a fault: admission webhook "0500-amazon-eks-fargate-configmaps-admission.amazonaws.com" denied the request: flb_log_cw is not valid. Please only provide output.conf, filters.conf or parsers.conf in the logging configmap

We are also getting the same issue. Any solution?

@gasRU76

gasRU76 commented Jan 4, 2022

Did you check your Kubernetes version and platform version?

@rajeevprasanna

After upgrading the EKS platform version, it is accepting this flag.

@David-Tamrazov

I'm having issues making this work following the docs at https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html#fargate-logging-kubernetes-filter. Would running Kubernetes 1.21 on eks.4 be an issue? I figured a newer/later EKS platform version would work if eks.3 is supported.

Posting my aws-logging ConfigMap for good measure in case something jumps out at folks. I've enabled and checked my Fluent Bit logs and there are no errors; the Kubernetes metadata simply isn't included in my container logs.

apiVersion: v1
data:
  filters.conf: |-
    [FILTER]
      Name parser
      Match *
      Key_name log
      Parser crio

    [FILTER]
      Name             kubernetes
      Match            kube.*
      Merge_Log           On
      Buffer_Size         0
      Kube_Meta_Cache_TTL 300s
  flb_log_cw: "true"
  output.conf: |-
    [OUTPUT]
      Name cloudwatch_logs
      Match   *
      region us-east-1
      log_group_name /aws/eks/my-cluster/pod-container-logs
      log_stream_prefix from-fluent-bit-
      auto_create_group true
      log_key log
  parsers.conf: |-
    [PARSER]
      Name crio
      Format Regex
      Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
      Time_Key    time
      Time_Format %Y-%m-%dT%H:%M:%S.%L%z
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"filters.conf":"[FILTER]\n  Name parser\n  Match *\n  Key_name log\n  Parser crio\n\n[FILTER]\n  Name             kubernetes\n  Match            kube.*\n  Merge_Log           On\n  Buffer_Size         0\n  Kube_Meta_Cache_TTL 300s","flb_log_cw":"true","output.conf":"[OUTPUT]\n  Name cloudwatch_logs\n  Match   *\n  region us-east-1\n  log_group_name /aws/eks/my-cluster/pod-container-logs\n  log_stream_prefix from-fluent-bit-\n  auto_create_group true\n  log_key log","parsers.conf":"[PARSER]\n  Name crio\n  Format Regex\n  Regex ^(?\u003ctime\u003e[^ ]+) (?\u003cstream\u003estdout|stderr) (?\u003clogtag\u003eP|F) (?\u003clog\u003e.*)$\n  Time_Key    time\n  Time_Format %Y-%m-%dT%H:%M:%S.%L%z"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"aws.cdk.eks/prune-c82b1a320b963b727bebc6c4eb43a541f836463940":""},"name":"aws-logging","namespace":"aws-observability"}}
  creationTimestamp: "2022-01-14T16:38:37Z"
  labels:
    aws.cdk.eks/prune-c82b1a320b963b727bebc6c4eb43a541f836463940: ""
  name: aws-logging
  namespace: aws-observability
  resourceVersion: "3601673"
  uid: 9de10c0d-7e9b-4f45-a527-dca39b6b724e

@jasonumiker

David, try removing the crio parser and filter. I don't have that one and it works: https://github.com/aws-quickstart/quickstart-eks-cdk-python/blob/main/cluster-bootstrap/eks_cluster.py#L1872

@David-Tamrazov

David-Tamrazov commented Jan 18, 2022

Thanks for the tip @jasonumiker; unfortunately that still didn't do it for me. The quickstart you linked is great though; I noticed you have configuration there for deploying Fluent Bit on its own through a Helm chart, so I'm going to try that as well and see if I can get more information from the Fluent Bit pods about what might be wrong.

Edit: it actually works just fine with Fargate; my problem was the indentation. Even though the end-result ConfigMap looked correct, the way I set up data.filters.conf and data.output.conf in my TypeScript CDK build ended up adding more spaces than necessary. When I tried the output.conf and filters.conf strings that @jasonumiker had in his link above, the CloudWatch logs included the Kubernetes metadata. Thanks again for the help!

Posting my updated ConfigMap:

apiVersion: v1
data:
  filters.conf: |-
    [FILTER]
      Name  kubernetes
      Match  kube.*
      Merge_Log  On
      Buffer_Size  0
      Kube_Meta_Cache_TTL  300s
  flb_log_cw: "true"
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match   *
        region us-east-1
        log_group_name my-cluster-fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group true
        log_retention_days 30
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"filters.conf":"[FILTER]\n  Name  kubernetes\n  Match  kube.*\n  Merge_Log  On\n  Buffer_Size  0\n  Kube_Meta_Cache_TTL  300s","flb_log_cw":"true","output.conf":"[OUTPUT]\n    Name cloudwatch_logs\n    Match   *\n    region us-east-1\n    log_group_name my-cluster-fluent-bit-cloudwatch\n    log_stream_prefix from-fluent-bit-\n    auto_create_group true\n    log_retention_days 30\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"aws.cdk.eks/prune-c82b1a320b963b727bebc6c4eb43a541f836463940":""},"name":"aws-logging","namespace":"aws-observability"}}
  creationTimestamp: "2022-01-18T20:23:38Z"
  labels:
    aws.cdk.eks/prune-c82b1a320b963b727bebc6c4eb43a541f836463940: ""
  name: aws-logging
  namespace: aws-observability
  resourceVersion: "7662"
  uid: 40fdd7de-fd79-45b5-b713-13cced58e03c
