[EKS/Fargate] [Logging]: EKS Fargate logging is missing logs #1450

Open
andreiseceavsp opened this issue Jul 22, 2021 · 8 comments
Labels
EKS (Amazon Elastic Kubernetes Service), Fargate (AWS Fargate), Proposed (Community submitted issue)

Comments

@andreiseceavsp

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request
I configured EKS Fargate logging to send pod logs to CloudWatch (using the cloudwatch_logs output plugin) per the tutorials below, and although logging is working, we are missing logs. A minimal sketch of the ConfigMap is included after the links.
https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html
https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/
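
For reference, here is roughly the aws-logging ConfigMap we use with the cloudwatch_logs output; the region and log group name below are illustrative placeholders, not our exact values:

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match kube.*
        region us-east-1
        log_group_name /eks/fargate/pod-logs
        log_stream_prefix from-fluent-bit-
        auto_create_group true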

Which service(s) is this request for?
Fargate EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
I expect EKS Fargate with Fluent Bit to log consistently to CloudWatch.

Are you currently working around this issue?
No

Additional context

  • Confirmed logging is enabled for the pods: kubectl describe pod shows Logging: LoggingEnabled

@andreiseceavsp added the Proposed (Community submitted issue) label Jul 22, 2021
@mikestef9 added the EKS (Amazon Elastic Kubernetes Service) and Fargate (AWS Fargate) labels Jul 22, 2021
@Maxwell2022

@andreiseceavsp Did you manage to solve or work around this problem?

@andreiseceavsp

@andreiseceavsp Did you manage to solve or work around this problem?

I managed to work around it by using the cloudwatch plugin instead of cloudwatch_logs, as per this comment.
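
Roughly what the output.conf entry in the aws-logging ConfigMap looked like after the switch (region and log group name are placeholders here):

  output.conf: |
    [OUTPUT]
        # use the Go "cloudwatch" plugin instead of "cloudwatch_logs"
        Name cloudwatch
        Match kube.*
        region us-east-1
        log_group_name /eks/fargate/pod-logs
        log_stream_prefix from-fluent-bit-
        auto_create_group true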

@radoslav-stefanov

I am having the same problem on EKS 1.21. I tried the workaround without luck.

@booleanbetrayal

We are seeing the issue in EKS 1.23; it happens intermittently, with one short-lived Pod in particular. We have tried both the cloudwatch and cloudwatch_logs plugins and are seeing log groups go missing entirely. This has become a principal concern for us regarding EKS Fargate reliability.

@andreiseceavsp

Now I’m worried because we need to upgrade from 1.20 where it was working fine.

@andreiseceavsp

Looks like there’s now at least an option to see the Fluent Bit process logs. Maybe it helps with troubleshooting this.

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  flb_log_cw: "true"  # ships fluent-bit process logs to CloudWatch

  output.conf: |
    [OUTPUT]
        Name cloudwatch
        Match kube.*
        region region-code
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group true
@booleanbetrayal

We believe we may have narrowed this down to Pods with shareProcessNamespace: true. We had been using this to deal with sidecar shutdown in completed Jobs, but it looks like we will have to migrate to a file-watch pattern instead. Interestingly enough, several observability agents (like Datadog) rely on shareProcessNamespace: true, so this is potentially a wide-impact issue if it's reproducible in this fashion. A minimal sketch of the affected workload shape follows below.
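
In case it helps anyone reproduce, the affected workloads are roughly this shape: a short-lived Job whose pod template sets shareProcessNamespace: true. The name, image, and command here are made up for illustration, not our actual workload.

apiVersion: batch/v1
kind: Job
metadata:
  name: share-pid-repro            # hypothetical name
spec:
  template:
    spec:
      shareProcessNamespace: true  # the setting we suspect breaks Fargate log delivery
      restartPolicy: Never
      containers:
        - name: main
          image: busybox
          command: ["sh", "-c", "echo hello from a short-lived pod; sleep 5"]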

@booleanbetrayal

FWIW, for the Pods that have logging failures, we do not see any Fluent Bit logging at all after enabling the flb_log_cw parameter that @andreiseceavsp pointed out. Pods that initialize logging correctly all write to the fluent-bit log as expected.
