The logging config that I used for sending fluentbit logs to Cloudwatch #291
Conversation
"logs:DescribeLogStreams", | ||
"logs:PutLogEvents" | ||
], | ||
"Resource": "*" |
This `*` needs to be changed to something less generic.
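As a sketch of what "less generic" could look like, the resource can be scoped to the log group this config writes to (the region and log group name are taken from the OUTPUT section below; the account ID is a placeholder):

```json
"Resource": "arn:aws:logs:eu-west-1:<account-id>:log-group:fluent-bit-cloudwatch:*"
```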
"Statement": [{ | ||
"Effect": "Allow", | ||
"Action": [ | ||
"logs:CreateLogStream", |
We need a permission that allows setting the log retention time. We also need to complete this permissions JSON so that the permission is created as part of the deployment. For now, I created a policy from the AWS console, called "eks-fargate-logging-policy", and manually attached it to the Fargate pod execution roles.
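A completed policy along those lines might look roughly like this (a sketch only — the exact action list, including `logs:PutRetentionPolicy` for setting retention, and the resource scoping still need to be verified against what the deployment actually requires):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:DescribeLogStreams",
      "logs:PutLogEvents",
      "logs:PutRetentionPolicy"
    ],
    "Resource": "*"
  }]
}
```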
Once the role is created properly here, the role created manually from the console needs to be deleted.
        Name kubernetes
        Match *
        Merge_Log On
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s
        Labels On
        K8S-Logging.Exclude On
This filter enriches the logs with the Kubernetes pod details, so that later on we can use the pod name as the log_stream_name. This causes a separate stream to be created for each pod, and that pod's logs are sent to it.
Documentation on how to configure this is here: https://docs.fluentbit.io/manual/pipeline/filters/kubernetes
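For illustration only (all field values below are hypothetical), a record enriched by this filter ends up looking roughly like this, with the nested `kubernetes` object providing the `pod_name` that the output section references:

```json
{
  "log": "request handled in 12ms",
  "kubernetes": {
    "pod_name": "worker-6d4f9b7c8-abcde",
    "namespace_name": "pipeline",
    "labels": { "app": "worker" }
  }
}
```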
    [OUTPUT]
        Name cloudwatch
        Match *
        region eu-west-1
        log_group_name fluent-bit-cloudwatch
        log_stream_name $(kubernetes['pod_name'])
        auto_create_group true
Here I use the non-official Fluent Bit output plugin: https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit. The official plugin is called "cloudwatch_logs", and it doesn't work for us because it doesn't support templating variables (which is what we use to set the log_stream_name to the pod name).
Link to a very useful GitHub issue: aws/amazon-cloudwatch-logs-for-fluent-bit#16
Link to "cloudwatch_logs", which we didn't use in the end: https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch
This config used to work, but it needs to be double-checked because it started failing at some point; there is a chance that it's not working.
        Time_Keep On
  filters.conf: |
    [FILTER]
I didn't try this, but it looks useful for adding some more metadata to the logs: https://docs.fluentbit.io/manual/pipeline/filters/aws-metadata
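Untested as well, but going by the linked docs, a minimal version of that filter block would look roughly like this (the option values here are illustrative, not verified in our cluster):

```
[FILTER]
    Name aws
    Match *
    imds_version v2
    az true
    ec2_instance_id true
```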
Background
Link to issue
This is an investigation of a method that can be used to persist logs from pipeline and worker pods.
It is not production-ready, just a PoC; there is no need for reviews or merging at this stage.
I'm raising this draft PR as a demo of what was done to enable sending Fluent Bit logs to CloudWatch.
Link to staging deployment URL
Links to any Pull Requests related to this
Anything else the reviewers should know about the changes here
Changes
Code changes
Definition of DONE
Your changes will be ready for merging after each of the steps below has been completed:
Testing
To set up easy local testing with inframock, follow the instructions here: https://github.com/hms-dbmi-cellenics/inframock
To deploy to the staging environment, follow the instructions here: https://github.com/hms-dbmi-cellenics/biomage-utils
Documentation updates
Is all relevant documentation updated to reflect the proposed changes in this PR?
Approvers
Just before merging: the unstage script in here: https://github.com/hms-dbmi-cellenics/biomage-utils is executed. This script cleans up your deployment to staging. (Optional)