Logging - possibility of losing logs #1618
Comments
I don't know, but I would like to know. Have you tried asking the upstream fluentd community? Note that OpenShift 4.x uses CRI-O instead of Docker - CRI-O has max-size and rotation parameters, though I'm not sure how to configure them. Also note that logging 4.2 will support rsyslog in addition to fluentd. |
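For reference, CRI-O's per-container log cap is configured in crio.conf; a minimal sketch, assuming a recent CRI-O version (check your version's docs for the exact field):

```toml
# /etc/crio/crio.conf (sketch)
[crio.runtime]
# Maximum size in bytes of a container's log file before it is truncated.
# -1 (the default) means no limit.
log_size_max = 52428800   # 50 MB
```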
@portante I think this is related to what you have been investigating. |
@alanconway is there work here to be done on the collector side to resolve this or is this purely related to the runtime work you started? |
@camabeh
We strive to collect all logs from the system but we make no guarantees |
On Thu, Jan 2, 2020 at 9:43 AM Jeff Cantrill ***@***.***> wrote:

> @alanconway is there work here to be done on the collector side to resolve this or is this purely related to the runtime work you started?

It's mostly the backpressure work Sergey is doing. There may be something to tweak on the collector - e.g. enable Fluentd's blocking mode. It should just be a matter of configuring the collector correctly; I think all the collectors we use now or would consider in future will have an at-least-once delivery mode.
|
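The blocking mode mentioned above is enabled through Fluentd's buffer settings; a hedged sketch, where the output plugin, match pattern, and endpoint are placeholders for whatever the deployment actually uses:

```
<match kubernetes.**>
  @type elasticsearch          # placeholder output; any at-least-once output works
  host es.example.com          # hypothetical endpoint
  <buffer>
    @type file
    path /var/lib/fluentd/buffer
    retry_forever true         # keep retrying chunks instead of dropping them
    overflow_action block      # apply backpressure: block input when the buffer is full
  </buffer>
</match>
```

With `overflow_action block`, a full buffer stalls the input side rather than discarding records, which is the behavior needed for at-least-once delivery.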
This problem would likely be solved by a solution like the one proposed for conmon [1]. |
Closing this issue; it is to be resolved by the implementation of containers/conmon#84 |
Let's say I have a container that logs massively. The supported configuration from Red Hat uses JSON files in /var/log/containers, but these will eventually fill the filesystem, because the logs are only deleted after pod deletion. One way to combat this is to use max-size.
Let's imagine this scenario (for demonstration, each log entry is 1 MB and max-size is 50 MB):
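The scenario's steps didn't survive in this thread, but the failure mode can be sketched as a toy simulation (all numbers hypothetical): a container writing 1 MB entries faster than the collector tails them, with max-size rotation capping the file at 50 entries.

```python
# Toy model of log rotation racing a slow collector (hypothetical numbers).
# The "file" holds at most MAX_ENTRIES one-megabyte entries; rotation evicts
# the oldest entries, and anything evicted before the collector reads it is lost.

MAX_ENTRIES = 50    # 50 MB max-size / 1 MB per entry
TOTAL = 200         # entries the container writes in total
WRITE_RATE = 2      # entries written per tick
READ_RATE = 1       # entries the collector reads per tick

def simulate():
    on_disk = []            # oldest entry first
    written = collected = lost = 0
    while written < TOTAL or on_disk:
        # container appends new entries
        for _ in range(WRITE_RATE):
            if written < TOTAL:
                on_disk.append(written)
                written += 1
        # rotation evicts the oldest entries beyond the size cap
        while len(on_disk) > MAX_ENTRIES:
            on_disk.pop(0)
            lost += 1       # the collector never saw this entry
        # collector tails the file
        for _ in range(READ_RATE):
            if on_disk:
                on_disk.pop(0)
                collected += 1
    return collected, lost

if __name__ == "__main__":
    collected, lost = simulate()
    # every entry ends up either collected or lost; with the writer twice
    # as fast as the reader, lost is nonzero
    print(f"collected={collected} lost={lost}")
```

Because the writer outpaces the reader, the on-disk window fills and rotation starts discarding entries the collector has not yet read - exactly the loss described above.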
The same idea applies to dead containers: the k8s GC could delete dead containers before their data has been sent to ES (maximum-dead-containers-per-container, default value 1).
Is there any way to truncate/rotate/delete logs on the nodes based on an acknowledgment from fluentd that the data has been successfully sent - or any idea how to make this work 100% and not lose a single log line?