Log2ram runs out of space #174
Did you get an error message on the systemd service, or is the mount point full? Az |
Thank you for your answer! Rsync is installed and I get no errors on the systemd service. The mount point runs full. But if I look through it, a du gives me only a few MB used... Just now (I did a reboot around 24 hours ago), I have a usage of 13 MB in /var/log (du /var/log). |
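For reference, a quick way to see the mismatch described above is to compare what the files add up to with what the mount itself reports (a minimal sketch, assuming the default /var/log mount point used by log2ram):

```bash
# Sum of the files inside the mount, as du sees them
sudo du -sh /var/log

# Usage of the RAM disk itself, as the kernel reports it to df
df -h /var/log
```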
Hi anderl78, Typically if you have an ever growing /var/log it comes down to one of two things: either some service is writing an unusual amount of logs, or log rotation is not cleaning up the old ones.
If the growth happens within a single 24 hours, the issue is most probably related to the first point, but if it continues at the same rate after rotation, it means it's a mix of the two. However, I find it strange that a reboot fixes the issue, as a clean reboot should not change the size of the logs... possibly there is something weird in the sync. Does sudo systemctl restart log2ram.service produce any error? Another strange thing is the fact that you see such a significant difference between what du reports and what df shows for the mount.
Just to give an idea, on raspberries you can get a rough file count with: sudo ls -lRA /var/log | wc -l (I know it's an overestimation, far from precise, as it counts directories twice, there are empty lines, and so on, but it gives you a very quick and dirty idea; just know that this file count is a bit overestimated...) P.S.: having a limited amount of logs, thanks to optimized logging granularity and rotation, might lead to the idea that using... |
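Along the same lines, a quick-and-dirty set of checks (just a sketch; the sorting and the journald check are my own additions, not from the comment above):

```bash
# Rough file count under /var/log (slightly overestimated, as noted above)
sudo ls -lRA /var/log | wc -l

# Largest directories first, to spot a single noisy service
sudo du -sh /var/log/* 2>/dev/null | sort -rh | head

# How much space journald believes it is using
journalctl --disk-usage
```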
Hey, sorry for my late answer. I had corona and was not able to do anything about this. Thank you very much for your answer! Now I tried, as you suggested, a sudo systemctl restart log2ram.service. And after this, the usage seems to go back to normal... Don't know why! |
Hi, It seems I had a similar issue. For some reason (a buggy mail loop), my /var/log filled up and log2ram was running out of space. I cleared up some space, so du reported far less usage. Still, this extra space was not reflected in df.
Looking at the systemd unit, it seems there is something missing: when the main process exits, the unit does not fall back to a failed / stopped state, nor does systemd automatically restart it.
A manual restart of log2ram did the trick, though.
Best |
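For anyone checking the same thing, the unit state can be inspected with standard systemctl commands before restarting (a sketch; nothing here is specific to this commenter's setup):

```bash
# What state systemd currently reports for the unit
systemctl status log2ram.service
systemctl show log2ram.service -p ActiveState,SubState,Result

# The manual restart that cleared the "ghost" usage in this case
sudo systemctl restart log2ram.service
```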
I also just experienced this. Only 3M was actually in use, yet the mount showed as full; restarting the log2ram service made df show the correct usage. |
@ferrouswheel do you use rsync with log2ram? There is something weird about this issue. |
My output of those commands is:
(I'll also run these commands if I experience the issue again) |
The only thing I suppose could be happening here is that somehow you have sparse files (amounts of space preallocated), and log2ram doesn't see them by default. |
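If sparse or preallocated files were the cause, comparing apparent size against allocated blocks would show it (a sketch using standard GNU coreutils/findutils options, not anything log2ram-specific):

```bash
# Apparent size (what the files claim to be)...
sudo du -sh --apparent-size /var/log

# ...versus space actually allocated on the mount
sudo du -sh /var/log

# Per-file view: %s = apparent bytes, %b = 512-byte blocks allocated
sudo find /var/log -type f -printf '%s\t%b\t%p\n' | sort -rn | head
```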
I'm also seeing this issue. I had rsync disabled and just enabled it. Restarting log2ram or the device cleared out the "ghost" space that was filling up the log2ram filesystem. Somewhat odd. Let me know if there are any logs I can pull that would be helpful. |
I have the problem that log2ram runs out of space as well. The big problem in my case is that when it gets to 100%, my Raspberry Pi loses its ethernet connection, so I have to manually power it off and on to get it working again. I had this problem a while back and never got to the root of it, but I installed log2ram a few weeks ago on a new Pi and after a couple of days it started happening again. Manually restarting the log2ram service doesn't do anything for me. I even increased the partition size via the log2ram config file to 80 MB, but it still got to 100% usage.
I don't know why the journal gets over 25 MB, because I set it to a 20 MB maximum as shown in the readme. Thanks! :) |
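For comparison, capping the journal is normally done in journald.conf and then verified with journalctl (a sketch; the 20M value mirrors the comment above, the rest is generic systemd configuration rather than anything taken from the log2ram readme):

```bash
# /etc/systemd/journald.conf (or a drop-in in /etc/systemd/journald.conf.d/)
#   [Journal]
#   SystemMaxUse=20M

# Apply the change and check what journald actually uses afterwards
sudo systemctl restart systemd-journald
journalctl --disk-usage
```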
I have the exact same problem. Did you ever find a solution to this? |
Unfortunately I did not... |
Seems like the logs are not getting properly recycled, and the journal logs grow bigger than they are supposed to. |
I ran into this issue and managed to find the solution:
After having a look at it, the issue is the journald log getting too big, quickly filling up all the log2ram space. Solution? Just clean it a bit. Result:
|
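A typical journald cleanup along those lines looks like this (a sketch; the size and retention values are arbitrary examples, not necessarily what the commenter ran):

```bash
# Rotate the active journal files, then drop archived ones beyond a size cap
sudo journalctl --rotate
sudo journalctl --vacuum-size=20M

# Or drop everything older than two days instead
sudo journalctl --vacuum-time=2d

# Confirm the result
journalctl --disk-usage
```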
This is not a fix for the issue, but rather a workaround. The root of the problem is that the journal logs are getting bigger than they should (bigger than what is specified in the journal config file), so somehow the journal logs are not being properly rotated. |
I fixed this error by doing
Actually, a few days later and it still ran out of log space. |
I checked with df -h. Hmm, for now I'm doing it via a simple daily cronjob (#cleanup workaround for log2ram) that runs sudo systemctl restart log2ram. I still have no clue what the real problem is, but a restart fixes the issue for me. |
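Such a daily workaround could look roughly like this as a cron script (a sketch; the file name and the vacuum step are my own additions, the restart is the part taken from the comment above):

```bash
#!/bin/sh
# Hypothetical /etc/cron.daily/log2ram-workaround (runs as root, so no sudo)
# Trim the journal, then restart log2ram to release the "ghost" space
journalctl --vacuum-size=20M
systemctl restart log2ram.service
```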
Also seeing the same issue. Jumping on the thread for updates. |
Pi Zero W here, running Pi Hole and log2ram. I have been fighting this for a while, thinking more and more it's a systemd journal bloating issue tangled with log2ram. Anyway, I found this as well: home-assistant/operating-system#2226 |
+1 here. |
Hello! Log2ram runs out of space after a few days. If I check the size of /var/log, it shows the expected size: way smaller than the configured size of the ramdisk. For example, /var/log is around 15 MB and the ramdisk is 100 MB, yet it keeps filling up. Logrotate is already optimized. If I manually write the ramdisk's content to disk, there is no difference in the usage of the ramdisk. The only way to fix it is to reboot the system. System is a Raspi 3B.