Log Rotate: Rotated file still in use #308
Same cause, different issue: keeping the rotated/deleted file open keeps its blocks from being reclaimed, which is a bit of a problem when yesterday's file is 20G and only 22G of space is left :) I'm not using the latest code, though. Typical lsof output: I guess, in my case, using copytruncate in logrotate should help, although I'm not sure about all the possible side effects. |
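For illustration, a logrotate stanza using copytruncate might look like the following sketch (the path and retention are made up; note that copytruncate can drop lines written between the copy and the truncate, which is one of the side effects to weigh):

    /var/log/nginx/*.log {
        daily
        rotate 7
        compress
        copytruncate
    }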
Well, my workaround is to restart logstash-forwarder every day after the logs are rotated. |
Files should be closed 24 hours after they were last updated, if no changes occur. This is really the only way to tell if a file is deleted/moved without having to constantly poll the entry on disk. It's also important to keep reading for a little while even after rotation, so that no logs are lost if some workers are still writing to the old file. The "dead time" option on the file config should let you change this from 24 hours to something less if you need. |
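For reference, a minimal sketch of a files entry with a shorter "dead time", assuming the JSON key is spelled "dead time" as in the logstash-forwarder README; the path, type, and duration are illustrative:

    {
      "files": [
        {
          "paths": [
            "/var/log/myapp/*.log"
          ],
          "fields": {
            "type": "myapp"
          },
          "dead time": "1h"
        }
      ]
    }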
It's no problem for me that logstash-forwarder keeps reading the rotated log files. But the way logstash-forwarder currently handles a logfile disappearing is not that great :) |
+1 I think I have a similar problem. I installed logstash-forwarder from a Debian package on my load balancers, and today we noticed we were running out of space FAST. It seems that logstash-forwarder kept the log files open even after they were rotated.
lsof:
logstash-forwarder version:
|
As mentioned before, if the file is not fully read and sent to logstash, it will not be released even if deleted. Once fully read, a timer starts (default 24h); if no more data arrives, the log is then closed. This is the "dead time" option that you specify alongside "paths" and "fields", and it can be reduced to, say, "30m". lsof should have a SIZE/OFF column that tells you how far the forwarder has read, in case it's that problem. My Log Courier fork has a tool to view this info too if you're still having issues; it prints the speed, the current offset, and what time is left before file closure. |
@driskell thanks a lot, this helps, although I believe it is not yet available in the Debian package. |
@AlexRRR current advice from the maintainers (and myself) is to use the master branch and build from source, as it contains many fixes. I don't think any new packaged release is due for a while, as they're busy with Logstash 1.5. |
Hm, this seems to be another issue? I have open files that are from 19/11.
I feel there should be some logic to handle the above case. Is this the desired outcome, to keep a deleted file open for a week? Also, restarting logstash-forwarder will close the file anyway, so it won't be able to retry. If it can't read the file in 24h and nothing has been written to it over that time either, why not force-close it? |
@kiranos maybe there is a better way. The reason is so that no log line is lost. If logstash crashes or is unreachable for whatever reason (hardware failure/network split) and you fix logstash the following day, the logs flow in and none are lost. And if not all logs are getting sent, you'll find out, because disk space (which you're monitoring) isn't getting freed. So it's an underlying issue, and any logic that tries to "handle" it might hide that issue. In most cases it won't cause any problems for 7 or even 30 days, since that's how long logs are kept on the server anyway; so to be fair, it's a major issue if after 7 days the logs still haven't been transmitted. I do agree there might be a better way though. |
This issue still exists in the latest version. A bigger problem is that it causes the number of open files to accumulate continuously, so it will reach the process limit (default 1024) sooner or later. |
Anything on this? I don't want to restart logstash-forwarder every 24 hours. |
In addition to setting a much shorter dead time, try increasing the spool-size and harvest-buffer-size. I have lsf running on many different servers and I only experienced this issue on the ones with the heaviest volume. After some testing it seemed that lsf was legitimately still harvesting the file after it was rotated; by increasing those settings and adding more indexers to receive the logs, the issue went away. |
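As a sketch, such an invocation might look like this; the flag names are taken from the comment above and assumed to match the lsf build in use, and the values are purely illustrative:

    logstash-forwarder -config /etc/logstash-forwarder.conf -spool-size 2048 -harvest-buffer-size 32768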
The issues mentioned above should be fixed in the first release of filebeat (https://github.com/elastic/filebeat/), which should be available in the next week. As @driskell mentioned above, some of the issues can't be resolved, as filebeat needs to keep the file open to make sure log lines are sent at least once. @Deshke Is the reason you restart logstash-forwarder every 24 hours the open files? |
yes, logstash-forwarder kept the rotated file open (nginx-access.log -> nginx-access.log.$DATE) and did not move on by itself to the new log (nginx-access.log) |
@Deshke What is your paths configuration? It could be that you have |
yes, somewhere around that; it happens on the syslog/auth.log as well as on the nginx logs (part of my conf below). logstash-forwarder.conf ....
"files": [
{
"paths": [
"/var/log/syslog"
],
"fields": {
"type": "syslog"
}
},
{
"paths": [
"/var/log/auth.log"
],
"fields": {
"type": "auth-log"
}
},
{
"paths": [
"/var/log/nginx/*access.log"
],
"fields": {
"type": "nginx-access"
}
},..... |
What happens if you change
to
Like this, logstash-forwarder should finish reading the moved file. |
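The concrete patterns were lost from the comment above, but the suggestion appears to be widening the glob so rotated files are still matched; a hypothetical example:

    "paths": [
      "/var/log/nginx/*access.log*"
    ]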
Hello guys, |
@fitz123 If it is about closing files after a certain time, ignore_older in filebeat is the current solution: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html#_ignore_older But related to this, also check issue https://github.com/elastic/filebeat/issues/181 |
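A minimal prospector sketch using ignore_older, assuming the filebeat 1.x YAML layout from the linked documentation; the path and duration are illustrative:

    filebeat:
      prospectors:
        - paths:
            - /var/log/nginx/*access.log
          ignore_older: 1h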
The question was about logstash-forwarder's own logs (/var/log/logstash-forwarder). Currently it continues logging into the rotated file (logstash-forwarder.err.1), and a service restart is my workaround. But I take from your answer that the forwarder is not supported anymore and the current replacement is filebeat? Am I right? Best Regards, A. L.
|
@fitz123 Filebeat will be the next version of logstash-forwarder. It is planned to be GA in the next weeks, and migrating to filebeat is recommended. I thought you were referring to the log rotation of the files lsf is reading. In filebeat, the rotation of its own logs is part of libbeat (the base library behind the beats) and is done automatically (no restart required). Check here for more details: https://www.elastic.co/guide/en/beats/libbeat/current/configuration.html#_files_rotateeverybytes |
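A sketch of the logging section the linked page describes, assuming the libbeat option names of that release; the path and sizes are illustrative:

    logging:
      to_files: true
      files:
        path: /var/log/filebeat
        name: filebeat.log
        rotateeverybytes: 10485760
        keepfiles: 7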
There are two options I can see to work around this at this time:
|
You mentioned that filebeat will rotate the log file. I have a question about a configuration. Let's say program A is writing logs to logFileA.log. We have configured filebeat to consume logFileA.log and publish the changes to logstash. Now, will setting the files.rotateeverybytes value rotate logFileA.log? I think it will not. If not, what is the recommended way to rotate logFileA.log? PS: I know we can use logrotate to handle this, but will filebeat then handle it properly without losing the logs between filebeat's last consumption and the logrotate run? |
@kpcool I think you are confusing two things here: the rotation of filebeat's own log files, which is what files.rotateeverybytes configures, and the rotation of the log files filebeat is reading, which is left to logrotate or the application itself.
Filebeat can handle both. In case you use logrotate, filebeat is able to also read and follow the rotated files, as it uses the inode and device to identify files and not necessarily the file name. |
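To illustrate the last point, here is a minimal Go sketch of identifying a file by its (device, inode) pair rather than its name, which is the property that lets a reader keep following a file across a logrotate rename; the path is illustrative and this is not filebeat's actual code:

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        info, err := os.Stat("/var/log/nginx/access.log") // illustrative path
        if err != nil {
            panic(err)
        }
        // On Linux, the underlying stat data exposes device and inode numbers.
        stat := info.Sys().(*syscall.Stat_t)
        // The (device, inode) pair survives a rename such as logrotate's
        // access.log -> access.log.1, while the file name does not.
        fmt.Printf("device=%d inode=%d\n", stat.Dev, stat.Ino)
    }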
@ruflin Thanks for the clarification. |
It appears that logstash-forwarder keeps the rotated log files open.
/var/log/nginx is an NFS share. On a different server we mount this share for scripting purposes and also move the files away. This is the problem, because the web server doesn't know that the file is gone, and neither does logstash-forwarder.
The question is: why does logstash-forwarder keep the rotated file open?
Version: HEAD~19812cf
OS: Debian 6.0.10
Linux 3.2.63-grsec #2 SMP Thu Oct 16 18:27:02 CEST 2014 x86_64 GNU/Linux
Debug information:
2014/11/07 00:05:13.895018 Launching harvester on rotated file: /var/log/nginx/access.log
2014/11/07 00:05:13.896337 harvest: "/var/log/nginx/access.log" (offset snapshot:0)
2014/11/07 00:05:14.168639 Registrar: precessing 1024 events
..........
2014/11/07 00:32:59.485583 Registrar: precessing 359 events
2014/11/07 00:33:04.090287 error: Harvester.readLine: read /var/log/nginx/access.log: no such file or directory
2014/11/07 00:33:04.090321 Unexpected state reading from /var/log/nginx/access.log; error: read /var/log/nginx/access.log: no such file or directory
panic: runtime error: invalid memory address or nil pointer dereference
goroutine 22 [running]:
runtime.panic(0x675cc0, 0x80cbb3)
/usr/local/go/src/pkg/runtime/panic.c:279 +0xf5
main.(*Harvester).Harvest(0xc2080041e0, 0xc20807c0e0)
/root/downloads/logstash-forwarder/harvester.go:60 +0x529
created by main.(*Prospector).scan
/root/downloads/logstash-forwarder/prospector.go:164 +0xe24
goroutine 16 [chan receive]:
main.Registrar(0xc20800f440, 0xc2080757a0)
/root/downloads/logstash-forwarder/registrar.go:9 +0x79
main.main()
/root/downloads/logstash-forwarder/logstash-forwarder.go:212 +0xde4
goroutine 19 [finalizer wait, 805 minutes]:
runtime.park(0x41fc60, 0x810e90, 0x80f929)
/usr/local/go/src/pkg/runtime/proc.c:1369 +0x89
runtime.parkunlock(0x810e90, 0x80f929)
/usr/local/go/src/pkg/runtime/proc.c:1385 +0x3b
runfinq()
/usr/local/go/src/pkg/runtime/mgc0.c:2644 +0xcf
runtime.goexit()
/usr/local/go/src/pkg/runtime/proc.c:1445
goroutine 20 [sleep]:
time.Sleep(0x2540be400)
/usr/local/go/src/pkg/runtime/time.goc:39 +0x31
main.(*Prospector).Prospect(0xc208005c20, 0xc2080014c0, 0xc20807c0e0)
/root/downloads/logstash-forwarder/prospector.go:67 +0x5e1
created by main.main
/root/downloads/logstash-forwarder/logstash-forwarder.go:183 +0xb2d
goroutine 21 [sleep]:
time.Sleep(0x2540be400)
/usr/local/go/src/pkg/runtime/time.goc:39 +0x31
main.(*Prospector).Prospect(0xc208005c80, 0xc2080014c0, 0xc20807c0e0)
/root/downloads/logstash-forwarder/prospector.go:67 +0x5e1
created by main.main
/root/downloads/logstash-forwarder/logstash-forwarder.go:183 +0xb2d
goroutine 23 [sleep]:
time.Sleep(0x3b9aca00)
/usr/local/go/src/pkg/runtime/time.goc:39 +0x31
main.(*Harvester).readline(0xc208004300, 0xc208005da0, 0xc20803a0e0, 0x2540be400, 0x0, 0x0, 0x0, 0x0)
/root/downloads/logstash-forwarder/harvester.go:152 +0x1d0
main.(*Harvester).Harvest(0xc208004300, 0xc20807c0e0)
/root/downloads/logstash-forwarder/harvester.go:53 +0x476
created by main.(*Prospector).scan
/root/downloads/logstash-forwarder/prospector.go:164 +0xe24
...............................
goroutine 305 [sleep]:
time.Sleep(0x3b9aca00)
/usr/local/go/src/pkg/runtime/time.goc:39 +0x31
main.(*Harvester).readline(0xc208936ba0, 0xc208937440, 0xc2087ff5e0, 0x2540be400, 0x0, 0x0, 0x0, 0x0)
/root/downloads/logstash-forwarder/harvester.go:152 +0x1d0
main.(*Harvester).Harvest(0xc208936ba0, 0xc20807c0e0)
/root/downloads/logstash-forwarder/harvester.go:53 +0x476
created by main.(*Prospector).scan
/root/downloads/logstash-forwarder/prospector.go:187 +0x13e9