
Logparser does not detect file that has been deleted and recreated. #2847

Closed
mottati opened this issue May 23, 2017 · 11 comments
Labels
area/tail bug unexpected problem or unintended behavior

Comments

@mottati

mottati commented May 23, 2017

Bug report

The fixes implemented in #1829 did not address the case where a file being watched by the Logparser plugin is deleted and recreated.

Relevant telegraf.conf:

[agent]
   flush_interval=2

[[inputs.logparser]]
  files = ["/tmp/telegraf/input.log"]
  from_beginning=true
  [inputs.logparser.grok]
    patterns=["%{INT:measurement}"]

[[outputs.file]]
  files = ["stdout"]

System info:

Telegraf v1.3.0 (git: release-1.3 2bc5594)

Tested on CentOS 6.7

Steps to reproduce:

Run this bash script. It first tests a case similar to log rolling, where the original file is moved aside. It then tests the delete case, where the original file is deleted and a new file of the same name is created.

#!/bin/bash
function testCase {
    echo "$1 Test"
    rm -rf /tmp/telegraf
    mkdir /tmp/telegraf
    cat > /tmp/telegraf/config << EOF
[agent]
   flush_interval=2

[[inputs.logparser]]
  files = ["/tmp/telegraf/input.log"]
  from_beginning=true
  [inputs.logparser.grok]
    patterns=["%{INT:measurement}"]

[[outputs.file]]
  files = ["stdout"]
EOF

    # Start with 1 line of data in the input.log
    echo 0 > /tmp/telegraf/input.log
    rm -f nohup.out
    nohup telegraf --config /tmp/telegraf/config --debug 2>/dev/null &
    pid=$!

    for i in {1..5}
    do
        echo "Writing line $i"
        echo $i >> /tmp/telegraf/input.log
        sleep 10
        # Treat every non-"move" value as "delete".
        if [ "$1" = "move" ]; then
            mv /tmp/telegraf/input.log /tmp/telegraf/input.log$i
        else
            rm /tmp/telegraf/input.log
        fi
    done
    echo
    kill -9 $pid
    echo "Results for $1"
    cat nohup.out
    echo
}

testCase move
sleep 2
testCase delete

Results

move Test
Writing line 1
Writing line 2
Writing line 3
Writing line 4
Writing line 5

Results for move
logparser_grok,host=node2055.svc.devpg.pdx.wd measurement="0" 1495561759145829922
logparser_grok,host=node2055.svc.devpg.pdx.wd measurement="1" 1495561759145871841
logparser_grok,host=node2055.svc.devpg.pdx.wd measurement="2" 1495561769118308036
logparser_grok,host=node2055.svc.devpg.pdx.wd measurement="3" 1495561779120012459
logparser_grok,host=node2055.svc.devpg.pdx.wd measurement="4" 1495561789121931675
logparser_grok,host=node2055.svc.devpg.pdx.wd measurement="5" 1495561799123925397

./xx: line 45: 26230 Killed                  nohup telegraf --config /tmp/telegraf/config --debug 2> /dev/null
delete Test
Writing line 1
Writing line 2
Writing line 3
Writing line 4
Writing line 5

Results for delete
logparser_grok,host=node2055.svc.devpg.pdx.wd measurement="0" 1495561811159941941
logparser_grok,host=node2055.svc.devpg.pdx.wd measurement="1" 1495561811159987661

Expected behavior:

The results of the test should be the same regardless of whether the file is deleted or moved.

Actual behavior:

When the file is deleted, the Logparser plugin can no longer detect that a new file of the same name has been created.
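The failure mode can be illustrated with a small polling sketch (editorial addition, not Telegraf code): comparing the file's inode (`st_ino`) on each pass detects delete-and-recreate, which an event-based watcher still holding the old file handle can miss. The function name is hypothetical.

```python
import os

def poll_for_recreate(path, last_inode):
    """Return (recreated, inode).

    A watcher holding the old file handle never sees the new file,
    but a fresh stat() on the path does: a changed inode means the
    path was deleted and recreated since the last poll.
    """
    try:
        inode = os.stat(path).st_ino
    except FileNotFoundError:
        # File currently absent; keep the last known inode.
        return False, last_inode
    return (last_inode is not None and inode != last_inode), inode
```

This is roughly what a "poll" watch method can do that a pure inotify watcher on an already-open handle cannot.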

Use case:

Our use case is that the log we are monitoring is always in the same place. It gets purged and recreated when the service doing the logging is upgraded.

@danielnelson
Contributor

Here is a test case: #2965

@kylejvrsa

I can also confirm this issue with Telegraf 1.3.4 on Debian Jessie, with the Python logger doing the logging.

@botzill

botzill commented Dec 15, 2017

Hi guys, was this issue fixed in the latest version of Telegraf?

@danielnelson
Contributor

No this is still an issue.

@aseev-xx

The problem still reproduces in version 1.4.5.

@piotr1212
Contributor

I think this is related: hpcloud/tail#122.

https://github.com/hpcloud/tail/pull/125/files fixes the issue for me (it keeps the deleted file open).

Maybe that commit can be merged into https://github.com/influxdata/tail if the upstream library doesn't accept it.

@danielnelson
Contributor

@piotr1212 That change looks like it introduces a race condition, I don't think it is the right change to make but it might be a clue how to fix this issue. Also it should probably be opened in the fsnotify project and not in tail.

@piotr1212
Contributor

That is unfortunate. I forgot to mention that we had a full disk because the logparser plugin kept a deleted (rotated) file open. We switched to the "poll" watch method, which does not seem to be affected.
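For readers looking for the same workaround, a minimal config sketch follows. It assumes a Telegraf version in which the plugin exposes a `watch_method` option (the tail machinery supports `"inotify"` and `"poll"`); check your version's plugin README before relying on it.

```toml
[[inputs.logparser]]
  files = ["/tmp/telegraf/input.log"]
  from_beginning = true
  ## Poll the file by stat() instead of inotify events, so a
  ## deleted-and-recreated file is picked up again.
  watch_method = "poll"
  [inputs.logparser.grok]
    patterns = ["%{INT:measurement}"]
```

Polling trades a little CPU and latency for robustness against delete-and-recreate and some network-filesystem quirks.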

@sjwang90
Contributor

sjwang90 commented Jul 6, 2021

Does this issue still persist with inputs.tail? We have deprecated the Logparser plugin in Telegraf 1.15, so any fixes for this would be made on the Tail plugin.
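For reference, the original repro config above translates to inputs.tail roughly as follows. This is a sketch based on the documented tail-plugin options (`data_format = "grok"`, `grok_patterns`, `watch_method`); verify the option names against the README for your Telegraf version.

```toml
[[inputs.tail]]
  files = ["/tmp/telegraf/input.log"]
  from_beginning = true
  watch_method = "poll"
  data_format = "grok"
  grok_patterns = ["%{INT:measurement}"]
```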

@rluetzner

I've just stumbled upon an issue that looks very similar to this one, using inputs.tail. I'm tailing an nginx access.log, and around midnight (when log rotation happens) I get the following error in /var/log/telegraf/telegraf.log:

2021-11-08T00:00:01Z E! [inputs.tail]: Error in plugin: E! Error tailing file /var/log/nginx/access.log, Error: Unable to open file /var/log/nginx/access.log: open /var/log/nginx/access.log: permission denied

The initial post never mentioned a log message, so I cannot say whether this is exactly the same issue, but it certainly looks like it.

@MyaLongmire
Contributor

Closing as logparser is deprecated. If you are having problems with tail please open a new issue :)
