
Prospector taking long time to scan 300k files #37

Closed
java2kus opened this issue Sep 1, 2014 · 6 comments

java2kus commented Sep 1, 2014

We are using log-courier to scan folders that contain events written as XML files. We have around 25 folders with different names (one per business event type). In each folder, we create a sub-folder every day where events are logged in XML format. We log around 300k XML events a day (all folders combined).

We have observed that after a day or so the prospector takes a long time (around 20-25 minutes between file creation and harvest) to pick up new files created in the directories. This latency increases even further after 3-4 days. I think this is because the prospector has to check each file for changes before harvesting it. In our case, an XML event on the file system is immutable (i.e. it's created once and never modified).

Question: Is there any setting through which we can tell log-courier to drop folders that have already been scanned from its internal prospector registry after a day (through some pattern)? This would speed up subsequent scans, as it would not have to re-check all 300k files on the filesystem.


driskell commented Sep 1, 2014

Hi @java2kus

The problem here is that the glob match will still return 300k files, and to know whether a file is old or new it still has to check each one via stat().

If we tell it to stop remembering them, it will just see them again as new files.

It's a complex issue and I can't think of a solution within courier. You may be best running your own archive script to move old files to another directory.

Having so many files in one folder will cause other issues too, and they wouldn't be isolated to courier.
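
Roughly, every scan ends up doing something like the following (not courier's actual code, just a minimal sketch with a made-up path, to show why the cost grows with the total number of files on disk rather than with the number of new ones):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// Sketch of a single prospector-style scan: glob the pattern, then
// stat every match to decide whether it is new, changed or dead.
// With 300k files on disk this loop makes 300k stat() calls per scan,
// even if none of the files have changed.
func scan(pattern string) {
	start := time.Now()

	matches, err := filepath.Glob(pattern)
	if err != nil {
		fmt.Println("glob error:", err)
		return
	}

	recent := 0
	for _, path := range matches {
		info, err := os.Stat(path)
		if err != nil {
			continue // file vanished between glob and stat
		}
		// A real prospector would compare info against its registry here;
		// we just count files modified within the last hour.
		if time.Since(info.ModTime()) < time.Hour {
			recent++
		}
	}

	fmt.Printf("scanned %d files (%d recent) in %s\n",
		len(matches), recent, time.Since(start))
}

func main() {
	// Hypothetical layout from the issue: /<businesseventname>/YYMMDDHH/*.xml
	scan("/data/events/*/*/*.xml")
}
```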

Jason


java2kus commented Sep 1, 2014

Hi Jason,

Thanks for the response. We have these files in multiple folders. The folder structure goes something like this:

/<businesseventname>/YYMMDDHH/*.xml

We actually move these files every week. I guess we will have to change the job to move them every hour. Thanks for the suggestion.
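
For reference, the hourly job could look roughly like this (just a sketch; the paths and the one-hour cut-off are made up, and as noted below it can move files that haven't been shipped yet if Logstash falls behind):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// Hypothetical archive job: move XML files whose last modification is
// older than maxAge out of the live tree, so the prospector's glob no
// longer sees them. Directory names are placeholders.
func archiveOld(liveRoot, archiveRoot string, maxAge time.Duration) error {
	matches, err := filepath.Glob(filepath.Join(liveRoot, "*", "*", "*.xml"))
	if err != nil {
		return err
	}
	for _, path := range matches {
		info, err := os.Stat(path)
		if err != nil || time.Since(info.ModTime()) < maxAge {
			continue // vanished or still too fresh to move
		}
		rel, err := filepath.Rel(liveRoot, path)
		if err != nil {
			continue
		}
		dest := filepath.Join(archiveRoot, rel)
		if err := os.MkdirAll(filepath.Dir(dest), 0755); err != nil {
			return err
		}
		// Rename assumes the archive lives on the same filesystem.
		if err := os.Rename(path, dest); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := archiveOld("/data/events", "/data/archive", time.Hour); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```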


driskell commented Sep 1, 2014

I was thinking about it and I can see a use for a dead time action option. It would work by setting the dead time to about an hour and then setting the action to delete or, say, move:/archive/. This would be the most reliable way to ensure files are only archived after they are completed, since using a separate job might skip files if Logstash was really busy or down. (Dead time only triggers once the file is fully processed and received by Logstash and its last modification is older than the dead time.)
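
Roughly, it would behave something like this when a harvester's dead time expires (just a sketch of the idea; the action values and the helper are made up, not actual courier code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Sketch of the proposed "dead time action". In this hypothetical, the
// harvester would call onDeadTime only once the file has been fully
// acknowledged by Logstash and its modification time is older than the
// configured dead time. action is an imagined setting such as "none",
// "delete" or "move:/archive/".
func onDeadTime(path, action string) error {
	switch {
	case action == "delete":
		return os.Remove(path)
	case strings.HasPrefix(action, "move:"):
		destDir := strings.TrimPrefix(action, "move:")
		if err := os.MkdirAll(destDir, 0755); err != nil {
			return err
		}
		return os.Rename(path, filepath.Join(destDir, filepath.Base(path)))
	default:
		return nil // "none": keep today's behaviour and just forget the file
	}
}

func main() {
	// Example: archive a finished event file once it goes dead.
	if err := onDeadTime("/data/events/order/14090112/evt-0001.xml", "move:/archive/"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```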

Just for a bit more background - so you end up with 300k files each day? That's quite significant!

I'll note down the dead time action as an idea. I can't commit to adding it, but I will definitely explore it at some point if it sounds feasible to you. What do you think?


driskell commented Sep 1, 2014

Sorry just saw you already mentioned it's 300k a day! Disregard that question.

driskell changed the title from "Prospector taking long time to scan files" to "Prospector taking long time to scan 300k files" on Sep 1, 2014

java2kus commented Sep 1, 2014

300k files a day is a bit unusual, and this is because logging the events for auditing and troubleshooting purposes came as an afterthought. Typically we would have designed the logging architecture around a high-performance message broker, but it started with only around 10k events a day (where a broker was not felt necessary) and grew to the current volume.

On the idea, I like the dead time action concept. That way we won't be skipping events (due to an hourly job moving events into the archive) even if Logstash/Elasticsearch is overwhelmed with events. The dead time action should only be triggered after receiving confirmation from Logstash that the event has been processed (I know there is a bug in Logstash where the internal buffer pool may cause some events to be skipped in case of a crash). I am just learning to program in Go, so I may modify the source to implement a quick hack of the idea until you decide on its feasibility. Thanks!

driskell commented

Closing as it's a fairly big task and there's no bandwidth for it; it would also mean a large rewrite of the prospector.
