I'm not so happy with the hourly cron job: it favours servers at the beginning of the alphabet, ramps up memory usage as it runs, and always risks overlapping with its own next run. We could add a semaphore and/or spawn one child process per service. We could also link the scheduler to the per-service API, so spontaneous crawls get queued whenever there are no interactive requests through the API.
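One way to sketch the semaphore idea, assuming a POSIX host and a writable lock path (both hypothetical, not part of the existing code): take a non-blocking exclusive `flock` at startup, and bail out if the previous run still holds it.

```python
import fcntl
import sys

# Hypothetical lock path; adjust to the project's runtime directory.
LOCK_PATH = "/tmp/crawler.lock"

def acquire_lock(path):
    """Return an open file holding an exclusive lock, or None if another run holds it."""
    f = open(path, "w")
    try:
        # LOCK_NB makes this fail immediately instead of waiting for the other run.
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except BlockingIOError:
        f.close()
        return None

if __name__ == "__main__":
    lock = acquire_lock(LOCK_PATH)
    if lock is None:
        sys.exit(0)  # previous hourly run still active; skip this invocation
    # ... crawl all services here; the lock is released when the process exits ...
```

The lock is released automatically when the process exits, so a crashed run can't wedge the scheduler the way a stale PID file would.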
We could also have semi-spontaneous crawls: for instance, if someone is reviewing a document that hasn't been crawled in more than a day, automatically trigger a crawl.
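The staleness check behind that trigger could look something like this, a minimal sketch assuming a timezone-aware `last_crawled` timestamp and the one-day threshold from the comment (the function name and signature are invented for illustration):

```python
from datetime import datetime, timedelta, timezone

# Threshold from the comment: trigger if not crawled "in more than a day".
STALE_AFTER = timedelta(days=1)

def needs_crawl(last_crawled, now=None):
    """True when a document was never crawled, or its last crawl is older than STALE_AFTER."""
    if now is None:
        now = datetime.now(timezone.utc)
    if last_crawled is None:
        return True  # never crawled at all
    return now - last_crawled > STALE_AFTER
```

The review handler would call this on page load and enqueue a crawl when it returns true, piggybacking on the same queue the scheduler uses so interactive requests still take priority.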