[ML] More advanced model snapshot retention policy #52150
Pinging @elastic/ml-core (:ml)
The plan for 7.8 is:
droberts195 added a commit to droberts195/elasticsearch that referenced this issue on Apr 28, 2020:
This change adds a new setting, model_snapshot_retention_sparse_after_days, to the anomaly detection job config. Initially this has no effect; the effect will be added in a follow-up PR. This PR gets the complexities of making changes that interact with BWC over well before feature freeze. Relates elastic#52150
droberts195 added a commit that referenced this issue on Apr 28, 2020:
This change adds a new setting, daily_model_snapshot_retention_after_days, to the anomaly detection job config. Initially this has no effect; the effect will be added in a follow-up PR. This PR gets the complexities of making changes that interact with BWC over well before feature freeze. Relates #52150
droberts195 added a commit to droberts195/elasticsearch that referenced this issue on May 4, 2020:
This PR implements the following changes to make ML model snapshot retention more flexible in advance of adding a UI for the feature in an upcoming release.

- The default for `model_snapshot_retention_days` for new jobs is now 10 instead of 1
- There is a new job setting, `daily_model_snapshot_retention_after_days`, that defaults to 1 for new jobs and to `model_snapshot_retention_days` for pre-7.8 jobs
- For days that are older than `model_snapshot_retention_days`, all model snapshots are deleted as before
- For days that are in between `daily_model_snapshot_retention_after_days` and `model_snapshot_retention_days`, all but the first model snapshot for that day are deleted
- The `retain` setting of model snapshots is still respected, to allow selected model snapshots to be retained indefinitely

Closes elastic#52150
droberts195 added a commit that referenced this issue on May 5, 2020:
This PR implements the following changes to make ML model snapshot retention more flexible in advance of adding a UI for the feature in an upcoming release.

- The default for `model_snapshot_retention_days` for new jobs is now 10 instead of 1
- There is a new job setting, `daily_model_snapshot_retention_after_days`, that defaults to 1 for new jobs and to `model_snapshot_retention_days` for pre-7.8 jobs
- For days that are older than `model_snapshot_retention_days`, all model snapshots are deleted as before
- For days that are in between `daily_model_snapshot_retention_after_days` and `model_snapshot_retention_days`, all but the first model snapshot for that day are deleted
- The `retain` setting of model snapshots is still respected, to allow selected model snapshots to be retained indefinitely

Closes #52150
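The retention rules in the commit message above can be sketched in a few lines of Python. This is an illustrative model only, not the actual Elasticsearch implementation: the `Snapshot` class and `snapshots_to_delete` function are invented names, and it assumes "the first model snapshot for that day" means the earliest snapshot taken on that calendar day.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    age_days: float   # how long ago the snapshot was written
    day: int          # calendar-day index of the snapshot (newest day = 0)
    retain: bool = False

def snapshots_to_delete(snapshots,
                        model_snapshot_retention_days=10,
                        daily_model_snapshot_retention_after_days=1):
    """Return the snapshots the retention policy would delete.

    - snapshots older than model_snapshot_retention_days are deleted
    - between the two thresholds, all but the first (earliest) snapshot
      of each day are deleted
    - snapshots flagged 'retain' are always kept
    """
    # find the earliest snapshot per calendar day
    first_of_day = {}
    for s in snapshots:
        cur = first_of_day.get(s.day)
        if cur is None or s.age_days > cur.age_days:
            first_of_day[s.day] = s

    to_delete = []
    for s in snapshots:
        if s.retain:
            continue  # explicitly retained snapshots are never deleted
        if s.age_days > model_snapshot_retention_days:
            to_delete.append(s)  # past full retention: delete
        elif s.age_days > daily_model_snapshot_retention_after_days:
            if first_of_day[s.day] is not s:
                to_delete.append(s)  # thin to one snapshot per day
    return to_delete
```

With the defaults (10 and 1), a half-day-old snapshot is kept, only the earliest of two day-2 snapshots survives, and a 12-day-old snapshot is deleted unless its `retain` flag is set.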
Currently, ML anomaly detector model snapshots are written every 3-4 hours and retained for a number of days (default 1) specified in the `model_snapshot_retention_days` setting of the job configuration. Many users want to store model snapshots for longer than the default of 1 day. However, this means that 6-8 potentially large model snapshots are retained for each extra day that is configured.
It would be nicer if the retention policy could be changed to something like "retain all model snapshots for 1 day, plus 1 model snapshot per day for 6 days before that, plus 1 model snapshot per week for 21 days before that".
This leads to some questions:
Once the configuration mechanism is decided it should not be very difficult to make the code changes necessary to action the configured policy.
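The tiered policy proposed above ("all snapshots for 1 day, one per day for the 6 days before that, one per week for the 21 days before that") can be sketched by mapping each snapshot's age to a retention bucket; snapshots sharing a bucket would then be thinned to a single survivor. This is a hypothetical illustration with invented names, not anything from the actual configuration mechanism, which the issue leaves open.

```python
def retention_bucket(age_days):
    """Map a snapshot's age to a retention bucket.

    Snapshots sharing a bucket are thinned to one survivor; a None
    bucket (older than 28 days) means the snapshot is deleted.
    """
    if age_days <= 1:
        return ("all", age_days)        # unique per snapshot: keep everything
    if age_days <= 7:
        return ("daily", int(age_days)) # one snapshot per day
    if age_days <= 28:
        return ("weekly", int(age_days // 7))  # one snapshot per week
    return None                         # beyond all tiers: delete
```

For example, two snapshots aged 3.2 and 3.7 days fall into the same `("daily", 3)` bucket, so only one of them would be kept.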