Several users have indicated that they would find ML's model snapshot reversion functionality very useful if the feature were more visible.

In a future release, a model snapshot management page in the anomaly detection configuration section of the ML UI will help. In the meantime, we can raise awareness that reverting to an old model snapshot is possible by annotating the results views to record when each model snapshot was created. Reverting lets the model forget a period during which it was fed invalid data. If these annotations include the model snapshot ID, it will be much clearer which snapshot the administrator should revert to in order to discard what the model learnt during a period when the input to anomaly detection was undesirable in some way.
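For context, the reversion itself is already possible via the existing revert model snapshot API; the annotation would simply surface the snapshot ID the administrator needs. A sketch of the call they would run, with placeholder job and snapshot IDs:

```
POST _ml/anomaly_detectors/my_job/model_snapshots/1575402237/_revert
{
  "delete_intervening_results": true
}
```

Setting `delete_intervening_results` to `true` additionally removes the results generated between the snapshot's timestamp and the latest result, which is usually what is wanted when discarding a period of bad input.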
A precedent for the ML backend creating annotations exists in the form of our "delayed data" annotations.
A related piece of work is to delete these "snapshot created" annotations when the corresponding model snapshot is deleted, either by the nightly maintenance task after it passes its expiry date or when the entire job is deleted. (This may be more complex than it sounds, as it creates a need to search for a specific annotation programmatically. If that requires considerable change, it can be implemented separately from the creation of the annotations.)
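To sketch what the programmatic cleanup might look like: the maintenance task could issue a delete-by-query against the annotations index, matching on the job ID and the snapshot ID stored in the annotation. This is only an illustration under assumed names — the `event` value `model_snapshot_stored` and the `model_snapshot_id` field are hypothetical here, not settled field names:

```
POST .ml-annotations-*/_delete_by_query
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "job_id": "my_job" } },
        { "term": { "event": "model_snapshot_stored" } },
        { "term": { "model_snapshot_id": "1575402237" } }
      ]
    }
  }
}
```

Whatever the final field names, recording the snapshot ID as a structured field on the annotation (rather than only in its free text) is what makes this targeted deletion straightforward.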