Storage Based Retention #3942
Conversation
I am unsure if when running
Good first pass. I just think we can get there in a way that's less complex and more efficient.
…retained segments (compare 957e589 to 0a623bb)
logger.error(
    f"Could not clear {hourly_bandwidth} currently {deleted_segments_size}, retained recordings must be deleted."
)
for recording in recordings.objects().iterator():
I don't think you can re-use the iterator like this, but I could be wrong.
The test def test_storage_cleanup(self) does capture this, so I think it does work this way, but I'll explicitly define the iterator just to be sure.
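To spell out the iterator concern above, here is a minimal sketch (not Frigate or Peewee code, just a stand-in generator under assumed names) of why a one-shot iterator yields nothing on a second pass and why explicitly re-defining it per pass is safer:

```python
# Minimal sketch, not Frigate code: the stand-in generator below mimics the
# one-shot behaviour of a query iterator such as recordings.objects().iterator().

def fake_recordings_iterator():
    # Stand-in for a DB cursor: a generator that can only be consumed once.
    yield from ({"id": i, "segment_size_mb": 2.5} for i in range(3))

it = fake_recordings_iterator()
print(sum(1 for _ in it))  # 3 -- the first pass consumes every row
print(sum(1 for _ in it))  # 0 -- a second pass over the same object is silently empty

# Safer: explicitly re-create the iterator (i.e. re-run the query) for each pass.
for recording in fake_recordings_iterator():  # fresh iterator, full result set again
    pass
```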
@NickM-27 Would it be pretty easy to add a config option for how much space to keep free instead of the 1-hour calculation? I think cleaning up an hour at a time would still work, but having a config for that might be nice as well. I wanted to get your thoughts before I tried to implement it.
It was deliberately left at this for now, as our focus for 0.12.0 is stability. More config options are planned for a future release. That is to say: feel free to go for it, but I'm not sure it'll be included in the upcoming release. That being said, it should be fairly trivial to implement.
Awesome, thanks for the quick reply. It makes sense that this initial feature falls under stability improvements. I'll try to make a draft PR with a configurable limit option, understanding that merging may be pushed off for a while.
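For reference, here is a hypothetical sketch of what a "keep this much space free" check could look like (the min_free value, recordings path, and function name are assumptions for illustration, not anything defined in this PR):

```python
# Hypothetical sketch only, not the PR's implementation: trigger cleanup when
# free space on the recordings volume drops below a configured minimum,
# instead of (or in addition to) the hourly-bandwidth calculation.
import shutil

RECORDINGS_DIR = "/media/frigate/recordings"  # assumed path for illustration
MIN_FREE_BYTES = 10 * 1024**3  # e.g. a hypothetical min_free config of 10 GiB


def needs_cleanup(path: str = RECORDINGS_DIR, threshold: int = MIN_FREE_BYTES) -> bool:
    """Return True when free space on the recordings volume is below the threshold."""
    return shutil.disk_usage(path).free < threshold


if needs_cleanup():
    # A maintainer loop would then delete the oldest non-retained segments
    # until free space rises back above the threshold.
    pass
```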
* Add field and migration for segment size
* Store the segment size in db
* Add comment
* Add default
* Fix size parsing
* Include segment size in recordings endpoint
* Start adding storage maintainer
* Add storage maintainer and calculate average sizes
* Update comment
* Store segment and hour avg sizes per camera
* Formatting
* Keep track of total segment and hour averages
* Remove unused files
* Cleanup 2 hours of recordings at a time
* Formatting
* Fix bug
* Round segment size
* Cleanup some comments
* Handle case where segments are not deleted on initial run or only retained segments remain
* Improve cleanup log
* Formatting
* Fix typo and improve logging
* Catch case where no recordings exist for camera
* Specifically define sort
* Handle edge case for cameras that only record part time
* Increase definition of part time recorder
* Remove warning about not supported storage based retention
* Add note about storage based retention to recording docs
* Add tests for storage maintenance calculation and cleanup
* Format tests
* Don't run for a camera with no recording segments
* Get size of file from cache
* Rework camera stats to be more efficient
* Remove total and other inefficiencies
* Rewrite storage cleanup logic to be much more efficient
* Fix existing tests
* Fix bugs from tests
* Add another test
* Improve logging
* Formatting
* Set back correct loop time
* Update name
* Update comment
* Only include segments that have a nonzero size
* Catch case where camera has 0 nonzero segment durations
* Add test to cover zero bandwidth migration case
* Fix test
* Incorrect boolean logic
* Formatting
* Explicitly re-define iterator
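As a rough illustration of the "calculate average sizes" and hourly-bandwidth steps in the list above, here is a hedged sketch (not the actual StorageMaintainer code; the 10-second segment duration and MB units are assumptions) of how per-camera hourly bandwidth can be estimated from observed segment sizes:

```python
# Hedged sketch, not the actual StorageMaintainer code: estimate how much space
# one hour of recordings consumes for a camera, from observed segment sizes.

def hourly_bandwidth(segment_sizes_mb: list[float], segment_duration_s: float = 10.0) -> float:
    """Estimate MB written per hour from nonzero segment sizes."""
    nonzero = [s for s in segment_sizes_mb if s > 0]  # only segments with a nonzero size
    if not nonzero:
        return 0.0  # camera has no usable segments yet, so skip it
    avg_segment_mb = sum(nonzero) / len(nonzero)
    segments_per_hour = 3600 / segment_duration_s
    return avg_segment_mb * segments_per_hour

# Example: 10-second segments averaging 2.5 MB -> roughly 900 MB per hour.
print(hourly_bandwidth([2.4, 2.6, 0.0, 2.5]))
```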
Make sure to tag/close #994 with this PR. (I can't touch it, but this PR resolves that issue.)
@XtremeOwnageDotCom I don't think that issue can be closed; there are more configuration options and other things that need to be done to fully satisfy that request.
@NickM-27 It meets the requirement for which I originally opened that issue. However, I gave up trying to keep up with all of the riff-raff that occurred in the issue when it went off the rails.
To-Do: