Horizon doesn't enforce retention policy (HISTORY_RETENTION_COUNT environment variable) #3711
Reaping is performed every hour. Have you waited an hour?
@bartekn Yes, the node has been running for several weeks; it was started by manually ingesting the most recent (at the time) 200,000 blocks.
Related: #3728. I believe this was broken by the 10-second timeout on the shared context in the app ticker.
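For illustration, a minimal Go sketch of the failure mode described in that comment: an hourly ticker whose reap run shares a context capped at roughly ten seconds. The names and durations here are hypothetical, not Horizon's actual code.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// reapOnce stands in for the retention cleanup. Deleting hundreds of
// thousands of history rows can easily take longer than a few seconds,
// so a short-lived context cancels it before it finishes.
// (Hypothetical sketch; not Horizon's actual implementation.)
func reapOnce(ctx context.Context) error {
	select {
	case <-time.After(30 * time.Second): // pretend the DELETEs take 30s
		fmt.Println("reap completed")
		return nil
	case <-ctx.Done():
		return ctx.Err() // context deadline exceeded
	}
}

func main() {
	ticker := time.NewTicker(time.Hour) // reaping runs once per hour
	defer ticker.Stop()

	for range ticker.C {
		// If every tick shares a context capped at ~10s, a reap run that
		// needs longer than that is cancelled every time, so retention is
		// never enforced even though the configuration is correct.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		if err := reapOnce(ctx); err != nil {
			fmt.Println("reap aborted:", err)
		}
		cancel()
	}
}
```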
What version are you using?
Horizon: 2.3.0-5029e28d1ec6272a44f5c03ad732059b2fead31d
Core: stellar-core 17.1.0 (fbc0325759ff75dd250cb5e175978669cdb4e90a)
Go: go1.16.3
What did you do?
Horizon was started with a set of environment variables that included HISTORY_RETENTION_COUNT=200000.
What did you expect to see?
Only the latest 200,000 blocks to be available on the node, and disk consumption to stop growing.
What did you see instead?
The node's disk space consumption has grown past 500 GB. Checking via the API showed that the eldest_block was around 600,000 blocks behind the block tip, which suggests that db reaping did not occur at all. Running horizon db reap produced output in which the new_elder block did respect HISTORY_RETENTION_COUNT, sitting 200,000 blocks behind the block tip. I suppose this suggests that the retention configuration was in place, but for some reason the reaper did not act on it?
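To make the arithmetic concrete, here is a small Go sketch using made-up ledger numbers consistent with the figures above; the exact boundary convention Horizon uses may differ by one.

```go
package main

import "fmt"

func main() {
	const (
		retentionCount = 200_000   // HISTORY_RETENTION_COUNT
		latestLedger   = 1_000_000 // hypothetical block tip
	)

	// Expected oldest retained ledger once reaping is enforced:
	// everything older than the newest retentionCount ledgers is deleted.
	expectedElder := latestLedger - retentionCount + 1

	// Observed in this issue: the eldest block was ~600,000 behind the tip,
	// roughly three times the configured retention window.
	observedElder := latestLedger - 600_000

	fmt.Println("expected elder:", expectedElder) // ~200,000 behind the tip
	fmt.Println("observed elder:", observedElder) // ~600,000 behind the tip
}
```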
Current workaround
Manually run horizon db reap, then stop Horizon and Core and perform a Postgres vacuum to reclaim disk space.
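As a hedged sketch of scripting that workaround (not an official tool), the following Go program shells out to horizon db reap and then to psql for a VACUUM FULL. The database name "horizon" and the assumption that both binaries are on PATH are illustrative, and Horizon and Core still have to be stopped separately before the vacuum.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streams its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s failed: %v", name, err)
	}
}

func main() {
	// Step 1: delete history rows older than the retention window.
	run("horizon", "db", "reap")

	// Step 2: with Horizon and stellar-core stopped (not done here),
	// reclaim the freed space. Plain VACUUM only marks space as reusable;
	// VACUUM FULL rewrites the tables and returns space to the OS, but it
	// takes an exclusive lock. "horizon" is an assumed database name.
	run("psql", "-d", "horizon", "-c", "VACUUM FULL;")
}
```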