agent/cache: Store leases in-order in persistent cache so that restore respects dependencies #12843
Conversation
One last question, but otherwise looks great!
```go
if err := tx.DeleteBucket([]byte(v1BucketType)); err != nil {
	return fmt.Errorf("failed to clean up %s bucket during v1 to v2 schema migration: %w", v1BucketType, err)
}
```
Does this mean we need to do the migration once more? Could we end up in a state where a migration will never succeed? If so, perhaps it would make sense to scrap the db and start fresh.
If the migration fails, something super weird has happened because migrations should be very rare (we only explicitly support persistent stores in k8s today) and failures should also be very rare because the schema creation/migration code is very lenient. So I'd prefer that an operator gets a chance to clean up/debug before we wipe everything automatically.
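For context on the rollback semantics being discussed, here is a hypothetical sketch, assuming bbolt's transactional `Update`; the `migrateV1ToV2` helper and the bucket-name constants are illustrative placeholders, not the actual Vault agent code. Because the copy-then-delete runs inside a single transaction, any failure rolls everything back and leaves the v1 bucket intact, so an operator can inspect the state and the next startup can simply retry the migration.

```go
package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

// Placeholder bucket names; the real constants live elsewhere in Vault.
const (
	v1BucketType = "v1-bucket"
	v2BucketType = "v2-bucket"
)

// migrateV1ToV2 copies all v1 entries into the v2 bucket and then deletes
// the v1 bucket, all within one bbolt transaction. If any step fails, the
// whole transaction rolls back and the v1 data is untouched.
func migrateV1ToV2(db *bolt.DB) error {
	return db.Update(func(tx *bolt.Tx) error {
		v1 := tx.Bucket([]byte(v1BucketType))
		if v1 == nil {
			// Already migrated (or a fresh DB): nothing to do.
			return nil
		}
		v2, err := tx.CreateBucketIfNotExists([]byte(v2BucketType))
		if err != nil {
			return err
		}
		// Copy every entry; an error here aborts the transaction, so the
		// migration can run again on the next startup.
		if err := v1.ForEach(func(k, v []byte) error {
			return v2.Put(k, v)
		}); err != nil {
			return err
		}
		if err := tx.DeleteBucket([]byte(v1BucketType)); err != nil {
			return fmt.Errorf("failed to clean up %s bucket during v1 to v2 schema migration: %w", v1BucketType, err)
		}
		return nil
	})
}

func main() {
	db, err := bolt.Open("agent-cache.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := migrateV1ToV2(db); err != nil {
		log.Fatal(err)
	}
}
```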
An alternative to #12765. Here we rely on BoltDB storing, and iterating over, the keys in a bucket in byte-slice order, and we use an auto-incrementing index to store each lease under a key that reflects the order in which it was created. When we then restore from persistent storage, leases are automatically restored in dependency order.
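A minimal, self-contained sketch of that technique using `go.etcd.io/bbolt` directly; the bucket name and lease payloads are illustrative, not the actual agent code. Each entry is stored under the bucket's `NextSequence` counter encoded big-endian, so byte-slice key order matches insertion order, and `ForEach` replays entries oldest-first on restore.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("cache.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Store each lease under the bucket's next auto-incrementing sequence
	// number, encoded big-endian so that byte order matches numeric order.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("leases"))
		if err != nil {
			return err
		}
		for _, lease := range []string{"auth-token", "child-secret"} {
			seq, err := b.NextSequence()
			if err != nil {
				return err
			}
			key := make([]byte, 8)
			binary.BigEndian.PutUint64(key, seq)
			if err := b.Put(key, []byte(lease)); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}

	// Restore: ForEach walks keys in byte-slice order, so leases come back
	// in creation order, parents before the secrets that depend on them.
	err = db.View(func(tx *bolt.Tx) error {
		return tx.Bucket([]byte("leases")).ForEach(func(k, v []byte) error {
			fmt.Printf("restoring lease %d: %s\n", binary.BigEndian.Uint64(k), v)
			return nil
		})
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

The big-endian encoding is the load-bearing detail: with little-endian keys, byte-slice order would stop matching numeric order as soon as the counter crossed a byte boundary, and the restore order would no longer follow creation order.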