Environment:
Vault Config File:
Startup Log Output:
This is repeating every 2 seconds or so.
Expected Behavior:
I expect Vault to crash and not try to acquire the lock again.
Actual Behavior:
Vault loops through the behavior described in the logs: no leader is ever elected, and it fills the leader key in Consul on each attempt without cleaning up.
Steps to Reproduce:
Ok, this one is kinda tricky: we ended up with two sets of Consul data imported into the same KV.
Create a Vault cluster on any storage backend
Create an entity with an auth backend
Back up the storage and deploy it somewhere with another Vault cluster on separate storage
Modify any entity on cluster 2
Back up cluster 2's data and import it into cluster 1
Try to start cluster 1
Look at the //core/leader path in your storage
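For the last step, the accumulated leader entries can be listed straight from Consul's KV store. A minimal sketch, assuming Vault's default `vault/` path prefix for the Consul storage backend:

```shell
# List every leader advertisement Vault has written; one orphaned
# entry per failed election attempt accumulates under this prefix.
consul kv get -recurse vault/core/leader
```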
Important Factoids:
I totally understand this is wrong in many ways.
I just wanted to raise the infinite creation of leader entries in the case of bogus storage data.
I'm not sure whether Vault should just crash, or still try to get the lock after seeing there is a conflict in the entity data.
(All nodes of the cluster are impacted the same way.)
References:
I'm willing to look at implementing a patch if you identify this as a real bug.
There isn't really much that we can do here, because when a new Vault takes leadership it needs to write its information for other HA nodes to find. The only time that info can be safely cleaned out is when grabbing leadership is successful. So it's a bit chicken and egg, but as you noted in normal usage this should never really occur. Thanks for calling attention to it though!