Bug Description
When a relation between a coordinator (say Mimir) and S3 is removed, the workers' charm config is not updated (the S3 config section is not removed) and they remain in active/idle status instead of blocked with a meaningful message.
To Reproduce
Deploy a coordinator
Deploy a recommended amount of workers
Deploy S3 integrator
Relate coordinator to workers
Relate coordinator to S3
Verify everything is active/idle
Remove relation between S3 and coordinator
The S3 config is still present in the workers' config, and the workers remain active/idle.
Environment
.
Relevant log output
.
Additional context
No response
Huh.
This is probably because we never got around to implementing the 'shut down the workers when the coordinator becomes incoherent' behaviour that was originally designed.
What's probably going on is:
the coordinator runs __init__, realizes it is incoherent, and refuses to process the event further
the coordinator never updates the worker configs
the workers keep executing (but the workload, presumably, goes down)
the workers only notice on the next update-status that the workload is down (assuming we have pebble checks implemented)
Likely, the solution is not to simply return from the coordinator when it is incoherent, but to stop the workers by dropping all config.
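The proposed fix can be sketched with a simplified model of the coordinator/worker relation. Note this is an illustration only: the class and method names here (Coordinator, Worker, receive_config, reconcile) are hypothetical, not the actual charm-library API, and the coherence check is reduced to "is the S3 relation present".

```python
class Worker:
    """Hypothetical stand-in for a worker charm consuming coordinator config."""

    def __init__(self):
        self.config = {}
        self.status = "unknown"

    def receive_config(self, config):
        # A worker handed empty config has nothing to run: block with a
        # meaningful message instead of staying active/idle on stale S3 config.
        self.config = config
        if not config:
            self.status = "blocked: coordinator is incoherent (missing S3)"
        else:
            self.status = "active/idle"


class Coordinator:
    """Hypothetical stand-in for the coordinator charm."""

    def __init__(self, workers, s3_related):
        self.workers = workers
        self.s3_related = s3_related

    def is_coherent(self):
        return self.s3_related

    def reconcile(self):
        if not self.is_coherent():
            # Instead of returning early (which leaves stale config in the
            # relation data), push empty config so every worker stops and blocks.
            for worker in self.workers:
                worker.receive_config({})
            return
        for worker in self.workers:
            # Placeholder config; real content would come from the S3 relation.
            worker.receive_config({"s3": {"endpoint": "..."}})
```

With this shape, removing the S3 relation and re-running reconcile drops the workers' config and moves them to blocked, rather than leaving them active/idle.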