Deleting an age-based Bigtable GCPolicy for a replicated cluster with kubectl hangs #542
Hi @fosky94, sorry for the late reply. I have been able to reproduce the issue. It seems that the hanging operation is buried in the Terraform implementation of google_bigtable_gc_policy, which has been continuously retrying on the following error for some reason. I'll dig a little deeper and circle back.
Filed an issue in Terraform for inquiry and tracking: hashicorp/terraform-provider-google#10132
I believe this issue should be closed once we update KCC to use the latest TF provider. See hashicorp/terraform-provider-google#10132.
Deleting an age-based Bigtable GCPolicy for a replicated cluster with kubectl hangs instead of throwing an error.
To reproduce this issue, declaratively create an age-based GCPolicy for a Bigtable instance with more than one cluster. When the GCPolicy is deleted, it stays in a `deleted` state and the command hangs forever. Because deleting age-based garbage collection policies is prevented for replicated clusters, this should fail and throw an error instead of reporting that it deleted the GCPolicy and hanging. An example of how the cbt CLI displays this error can be found below:
Steps to reproduce this issue:
Create an instance with 2 clusters using the .yaml file attached at the bottom of this issue.
$ kubectl -n<REDACTED> apply -f <file>.yaml
(optional) Check the BigtableInstance status:
$ kubectl -n<REDACTED> get BigtableInstances <REDACTED>
(optional) Run cbt to check the GCPolicies (note: you might have to wait a while for everything to take effect):
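For example, assuming cbt is pointed at the right project and instance (the project, instance, and table names here are placeholders):
$ cbt -project <PROJECT_ID> -instance <INSTANCE_ID> ls <TABLE_NAME>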
Delete the GCPolicy:
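One way to do this (the resource name is a placeholder):
$ kubectl -n<REDACTED> delete BigtableGCPolicy <GC_POLICY_NAME>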
Ctrl-C the hanging command and get the status of the GCPolicy:
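For example:
$ kubectl -n<REDACTED> get BigtableGCPolicy <GC_POLICY_NAME> -o yaml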
As expected, rerunning cbt shows the policy is still there:
If you edit the resource and remove the finalizer, the deletion completes, but cbt still shows the policy, so the deletion wasn't actually successful.
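For reference, one way to remove the finalizer without an interactive edit is a merge patch like the following (the resource name is a placeholder):
$ kubectl -n<REDACTED> patch BigtableGCPolicy <GC_POLICY_NAME> --type=merge -p '{"metadata":{"finalizers":[]}}'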
Request:
Would it be possible to return an error similar to the one cbt displays?
Thank you in advance! :)
YAML file:
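The original attachment isn't reproduced here. As a rough sketch only, a manifest along these lines would create a two-cluster instance, a table, and an age-based GCPolicy; the names, zones, node counts, and the exact shape of the GC-policy fields are assumptions based on the Config Connector Bigtable resource schemas and may differ from the file actually used:

```yaml
apiVersion: bigtable.cnrm.cloud.google.com/v1beta1
kind: BigtableInstance
metadata:
  name: example-instance          # placeholder name
spec:
  displayName: example-instance
  cluster:
    # Two clusters make the instance replicated, which is what triggers the issue.
    - clusterId: example-instance-c1
      zone: us-central1-a
      numNodes: 1
      storageType: SSD
    - clusterId: example-instance-c2
      zone: us-central1-b
      numNodes: 1
      storageType: SSD
---
apiVersion: bigtable.cnrm.cloud.google.com/v1beta1
kind: BigtableTable
metadata:
  name: example-table             # placeholder name
spec:
  instanceRef:
    name: example-instance
  columnFamily:
    - family: cf1
---
apiVersion: bigtable.cnrm.cloud.google.com/v1beta1
kind: BigtableGCPolicy
metadata:
  name: example-gc-policy         # placeholder name
spec:
  instanceRef:
    name: example-instance
  tableRef:
    name: example-table
  columnFamily: cf1
  # Age-based policy; the exact field layout here is an assumption.
  maxAge:
    duration: "168h"
```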