
Update the make helmfile/destroy/all command for helm3 #389

Closed
wants to merge 4 commits

Conversation

willgraf
Contributor

helm3 changes the `helm delete` command to require the namespace of the helm deployment, which breaks our current implementation. Fortunately, `helmfile destroy` deletes all installed helm charts regardless of namespace, and is a much simpler command than piping each release name and namespace to `helm delete`.

This issue was not discovered during the helm 3 migration because the make command only deleted the deepcell namespace; the other pods were destroyed during cluster destruction and had no PVCs left stranded.

namespace is now a required field, which is handled with helmfile.
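
The change described above might look roughly like this in the Makefile (the target names and the helm2-era recipe are illustrative assumptions, not the repository's actual contents):

```make
# Hypothetical sketch of the make target change, not the exact Makefile in this repo.

# helm2-era approach: list every release and pipe each name to helm delete.
# This breaks under helm3, where `helm delete` also requires --namespace.
helmfile/destroy/all/helm2:
	helm list --short | xargs -L1 helm delete --purge

# helm3 approach: `helmfile destroy` removes every release declared in
# helmfile.yaml, regardless of which namespace it was installed into.
helmfile/destroy/all:
	helmfile destroy
```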
@willgraf
Contributor Author

This PR is blocked by a prometheus-operator==8.12.3 issue when deleting CRDs installed by prometheus operator. This should be resolved by #317.

@willgraf willgraf added the bugfix Fix something that is broken label Oct 19, 2020
@willgraf
Contributor Author

The issue is NOT resolved. Despite prometheus-operator being upgraded, helmfile destroy does NOT successfully delete it:

```
Error: uninstallation completed with 1 error(s): unable to build kubernetes objects for delete: [unable to recognize "": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1", unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"]
```

I believe a manual workaround involves turning off cleanupCustomResource and manually deleting the CRDs.
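
A sketch of that manual workaround might look like the following (the release name, namespace, and CRD list are assumptions based on a typical prometheus-operator install; verify the actual CRD names with `kubectl get crd` first):

```shell
# Illustrative only: names below are assumptions, not taken from this repo.

# 1. Uninstall the release (with cleanupCustomResource disabled, the chart
#    will not try to remove the custom resources itself):
helm uninstall prometheus-operator --namespace monitoring

# 2. Delete the stranded monitoring.coreos.com CRDs by hand:
for crd in alertmanagers prometheuses prometheusrules servicemonitors; do
  kubectl delete crd "${crd}.monitoring.coreos.com"
done
```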


More work is required to figure out how to reliably delete the helm charts during cluster tear-down.

@willgraf willgraf added the wip label Oct 22, 2020
@willgraf
Contributor Author

This issue is being worked on in PR #406.

@willgraf willgraf closed this Dec 12, 2020
@willgraf willgraf deleted the bugfix/helm-delete branch December 12, 2020 03:02