Feature/enrich grafana controller manager labels #1373
Conversation
First of all, thanks for your PR.
When creating the latest version of the operator, I don't think we gave the operator labels any thought, so from my point of view you can remove control-plane: controller-manager.
The reason why the makefile generates new config is that we no longer keep the OLM yaml in sync in this repo; this is to make it easier to support disconnected mode. See #1234 for more info.
I don't have time to clone down your branch right now; will this affect the kustomize yaml in any way?
Is there any best practice around operator labels in OLM?
To me, it feels a bit strange that we have to add app.kubernetes.io/managed-by: olm
and it wouldn't surprise me if OLM did this automatically.
But I don't have access to OCP clusters anymore, so I can't test.
app.kubernetes.io/managed-by: olm
app.kubernetes.io/name: grafana-operator-controller-manager
app.kubernetes.io/part-of: grafana-operator
app.kubernetes.io/version: v5.6.0
I would prefer not to have the version as part of this, since then we would have to add logic to keep it updated.
I also don't think we gain anything by adding it.
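Concretely, that suggestion would leave the block looking something like this (the same lines as above, just without the version label):

app.kubernetes.io/managed-by: olm
app.kubernetes.io/name: grafana-operator-controller-manager
app.kubernetes.io/part-of: grafana-operator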
Hey @NissesSenap, ah, okay, thanks. Good to know.

Regarding best practices for operator labels: I just found some information on labels for multiple architectures [1]. Besides that, I haven't found anything yet. In the next days, when I find some time, I'll check whether other operators have interesting label setups.

Regarding your question about whether OLM automatically creates the label app.kubernetes.io/managed-by: olm: I tried the operator in my Kind cluster, which includes OLM, and OLM didn't create any default labels for the deployment. So, I guess not, but I will double-check. (Why do you need an OCP cluster for this? Isn't the OLM the same, or am I missing something?)

Regarding your question about whether the changes will affect the kustomize files: I don't think they affect the kustomize files in any way. The question is whether we should also add the new labels as defaults to the kustomize deployment:

# deploy/kustomize/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-operator-controller-manager
  namespace: default
  labels:
    app: grafana-operator-controller-manager  # <--
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: grafana-operator-controller-manager

Another point: should we adjust the Helm template? Should we also add the additional labels to the Helm template, or should we leave it as it is?

# deploy/helm/grafana-operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "grafana-operator.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels:
    {{- include "grafana-operator.labels" . | nindent 4 }}
    {{- with .Values.additionalLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}

# deploy/helm/grafana-operator/templates/_helpers.tpl
{{/*
Common labels
*/}}
{{- define "grafana-operator.labels" -}}
helm.sh/chart: {{ include "grafana-operator.chart" . }}
{{ include "grafana-operator.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

[1] https://olm.operatorframework.io/docs/advanced-tasks/ship-operator-supporting-multiarch/
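As a side note on the kustomize question above: instead of hard-coding the labels in the deployment yaml, they could also be set once in the kustomization file. A rough sketch, assuming kustomize v4.1+ and that the base kustomization sits next to the deployment at deploy/kustomize/base/kustomization.yaml (path and label values are illustrative, not taken from the repo):

# deploy/kustomize/base/kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
labels:
  - pairs:
      app.kubernetes.io/name: grafana-operator-controller-manager
      app.kubernetes.io/part-of: grafana-operator
    # keep selectors untouched so the existing Deployment keeps matching its pods
    includeSelectors: false

The includeSelectors: false part matters because a Deployment's matchLabels cannot be changed on an existing object.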
Hi @mikaayil, sorry for the slow answer. Thanks for a well-described answer to all my questions. As you point out, it's of course possible to install OLM in k8s; I just have never tried it. And I think the labels described in the k8s docs would be good to follow, as you say. Could you update the PR to reflect this and I'm happy to merge it.
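For reference, the label set recommended in the Kubernetes docs looks roughly like this when applied to the operator (a sketch only; the instance and version labels could be dropped per the comments above):

app.kubernetes.io/name: grafana-operator
app.kubernetes.io/instance: grafana-operator
app.kubernetes.io/version: v5.6.0
app.kubernetes.io/component: controller-manager
app.kubernetes.io/part-of: grafana-operator
app.kubernetes.io/managed-by: olm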
Hey @NissesSenap, thank you as well, and you're welcome. I've added the labels to both the Kustomize deployment and the bundle configuration. Adjusting the Helm deployment wasn't necessary since it already comes with appropriate labels.
For the Kustomize deployment I decided to only add one of the labels; I believe the other labels might be too much after reading the Helm documentation.
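A rough sketch of what that could look like in the kustomize deployment (the label keys shown are illustrative, not copied from the final diff):

# deploy/kustomize/base/deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-operator-controller-manager
  labels:
    app: grafana-operator-controller-manager
    app.kubernetes.io/name: grafana-operator-controller-manager
    app.kubernetes.io/part-of: grafana-operator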
Nice work @mikaayil. I decided to remove that label from kustomize, since kustomize isn't a deployment tool, it's a templating tool. In my mind, this field should reflect whatever tool actually deploys the operator.
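For illustration, the managed-by value would then differ per install method (hypothetical values, not taken from the repo):

# installed through OLM (bundle/CSV)
app.kubernetes.io/managed-by: olm
# installed through the Helm chart ({{ .Release.Service }} renders as "Helm")
# app.kubernetes.io/managed-by: Helm
# plain kustomize build piped to kubectl apply: no managed-by label is added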
I've enhanced the controller manager with additional labels.
I also executed "make all" before creating the PR and tested that the controller-manager
pod has the new labels in my kind cluster.
While working on this PR, I noticed that the bundle wasn't updated after the commit from
@HubertStefanski (Commit: 171038b Title: Update go dependencies).
Because of this, I first executed "make bundle" before implementing my changes. So,
my initial commit "3eb81dc" was simply to build the new bundle. Am I missing something
when updating the bundle? My assumption is that after changing any CRDs or the CSV in the
config/ folder, the bundle should also be updated.
Regarding the new labels:
I think it's a good idea to talk about whether the labels I added are a good fit. So, I
just want to start the discussion with this PR. I didn't remove the label "control-plane:
controller-manager". Should we consider removing this label in the long run?
Out of curiosity: What was the original purpose of setting this label in the first place?
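For context, control-plane: controller-manager comes from the standard kubebuilder/operator-sdk scaffolding, where it ties the manager Deployment to its pods; a minimal sketch of how that scaffolding usually looks (shown for context, not copied from this repo):

# config/manager/manager.yaml (typical kubebuilder scaffolding)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  labels:
    control-plane: controller-manager
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager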