
Disabled ports are still enabled in the generated service #1353

Closed · ascopes opened this issue Sep 20, 2024 · 6 comments
Labels: chart:collector (Issue related to opentelemetry-collector helm chart)

ascopes commented Sep 20, 2024

I have the following configuration:

mode: daemonset
ports:
  # Disable default ports.
  jaeger-compact:
    enabled: false
  jaeger-grpc:
    enabled: false
  jaeger-thrift:
    enabled: false
  otlp:
    enabled: false
  otlp-http:
    enabled: false
  zipkin:
    enabled: false
service:
  enabled: true

From this, I'd expect these ports to be disabled entirely and left out of the generated resources. The chart's own templates appear to back this up:

  • In service.yaml:

  {{- $ports := include "opentelemetry-collector.servicePortsConfig" . }}
  {{- if $ports }}
  ports:
    {{- $ports | nindent 4}}
  {{- end }}

  • In _config.tpl:

{{- define "opentelemetry-collector.servicePortsConfig" -}}
{{- $ports := deepCopy .Values.ports }}
{{- range $key, $port := $ports }}
{{- if $port.enabled }}
- name: {{ $key }}
  port: {{ $port.servicePort }}
  targetPort: {{ $port.containerPort }}
  protocol: {{ $port.protocol }}
  {{- if $port.appProtocol }}
  appProtocol: {{ $port.appProtocol }}
  {{- end }}
{{- if $port.nodePort }}
  nodePort: {{ $port.nodePort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
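
For what it's worth, a quick way to check what this template renders for a given values file is to render only the Service from the chart. The command below is a sketch assuming Helm 3, the upstream chart repo, and the values file from above:

# Renders only templates/service.yaml so the ports list is easy to inspect
helm template otel open-telemetry/opentelemetry-collector \
  --version 0.105.1 \
  --values ./values.yaml \
  --show-only templates/service.yaml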

However, when I deploy with service.enabled: true, I can see that these disabled ports are still exposed on the generated Service resource:

~ kubectl describe service -n test otel
...
Port:              jaeger-compact  6831/UDP
TargetPort:        6831/UDP
Endpoints:         xxx.xxx.xxx.xxx:6831,xxx.xxx.xxx.xxx:6831,xxx.xxx.xxx.xxx:6831
Port:              jaeger-grpc  14250/TCP
TargetPort:        14250/TCP
Endpoints:         xxx.xxx.xxx.xxx:14250,xxx.xxx.xxx.xxx:14250,xxx.xxx.xxx.xxx:14250
Port:              jaeger-thrift  14268/TCP
TargetPort:        14268/TCP
Endpoints:         xxx.xxx.xxx.xxx:14268,xxx.xxx.xxx.xxx:14268,xxx.xxx.xxx.xxx:14268
Port:              otlp  4317/TCP
TargetPort:        4317/TCP
Endpoints:         xxx.xxx.xxx.xxx:4317,xxx.xxx.xxx.xxx:4317,xxx.xxx.xxx.xxx:4317
Port:              otlp-http  4318/TCP
TargetPort:        4318/TCP
Endpoints:         xxx.xxx.xxx.xxx:4318,xxx.xxx.xxx.xxx:4318,xxx.xxx.xxx.xxx:4318
Port:              zipkin  9411/TCP
TargetPort:        9411/TCP
Endpoints:         xxx.xxx.xxx.xxx:9411,xxx.xxx.xxx.xxx:9411,xxx.xxx.xxx.xxx:9411
Session Affinity:  None
Events:            <none>

The config is definitely being pulled in from my values.yaml, since adding new ports there makes them show up in this list as expected.

The redacted IP addresses are allocated by my network CNI plugin, so these are externally-facing IP addresses on the network running my cluster, which is not desired behaviour.

  • Image: otel/opentelemetry-collector-contrib:0.109.0
  • Chart: 0.105.1
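
To confirm what the live object exposes, the port list on the generated Service can also be printed directly; the namespace and name below are taken from the kubectl describe call above:

# Prints one "name port/protocol" line per port on the Service
kubectl get service -n test otel \
  -o jsonpath='{range .spec.ports[*]}{.name}{" "}{.port}{"/"}{.protocol}{"\n"}{end}'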
povilasv (Contributor) commented

Hmm, we do have a check for whether the port is enabled:
https://github.com/open-telemetry/opentelemetry-helm-charts/blob/main/charts/opentelemetry-collector/templates/_config.tpl#L251-L267

Maybe there's some issue with your values.yaml? Are you using the chart as a subchart? Could you share it?
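
If it is being pulled in as a subchart, keep in mind that the overrides have to be nested under the dependency's name (or alias) in the parent chart's values.yaml; values left at the top level are not passed down, and the default ports would then stay enabled. A rough sketch of what that nesting looks like:

# Hypothetical parent-chart values.yaml when opentelemetry-collector is a dependency
opentelemetry-collector:
  mode: daemonset
  ports:
    jaeger-compact:
      enabled: false
    # ...and so on for the other default ports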

TylerHelmuth added the chart:collector label on Sep 23, 2024

ascopes commented Sep 23, 2024

I'm just using the latest chart from the releases with the settings above (there are other settings present that I'd need to redact the next time I'm at my work machine, but based on the logic they shouldn't affect this). No other modifications are present.


mowies commented Oct 9, 2024

I tried to reproduce this with the values you provided but wasn't able to.

Values
mode: daemonset
image:
  repository: "otel/opentelemetry-collector-k8s"
command:
  name: "otelcol-k8s"
ports:
  # Disable default ports.
  jaeger-compact:
    enabled: false
  jaeger-grpc:
    enabled: false
  jaeger-thrift:
    enabled: false
  otlp:
    enabled: false
  otlp-http:
    enabled: false
  zipkin:
    enabled: false
service:
  enabled: true

Command:

helm template otelcol open-telemetry/opentelemetry-collector --version v0.105.1 --values ./values.yaml > test.yml

That gave me the following k8s service:

apiVersion: v1
kind: Service
metadata:
  name: otelcol-opentelemetry-collector
  namespace: default
  labels:
    helm.sh/chart: opentelemetry-collector-0.105.1
    app.kubernetes.io/name: opentelemetry-collector
    app.kubernetes.io/instance: otelcol
    app.kubernetes.io/version: "0.109.0"
    app.kubernetes.io/managed-by: Helm
    
    component: agent-collector
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: opentelemetry-collector
    app.kubernetes.io/instance: otelcol
    component: agent-collector
  internalTrafficPolicy: Local

As you can see, no ports were generated.
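
For completeness, pulling just the Service out of a multi-document render like test.yml can be done with something along these lines (assuming mikefarah's yq v4 is installed):

# Keeps only the documents whose kind is Service
yq 'select(.kind == "Service")' test.yml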

crutonjohn (Contributor) commented

I am unable to reproduce this issue locally either.

Are you still running into this, @ascopes?


ascopes commented Nov 16, 2024

I will have to check next week when I am at my work machine! I will let you know.


ascopes commented Nov 18, 2024

Looks like it is working now... that is really bizarre! Thanks for the support.

ascopes closed this as not planned (won't fix, can't repro, duplicate, stale) on Nov 18, 2024