
Fluentd not able to run as non root #1908

Open
Tilusch opened this issue Jan 9, 2025 · 2 comments
Labels
bug Something isn't working

Comments


Tilusch commented Jan 9, 2025

Describe the bug:
When running fluentd as non-root with a different user, the statefulset does not come up. Is there no way to run fluentd as non-root when using the logging operator resources?

Expected behaviour:
Fluentd is able to run as non root and a specified user.

Steps to reproduce the bug:
Create the following Logging resource and access the fluentd pod logs; the output should be similar to:

/usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:121:in `initialize': Permission denied @ rb_sysopen - /fluentd/log/out (Errno::EACCES)
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:121:in `open'
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:121:in `create_logfile'
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:110:in `rescue in open_logfile'
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:106:in `open_logfile'
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:85:in `set_dev'
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:19:in `initialize'
	from /usr/local/bundle/gems/fluentd-1.17.1/lib/fluent/supervisor.rb:707:in `new'
	from /usr/local/bundle/gems/fluentd-1.17.1/lib/fluent/supervisor.rb:707:in `setup_global_logger'
	from /usr/local/bundle/gems/fluentd-1.17.1/lib/fluent/supervisor.rb:624:in `configure'
	from /usr/local/bundle/gems/fluentd-1.17.1/lib/fluent/command/fluentd.rb:351:in `<top (required)>'
	from <internal:/usr/local/lib/ruby/3.3.0/rubygems/core_ext/kernel_require.rb>:136:in `require'
	from <internal:/usr/local/lib/ruby/3.3.0/rubygems/core_ext/kernel_require.rb>:136:in `require'
	from /usr/local/bundle/gems/fluentd-1.17.1/bin/fluentd:15:in `<top (required)>'
	from /usr/local/bundle/bin/fluentd:25:in `load'
	from /usr/local/bundle/bin/fluentd:25:in `<main>'
/usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:108:in `initialize': No such file or directory @ rb_sysopen - /fluentd/log/out (Errno::ENOENT)
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:108:in `open'
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:108:in `open_logfile'
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:85:in `set_dev'
	from /usr/local/bundle/gems/logger-1.6.3/lib/logger/log_device.rb:19:in `initialize'
	from /usr/local/bundle/gems/fluentd-1.17.1/lib/fluent/supervisor.rb:707:in `new'
	from /usr/local/bundle/gems/fluentd-1.17.1/lib/fluent/supervisor.rb:707:in `setup_global_logger'
	from /usr/local/bundle/gems/fluentd-1.17.1/lib/fluent/supervisor.rb:624:in `configure'
	from /usr/local/bundle/gems/fluentd-1.17.1/lib/fluent/command/fluentd.rb:351:in `<top (required)>'
	from <internal:/usr/local/lib/ruby/3.3.0/rubygems/core_ext/kernel_require.rb>:136:in `require'
	from <internal:/usr/local/lib/ruby/3.3.0/rubygems/core_ext/kernel_require.rb>:136:in `require'
	from /usr/local/bundle/gems/fluentd-1.17.1/bin/fluentd:15:in `<top (required)>'
	from /usr/local/bundle/bin/fluentd:25:in `load'
	from /usr/local/bundle/bin/fluentd:25:in `<main>'

Additional context:

  • I tried to add the following capability, but without success: DAC_READ_SEARCH

Environment details:

  • Kubernetes version (e.g. v1.15.2): v1.31.2
  • Cloud-provider/provisioner (e.g. AKS, GKE, EKS, PKE etc): AKS
  • logging-operator version (e.g. 2.1.1): 5.0.1
  • Install method (e.g. helm or static manifests): Helm
  • Logs from the misbehaving component (and any other relevant logs):
  • Resource definition (possibly in YAML format) that caused the issue, without sensitive data:
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: logging-operator
    app.kubernetes.io/version: 5.0.1
    helm.sh/chart: logging-operator-5.0.1
  name: logging-operator
spec:
  clusterDomain: cluster.local.
  controlNamespace: logging-operator
  enableRecreateWorkloadOnImmutableFieldChange: true
  fluentd:
    bufferVolumeImage:
      pullPolicy: Always
    bufferVolumeMetrics:
      prometheusRules: true
      serviceMonitor: true
    bufferVolumeResources:
      limits:
        cpu: 50m
        memory: 50M
      requests:
        cpu: 1m
        memory: 10M
    configCheckResources:
      limits:
        cpu: 150m
        memory: 128Mi
      requests:
        cpu: 50m
        memory: 32Mi
    configReloaderImage:
      pullPolicy: Always
    configReloaderResources:
      limits:
        cpu: 150m
        memory: 128Mi
      requests:
        cpu: 50m
        memory: 32Mi
    fluentOutLogrotate:
      age: '10'
      enabled: true
      path: /fluentd/log/out
      size: '10485760'
    image:
      pullPolicy: Always
    livenessProbe:
      exec:
        command:
          - /bin/sh
          - '-c'
          - >
            LIVENESS_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-300}; if [
            ! -e /buffers ]; then
              exit 1;
            fi; touch -d date -d "@$(($(date +%s) -
            $LIVENESS_THRESHOLD_SECONDS))" /tmp/marker-liveness; if [ -z "$(find
            /buffers -type d -newer /tmp/marker-liveness -print -quit)" ]; then
              exit 1;
            fi;
      initialDelaySeconds: 600
      periodSeconds: 60
    metrics:
      prometheusRules: true
      serviceMonitor: true
    readinessDefaultCheck:
      bufferFileNumber: false
      bufferFreeSpace: true
      bufferFreeSpaceThreshold: 90
      failureThreshold: 1
      initialDelaySeconds: 5
      periodSeconds: 30
      successThreshold: 3
      timeoutSeconds: 3
    scaling: {}
    security:
      podSecurityContext:
        fsGroup: 10013
        fsGroupChangePolicy: Always
        runAsGroup: 10013
        runAsNonRoot: true
        runAsUser: 10013
        supplementalGroups: []
        sysctls: []
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: false
        seLinuxOptions: {}
        seccompProfile:
          type: RuntimeDefault
        sysctls: []

/kind bug
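The trace shows fluentd failing to open `/fluentd/log/out`, which is the path configured under `fluentOutLogrotate` in the resource above. A minimal workaround sketch, assuming the operator skips the file redirection entirely when the feature is off, is to disable `fluentOutLogrotate` so fluentd logs to stdout and needs no write access to `/fluentd/log`:

```yaml
# Sketch only: disable file-based log rotation so fluentd does not
# try to open the root-owned /fluentd/log/out as the non-root user.
spec:
  fluentd:
    fluentOutLogrotate:
      enabled: false
```

With this off, the fluentd process log should be readable via `kubectl logs` on the pod instead of the rotated file.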

@Tilusch Tilusch added the bug Something isn't working label Jan 9, 2025
@csatib02
Member

Hey @Tilusch,

Did you check whether your config works when running as root?


Tilusch commented Jan 10, 2025

Hi @csatib02,
Indeed, the config works when I use the following securityContext instead and run as root:

      podSecurityContext:
        fsGroup: 10013
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: false
        seLinuxOptions: {}
        seccompProfile:
          type: RuntimeDefault
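If running as root is not acceptable, another avenue worth trying is to keep the non-root securityContext and mount a writable volume over `/fluentd/log`. This is a sketch under the assumption that the Logging CRD's fluentd spec supports an `extraVolumes` list with these field names; the `fsGroup: 10013` from the podSecurityContext should then make the `emptyDir` group-writable for the non-root user:

```yaml
# Sketch only: field names under extraVolumes are assumptions,
# check the Logging CRD reference for the exact schema.
spec:
  fluentd:
    extraVolumes:
      - containerName: fluentd   # assumed target container name
        path: /fluentd/log       # mount over the log directory
        volumeName: fluentd-log
        volume:
          emptyDir: {}
```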
