
[prometheus-kube-stack] : Error: failed to start container "node-exporter": Error response from daemon: path / is mounted on / but it is not a shared or slave mount #1726

Closed
Snehil03 opened this issue Jan 19, 2022 · 3 comments
Labels: bug, lifecycle/stale

Comments

@Snehil03
Describe the bug

Hi,

I am facing this issue again after deploying kube-state-metrics. I followed the solution provided in issue #467, but it still does not resolve the problem. My values.yaml looks like this:

```yaml
monitoring:
  enabled: true
  rbac:
    create: true
    pspEnabled: false
    pspAnnotations:
      ## Specify pod annotations
      ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
      ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
      ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
      ##
      # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
      # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
      apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'

  prometheus-node-exporter:
    hostRootFsMount: false
    hostNetwork: false
    hostPID: false
    image:
      pullPolicy: Always
    rbac:
      create: true
      pspEnabled: true
      pspAnnotations:
        apparmor.security.beta.kubernetes.io/defaultProfileName: "runtime/default"
      # Required to prevent escalations to root.
      allowPrivilegeEscalation: false
      # This is redundant with non-root + disallow privilege escalation,
      # but we can provide it for defense in depth.
      requiredDropCapabilities:
        - ALL
    containerSecurityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsUser:
        rule: MustRunAsNonRoot
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]
        drop: ["ALL"]
    resources:
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      limits:
        cpu: 10m
        memory: 100Mi
      requests:
        cpu: 5m
        memory: 50Mi
```

The deployment succeeds, but the node-exporter pod is stuck in CrashLoopBackOff.
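
For reference, a minimal way to confirm the failure from the CLI (standard kubectl; the namespace matches my install, and the label selector is an assumption based on the node-exporter subchart's defaults):

```shell
# List the node-exporter pods; STATUS shows CrashLoopBackOff
kubectl -n monitoring get pods -l app=prometheus-node-exporter

# The Events section of describe shows the failed container start, including
# the "path / is mounted on / but it is not a shared or slave mount" error
kubectl -n monitoring describe pod <node-exporter-pod-name>
```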

What's your helm version?

v3.4.1

What's your kubectl version?

v1.21.2

Which chart?

prometheus-community/kube-prometheus-stack

What's the chart version?

kube-prometheus-stack-30.0.1

What happened?

Hi,

This is my first time installing kube-state-metrics on docker-desktop, which will later need to be rolled out to Azure infrastructure. I installed it with the command below:

helm upgrade -i prome prometheus-community/kube-prometheus-stack -n monitoring -f config/local/values.yaml --set nodeExporter.hostRootfs=false
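
(A note on the override path: in kube-prometheus-stack, node-exporter settings are passed to the subchart under the `prometheus-node-exporter` key, so a flag along these lines may be what actually reaches the DaemonSet; this is a sketch assuming the subchart's boolean `hostRootFsMount` value at this chart version:)

```shell
helm upgrade -i prome prometheus-community/kube-prometheus-stack -n monitoring \
  -f config/local/values.yaml \
  --set prometheus-node-exporter.hostRootFsMount=false
```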

values.yaml:

(values.yaml is identical to the block shown under "Describe the bug" above.)

The node-exporter pod then fails to start, due to the hostfs mount issue.
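
A quick way to check whether the override actually landed (plain helm/kubectl; the DaemonSet name assumes the chart's default naming for a release called `prome`):

```shell
# Show the user-supplied values helm applied to the release
helm get values prome -n monitoring

# Check whether the rendered DaemonSet still mounts / with mountPropagation
kubectl -n monitoring get daemonset prome-prometheus-node-exporter -o yaml \
  | grep -B2 -A2 mountPropagation
```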

What you expected to happen?

It should deploy seamlessly, and I should see monitoring on my docker-desktop.

How to reproduce it?

Execute the command below on docker-desktop:

helm upgrade -i prome prometheus-community/kube-prometheus-stack -n monitoring -f config/local/values.yaml --set nodeExporter.hostRootfs=false

Enter the changed values of values.yaml?

(Same values.yaml as shown above; the relevant change is `hostRootFsMount: false` under `prometheus-node-exporter`.)

Enter the command that you execute and failing/misfunctioning.

helm upgrade -i prome prometheus-community/kube-prometheus-stack -n monitoring -f config/local/values.yaml --set nodeExporter.hostRootfs=false

Anything else we need to know?

no

@Snehil03 added the bug label on Jan 19, 2022
@zanhsieh zanhsieh changed the title [name of the chart e.g. prometheus-kube-stack] : Error: failed to start container "node-exporter": Error response from daemon: path / is mounted on / but it is not a shared or slave mount [prometheus-kube-stack] : Error: failed to start container "node-exporter": Error response from daemon: path / is mounted on / but it is not a shared or slave mount Jan 23, 2022
@davidyoti

Looks like there's a workaround here by disabling MountPropagation; it worked for me to resolve this issue. I'm not an expert here, so I'm sure there is more work to do to resolve this long term.
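
For context, this is roughly the part of the node-exporter DaemonSet that Docker Desktop rejects (a sketch of the rendered manifest, not an exact copy of the chart's output); the workaround amounts to dropping the `mountPropagation` field, or removing the mount entirely via `hostRootFsMount: false`:

```yaml
volumeMounts:
  - name: root
    mountPath: /host/root
    # Docker Desktop's / is not a shared or slave mount, so starting the
    # container with HostToContainer propagation on this mount fails
    mountPropagation: HostToContainer
    readOnly: true
```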

@stale

stale bot commented Feb 24, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale

stale bot commented Mar 10, 2022

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed Mar 10, 2022