[BUG] opensearch/opensearch-dashboard Failed to get saml header: Error: Error: failed parsing SAML config #284

Open
nomopo45 opened this issue Jul 8, 2022 · 12 comments
Labels: bug (Something isn't working)


nomopo45 commented Jul 8, 2022

Hello, I'm trying to use OpenSearch and OpenSearch Dashboards with SAML.

I get the following error when trying to connect:

Error: failed parsing SAML config
opensearch-dashboards-79b549c84b-mfqjk dashboards     at SecurityClient.getSamlHeader (/usr/share/opensearch-dashboards/plugins/securityDashboards/server/backend/opensearch_security_client.ts:176:15)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at runMicrotasks (<anonymous>)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at processTicksAndRejections (internal/process/task_queues.js:95:5)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at /usr/share/opensearch-dashboards/plugins/securityDashboards/server/auth/types/saml/routes.ts:65:30
opensearch-dashboards-79b549c84b-mfqjk dashboards     at Router.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:163:44)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at handler (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:124:50)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at exports.Manager.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at Object.internals.handler (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:46:20)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at exports.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:31:20)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at Request._lifecycle (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:371:32)
opensearch-dashboards-79b549c84b-mfqjk dashboards     at Request._execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:281:9)
opensearch-dashboards-79b549c84b-mfqjk dashboards {"type":"log","@timestamp":"2022-07-08T14:23:12Z","tags":["error","plugins","securityDashboards"],"pid":1,"message":"Failed to get saml header: Error: Error: failed parsing SAML config"}
opensearch-dashboards-79b549c84b-mfqjk dashboards {"type":"error","@timestamp":"2022-07-08T14:23:12Z","tags":[],"pid":1,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toError (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:143:19)\n    at HapiResponseAdapter.toHapiResponse (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:97:19)\n    at HapiResponseAdapter.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:92:17)\n    at Router.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:164:34)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)\n    at handler (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:124:50)\n    at exports.Manager.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n    at Object.internals.handler (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:371:32)\n    at Request._execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:281:9)"},"url":"http://opensearch-dashboards.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","message":"Internal Server Error"}
opensearch-dashboards-79b549c84b-mfqjk dashboards {"type":"response","@timestamp":"2022-07-08T14:23:12Z","tags":[],"pid":1,"method":"get","statusCode":500,"req":{"url":"/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","method":"get","headers":{"host":"opensearch-dashboards.mydomain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","cache-control":"max-age=0","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"none","sec-fetch-user":"?1","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"X.X.X.X","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"X.X.X.X","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"res":{"statusCode":500,"responseTime":90,"contentLength":9},"message":"GET /auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards 500 90ms - 9.0B"}
opensearch-dashboards-79b549c84b-mfqjk dashboards {"type":"response","@timestamp":"2022-07-08T14:23:12Z","tags":[],"pid":1,"method":"get","statusCode":401,"req":{"url":"/favicon.ico","method":"get","headers":{"host":"opensearch-dashboards.mydomain.com","accept":"image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","referer":"https://opensearch-dashboards.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"image","sec-fetch-mode":"no-cors","sec-fetch-site":"same-origin","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"X.X.X.X","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"X.X.X.X","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","referer":"https://opensearch-dashboards.mydomain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards"},"res":{"statusCode":401,"responseTime":1,"contentLength":9},"message":"GET /favicon.ico 401 1ms - 9.0B"}

You can see my configuration for the opensearch Helm chart below (irrelevant parts omitted):

      config.yml: |-
        _meta:
          type: "config"
          config_version: "2"
        config:
          dynamic:
            http:
              anonymous_auth_enabled: false
            authc:
              basic_internal_auth_domain:
                description: "Authenticate via HTTP Basic against internal users database"
                http_enabled: true
                transport_enabled: true
                order: 0
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: intern
              saml_auth_domain:
                order: 1
                description: "SAML provider"
                http_enabled: true
                transport_enabled: false
                http_authenticator:
                  type: saml
                  challenge: true
                  config:
                    idp:
                      metadata_file: "/usr/share/opensearch/plugins/opensearch-security/securityconfig/gsuite.xml"
                      entity_id: "https://accounts.google.com/o/saml2?idpid=XXXXXXXX"
                    sp:
                      entity_id: "https://opensearch-dashboards.mydomain.com"
                    kibana_url: "https://opensearch-dashboards.mydomain.com"
                    subject_key: NameID
                    roles_key: Role
                authentication_backend:
                  type: noop
      gsuite.xml: |-
        <?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://accounts.google.com/o/saml2?idpid=XXXXXXXX" validUntil="XXXXXXXXX">
          <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
            <md:KeyDescriptor use="signing">
              <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:X509Data>
                  <ds:X509Certificate>XXX</ds:X509Certificate>
                </ds:X509Data>
              </ds:KeyInfo>
            </md:KeyDescriptor>
            <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
            <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://accounts.google.com/o/saml2/idp?idpid=XXXXXXXX"/>
            <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://accounts.google.com/o/saml2/idp?idpid=XXXXXXXX"/>
          </md:IDPSSODescriptor>
        </md:EntityDescriptor>
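
One quick sanity check (the pod name below is a placeholder) is to confirm that the metadata file is mounted, readable, and well-formed at the exact path metadata_file points to, since an unreadable or malformed metadata file is a common cause of this parse error:

kubectl exec -it opensearch-cluster-master-0 -- \
  ls -l /usr/share/opensearch/plugins/opensearch-security/securityconfig/gsuite.xml
# Also make sure the XML header survived the Helm templating:
kubectl exec -it opensearch-cluster-master-0 -- \
  head -n 3 /usr/share/opensearch/plugins/opensearch-security/securityconfig/gsuite.xml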

And here for opensearch-dashboards:

config:
  # Default OpenSearch Dashboards configuration from docker image of Dashboards
  opensearch_dashboards.yml: |
    server:
      ssl: 
        enabled: "false"
      xsrf:
        allowlist: ["/_plugins/_security/saml/acs","/_plugins/_security/saml/acs/idpinitiated","/_plugins/_security/saml/logout","/_opendistro/_security/saml/acs/idpinitiated", "/_opendistro/_security/saml/acs", "/_opendistro/_security/saml/logout"]
    opensearch_security:
      auth:
        type: "saml"
    opensearch:
      ssl:
        verificationMode: "none"
      hosts: ["${var.elasticsearch-host}:9200"]
      username: "username"
      password: "password"

Do you have any idea why I'm getting this SAML parsing error?

Thanks a lot for your help!

nomopo45 added the bug (Something isn't working) and untriaged (Issues that have not yet been triaged) labels Jul 8, 2022
nomopo45 changed the title from "[BUG]opensearch/opensearch-dashboard" to "[BUG] opensearch/opensearch-dashboard Failed to get saml header: Error: Error: failed parsing SAML config" Jul 8, 2022
@marcinwito

I have the same problem. It only occurs after the 1.2.0 -> 2.x update. The same configuration works fine in version 1.2.0 but not in version 2.x.

@nomopo45 (Author)

> I have the same problem. It only occurs after the 1.2.0 -> 2.x update. The same configuration works fine in version 1.2.0 but not in version 2.x.

@marcinwito thanks a lot for the answer, I will try version 1.2.0, but are you talking about the OpenSearch version or the OpenSearch Dashboards version?

@marcinwito

> I have the same problem. It only occurs after the 1.2.0 -> 2.x update. The same configuration works fine in version 1.2.0 but not in version 2.x.
>
> @marcinwito thanks a lot for the answer, I will try version 1.2.0, but are you talking about the OpenSearch version or the OpenSearch Dashboards version?

Exactly these versions:

# opensearch version
os_version: "1.2.4"

# opensearch dashboards version
os_dashboards_version: "1.2.0"


nomopo45 commented Jul 13, 2022

After some tests where I changed the image tag, I now get different errors.

With opensearch-dashboards 1.2.0:

[2022-07-18T08:26:40,215][WARN ][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-cluster-master-2] No data for [actiongroups,roles,rolesmapping,tenants] while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, AUDIT]  (index=.opendistro_security and type=_doc)
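
This "No data for [...]" warning usually means the security index was never (fully) initialized. A sketch of how to initialize it by hand with securityadmin.sh, assuming the demo admin certificates (root-ca.pem, kirk.pem, kirk-key.pem) and the 1.x securityconfig path used above:

# Run inside a master pod, e.g. kubectl exec -it opensearch-cluster-master-0 -- bash
bash /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
  -cd /usr/share/opensearch/plugins/opensearch-security/securityconfig \
  -icl -nhnv \
  -cacert /usr/share/opensearch/config/root-ca.pem \
  -cert /usr/share/opensearch/config/kirk.pem \
  -key /usr/share/opensearch/config/kirk-key.pem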


nomopo45 commented Jul 18, 2022

OK, I deleted everything.

I now use the 2.1.0 image tag; with the Helm values below I get a new error.

For opensearch-dashboards:

# Copyright OpenSearch Contributors
# SPDX-License-Identifier: Apache-2.0

# Default values for opensearch-dashboards.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

opensearchHosts: "${var.elasticsearch-host}:9200"
replicaCount: 1
image:
  repository: "opensearchproject/opensearch-dashboards"
  # override image tag, which is .Chart.AppVersion by default
  tag: "${var.imagetag}"
  pullPolicy: "IfNotPresent"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

rbac:
  create: true

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
#  - name: certs
#    secretName: dashboard-certs
#    path: /usr/share/dashboards/certs

podAnnotations: {}

extraEnvs: []
#  - name: "NODE_OPTIONS"
#    value: "--max-old-space-size=1800"

envFrom: []

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraInitContainers: ""

extraContainers: ""

podSecurityContext: {}

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

config:
  # Default OpenSearch Dashboards configuration from docker image of Dashboards
  opensearch_dashboards.yml: |
    timelion:
      ui:
        enabled: "true"
    server:
      host: "0"
      ssl: 
        enabled: "false"
      xsrf:
        allowlist: ["/_plugins/_security/api/authtoken", "/_opendistro/_security/api/authtoken", "/_opendistro/_security/saml/acs/idpinitiated", "/_opendistro/_security/saml/acs", "/_opendistro/_security/saml/logout", "/_plugins/_security/saml/acs/idpinitiated", "/_plugins/_security/saml/acs", "/_plugins/_security/saml/logout"]
    opensearch_security:
      auth:
        type: "saml"
      multitenancy:
        enabled: "true"
        tenants:
          preferred: ["Private", "Global"]
    opensearch:
      ssl:
        verificationMode: "none"
      hosts: ["${var.elasticsearch-host}:9200"]
      username: "kibanaserver"
      password: "mypassword"
      requestHeadersAllowlist: ["securitytenant", "security_tenant", "Authorization"]




  # Dashboards TLS Config (ensure the cert files are present before enabling SSL)
      # ssl:
      #   enabled: true
      #   key: /usr/share/opensearch-dashboards/certs/dashboards-key.pem
      #   certificate: /usr/share/opensearch-dashboards/certs/dashboards-crt.pem

    # determines how dashboards will verify certificates (needs to be none for default opensearch certificates to work)
    # opensearch:
    #   ssl:
    #     certificateAuthorities: /usr/share/opensearch-dashboards/certs/dashboards-root-ca.pem
    #     if utilizing custom CA certs for connection to opensearch, provide the CA here

priorityClassName: ""

opensearchAccount:
  secret: ""
  keyPassphrase:
    enabled: false

labels: {}

hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#   - "foo.local"
#   - "bar.local"

serverHost: "0.0.0.0"

service:
  type: LoadBalancer
  port: 443
  loadBalancerIP: ""
  nodePort: ""
  labels: {}
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${data.aws_acm_certificate.cert.arn}
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
    service.beta.kubernetes.io/aws-load-balancer-alpn-policy: "HTTP2Preferred"
    ${var.global_domain}/dns-type: private
    external-dns.alpha.kubernetes.io/access: private
    external-dns.alpha.kubernetes.io/hostname: example.${var.domain}
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=${var.environment},Projet=${var.projet},TimestampLastUpdate=${var.timestamp}
  loadBalancerSourceRanges: []
  # 0.0.0.0/0
  httpPortName: http

ingress:
  enabled: false
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          backend:
            serviceName: chart-example.local
            servicePort: 80
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "100m"
    memory: "512M"

autoscaling:
  # This requires metrics server to be installed, to install use kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  # See https://github.com/kubernetes-sigs/metrics-server
  enabled: false
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80

updateStrategy:
  type: "Recreate"

nodeSelector:
  ${var.nodeSelector}

tolerations: []

affinity: {}

# -- Array of extra K8s manifests to deploy
extraObjects: []
  # - apiVersion: secrets-store.csi.x-k8s.io/v1
  #   kind: SecretProviderClass
  #   metadata:
  #     name: argocd-secrets-store
  #   spec:
  #     provider: aws
  #     parameters:
  #       objects: |
  #         - objectName: "argocd"
  #           objectType: "secretsmanager"
  #           jmesPath:
  #               - path: "client_id"
  #                 objectAlias: "client_id"
  #               - path: "client_secret"
  #                 objectAlias: "client_secret"
  #     secretObjects:
  #     - data:
  #       - key: client_id
  #         objectName: client_id
  #       - key: client_secret
  #         objectName: client_secret
  #       secretName: argocd-secrets-store
  #       type: Opaque
  #       labels:
  #         app.kubernetes.io/part-of: argocd

And here for opensearch:

clusterName: "opensearch-cluster"
nodeGroup: "master"

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "opensearch-cluster-master"

# OpenSearch roles that will be applied to this nodeGroup
# These will be set as environment variable "node.roles". E.g. node.roles=master,ingest,data,remote_cluster_client
roles:
  - master
  - ingest
  - data
  - remote_cluster_client

replicas: 3
minimumMasterNodes: 1

# if not set, falls back to parsing .Values.imageTag, then .Chart.appVersion.
majorVersion: ""

global:
  # Set if you want to change the default docker registry, e.g. a private one.
  dockerRegistry: ""

# Allows you to add any config files in {{ .Values.opensearchHome }}/config
opensearchHome: /usr/share/opensearch
# such as opensearch.yml and log4j2.properties
config:
  # Values must be YAML literal style scalar / YAML multiline string.
  # <filename>: |
  #   <formatted-value(s)>
  # log4j2.properties: |
  #   status = error
  #   appender.console.type = Console
  #   appender.console.name = console
  #   appender.console.layout.type = PatternLayout
  #   appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
  #   rootLogger.level = debug
  #   rootLogger.appenderRef.console.ref = console
  #   logger.securityjwt.name = com.amazon.dlic.auth.http.jwt
  #   logger.securityjwt.level = trace

  #   status = error
  #
  #   appender.console.type = Console
  #   appender.console.name = console
  #   appender.console.layout.type = PatternLayout
  #   appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
  #
  #   rootLogger.level = info
  #   rootLogger.appenderRef.console.ref = console
  opensearch.yml: |
    cluster.name: opensearch-cluster
    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0
    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # discovery.type: single-node
    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
        allow_unsafe_democertificates: true
        allow_default_init_securityindex: true
        authcz:
          admin_dn:
            - CN=kirk,OU=client,O=client,L=test,C=de
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]
    ######## End OpenSearch Security Demo Configuration ########
  # log4j2.properties:

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []

hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#   - "foo.local"
#   - "bar.local"

image:
  repository: "opensearchproject/opensearch"
  # override image tag, which is .Chart.AppVersion by default
  tag: "${var.imagetag}"
  pullPolicy: "IfNotPresent"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# additional labels
labels: {}

opensearchJavaOpts: "-Xmx512M -Xms512M"

resources:
  requests:
    cpu: "1000m"
    memory: "100Mi"

initResources: {}
#  limits:
#     cpu: "25m"
#     memory: "128Mi"
#  requests:
#     cpu: "25m"
#     memory: "128Mi"

sidecarResources: {}
#   limits:
#     cpu: "25m"
#     memory: "128Mi"
#   requests:
#     cpu: "25m"
#     memory: "128Mi"

networkHost: "0.0.0.0"

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir

persistence:
  enabled: true
  # Set to false to disable the `fsgroup-volume` initContainer that will update permissions on the persistent disk.
  enableInitChown: true
  # override image, which is busybox by default
  # image: busybox
  # override image tag, which is latest by default
  # imageTag:
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  # OpenSearch Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner.  (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  #
  storageClass: "${var.storageClass}"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

extraInitContainers: []
  # - name: do-somethings
  #   image: busybox
  #   command: ['do', 'something']

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "soft"

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}

# This is the pod topology spread constraints
# https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

# The environment variables injected by service links are not used, but can lead to slow OpenSearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true

protocol: https
httpPort: 9200
transportPort: 9300

service:
  labels: {}
  labelsHeadless: {}
  headless:
    annotations: {}
  type: ClusterIP
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  # The following option simplifies securityConfig by using a single secret and
  # specifying the config files as keys in the secret instead of creating
  # different secrets for each config file.
  # Note that this is an alternative to the individual secret configuration
  # above and shouldn't be used if the above secrets are used.
  config:
    # There are multiple ways to define the configuration here:
    # * If you define anything under data, the chart will automatically create
    #   a secret and mount it.
    # * If you define securityConfigSecret, the chart will assume this secret is
    #   created externally and mount it.
    # * It is an error to define both data and securityConfigSecret.
    securityConfigSecret: ""
    dataComplete: true
    data:
      config.yml: |-
        _meta:
          type: "config"
          config_version: "2"
        config:
          dynamic:
            http:
              anonymous_auth_enabled: false
            authc:
              basic_internal_auth_domain:
                description: "Authenticate via HTTP Basic against internal users database"
                http_enabled: true
                transport_enabled: true
                order: 0
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  type: intern
              saml_auth_domain:
                order: 1
                description: "SAML provider"
                http_enabled: true
                transport_enabled: false
                http_authenticator:
                  type: saml
                  challenge: true
                  config:
                    idp:
                      metadata_file: "/usr/share/opensearch/plugins/opensearch-security/securityconfig/gsuite.xml"
                      entity_id: "https://accounts.google.com/o/saml2?idpid=XXXXXX"
                    sp:
                      entity_id: "kibana-saml"
                    kibana_url: "https://example.com"
                    exchange_key: "XXXX"
                    subject_key: NameID
                    roles_key: Role
                authentication_backend:
                  type: noop
      gsuite.xml: |-
        <?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://accounts.google.com/o/saml2?idpid=XXXXXXX" validUntil="2027-01-26T23:13:54.000Z">
          <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
            <md:KeyDescriptor use="signing">
              <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:X509Data>
                  <ds:X509Certificate>XXXX</ds:X509Certificate>
                </ds:X509Data>
              </ds:KeyInfo>
            </md:KeyDescriptor>
            <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
            <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://accounts.google.com/o/saml2/idp?idpid=C018ua8xi"/>
            <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://accounts.google.com/o/saml2/idp?idpid=XXXXXXXX"/>
          </md:IDPSSODescriptor>
        </md:EntityDescriptor>
        
      # internal_users.yml: |-
      # roles.yml: |-
      # roles_mapping.yml: |-
      # action_groups.yml: |-
      # tenants.yml: |-


# How long to wait for opensearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

startupProbe:
  tcpSocket:
    port: 9200
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 30
readinessProbe:
  tcpSocket:
    port: 9200
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""

imagePullSecrets: []
nodeSelector: 
  ${var.nodeSelector}
tolerations: []

# Enabling this will publicly expose your OpenSearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: false
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx

  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

nameOverride: ""
fullnameOverride: ""

masterTerminationFix: false

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command:
  #       - bash 
  #       - -c
  #       - |
  #         #!/bin/bash
  #         # Add a template to adjust number of shards/replicas
  #         TEMPLATE_NAME=my_template
  #         INDEX_PATTERN="logstash-*"
  #         SHARD_COUNT=8
  #         REPLICA_COUNT=1
  #         ES_URL=http://localhost:9200
  #         while [[ "$(curl -s -o /dev/null -w '{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
  #         curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'

keystore: []
# To add secrets to the keystore:
#  - secretName: opensearch-encryption-key

networkPolicy:
  create: false
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## In order for a Pod to access OpenSearch, it needs to have the following label:
  ## {{ template "uname" . }}-client: "true"
  ## Example for default configuration to access HTTP port:
  ## opensearch-master-http-client: "true"
  ## Example for default configuration to access transport port:
  ## opensearch-master-transport-client: "true"

  http:
    enabled: false

# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

## Set optimal sysctl's. This requires privilege. Can be disabled if
## the system has already been preconfigured. (Ex: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html)
## Also see: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
sysctl:
  enabled: false

## Enable to add 3rd Party / Custom plugins not offered in the default OpenSearch image.
plugins:
  enabled: false
  installList: []
  # - example-fake-plugin

# -- Array of extra K8s manifests to deploy
extraObjects: []
  # - apiVersion: secrets-store.csi.x-k8s.io/v1
  #   kind: SecretProviderClass
  #   metadata:
  #     name: argocd-secrets-store
  #   spec:
  #     provider: aws
  #     parameters:
  #       objects: |
  #         - objectName: "argocd"
  #           objectType: "secretsmanager"
  #           jmesPath:
  #               - path: "client_id"
  #                 objectAlias: "client_id"
  #               - path: "client_secret"
  #                 objectAlias: "client_secret"
  #     secretObjects:
  #     - data:
  #       - key: client_id
  #         objectName: client_id
  #       - key: client_secret
  #         objectName: client_secret
  #       secretName: argocd-secrets-store
  #       type: Opaque
  #       labels:
  #         app.kubernetes.io/part-of: argocd

I don't see any errors until I try to reach opensearch-dashboards in my browser; here is the error I get:

opensearch-dashboards-866b578cff-lzgvw dashboards {"type":"response","@timestamp":"2022-07-18T15:21:26Z","tags":[],"pid":1,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"host":"opensearch-dashboards.domain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"none","sec-fetch-user":"?1","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.104.246","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"res":{"statusCode":302,"responseTime":2,"contentLength":9},"message":"GET / 302 2ms - 9.0B"}
opensearch-dashboards-866b578cff-lzgvw dashboards Error: failed parsing SAML config
opensearch-dashboards-866b578cff-lzgvw dashboards     at SecurityClient.getSamlHeader (/usr/share/opensearch-dashboards/plugins/securityDashboards/server/backend/opensearch_security_client.ts:176:15)
opensearch-dashboards-866b578cff-lzgvw dashboards     at runMicrotasks (<anonymous>)
opensearch-dashboards-866b578cff-lzgvw dashboards     at processTicksAndRejections (internal/process/task_queues.js:95:5)
opensearch-dashboards-866b578cff-lzgvw dashboards     at /usr/share/opensearch-dashboards/plugins/securityDashboards/server/auth/types/saml/routes.ts:62:30
opensearch-dashboards-866b578cff-lzgvw dashboards     at Router.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:163:44)
opensearch-dashboards-866b578cff-lzgvw dashboards     at handler (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:124:50)
opensearch-dashboards-866b578cff-lzgvw dashboards     at exports.Manager.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
opensearch-dashboards-866b578cff-lzgvw dashboards     at Object.internals.handler (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:46:20)
opensearch-dashboards-866b578cff-lzgvw dashboards     at exports.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:31:20)
opensearch-dashboards-866b578cff-lzgvw dashboards     at Request._lifecycle (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:371:32)
opensearch-dashboards-866b578cff-lzgvw dashboards     at Request._execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:281:9)
opensearch-dashboards-866b578cff-lzgvw dashboards {"type":"log","@timestamp":"2022-07-18T15:21:26Z","tags":["error","plugins","securityDashboards"],"pid":1,"message":"Failed to get saml header: Error: Error: failed parsing SAML config"}
opensearch-dashboards-866b578cff-lzgvw dashboards {"type":"error","@timestamp":"2022-07-18T15:21:26Z","tags":[],"pid":1,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toError (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:143:19)\n    at HapiResponseAdapter.toHapiResponse (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:97:19)\n    at HapiResponseAdapter.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/response_adapter.js:92:17)\n    at Router.handle (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:164:34)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)\n    at handler (/usr/share/opensearch-dashboards/src/core/server/http/router/router.js:124:50)\n    at exports.Manager.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n    at Object.internals.handler (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:371:32)\n    at Request._execute (/usr/share/opensearch-dashboards/node_modules/@hapi/hapi/lib/request.js:281:9)"},"url":"http://opensearch-dashboards.domain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","message":"Internal Server Error"}
opensearch-dashboards-866b578cff-lzgvw dashboards {"type":"response","@timestamp":"2022-07-18T15:21:26Z","tags":[],"pid":1,"method":"get","statusCode":500,"req":{"url":"/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","method":"get","headers":{"host":"opensearch-dashboards.domain.com","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"document","sec-fetch-mode":"navigate","sec-fetch-site":"none","sec-fetch-user":"?1","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.104.150","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36"},"res":{"statusCode":500,"responseTime":42,"contentLength":9},"message":"GET /auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards 500 42ms - 9.0B"}
opensearch-dashboards-866b578cff-lzgvw dashboards {"type":"response","@timestamp":"2022-07-18T15:21:26Z","tags":[],"pid":1,"method":"get","statusCode":401,"req":{"url":"/favicon.ico","method":"get","headers":{"host":"opensearch-dashboards.domain.com","accept":"image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8","accept-encoding":"gzip, deflate, br","accept-language":"fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7","referer":"https://opensearch-dashboards.domain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards","sec-ch-ua":"\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"102\", \"Google Chrome\";v=\"102\"","sec-ch-ua-mobile":"?0","sec-ch-ua-platform":"\"macOS\"","sec-fetch-dest":"image","sec-fetch-mode":"no-cors","sec-fetch-site":"same-origin","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","x-forwarded-for":"15.236.145.2","x-forwarded-port":"443","x-forwarded-proto":"https","connection":"keep-alive"},"remoteAddress":"192.169.104.226","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36","referer":"https://opensearch-dashboards.domain.com/auth/saml/login?nextUrl=%2Fapp%2Fopensearch-dashboards"},"res":{"statusCode":401,"responseTime":4,"contentLength":9},"message":"GET /favicon.ico 401 4ms - 9.0B"}
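
One way to check whether the failure originates on the OpenSearch side rather than in Dashboards (the hostname below is a placeholder) is to hit OpenSearch unauthenticated and inspect the authentication challenge it returns:

curl -k -v "https://opensearch-cluster-master:9200/" 2>&1 | grep -i www-authenticate
# With a healthy SAML auth domain (challenge: true) the challenge should reference
# the SAML flow; if only a Basic challenge comes back, the security plugin most
# likely rejected the SAML config at startup.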

bbarani removed the untriaged (Issues that have not yet been triaged) label Jul 19, 2022
bbarani (Member) commented Jul 19, 2022

@opensearch-project/security Can someone take a look at this issue and provide an update?

peternied self-assigned this Jul 25, 2022
@dobharweim

@peternied any update on this one guys?

@peternied (Member)

@dobharweim thanks for reaching out, I've been unable to make progress in this space. I'm going to remove myself so someone else can pick this work up.

peternied removed their assignment Aug 22, 2022
@jonardcaguioa

I am encountering this as well. Do you know of any workarounds?


nomopo45 commented Aug 24, 2022

You can follow my forum thread; I managed to make it work:

https://forum.opensearch.org/t/opensearch-opensearch-dashboard-failed-to-get-saml-header-error-error-failed-parsing-saml-config/10224

The fix was to change the path and use version 2.0.1:

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/config/opensearch-security"

and

config:
  idp:
    metadata_file: "/usr/share/opensearch/config/opensearch-security/gsuite.xml"
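
As a quick sanity check after the change (pod name is a placeholder): in 2.x the security configuration moved from plugins/opensearch-security/securityconfig to config/opensearch-security, so the mounted files should now show up under the new path:

kubectl exec -it opensearch-cluster-master-0 -- ls /usr/share/opensearch/config/opensearch-security/
# Expect config.yml and gsuite.xml here.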

bbarani (Member) commented Jul 31, 2023

@peternied @davidlago Do you have any inputs on this issue? This seems to be related to opensearch-project/security#1941

@peternied (Member)

Thanks, I've refreshed the issue so it gets revisited in security triage.
