
Cannot deploy as user in multicluster mode. An error occurred: Cannot read properties of undefined (reading 'enabled'). #5805

Closed
sfxworks opened this issue Dec 18, 2022 · 6 comments · Fixed by #5826


sfxworks commented Dec 18, 2022

Describe the bug
A user with cluster-admin permissions in a namespace, trying to deploy against another cluster, cannot do so.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy multi-cluster Kubeapps with OIDC, with the service token Kubeapps role included for the second cluster
  2. Log in as the test user, then ensure you're in your namespace on the second cluster. The user should not have any permissions in the first cluster, but both clusters should still be configured to talk to the same OIDC provider, as the documentation describes
  3. Click on an application in the catalog, such as apache
  4. Click deploy on the top right
  5. See error

Expected behavior
User can see the deployment options and click deploy

Screenshots
(screenshot of the error)

Environment:

  • Kubeapps version 2.6.2
  • Kubernetes version v1.25.3
  • Flux v0.37.0, helm-controller v0.27.0

For reference, I (the cluster admin of both clusters) was not able to reproduce this issue; a user pointed it out to me.

@sfxworks sfxworks added the kind/bug An issue that reports a defect in an existing feature label Dec 18, 2022
@kubeapps-bot kubeapps-bot moved this to 🗂 Backlog in Kubeapps Dec 18, 2022
antgamdia (Contributor) commented:

Seems like a bug; we should be using the optional chaining operator here, like:

if (featureFlags?.schemaEditor?.enabled) {

To work around it, you can add the following excerpt to your values.yaml file:

featureFlags:
  schemaEditor:
    enabled: false
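
For context, here's a minimal TypeScript sketch of why the unguarded read throws and how the guard avoids it; the config shape and the false default are assumptions for illustration, not the actual Kubeapps dashboard code:

// Minimal sketch; the config shape and default are illustrative assumptions.
interface FeatureFlags {
  schemaEditor?: { enabled: boolean };
}

interface Config {
  featureFlags?: FeatureFlags;
}

// Simulate a config delivered without the schemaEditor block rendered.
const config: Config = JSON.parse('{"featureFlags": {}}');

// Unguarded read: throws "Cannot read properties of undefined (reading 'enabled')".
// const broken = config.featureFlags.schemaEditor.enabled;

// Guarded read: optional chaining short-circuits to undefined, and the nullish
// coalescing operator supplies a safe default.
const schemaEditorEnabled = config.featureFlags?.schemaEditor?.enabled ?? false;

console.log(schemaEditorEnabled); // false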

@ppbaena ppbaena moved this from 🗂 Backlog to 🗒 Todo in Kubeapps Dec 19, 2022

sfxworks commented Dec 20, 2022

With that amendment the same issue still occurs, but only for the user with fewer permissions. I tested with two different users to be sure.

absoludity (Contributor) commented:

Hi @sfxworks. I'm just looking at this issue and wondering which chart version you are using: you mention the app version is Kubeapps 2.6.2, which was released quite recently, but the featureFlags.schemaEditor.enabled option was added to the values.yaml of the Bitnami chart over a month ago, when v2.4.6 of the app was released. The current chart version is 12.1.3, as shown with:

$ helm search repo kubeapps
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/kubeapps        12.1.3          2.6.2           Kubeapps is a web-based UI for launching and ma...

Assuming you are using the 12.1.3 (or .2) chart of the 2.6.2 app release, there must be something else causing that option not to be set for users, but I can't yet see what that would be. Can you please confirm which chart version you are using? I'll add the fix that Antonio mentioned anyway (no loss), but I'm keen to understand.

@ppbaena ppbaena added this to the Technical debt milestone Dec 20, 2022

sfxworks commented Dec 21, 2022

Sure, thanks for looking into this.

NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
kubeapps        kubeapps        13              2022-12-19 11:11:54.97887211 +0000 UTC  deployed        kubeapps-12.1.3 2.6.2  

Also, here's the head of our Flux HelmRelease file:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kubeapps
  namespace: kubeapps
spec:
  interval: 1m
  upgrade:
    force: true
  chart:
    spec:
      chart: kubeapps
      version: '12.1.3'
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: default
      interval: 1m
  values:
    featureFlags:
      schemaEditor:
        enabled: false

Which references

NAMESPACE   NAME                   URL                                                  AGE    READY   STATUS
default     bitnami                https://charts.bitnami.com/bitnami                   175d   True    stored artifact for revision '7fdee2975b0fd8f1df6d37ee515e37477d205b230ec7b14410ddedacaaff5dd2'

absoludity (Contributor) commented:

Can you please provide a scrubbed version of the rest of your values? Seeing that you're deploying Kubeapps with Flux made me wonder whether you're configuring the Flux plugin for Kubeapps (which doesn't support multi-cluster yet; sorry that's not clearer in the docs), but I can't tell without more info, and it still wouldn't explain the error you're seeing.

absoludity added a commit to absoludity/kubeapps that referenced this issue Dec 22, 2022
Signed-off-by: Michael Nelson <[email protected]>
sfxworks commented:

Sure, here you go. I'm still waiting on the other user to re-test, in case caching was involved, though I'm bringing a few more users in to assist.

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kubeapps
  namespace: kubeapps
spec:
  interval: 1m
  upgrade:
    force: true
  chart:
    spec:
      chart: kubeapps
      version: '12.1.3'
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: default
      interval: 1m
  values:
    featureFlags:
      schemaEditor:
        enabled: false
    clusters:
    - name: red
      domain: cluster.mcsh.red 
      apiServiceURL: https://cluster.mcsh.red:6443/
      certificateAuthorityData: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1ETXdPREUwTlRRd05Wb1hEVE14TURNd05qRTBOVFF3TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFYwCmlZV002SWUwOE9OdThORTZ0TnpnR2ZmbWNWMVV0cFRwaWJtWk13bEpmWG1FczMxbUFRcmV1TU5kM0k4eDhNVVIKR1ZoUzd2aHNGd2p3TjFwUTg4VVMvTVhPMFFOQjJiQnA3M0VKSlRGMjBybUkvOWZGaklkVE83QW1NTEcwc2lXMApRNktqaHdWY1FxeFhoc1JJRWYwVDFUU2xJWm40cG8xWlhkNm5HUXpZNWtUb0ZHbkhrWkpQS1Z4M3M3MWE5VlR3Ck9OdXV5MHkzWEtwdWxQZm9EbDN6cHZVYjkvajhOWWMwQzJ2RzZrcm9IWDVNb3NJaTVMTDE5RFF2NHVTZzlNdGUKUWVpbzl0Unh4M1RHUGtTV0dBUTMxUlhlY21Tckw0MEo3bWdxclpaSXhhNlZrU3RZSFRLV1hvcDE1OHNDK3R4TAplbjJOWnBaVjIvSG9YMUY2dE9jQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZGWnpIVUZiUDQ1RjNPOXFVNERLRUlzTFY4MWNNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBcndGYmI1TjBxdk1LSnE4SzV6ekY2d1AxbFVUMGVWLy9udTRsRzhKMWRmZHFQRDlmZQpIS0lyUitLalhIZ2xyZURLV1JSUTBSTTAwSVA4R3hXT1NXMEwwT3dtYzlWY1FhUGx5dTI1VzBsZ24rYkhkZ2o4CkJHVmZKL1hubVRMeTRjbDR5ZnEySmhPeDM0ZkgwWVhSTG1EYTNKTmlxNWVkN2hQejNRNHI3dWR3cGxqb1ZHT08KcHNTZlI1eHpyWlFpS3ZMbkIvbUZxTm9jNHl2U004d0dxQXlVaHNiak43M0RscXZwTHRvekU2WkFXUHQ1VEtsTQpiNEpLVmZHNXRTYmdGUk1BMGNiU0YvYkQ2MlN2U2RpMitrZ0tjeUU2YUFBbFhUcnVuYm9nUDg5OFJCNHErUVU1CmRUdW1BQThFZzlscHM1NkFBZHNkTlBFcllxTjdWaHRQeUsrSwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
      serviceToken: snip
    - name: office
      domain: cluster.local
    frontend:
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      service:
        type: LoadBalancer
    ingress:
      enabled: true
      hostname: kubeapps.service.mcserverhosting.net
      ingressClassName: nginx
      tls: true
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
        nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    authProxy:
      enabled: true
      scope: "openid email"
      provider: oidc
      clientID: account
      clientSecret: "snip"
      cookieSecret: "snip"
      extraFlags:
      - "--cookie-secure=false"
      - "--oidc-issuer-url=https://auth.service.mcserverhosting.net/realms/mcsh"
    dashboard:
      nodeSelector:
        beta.kubernetes.io/arch: amd64
    apprepository:
      initialRepos:
      - name: mcshservers
        url: https://registry.service.mcserverhosting.net/chartrepo/servers
      nodeSelector:
        beta.kubernetes.io/arch: amd64
    kubeops:
      nodeSelector:
        beta.kubernetes.io/arch: amd64
    kubeappsapis:
      nodeSelector:
        beta.kubernetes.io/arch: amd64

absoludity added a commit that referenced this issue Jan 10, 2023
Signed-off-by: Michael Nelson <[email protected]>


### Description of the change


Ensures that the deployment form can still display even if the schemaEditor feature flag is not set.
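
As an illustration of the defensive pattern (the helper name and types below are assumptions, not the actual dashboard code), the form can derive a boolean with a safe default instead of dereferencing a possibly-undefined object:

// Sketch only; names and types are illustrative assumptions.
interface FeatureFlags {
  schemaEditor?: { enabled: boolean };
}

// Defaults to false when the chart did not render featureFlags.schemaEditor.
export function isSchemaEditorEnabled(featureFlags?: FeatureFlags): boolean {
  return featureFlags?.schemaEditor?.enabled ?? false;
}

// isSchemaEditorEnabled(undefined)                            -> false
// isSchemaEditorEnabled({})                                   -> false
// isSchemaEditorEnabled({ schemaEditor: { enabled: true } })  -> true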

### Benefits


Fixes the immediate error shown in #5805, though I'm not convinced
it'll be the only error (it's still not clear why the configuration
options would not be set).

### Possible drawbacks


### Applicable issues


- fixes #5805 

### Additional information


Signed-off-by: Michael Nelson <[email protected]>
@github-project-automation github-project-automation bot moved this from 🗒 Todo to ✅ Done in Kubeapps Jan 10, 2023