
Update Helm release cluster to v0.36.0 #703

Merged
merged 1 commit into from
Jul 21, 2024

Conversation

renovate[bot]
Contributor

@renovate renovate bot commented Jul 19, 2024

Mend Renovate

This PR contains the following updates:

Package   Update   Change
cluster   minor    0.35.0 -> 0.36.0

Trigger E2E tests:

/run cluster-test-suites


Release Notes

giantswarm/cluster (cluster)

v0.36.0

Compare Source

This release removes the CronJobTimeZone feature gate, as the feature has become stable and the gate no longer exists as of Kubernetes v1.29.

For Kubernetes versions below v1.29, you will need to re-enable it via the corresponding chart values.

Removed
  • Feature Gates: Remove CronJobTimeZone. (#267)
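For context, the rendered diff below shows that earlier chart versions passed this gate to the kubelet via `kubeletExtraArgs` (`feature-gates: CronJobTimeZone=true`). A sketch of a values override that could restore it on Kubernetes <v1.29 follows; the key path used here (`internal.kubelet.featureGates`) is hypothetical and must be checked against the chart's actual values schema:

```yaml
# Hypothetical values override for clusters still on Kubernetes <v1.29.
# The key path below is an assumption for illustration only; consult the
# cluster chart's values schema for where kubelet feature gates are set.
internal:
  kubelet:
    featureGates:
      CronJobTimeZone: true
```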

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.

@renovate renovate bot requested a review from a team as a code owner July 19, 2024 14:17
@renovate renovate bot added dependencies Pull requests that update a dependency file renovate PR created by RenovateBot labels Jul 19, 2024
@Gacko Gacko changed the title Update Helm release cluster to v0.36.0 Chart: Update to cluster v0.36.0. Jul 19, 2024
@Gacko Gacko force-pushed the renovate/cluster-0.x branch from 64d3cd9 to 0c45a40 Compare July 19, 2024 14:23
@renovate renovate bot changed the title Chart: Update to cluster v0.36.0. Update Helm release cluster to v0.36.0 Jul 19, 2024
@Gacko Gacko force-pushed the renovate/cluster-0.x branch from 0c45a40 to d459384 Compare July 19, 2024 14:25
@Gacko Gacko changed the title Update Helm release cluster to v0.36.0 Chart: Update cluster chart to v0.36.0. Jul 19, 2024

There were differences in the rendered Helm template, please check! ⚠️

Output
=== Differences when rendered with values file helm/cluster-aws/ci/test-local-registry-cache-values.yaml ===

(file level)
  - five documents removed:
    ---
    # Source: cluster-aws/charts/cluster/templates/clusterapi/workers/kubeadmconfig.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfig
    metadata:
      name: test-wc-pool0-b813b
      namespace: org-giantswarm
      annotations:
        machine-pool.giantswarm.io/name: test-wc-pool0
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.35.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.35.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
        giantswarm.io/machine-pool: test-wc-pool0
    spec:
      format: ignition
      ignition:
        containerLinuxConfig:
          additionalConfig: |
            systemd:
              units:      
              - name: os-hardening.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Apply os hardening
                  [Service]
                  Type=oneshot
                  ExecStartPre=-/bin/bash -c "gpasswd -d core rkt; gpasswd -d core docker; gpasswd -d core wheel"
                  ExecStartPre=/bin/bash -c "until [ -f '/etc/sysctl.d/hardening.conf' ]; do echo Waiting for sysctl file; sleep 1s;done;"
                  ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/hardening.conf
                  [Install]
                  WantedBy=multi-user.target
              - name: update-engine.service
                enabled: false
                mask: true
              - name: locksmithd.service
                enabled: false
                mask: true
              - name: sshkeys.service
                enabled: false
                mask: true
              - name: teleport.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Teleport Service
                  After=network.target
                  [Service]
                  Type=simple
                  Restart=on-failure
                  ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                  ExecReload=/bin/kill -HUP $MAINPID
                  PIDFile=/run/teleport.pid
                  LimitNOFILE=524288
                  [Install]
                  WantedBy=multi-user.target
              - name: kubeadm.service
                dropins:
                - name: 10-flatcar.conf
                  contents: |
                    [Unit]
                    # kubeadm must run after coreos-metadata populated /run/metadata directory.
                    Requires=coreos-metadata.service
                    After=coreos-metadata.service
                    # kubeadm must run after containerd - see https://github.com/kubernetes-sigs/image-builder/issues/939.
                    After=containerd.service
                    # kubeadm requires having an IP
                    After=network-online.target
                    Wants=network-online.target
                    [Service]
                    # Ensure kubeadm service has access to kubeadm binary in /opt/bin on Flatcar.
                    Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin
                    # To make metadata environment variables available for pre-kubeadm commands.
                    EnvironmentFile=/run/metadata/*
              - name: containerd.service
                enabled: true
                contents: |
                dropins:
                - name: 10-change-cgroup.conf
                  contents: |
                    [Service]
                    CPUAccounting=true
                    MemoryAccounting=true
                    Slice=kubereserved.slice
              - name: audit-rules.service
                enabled: true
                dropins:
                - name: 10-wait-for-containerd.conf
                  contents: |
                    [Service]
                    ExecStartPre=/bin/bash -c "while [ ! -f /etc/audit/rules.d/containerd.rules ]; do echo 'Waiting for /etc/audit/rules.d/containerd.rules to be written' && sleep 1; done"
                    Restart=on-failure      
              - name: kubelet-aws-config.service
                enabled: true
              - name: var-lib.mount
                enabled: true
                contents: |
                  [Unit]
                  Description=lib volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/lib
                  Where=/var/lib
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: var-log.mount
                enabled: true
                contents: |
                  [Unit]
                  Description=log volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/log
                  Where=/var/log
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
            storage:
              filesystems:      
              - name: lib
                mount:
                  device: /dev/xvdd
                  format: xfs
                  wipeFilesystem: true
                  label: lib
              - name: log
                mount:
                  device: /dev/xvde
                  format: xfs
                  wipeFilesystem: true
                  label: log
              directories:      
              - path: /var/lib/kubelet
                mode: 0750      
            
      joinConfiguration:
        nodeRegistration:
          name: ${COREOS_EC2_HOSTNAME}
          kubeletExtraArgs:
            cloud-provider: external
            feature-gates: CronJobTimeZone=true
            healthz-bind-address: 0.0.0.0
            node-ip: ${COREOS_EC2_IPV4_LOCAL}
            node-labels: "ip=${COREOS_EC2_IPV4_LOCAL},role=worker,giantswarm.io/machine-pool=test-wc-pool0,"
            v: 2
        patches:
          directory: /etc/kubernetes/patches
      preKubeadmCommands:
      - "envsubst < /etc/kubeadm.yml > /etc/kubeadm.yml.tmp"
      - "mv /etc/kubeadm.yml.tmp /etc/kubeadm.yml"
      - "systemctl restart containerd"
      files:
      - path: /etc/sysctl.d/hardening.conf
        permissions: 0644
        encoding: base64
        content: ZnMuaW5vdGlmeS5tYXhfdXNlcl93YXRjaGVzID0gMTYzODQKZnMuaW5vdGlmeS5tYXhfdXNlcl9pbnN0YW5jZXMgPSA4MTkyCmtlcm5lbC5rcHRyX3Jlc3RyaWN0ID0gMgprZXJuZWwuc3lzcnEgPSAwCm5ldC5pcHY0LmNvbmYuYWxsLmxvZ19tYXJ0aWFucyA9IDEKbmV0LmlwdjQuY29uZi5hbGwuc2VuZF9yZWRpcmVjdHMgPSAwCm5ldC5pcHY0LmNvbmYuZGVmYXVsdC5hY2NlcHRfcmVkaXJlY3RzID0gMApuZXQuaXB2NC5jb25mLmRlZmF1bHQubG9nX21hcnRpYW5zID0gMQpuZXQuaXB2NC50Y3BfdGltZXN0YW1wcyA9IDAKbmV0LmlwdjYuY29uZi5hbGwuYWNjZXB0X3JlZGlyZWN0cyA9IDAKbmV0LmlwdjYuY29uZi5kZWZhdWx0LmFjY2VwdF9yZWRpcmVjdHMgPSAwCiMgSW5jcmVhc2VkIG1tYXBmcyBiZWNhdXNlIHNvbWUgYXBwbGljYXRpb25zLCBsaWtlIEVTLCBuZWVkIGhpZ2hlciBsaW1pdCB0byBzdG9yZSBkYXRhIHByb3Blcmx5CnZtLm1heF9tYXBfY291bnQgPSAyNjIxNDQKIyBSZXNlcnZlZCB0byBhdm9pZCBjb25mbGljdHMgd2l0aCBrdWJlLWFwaXNlcnZlciwgd2hpY2ggYWxsb2NhdGVzIHdpdGhpbiB0aGlzIHJhbmdlCm5ldC5pcHY0LmlwX2xvY2FsX3Jlc2VydmVkX3BvcnRzPTMwMDAwLTMyNzY3Cm5ldC5pcHY0LmNvbmYuYWxsLnJwX2ZpbHRlciA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2lnbm9yZSA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2Fubm91bmNlID0gMgoKIyBUaGVzZSBhcmUgcmVxdWlyZWQgZm9yIHRoZSBrdWJlbGV0ICctLXByb3RlY3Qta2VybmVsLWRlZmF1bHRzJyBmbGFnCiMgU2VlIGh0dHBzOi8vZ2l0aHViLmNvbS9naWFudHN3YXJtL2dpYW50c3dhcm0vaXNzdWVzLzEzNTg3CnZtLm92ZXJjb21taXRfbWVtb3J5PTEKa2VybmVsLnBhbmljPTEwCmtlcm5lbC5wYW5pY19vbl9vb3BzPTEK
      - path: /etc/selinux/config
        permissions: 0644
        encoding: base64
        content: IyBUaGlzIGZpbGUgY29udHJvbHMgdGhlIHN0YXRlIG9mIFNFTGludXggb24gdGhlIHN5c3RlbSBvbiBib290LgoKIyBTRUxJTlVYIGNhbiB0YWtlIG9uZSBvZiB0aGVzZSB0aHJlZSB2YWx1ZXM6CiMgICAgICAgZW5mb3JjaW5nIC0gU0VMaW51eCBzZWN1cml0eSBwb2xpY3kgaXMgZW5mb3JjZWQuCiMgICAgICAgcGVybWlzc2l2ZSAtIFNFTGludXggcHJpbnRzIHdhcm5pbmdzIGluc3RlYWQgb2YgZW5mb3JjaW5nLgojICAgICAgIGRpc2FibGVkIC0gTm8gU0VMaW51eCBwb2xpY3kgaXMgbG9hZGVkLgpTRUxJTlVYPXBlcm1pc3NpdmUKCiMgU0VMSU5VWFRZUEUgY2FuIHRha2Ugb25lIG9mIHRoZXNlIGZvdXIgdmFsdWVzOgojICAgICAgIHRhcmdldGVkIC0gT25seSB0YXJnZXRlZCBuZXR3b3JrIGRhZW1vbnMgYXJlIHByb3RlY3RlZC4KIyAgICAgICBzdHJpY3QgICAtIEZ1bGwgU0VMaW51eCBwcm90ZWN0aW9uLgojICAgICAgIG1scyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1MZXZlbCBTZWN1cml0eQojICAgICAgIG1jcyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1DYXRlZ29yeSBTZWN1cml0eQojICAgICAgICAgICAgICAgICAgKG1scywgYnV0IG9ubHkgb25lIHNlbnNpdGl2aXR5IGxldmVsKQpTRUxJTlVYVFlQRT1tY3MK
      - path: /etc/systemd/timesyncd.conf
        permissions: 0644
        encoding: base64
        content: W1RpbWVdCk5UUD0xNjkuMjU0LjE2OS4xMjMK
      - path: /etc/containerd/config.toml
        permissions: 0644
        contentFrom:
          secret:
            name: test-wc-containerd-07a4e226
            key: config.toml
      - path: /etc/kubernetes/patches/kubeletconfiguration.yaml
        permissions: 0644
        encoding: base64
        content: YXBpVmVyc2lvbjoga3ViZWxldC5jb25maWcuazhzLmlvL3YxYmV0YTEKa2luZDogS3ViZWxldENvbmZpZ3VyYXRpb24Kc2h1dGRvd25HcmFjZVBlcmlvZDogMzAwcwpzaHV0ZG93bkdyYWNlUGVyaW9kQ3JpdGljYWxQb2RzOiA2MHMKa2VybmVsTWVtY2dOb3RpZmljYXRpb246IHRydWUKZXZpY3Rpb25Tb2Z0OgogIG1lbW9yeS5hdmFpbGFibGU6ICI1MDBNaSIKZXZpY3Rpb25IYXJkOgogIG1lbW9yeS5hdmFpbGFibGU6ICIyMDBNaSIKICBpbWFnZWZzLmF2YWlsYWJsZTogIjE1JSIKZXZpY3Rpb25Tb2Z0R3JhY2VQZXJpb2Q6CiAgbWVtb3J5LmF2YWlsYWJsZTogIjVzIgpldmljdGlvbk1heFBvZEdyYWNlUGVyaW9kOiA2MAprdWJlUmVzZXJ2ZWQ6CiAgY3B1OiAzNTBtCiAgbWVtb3J5OiAxMjgwTWkKICBlcGhlbWVyYWwtc3RvcmFnZTogMTAyNE1pCmt1YmVSZXNlcnZlZENncm91cDogL2t1YmVyZXNlcnZlZC5zbGljZQpwcm90ZWN0S2VybmVsRGVmYXVsdHM6IHRydWUKc3lzdGVtUmVzZXJ2ZWQ6CiAgY3B1OiAyNTBtCiAgbWVtb3J5OiAzODRNaQpzeXN0ZW1SZXNlcnZlZENncm91cDogL3N5c3RlbS5zbGljZQp0bHNDaXBoZXJTdWl0ZXM6Ci0gVExTX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19BRVNfMjU2X0dDTV9TSEEzODQKLSBUTFNfQ0hBQ0hBMjBfUE9MWTEzMDVfU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNV9TSEEyNTYKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMjU2X0NCQ19TSEEKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1X1NIQTI1NgotIFRMU19SU0FfV0lUSF9BRVNfMTI4X0NCQ19TSEEKLSBUTFNfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX1JTQV9XSVRIX0FFU18yNTZfQ0JDX1NIQQotIFRMU19SU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQKc2VyaWFsaXplSW1hZ2VQdWxsczogZmFsc2UKc3RyZWFtaW5nQ29ubmVjdGlvbklkbGVUaW1lb3V0OiAxaAphbGxvd2VkVW5zYWZlU3lzY3RsczoKLSAibmV0LioiCg==
      - path: /etc/systemd/logind.conf.d/zzz-kubelet-graceful-shutdown.conf
        permissions: 0700
        encoding: base64
        content: W0xvZ2luXQojIGRlbGF5CkluaGliaXREZWxheU1heFNlYz0zMDAK
      - path: /etc/teleport-join-token
        permissions: 0644
        contentFrom:
          secret:
            name: test-wc-teleport-join-token
            key: joinToken
      - path: /opt/teleport-node-role.sh
        permissions: 0755
        encoding: base64
        content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
      - path: /etc/teleport.yaml
        permissions: 0644
        encoding: base64
        content: dmVyc2lvbjogdjMKdGVsZXBvcnQ6CiAgZGF0YV9kaXI6IC92YXIvbGliL3RlbGVwb3J0CiAgam9pbl9wYXJhbXM6CiAgICB0b2tlbl9uYW1lOiAvZXRjL3RlbGVwb3J0LWpvaW4tdG9rZW4KICAgIG1ldGhvZDogdG9rZW4KICBwcm94eV9zZXJ2ZXI6IHRlbGVwb3J0LmdpYW50c3dhcm0uaW86NDQzCiAgbG9nOgogICAgb3V0cHV0OiBzdGRlcnIKYXV0aF9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIKc3NoX3NlcnZpY2U6CiAgZW5hYmxlZDogInllcyIKICBjb21tYW5kczoKICAtIG5hbWU6IG5vZGUKICAgIGNvbW1hbmQ6IFtob3N0bmFtZV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogYXJjaAogICAgY29tbWFuZDogW3VuYW1lLCAtbV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogcm9sZQogICAgY29tbWFuZDogWy9vcHQvdGVsZXBvcnQtbm9kZS1yb2xlLnNoXQogICAgcGVyaW9kOiAxbTBzCiAgbGFiZWxzOgogICAgaW5zOiB0ZXN0CiAgICBtYzogdGVzdAogICAgY2x1c3RlcjogdGVzdC13YwogICAgYmFzZURvbWFpbjogZXhhbXBsZS5jb20KcHJveHlfc2VydmljZToKICBlbmFibGVkOiAibm8iCg==
      - path: /etc/audit/rules.d/99-default.rules
        permissions: 0640
        encoding: base64
        content: IyBPdmVycmlkZGVuIGJ5IEdpYW50IFN3YXJtLgotYSBleGl0LGFsd2F5cyAtRiBhcmNoPWI2NCAtUyBleGVjdmUgLWsgYXVkaXRpbmcKLWEgZXhpdCxhbHdheXMgLUYgYXJjaD1iMzIgLVMgZXhlY3ZlIC1rIGF1ZGl0aW5nCg==
      - contentFrom:
          secret:
            name: test-wc-provider-specific-files-4
            key: kubelet-aws-config.sh
        path: /opt/bin/kubelet-aws-config.sh
        permissions: 0755
      - contentFrom:
          secret:
            name: test-wc-provider-specific-files-4
            key: kubelet-aws-config.service
        path: /etc/systemd/system/kubelet-aws-config.service
        permissions: 0644
      - contentFrom:
          secret:
            name: test-wc-provider-specific-files-4
            key: 99-unmanaged-devices.network
        path: /etc/systemd/network/99-unmanaged-devices.network
        permissions: 0644
    # Source: cluster-aws/charts/cluster/templates/apps/cleanup-helmreleases-hook-job.yaml
    # Because cluster provider resources are often deleted before flux has a chance
    # to uninstall helm releases for all deleted HelmRelease CRs they become
    # leftovers because there is still flux finalizer on them. This looks like
    # following:
    #
    #     $ kubectl get helmrelease -n org-multi-project
    #     NAME                           AGE     READY   STATUS
    #     pawe1-cilium                   99m     False   failed to get last release revision
    #     pawe1-cloud-provider-vsphere   99m     False   failed to get last release revision
    #
    # Both HelmRelease CRs in this case have deletionTimestamp and finalizers set,
    # e.g.:
    #
    #     deletionTimestamp: "2023-03-02T14:34:49Z"
    #     finalizers:
    #       - finalizers.fluxcd.io
    #
    # To work around this, post-delete Job deletes all finalizers on all HelmRelease
    # CRs created with this chart.
    #
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: test-wc-cleanup-helmreleases-hook
      namespace: org-giantswarm
      annotations:
        helm.sh/hook: post-delete
        helm.sh/hook-delete-policy: "before-hook-creation,hook-succeeded,hook-failed"
        helm.sh/hook-weight: "-1"
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.35.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.35.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
    # Source: cluster-aws/charts/cluster/templates/apps/cleanup-helmreleases-hook-job.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: test-wc-cleanup-helmreleases-hook
      namespace: org-giantswarm
      annotations:
        helm.sh/hook: post-delete
        helm.sh/hook-delete-policy: "before-hook-creation,hook-succeeded,hook-failed"
        helm.sh/hook-weight: "-1"
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.35.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.35.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
    rules:
    - apiGroups:
      - helm.toolkit.fluxcd.io
      resources:
      - helmreleases
      verbs:
      - get
      - list
      - patch
    - apiGroups:
      - source.toolkit.fluxcd.io
      resources:
      - helmcharts
      verbs:
      - get
      - list
      - patch
    # Source: cluster-aws/charts/cluster/templates/apps/cleanup-helmreleases-hook-job.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: test-wc-cleanup-helmreleases-hook
      namespace: org-giantswarm
      annotations:
        helm.sh/hook: post-delete
        helm.sh/hook-delete-policy: "before-hook-creation,hook-succeeded,hook-failed"
        helm.sh/hook-weight: "-1"
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.35.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.35.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
    subjects:
    - kind: ServiceAccount
      name: test-wc-cleanup-helmreleases-hook
      namespace: org-giantswarm
    roleRef:
      kind: Role
      name: test-wc-cleanup-helmreleases-hook
      apiGroup: rbac.authorization.k8s.io
    # Source: cluster-aws/charts/cluster/templates/apps/cleanup-helmreleases-hook-job.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: test-wc-cleanup-helmreleases-hook
      namespace: org-giantswarm
      annotations:
        helm.sh/hook: post-delete
        helm.sh/hook-delete-policy: before-hook-creation
        helm.sh/hook-weight: 0
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.35.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.35.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
    spec:
      ttlSecondsAfterFinished: 86400 # 24h
      template:
        metadata:
          name: test-wc-cleanup-helmreleases-hook
          namespace: org-giantswarm
          labels:
            # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
            # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
            # need this label on the Cluster resource.
            app: cluster-aws
            app.kubernetes.io/name: cluster
            app.kubernetes.io/version: 0.35.0
            app.kubernetes.io/part-of: cluster-aws
            app.kubernetes.io/instance: release-name
            app.kubernetes.io/managed-by: Helm
            helm.sh/chart: cluster-0.35.0
            application.giantswarm.io/team: turtles
            giantswarm.io/cluster: test-wc
            giantswarm.io/organization: test
            giantswarm.io/service-priority: highest
            cluster.x-k8s.io/cluster-name: test-wc
            cluster.x-k8s.io/watch-filter: capi
            release.giantswarm.io/version: 27.0.0-alpha.1
        spec:
          restartPolicy: Never
          serviceAccountName: test-wc-cleanup-helmreleases-hook
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          containers:
          - name: post-delete-job
            image: "gsoci.azurecr.io/giantswarm/kubectl:1.25.16"
            command:
            - /bin/sh
            - "-xc"
            - |
              for r in $(kubectl get helmrelease -n org-giantswarm -l "giantswarm.io/cluster=test-wc" -o name) ; do
                  kubectl patch -n org-giantswarm helmchart $(kubectl get -n org-giantswarm "${r}" -o jsonpath='{.status.helmChart}' | cut -d/ -f2) --type=merge -p '{"metadata": {"finalizers": []}}'
                  kubectl patch -n org-giantswarm "${r}" --type=merge -p '{"metadata": {"finalizers": []}}'
              done
              
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              seccompProfile:
                type: RuntimeDefault
              readOnlyRootFilesystem: true
            resources:
              requests:
                memory: 64Mi
                cpu: 10m
              limits:
                memory: 256Mi
                cpu: 100m
    
  
    ---
    # Source: cluster-aws/charts/cluster/templates/clusterapi/workers/kubeadmconfig.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfig
    metadata:
      name: test-wc-pool0-6d9b0
      namespace: org-giantswarm
      annotations:
        machine-pool.giantswarm.io/name: test-wc-pool0
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.36.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.36.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
        giantswarm.io/machine-pool: test-wc-pool0
    spec:
      format: ignition
      ignition:
        containerLinuxConfig:
          additionalConfig: |
            systemd:
              units:      
              - name: os-hardening.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Apply os hardening
                  [Service]
                  Type=oneshot
                  ExecStartPre=-/bin/bash -c "gpasswd -d core rkt; gpasswd -d core docker; gpasswd -d core wheel"
                  ExecStartPre=/bin/bash -c "until [ -f '/etc/sysctl.d/hardening.conf' ]; do echo Waiting for sysctl file; sleep 1s;done;"
                  ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/hardening.conf
                  [Install]
                  WantedBy=multi-user.target
              - name: update-engine.service
                enabled: false
                mask: true
              - name: locksmithd.service
                enabled: false
                mask: true
              - name: sshkeys.service
                enabled: false
                mask: true
              - name: teleport.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Teleport Service
                  After=network.target
                  [Service]
                  Type=simple
                  Restart=on-failure
                  ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                  ExecReload=/bin/kill -HUP $MAINPID
                  PIDFile=/run/teleport.pid
                  LimitNOFILE=524288
                  [Install]
                  WantedBy=multi-user.target
              - name: kubeadm.service
                dropins:
                - name: 10-flatcar.conf
                  contents: |
                    [Unit]
                    # kubeadm must run after coreos-metadata populated /run/metadata directory.
                    Requires=coreos-metadata.service
                    After=coreos-metadata.service
                    # kubeadm must run after containerd - see https://github.com/kubernetes-sigs/image-builder/issues/939.
                    After=containerd.service
                    # kubeadm requires having an IP
                    After=network-online.target
                    Wants=network-online.target
                    [Service]
                    # Ensure kubeadm service has access to kubeadm binary in /opt/bin on Flatcar.
                    Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin
                    # To make metadata environment variables available for pre-kubeadm commands.
                    EnvironmentFile=/run/metadata/*
              - name: containerd.service
                enabled: true
                contents: |
                dropins:
                - name: 10-change-cgroup.conf
                  contents: |
                    [Service]
                    CPUAccounting=true
                    MemoryAccounting=true
                    Slice=kubereserved.slice
              - name: audit-rules.service
                enabled: true
                dropins:
                - name: 10-wait-for-containerd.conf
                  contents: |
                    [Service]
                    ExecStartPre=/bin/bash -c "while [ ! -f /etc/audit/rules.d/containerd.rules ]; do echo 'Waiting for /etc/audit/rules.d/containerd.rules to be written' && sleep 1; done"
                    Restart=on-failure      
              - name: kubelet-aws-config.service
                enabled: true
              - name: var-lib.mount
                enabled: true
                contents: |
                  [Unit]
                  Description=lib volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/lib
                  Where=/var/lib
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: var-log.mount
                enabled: true
                contents: |
                  [Unit]
                  Description=log volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/log
                  Where=/var/log
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
            storage:
              filesystems:      
              - name: lib
                mount:
                  device: /dev/xvdd
                  format: xfs
                  wipeFilesystem: true
                  label: lib
              - name: log
                mount:
                  device: /dev/xvde
                  format: xfs
                  wipeFilesystem: true
                  label: log
              directories:      
              - path: /var/lib/kubelet
                mode: 0750      
            
      joinConfiguration:
        nodeRegistration:
          name: ${COREOS_EC2_HOSTNAME}
          kubeletExtraArgs:
            cloud-provider: external
            healthz-bind-address: 0.0.0.0
            node-ip: ${COREOS_EC2_IPV4_LOCAL}
            node-labels: "ip=${COREOS_EC2_IPV4_LOCAL},role=worker,giantswarm.io/machine-pool=test-wc-pool0,"
            v: 2
        patches:
          directory: /etc/kubernetes/patches
      preKubeadmCommands:
      - "envsubst < /etc/kubeadm.yml > /etc/kubeadm.yml.tmp"
      - "mv /etc/kubeadm.yml.tmp /etc/kubeadm.yml"
      - "systemctl restart containerd"
      files:
      - path: /etc/sysctl.d/hardening.conf
        permissions: 0644
        encoding: base64
        content: ZnMuaW5vdGlmeS5tYXhfdXNlcl93YXRjaGVzID0gMTYzODQKZnMuaW5vdGlmeS5tYXhfdXNlcl9pbnN0YW5jZXMgPSA4MTkyCmtlcm5lbC5rcHRyX3Jlc3RyaWN0ID0gMgprZXJuZWwuc3lzcnEgPSAwCm5ldC5pcHY0LmNvbmYuYWxsLmxvZ19tYXJ0aWFucyA9IDEKbmV0LmlwdjQuY29uZi5hbGwuc2VuZF9yZWRpcmVjdHMgPSAwCm5ldC5pcHY0LmNvbmYuZGVmYXVsdC5hY2NlcHRfcmVkaXJlY3RzID0gMApuZXQuaXB2NC5jb25mLmRlZmF1bHQubG9nX21hcnRpYW5zID0gMQpuZXQuaXB2NC50Y3BfdGltZXN0YW1wcyA9IDAKbmV0LmlwdjYuY29uZi5hbGwuYWNjZXB0X3JlZGlyZWN0cyA9IDAKbmV0LmlwdjYuY29uZi5kZWZhdWx0LmFjY2VwdF9yZWRpcmVjdHMgPSAwCiMgSW5jcmVhc2VkIG1tYXBmcyBiZWNhdXNlIHNvbWUgYXBwbGljYXRpb25zLCBsaWtlIEVTLCBuZWVkIGhpZ2hlciBsaW1pdCB0byBzdG9yZSBkYXRhIHByb3Blcmx5CnZtLm1heF9tYXBfY291bnQgPSAyNjIxNDQKIyBSZXNlcnZlZCB0byBhdm9pZCBjb25mbGljdHMgd2l0aCBrdWJlLWFwaXNlcnZlciwgd2hpY2ggYWxsb2NhdGVzIHdpdGhpbiB0aGlzIHJhbmdlCm5ldC5pcHY0LmlwX2xvY2FsX3Jlc2VydmVkX3BvcnRzPTMwMDAwLTMyNzY3Cm5ldC5pcHY0LmNvbmYuYWxsLnJwX2ZpbHRlciA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2lnbm9yZSA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2Fubm91bmNlID0gMgoKIyBUaGVzZSBhcmUgcmVxdWlyZWQgZm9yIHRoZSBrdWJlbGV0ICctLXByb3RlY3Qta2VybmVsLWRlZmF1bHRzJyBmbGFnCiMgU2VlIGh0dHBzOi8vZ2l0aHViLmNvbS9naWFudHN3YXJtL2dpYW50c3dhcm0vaXNzdWVzLzEzNTg3CnZtLm92ZXJjb21taXRfbWVtb3J5PTEKa2VybmVsLnBhbmljPTEwCmtlcm5lbC5wYW5pY19vbl9vb3BzPTEK
      - path: /etc/selinux/config
        permissions: 0644
        encoding: base64
        content: IyBUaGlzIGZpbGUgY29udHJvbHMgdGhlIHN0YXRlIG9mIFNFTGludXggb24gdGhlIHN5c3RlbSBvbiBib290LgoKIyBTRUxJTlVYIGNhbiB0YWtlIG9uZSBvZiB0aGVzZSB0aHJlZSB2YWx1ZXM6CiMgICAgICAgZW5mb3JjaW5nIC0gU0VMaW51eCBzZWN1cml0eSBwb2xpY3kgaXMgZW5mb3JjZWQuCiMgICAgICAgcGVybWlzc2l2ZSAtIFNFTGludXggcHJpbnRzIHdhcm5pbmdzIGluc3RlYWQgb2YgZW5mb3JjaW5nLgojICAgICAgIGRpc2FibGVkIC0gTm8gU0VMaW51eCBwb2xpY3kgaXMgbG9hZGVkLgpTRUxJTlVYPXBlcm1pc3NpdmUKCiMgU0VMSU5VWFRZUEUgY2FuIHRha2Ugb25lIG9mIHRoZXNlIGZvdXIgdmFsdWVzOgojICAgICAgIHRhcmdldGVkIC0gT25seSB0YXJnZXRlZCBuZXR3b3JrIGRhZW1vbnMgYXJlIHByb3RlY3RlZC4KIyAgICAgICBzdHJpY3QgICAtIEZ1bGwgU0VMaW51eCBwcm90ZWN0aW9uLgojICAgICAgIG1scyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1MZXZlbCBTZWN1cml0eQojICAgICAgIG1jcyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1DYXRlZ29yeSBTZWN1cml0eQojICAgICAgICAgICAgICAgICAgKG1scywgYnV0IG9ubHkgb25lIHNlbnNpdGl2aXR5IGxldmVsKQpTRUxJTlVYVFlQRT1tY3MK
      - path: /etc/systemd/timesyncd.conf
        permissions: 0644
        encoding: base64
        content: W1RpbWVdCk5UUD0xNjkuMjU0LjE2OS4xMjMK
      - path: /etc/containerd/config.toml
        permissions: 0644
        contentFrom:
          secret:
            name: test-wc-containerd-07a4e226
            key: config.toml
      - path: /etc/kubernetes/patches/kubeletconfiguration.yaml
        permissions: 0644
        encoding: base64
        content: YXBpVmVyc2lvbjoga3ViZWxldC5jb25maWcuazhzLmlvL3YxYmV0YTEKa2luZDogS3ViZWxldENvbmZpZ3VyYXRpb24Kc2h1dGRvd25HcmFjZVBlcmlvZDogMzAwcwpzaHV0ZG93bkdyYWNlUGVyaW9kQ3JpdGljYWxQb2RzOiA2MHMKa2VybmVsTWVtY2dOb3RpZmljYXRpb246IHRydWUKZXZpY3Rpb25Tb2Z0OgogIG1lbW9yeS5hdmFpbGFibGU6ICI1MDBNaSIKZXZpY3Rpb25IYXJkOgogIG1lbW9yeS5hdmFpbGFibGU6ICIyMDBNaSIKICBpbWFnZWZzLmF2YWlsYWJsZTogIjE1JSIKZXZpY3Rpb25Tb2Z0R3JhY2VQZXJpb2Q6CiAgbWVtb3J5LmF2YWlsYWJsZTogIjVzIgpldmljdGlvbk1heFBvZEdyYWNlUGVyaW9kOiA2MAprdWJlUmVzZXJ2ZWQ6CiAgY3B1OiAzNTBtCiAgbWVtb3J5OiAxMjgwTWkKICBlcGhlbWVyYWwtc3RvcmFnZTogMTAyNE1pCmt1YmVSZXNlcnZlZENncm91cDogL2t1YmVyZXNlcnZlZC5zbGljZQpwcm90ZWN0S2VybmVsRGVmYXVsdHM6IHRydWUKc3lzdGVtUmVzZXJ2ZWQ6CiAgY3B1OiAyNTBtCiAgbWVtb3J5OiAzODRNaQpzeXN0ZW1SZXNlcnZlZENncm91cDogL3N5c3RlbS5zbGljZQp0bHNDaXBoZXJTdWl0ZXM6Ci0gVExTX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19BRVNfMjU2X0dDTV9TSEEzODQKLSBUTFNfQ0hBQ0hBMjBfUE9MWTEzMDVfU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNV9TSEEyNTYKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMjU2X0NCQ19TSEEKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1X1NIQTI1NgotIFRMU19SU0FfV0lUSF9BRVNfMTI4X0NCQ19TSEEKLSBUTFNfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX1JTQV9XSVRIX0FFU18yNTZfQ0JDX1NIQQotIFRMU19SU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQKc2VyaWFsaXplSW1hZ2VQdWxsczogZmFsc2UKc3RyZWFtaW5nQ29ubmVjdGlvbklkbGVUaW1lb3V0OiAxaAphbGxvd2VkVW5zYWZlU3lzY3RsczoKLSAibmV0LioiCg==
      - path: /etc/systemd/logind.conf.d/zzz-kubelet-graceful-shutdown.conf
        permissions: 0700
        encoding: base64
        content: W0xvZ2luXQojIGRlbGF5CkluaGliaXREZWxheU1heFNlYz0zMDAK
      - path: /etc/teleport-join-token
        permissions: 0644
        contentFrom:
          secret:
            name: test-wc-teleport-join-token
            key: joinToken
      - path: /opt/teleport-node-role.sh
        permissions: 0755
        encoding: base64
        content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
      - path: /etc/teleport.yaml
        permissions: 0644
        encoding: base64
        content: dmVyc2lvbjogdjMKdGVsZXBvcnQ6CiAgZGF0YV9kaXI6IC92YXIvbGliL3RlbGVwb3J0CiAgam9pbl9wYXJhbXM6CiAgICB0b2tlbl9uYW1lOiAvZXRjL3RlbGVwb3J0LWpvaW4tdG9rZW4KICAgIG1ldGhvZDogdG9rZW4KICBwcm94eV9zZXJ2ZXI6IHRlbGVwb3J0LmdpYW50c3dhcm0uaW86NDQzCiAgbG9nOgogICAgb3V0cHV0OiBzdGRlcnIKYXV0aF9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIKc3NoX3NlcnZpY2U6CiAgZW5hYmxlZDogInllcyIKICBjb21tYW5kczoKICAtIG5hbWU6IG5vZGUKICAgIGNvbW1hbmQ6IFtob3N0bmFtZV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogYXJjaAogICAgY29tbWFuZDogW3VuYW1lLCAtbV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogcm9sZQogICAgY29tbWFuZDogWy9vcHQvdGVsZXBvcnQtbm9kZS1yb2xlLnNoXQogICAgcGVyaW9kOiAxbTBzCiAgbGFiZWxzOgogICAgaW5zOiB0ZXN0CiAgICBtYzogdGVzdAogICAgY2x1c3RlcjogdGVzdC13YwogICAgYmFzZURvbWFpbjogZXhhbXBsZS5jb20KcHJveHlfc2VydmljZToKICBlbmFibGVkOiAibm8iCg==
      - path: /etc/audit/rules.d/99-default.rules
        permissions: 0640
        encoding: base64
        content: IyBPdmVycmlkZGVuIGJ5IEdpYW50IFN3YXJtLgotYSBleGl0LGFsd2F5cyAtRiBhcmNoPWI2NCAtUyBleGVjdmUgLWsgYXVkaXRpbmcKLWEgZXhpdCxhbHdheXMgLUYgYXJjaD1iMzIgLVMgZXhlY3ZlIC1rIGF1ZGl0aW5nCg==
      - contentFrom:
          secret:
            name: test-wc-provider-specific-files-4
            key: kubelet-aws-config.sh
        path: /opt/bin/kubelet-aws-config.sh
        permissions: 0755
      - contentFrom:
          secret:
            name: test-wc-provider-specific-files-4
            key: kubelet-aws-config.service
        path: /etc/systemd/system/kubelet-aws-config.service
        permissions: 0644
      - contentFrom:
          secret:
            name: test-wc-provider-specific-files-4
            key: 99-unmanaged-devices.network
        path: /etc/systemd/network/99-unmanaged-devices.network
        permissions: 0644
    # Source: cluster-aws/charts/cluster/templates/apps/helmreleases-cleanup/serviceaccount.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: test-wc-helmreleases-cleanup
      namespace: org-giantswarm
      annotations:
        helm.sh/hook: post-delete
        helm.sh/hook-delete-policy: "before-hook-creation,hook-succeeded,hook-failed"
        helm.sh/hook-weight: "-1"
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.36.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.36.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
    # Source: cluster-aws/charts/cluster/templates/apps/helmreleases-cleanup/role.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: test-wc-helmreleases-cleanup
      namespace: org-giantswarm
      annotations:
        helm.sh/hook: post-delete
        helm.sh/hook-delete-policy: "before-hook-creation,hook-succeeded,hook-failed"
        helm.sh/hook-weight: "-1"
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.36.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.36.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
    rules:
    - apiGroups:
      - helm.toolkit.fluxcd.io
      resources:
      - helmreleases
      verbs:
      - list
      - get
      - patch
    # Source: cluster-aws/charts/cluster/templates/apps/helmreleases-cleanup/rolebinding.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: test-wc-helmreleases-cleanup
      namespace: org-giantswarm
      annotations:
        helm.sh/hook: post-delete
        helm.sh/hook-delete-policy: "before-hook-creation,hook-succeeded,hook-failed"
        helm.sh/hook-weight: "-1"
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.36.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.36.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: test-wc-helmreleases-cleanup
    subjects:
    - kind: ServiceAccount
      name: test-wc-helmreleases-cleanup
      namespace: org-giantswarm
    # Source: cluster-aws/charts/cluster/templates/apps/helmreleases-cleanup/job.yaml
    #
    # Because cluster resources are often deleted before Flux has a chance to uninstall the Helm releases for all deleted HelmRelease CRs,
    # they become leftovers because there is still a Flux finalizer on them.
    #
    # This looks as follows:
    #
    #     $ kubectl get helmreleases --namespace org-multi-project
    #     NAME                           AGE     READY   STATUS
    #     pawe1-cilium                   99m     False   failed to get last release revision
    #     pawe1-cloud-provider-vsphere   99m     False   failed to get last release revision
    #
    # Both HelmRelease CRs in this case have a deletion timestamp and finalizer set, e.g.:
    #
    #     deletionTimestamp: "2023-03-02T14:34:49Z"
    #     finalizers:
    #     - finalizers.fluxcd.io
    #
    # To work around this, this post-delete hook suspends all HelmRelease CRs created with this chart.
    #
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: test-wc-helmreleases-cleanup
      namespace: org-giantswarm
      annotations:
        helm.sh/hook: post-delete
        helm.sh/hook-delete-policy: before-hook-creation
        helm.sh/hook-weight: 0
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.36.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.36.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test-wc
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
    spec:
      template:
        metadata:
          labels:
            # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
            # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
            # need this label on the Cluster resource.
            app: cluster-aws
            app.kubernetes.io/name: cluster
            app.kubernetes.io/version: 0.36.0
            app.kubernetes.io/part-of: cluster-aws
            app.kubernetes.io/instance: release-name
            app.kubernetes.io/managed-by: Helm
            helm.sh/chart: cluster-0.36.0
            application.giantswarm.io/team: turtles
            giantswarm.io/cluster: test-wc
            giantswarm.io/organization: test
            giantswarm.io/service-priority: highest
            cluster.x-k8s.io/cluster-name: test-wc
            cluster.x-k8s.io/watch-filter: capi
            release.giantswarm.io/version: 27.0.0-alpha.1
        spec:
          serviceAccountName: test-wc-helmreleases-cleanup
          containers:
          - name: kubectl
            image: "gsoci.azurecr.io/giantswarm/kubectl:1.25.16"
            securityContext:
              runAsNonRoot: true
              runAsUser: 1000
              runAsGroup: 1000
              allowPrivilegeEscalation: false
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop:
                - ALL
              readOnlyRootFilesystem: true
            env:
            - name: NAMESPACE
              value: org-giantswarm
            - name: CLUSTER
              value: test-wc
            command:
            - /bin/sh
            args:
            - "-c"
            - |
              # Print namespace & cluster.
              echo "# Namespace: ${NAMESPACE} | Cluster: ${CLUSTER}"
              
              # Get releases.
              releases="$(kubectl get helmreleases.helm.toolkit.fluxcd.io --namespace "${NAMESPACE}" --selector giantswarm.io/cluster="${CLUSTER}" --output name)"
              
              # Check releases.
              if [ -n "${releases}" ]
              then
                # Patch releases.
                kubectl patch --namespace "${NAMESPACE}" ${releases} --type merge --patch '{ "spec": { "suspend": true } }'
              else
                # Print info.
                echo "No releases to patch found."
              fi
              
            resources:
              requests:
                cpu: 10m
                memory: 64Mi
              limits:
                cpu: 100m
                memory: 256Mi
          restartPolicy: Never
      ttlSecondsAfterFinished: 86400 # 24h
    
  

/metadata/labels/app.kubernetes.io/version  (v1/ConfigMap/org-giantswarm/test-wc-cert-manager-user-values)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (v1/ConfigMap/org-giantswarm/test-wc-cert-manager-user-values)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (v1/ConfigMap/org-giantswarm/test-wc-cluster-autoscaler-user-values)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (v1/ConfigMap/org-giantswarm/test-wc-cluster-autoscaler-user-values)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (v1/ConfigMap/org-giantswarm/test-wc-etcd-k8s-res-count-exporter-user-values)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (v1/ConfigMap/org-giantswarm/test-wc-etcd-k8s-res-count-exporter-user-values)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (v1/ConfigMap/org-giantswarm/test-wc-external-dns-user-values)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (v1/ConfigMap/org-giantswarm/test-wc-external-dns-user-values)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (v1/ConfigMap/org-giantswarm/test-wc-metrics-server-user-values)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (v1/ConfigMap/org-giantswarm/test-wc-metrics-server-user-values)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (v1/ConfigMap/org-giantswarm/test-wc-net-exporter-user-values)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (v1/ConfigMap/org-giantswarm/test-wc-net-exporter-user-values)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (v1/ConfigMap/org-giantswarm/test-wc-security-bundle-user-values)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (v1/ConfigMap/org-giantswarm/test-wc-security-bundle-user-values)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-capi-node-labeler)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-capi-node-labeler)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-cert-exporter)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-cert-exporter)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-cert-manager)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-cert-manager)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-chart-operator-extensions)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-chart-operator-extensions)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-cilium-servicemonitors)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-cilium-servicemonitors)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-cluster-autoscaler)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-cluster-autoscaler)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-etcd-k8s-res-count-exporter)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-etcd-k8s-res-count-exporter)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-external-dns)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-external-dns)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-k8s-audit-metrics)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-k8s-audit-metrics)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-k8s-dns-node-cache)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-k8s-dns-node-cache)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-metrics-server)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-metrics-server)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-net-exporter)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-net-exporter)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-node-exporter)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-node-exporter)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-observability-bundle)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-observability-bundle)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-prometheus-blackbox-exporter)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-prometheus-blackbox-exporter)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-security-bundle)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-security-bundle)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-teleport-kube-agent)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-teleport-kube-agent)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-vertical-pod-autoscaler)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (application.giantswarm.io/v1alpha1/App/org-giantswarm/test-wc-vertical-pod-autoscaler)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (cluster.x-k8s.io/v1beta1/Cluster/org-giantswarm/test-wc)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (cluster.x-k8s.io/v1beta1/Cluster/org-giantswarm/test-wc)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (helm.toolkit.fluxcd.io/v2beta1/HelmRelease/org-giantswarm/test-wc-cilium)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (helm.toolkit.fluxcd.io/v2beta1/HelmRelease/org-giantswarm/test-wc-cilium)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (helm.toolkit.fluxcd.io/v2beta1/HelmRelease/org-giantswarm/test-wc-coredns)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (helm.toolkit.fluxcd.io/v2beta1/HelmRelease/org-giantswarm/test-wc-coredns)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (helm.toolkit.fluxcd.io/v2beta1/HelmRelease/org-giantswarm/test-wc-network-policies)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (helm.toolkit.fluxcd.io/v2beta1/HelmRelease/org-giantswarm/test-wc-network-policies)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (helm.toolkit.fluxcd.io/v2beta1/HelmRelease/org-giantswarm/test-wc-vertical-pod-autoscaler-crd)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (helm.toolkit.fluxcd.io/v2beta1/HelmRelease/org-giantswarm/test-wc-vertical-pod-autoscaler-crd)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (source.toolkit.fluxcd.io/v1beta2/HelmRepository/org-giantswarm/test-wc-default)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (source.toolkit.fluxcd.io/v1beta2/HelmRepository/org-giantswarm/test-wc-default)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (source.toolkit.fluxcd.io/v1beta2/HelmRepository/org-giantswarm/test-wc-default-test)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (source.toolkit.fluxcd.io/v1beta2/HelmRepository/org-giantswarm/test-wc-default-test)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (source.toolkit.fluxcd.io/v1beta2/HelmRepository/org-giantswarm/test-wc-cluster)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (source.toolkit.fluxcd.io/v1beta2/HelmRepository/org-giantswarm/test-wc-cluster)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (source.toolkit.fluxcd.io/v1beta2/HelmRepository/org-giantswarm/test-wc-cluster-test)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (source.toolkit.fluxcd.io/v1beta2/HelmRepository/org-giantswarm/test-wc-cluster-test)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/spec/machineTemplate/metadata/labels/app.kubernetes.io/version  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  ± value change
    - 0.35.0
    + 0.36.0

/spec/machineTemplate/metadata/labels/helm.sh/chart  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/spec/kubeadmConfigSpec/clusterConfiguration/controllerManager/extraArgs/feature-gates  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  ± value change
    - CronJobTimeZone=true,StatefulSetAutoDeletePVC=true
    + StatefulSetAutoDeletePVC=true

/spec/kubeadmConfigSpec/clusterConfiguration/scheduler/extraArgs  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  - one map entry removed:
    feature-gates: CronJobTimeZone=true

/spec/kubeadmConfigSpec/initConfiguration/nodeRegistration/kubeletExtraArgs  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  - one map entry removed:
    feature-gates: CronJobTimeZone=true

/spec/kubeadmConfigSpec/joinConfiguration/nodeRegistration/kubeletExtraArgs  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  - one map entry removed:
    feature-gates: CronJobTimeZone=true
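The three hunks above correspond to the cluster v0.36.0 release note: `CronJobTimeZone` graduated to stable in Kubernetes v1.29, so the chart no longer renders it for the controller manager, scheduler, or kubelet. For workload clusters still on Kubernetes <v1.29, the gate would need to be restored via user values. The fragment below is only a sketch of the rendered result being removed, reconstructed from the diffs themselves; the actual values keys needed to produce it are chart-specific and not shown in this PR:

```yaml
# Illustrative only: the feature-gates entries that v0.36.0 stops rendering.
# On Kubernetes <v1.29 these would have to be re-added through user values
# (the exact values schema path is not confirmed by this PR).
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      controllerManager:
        extraArgs:
          feature-gates: CronJobTimeZone=true,StatefulSetAutoDeletePVC=true
      scheduler:
        extraArgs:
          feature-gates: CronJobTimeZone=true
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          feature-gates: CronJobTimeZone=true
```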

/metadata/labels/app.kubernetes.io/version  (cluster.x-k8s.io/v1beta1/MachineHealthCheck/org-giantswarm/test-wc-control-plane)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (cluster.x-k8s.io/v1beta1/MachineHealthCheck/org-giantswarm/test-wc-control-plane)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/metadata/labels/app.kubernetes.io/version  (cluster.x-k8s.io/v1beta1/MachinePool/org-giantswarm/test-wc-pool0)
  ± value change
    - 0.35.0
    + 0.36.0

/metadata/labels/helm.sh/chart  (cluster.x-k8s.io/v1beta1/MachinePool/org-giantswarm/test-wc-pool0)
  ± value change
    - cluster-0.35.0
    + cluster-0.36.0

/spec/template/spec/bootstrap/configRef/name  (cluster.x-k8s.io/v1beta1/MachinePool/org-giantswarm/test-wc-pool0)
  ± value change
    - test-wc-pool0-b813b
    + test-wc-pool0-6d9b0



=== Differences when rendered with values file helm/cluster-aws/ci/test-mc-proxy-values.yaml ===

(file level)
  - five documents removed:
    ---
    # Source: cluster-aws/charts/cluster/templates/clusterapi/workers/kubeadmconfig.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfig
    metadata:
      name: test-mc-proxy-pool0-2b887
      namespace: org-giantswarm
      annotations:
        machine-pool.giantswarm.io/name: test-mc-proxy-pool0
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
        # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
        # need this label on the Cluster resource.
        app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 0.35.0
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-0.35.0
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test-mc-proxy
        giantswarm.io/organization: test
        giantswarm.io/service-priority: lowest
        cluster.x-k8s.io/cluster-name: test-mc-proxy
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
        giantswarm.io/machine-pool: test-mc-proxy-pool0
    spec:
      format: ignition
      ignition:
        containerLinuxConfig:
          additionalConfig: |
            systemd:
              units:      
              - name: os-hardening.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Apply os hardening
                  [Service]
                  Type=oneshot
                  ExecStartPre=-/bin/bash -c "gpasswd -d core rkt; gpasswd -d core docker; gpasswd -d core wheel"
                  ExecStartPre=/bin/bash -c "until [ -f '/etc/sysctl.d/hardening.conf' ]; do echo Waiting for sysctl file; sleep 1s;done;"
                  ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/hardening.conf
                  [Install]
                  WantedBy=multi-user.target
              - name: update-engine.service
                enabled: false
                mask: true
              - name: locksmithd.service
                enabled: false
                mask: true
              - name: sshkeys.service
                enabled: false
                mask: true
              - name: teleport.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Teleport Service
                  After=network.target
                  [Service]
                  Type=simple
                  Restart=on-failure
                  ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                  ExecReload=/bin/kill -HUP $MAINPID
                  PIDFile=/run/teleport.pid
                  LimitNOFILE=524288
                  [Install]
                  WantedBy=multi-user.target
              - name: kubeadm.service
                dropins:
                - name: 10-flatcar.conf
                  contents: |
                    [Unit]
                    # kubeadm must run after coreos-metadata populated /run/metadata directory.
                    Requires=coreos-metadata.service
                    After=coreos-metadata.service
                    # kubeadm must run after containerd - see https://github.com/kubernetes-sigs/image-builder/issues/939.
                    After=containerd.service
                    # kubeadm requires having an IP
                    After=network-online.target
                    Wants=network-online.target
                    [Service]
                    # Ensure kubeadm service has access to kubeadm binary in /opt/bin on Flatcar.
                    Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin
                    # To make metadata environment variables available for pre-kubeadm commands.
                    EnvironmentFile=/run/metadata/*
              - name: containerd.service
                enabled: true
                contents: |
                dropins:
                - name: 10-change-cgroup.conf
                  contents: |
                    [Service]
                    CPUAccounting=true
                    MemoryAccounting=true
                    Slice=kubereserved.slice
              - name: audit-rules.service
                enabled: true
                dropins:
                - name: 10-wait-for-containerd.conf
                  contents: |
                    [Service]
                    ExecStartPre=/bin/bash -c "while [ ! -f /etc/audit/rules.d/containerd.rules ]; do echo 'Waiting for /etc/audit/rules.d/containerd.rules to be written' && sleep 1; done"
                    Restart=on-failure      
              - name: kubel...*[Comment body truncated]*
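
The removed map entry above (`feature-gates: CronJobTimeZone=true`) matches the release notes: for clusters still running Kubernetes < v1.29, the gate has to be re-enabled through the respective values. As an illustration only — the exact values path in the cluster chart is not shown in this PR — the rendered control-plane result would look roughly like this kubeadm excerpt:

```yaml
# Illustrative KubeadmControlPlane excerpt, NOT the chart's actual values
# schema: re-enabling the CronJobTimeZone gate for Kubernetes < v1.29.
# With the cluster chart this would be set via its Helm values; the exact
# key path may differ per chart version.
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          feature-gates: CronJobTimeZone=true
      controllerManager:
        extraArgs:
          feature-gates: CronJobTimeZone=true
```

On Kubernetes v1.29 and later the gate is stable and the flag is unnecessary.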

@renovate renovate bot changed the title from "Chart: Update cluster chart to v0.36.0." to "Update Helm release cluster to v0.36.0" on Jul 19, 2024
@AverageMarcus

@Gacko Can you please try to avoid excessive force-pushes to apps that trigger E2E tests? There are currently 5 in-progress test runs for this single PR, each one creating 4 test clusters and taking up resources.

@tinkerers-ci

This comment has been minimized.


@Gacko

Gacko commented Jul 20, 2024

I knew it does "something" on pushes/updates, but I expected tests to be triggered only when I actually type /run cluster-test-suites. Can we maybe improve the trigger? In my opinion it should only fire when explicitly asked for...

@Gacko

Gacko commented Jul 20, 2024

/run cluster-test-suites

@tinkerers-ci

This comment has been minimized.

@AverageMarcus

Renovate automatically rebases PRs when updates are available; there is no need to do it manually, as Renovate is good at not overloading things most of the time. To ensure that PRs are ready to merge when someone is available, we re-trigger the tests on PR resynchronize.

@Gacko

Gacko commented Jul 20, 2024

Sure, but Renovate does not add an entry to the changelog and sometimes even removes your commits. So I'm kind of lucky the changelog is still there: the last time I needed to manually amend a Renovate PR, it didn't work and I first had to fix go.mod in a separate PR. That sucks.

In the end, I'm sticking with it: I wouldn't expect our tests to trigger on every little event, but only when someone/something really types /run cluster-test-suites.

@Gacko

Gacko commented Jul 20, 2024

Also, the recent run looks a bit flaky. I can hardly imagine the changes from the cluster chart breaking the deployment of provider-specific apps in the Cilium ENI mode suite, especially since they rolled out fine for all the other suites.

So forgive me, but...

/run cluster-test-suites

@AverageMarcus

I was just trying to make sure you were aware so that we can avoid wasting resources and money where possible. There are limitations with our use of Renovate that we need to account for here. If you have suggestions for improving the process, then please open an issue for it or discuss it with the team.

@tinkerers-ci

tinkerers-ci bot commented Jul 20, 2024

cluster-test-suites

Run name pr-cluster-aws-703-cluster-test-suiteszc9pr
Commit SHA d459384
Result Succeeded ✅

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To re-run only the failed test suites, you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. Multiple test suites are supported, with each path separated by a comma.

@Gacko Gacko merged commit 3918727 into main Jul 21, 2024
11 checks passed
@Gacko Gacko deleted the renovate/cluster-0.x branch July 21, 2024 09:31