
add startup probes into the health trait #4190

Closed
wants to merge 11 commits into from

Conversation

mertdotcc
Contributor

@mertdotcc mertdotcc commented Mar 29, 2023

Completes (?) #4146

(Linked to closed-by-mistake PR #4182)

@mertdotcc
Contributor Author

@squakez @gansheer Guys, I closed the other PR by mistake. Sorry for the inconvenience. Could we continue the discussion here?

@squakez I implemented the changes you mentioned in your last comment.

@gansheer Regarding your last comment, I am not sure why those imports were deleted. For some reason, my IDE acts up when I work with these test files, and when I try to add the missing lines back I get errors.

@mertdotcc
Contributor Author

[screenshots of the IDE errors]

@gansheer
Contributor

gansheer commented Mar 29, 2023

If you are on VS Code, you might need to take care of the comment present in the e2e Go files: `// To enable compilation of this file in Goland, go to "Settings -> Go -> Vendoring & Build Tags -> Custom Tags" and add "integration"` (even if it is not completely accurate nowadays). I had to put `integration` in my settings.

[screenshot: VS Code settings showing the integration build tag]

I don't know if it is enough, but it should help if the problem isn't something on your end.

And don't worry about the closed PR.
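For reference, the setting gansheer mentions maps to an entry like this in VS Code's settings.json (a sketch; the `go.buildTags` key assumes the official Go extension is in use):

```jsonc
{
  // Lets gopls compile the e2e test files guarded by the "integration" build tag
  "go.buildTags": "integration"
}
```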

@mertdotcc
Contributor Author

The number of commits I am making for such a simple feature is getting ridiculous, I am aware. Let's squash all of them into one commit when (if) this PR gets merged into main.

@mertdotcc mertdotcc marked this pull request as ready for review March 29, 2023 16:49
@squakez
Contributor

squakez commented Mar 30, 2023

The number of commits I am making for such a simple feature is getting ridiculous, I am aware. Let's squash all of them into one commit when (if) this PR gets merged into main.

Do not worry at all about that. You do all the work you need and at the end we'll decide if it makes sense to squash. Also, as a suggestion, feel free to git commit --amend and git push -f if you are iteratively adding any change to the same commit scope.

@mertdotcc
Contributor Author

@squakez Any idea why some tests are failing? I looked into the logs but the errors don't seem that relevant to the feature in the PR.

 MemoryPressure   False   Thu, 30 Mar 2023 07:15:00 +0000   Thu, 30 Mar 2023 07:14:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 30 Mar 2023 07:15:00 +0000   Thu, 30 Mar 2023 07:14:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 30 Mar 2023 07:15:00 +0000   Thu, 30 Mar 2023 07:14:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 30 Mar 2023 07:15:00 +0000   Thu, 30 Mar 2023 07:15:00 +0000   KubeletReady                 kubelet is posting ready status

@@ -97,6 +99,9 @@ func (t *healthTrait) Apply(e *Environment) error {
	if pointer.BoolDeref(t.ReadinessProbeEnabled, true) {
		container.ReadinessProbe = t.newReadinessProbe(port, defaultReadinessProbePath)
	}
	if pointer.BoolDeref(t.StartupProbeEnabled, true) {
Contributor


We said the default should be false, so we need to adjust the value here.

name := "startup-never-ready"

Expect(KamelRunWithID(operatorID, ns, "files/NeverReady.java",
"-t", "health.enabled=true",
Contributor


I think this test is missing the equivalent of running the new probe with -t health.startup-probe-enabled=true

@squakez
Contributor

squakez commented Mar 30, 2023

@squakez Any idea why some tests are failing? I looked into the logs but the errors don't seem that relevant to the feature in the PR.

 MemoryPressure   False   Thu, 30 Mar 2023 07:15:00 +0000   Thu, 30 Mar 2023 07:14:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 30 Mar 2023 07:15:00 +0000   Thu, 30 Mar 2023 07:14:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 30 Mar 2023 07:15:00 +0000   Thu, 30 Mar 2023 07:14:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 30 Mar 2023 07:15:00 +0000   Thu, 30 Mar 2023 07:15:00 +0000   KubeletReady                 kubelet is posting ready status

Just focus on the failure reasons for the test you're introducing: https://github.com/apache/camel-k/actions/runs/4555539840/jobs/8047750444?pr=4190 - I think that test is a copy of the readiness probe test, so you likely need to adapt it to exercise the conditions this new feature is supposed to prove.

@mertdotcc mertdotcc requested a review from squakez March 30, 2023 09:43
@mertdotcc
Contributor Author

I am confused by this PR passing the custom operator installation test but not the single operator installation test...

@squakez
Contributor

squakez commented Mar 30, 2023

I am confused by this PR passing the custom operator installation test but not the single operator installation test...

Those are two different test suites. "Single" vs "custom" refers to whether a suite shares one common installation of the operator or gets a dedicated (custom) installation for each test execution.

As for the failure, it seems it is exactly failing in the new test you introduced:

❌ TestHealthTrait (18m28.21s)
health_test.go:339:
Timed out after 900.000s.
Expected
<v1.PodPhase>:
to equal
<v1.PodPhase>: Running

I think you need to fine tune the test locally as it is easier for you to troubleshoot. You can follow the same instructions I provided in the other PR: #4182 (review)

@mertdotcc
Contributor Author

Hey @squakez,

I am trying the steps as they are listed on the Contributing page, in the correct order:

Build the whole project with make

Output
Regenerating pkg/util/defaults/defaults.go
gofmt -w pkg/util/defaults/defaults.go
./script/get_catalog.sh 3.20.1-SNAPSHOT
go generate ./pkg/...
writing /Users/mert/opensource/mertdotcc/camel-k/pkg/resources/resources.go
go install github.com/gotesttools/gotestfmt/v2/cmd/gotestfmt@latest
####### Running unit test...
go test ./...
ok  	github.com/apache/camel-k/v2/addons	(cached)
?   	github.com/apache/camel-k/v2/addons/keda/duck/v1alpha1	[no test files]
?   	github.com/apache/camel-k/v2/addons/master	[no test files]
?   	github.com/apache/camel-k/v2/addons/resume	[no test files]
ok  	github.com/apache/camel-k/v2/addons/keda	(cached)
?   	github.com/apache/camel-k/v2/addons/strimzi/duck/client/internalclientset	[no test files]
?   	github.com/apache/camel-k/v2/addons/strimzi/duck/client/internalclientset/fake	[no test files]
?   	github.com/apache/camel-k/v2/addons/strimzi/duck/client/internalclientset/typed/duck/v1beta2	[no test files]
?   	github.com/apache/camel-k/v2/addons/strimzi/duck/client/internalclientset/scheme	[no test files]
?   	github.com/apache/camel-k/v2/addons/strimzi/duck/client/internalclientset/typed/duck/v1beta2/fake	[no test files]
?   	github.com/apache/camel-k/v2/addons/strimzi/duck/v1beta2	[no test files]
ok  	github.com/apache/camel-k/v2/addons/strimzi	(cached)
?   	github.com/apache/camel-k/v2/addons/telemetry/discovery	[no test files]
ok  	github.com/apache/camel-k/v2/addons/telemetry	(cached)
ok  	github.com/apache/camel-k/v2/addons/threescale	(cached)
?   	github.com/apache/camel-k/v2/addons/tracing/discovery	[no test files]
ok  	github.com/apache/camel-k/v2/addons/tracing	(cached)
ok  	github.com/apache/camel-k/v2/addons/vault/aws	(cached)
ok  	github.com/apache/camel-k/v2/addons/vault/azure	(cached)
ok  	github.com/apache/camel-k/v2/addons/vault/gcp	(cached)
ok  	github.com/apache/camel-k/v2/addons/vault/hashicorp	(cached)
?   	github.com/apache/camel-k/v2/cmd/kamel	[no test files]
?   	github.com/apache/camel-k/v2/cmd/manager	[no test files]
?   	github.com/apache/camel-k/v2/cmd/util/doc-gen	[no test files]
?   	github.com/apache/camel-k/v2/cmd/util/doc-gen/generators	[no test files]
?   	github.com/apache/camel-k/v2/cmd/util/json-schema-gen	[no test files]
?   	github.com/apache/camel-k/v2/cmd/util/license-check	[no test files]
?   	github.com/apache/camel-k/v2/cmd/util/platform-check	[no test files]
?   	github.com/apache/camel-k/v2/cmd/util/vfs-gen	[no test files]
?   	github.com/apache/camel-k/v2/cmd/util/vfs-gen/multifs	[no test files]
?   	github.com/apache/camel-k/v2/pkg/apis	[no test files]
?   	github.com/apache/camel-k/v2/pkg/base	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/builder	(cached)
?   	github.com/apache/camel-k/v2/pkg/cmd/builder	[no test files]
?   	github.com/apache/camel-k/v2/pkg/cmd/operator	[no test files]
?   	github.com/apache/camel-k/v2/pkg/controller	[no test files]
?   	github.com/apache/camel-k/v2/pkg/controller/build	[no test files]
?   	github.com/apache/camel-k/v2/pkg/controller/catalog	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/cmd	(cached)
ok  	github.com/apache/camel-k/v2/pkg/cmd/local	(cached)
ok  	github.com/apache/camel-k/v2/pkg/cmd/source	(cached)
?   	github.com/apache/camel-k/v2/pkg/controller/integrationkit	[no test files]
?   	github.com/apache/camel-k/v2/pkg/controller/kamelet	[no test files]
?   	github.com/apache/camel-k/v2/pkg/event	[no test files]
?   	github.com/apache/camel-k/v2/pkg/kamelet	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/controller/integration	(cached)
ok  	github.com/apache/camel-k/v2/pkg/controller/integrationplatform	(cached)
ok  	github.com/apache/camel-k/v2/pkg/controller/kameletbinding	(cached)
ok  	github.com/apache/camel-k/v2/pkg/install	(cached)
?   	github.com/apache/camel-k/v2/pkg/platform	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/metadata	(cached)
ok  	github.com/apache/camel-k/v2/pkg/resources	(cached)
?   	github.com/apache/camel-k/v2/pkg/util/cancellable	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/config	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/trait	(cached)
?   	github.com/apache/camel-k/v2/pkg/util/indentedwriter	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/kamelets	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/kubernetes/log	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/log	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/minikube	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/monitoring	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/olm	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/openshift	[no test files]
?   	github.com/apache/camel-k/v2/pkg/util/patch	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/util	0.210s
ok  	github.com/apache/camel-k/v2/pkg/util/bindings	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/camel	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/defaults	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/digest	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/docker	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/dsl	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/envvar	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/gzip	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/jitpack	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/jvm	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/knative	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/kubernetes	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/label	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/maven	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/modeline	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/property	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/reference	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/registry	(cached)
?   	github.com/apache/camel-k/v2/pkg/util/tar	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/util/resource	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/source	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/sync	(cached)
?   	github.com/apache/camel-k/v2/pkg/util/watch	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/util/test	(cached)
ok  	github.com/apache/camel-k/v2/pkg/util/uri	(cached)
cd pkg/apis/camel && go test ./...
?   	github.com/apache/camel-k/v2/pkg/apis/camel	[no test files]
?   	github.com/apache/camel-k/v2/pkg/apis/camel/v1/knative	[no test files]
?   	github.com/apache/camel-k/v2/pkg/apis/camel/v1/trait	[no test files]
ok  	github.com/apache/camel-k/v2/pkg/apis/camel/v1	(cached)
ok  	github.com/apache/camel-k/v2/pkg/apis/camel/v1alpha1	(cached)
cd pkg/client/camel && go test ./...
?   	github.com/apache/camel-k/v2/pkg/client/camel/applyconfiguration	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/applyconfiguration/camel/v1alpha1	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/applyconfiguration/camel/v1	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/applyconfiguration/internal	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/clientset/versioned	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/clientset/versioned/fake	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/clientset/versioned/scheme	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/clientset/versioned/typed/camel/v1	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/clientset/versioned/typed/camel/v1/fake	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/clientset/versioned/typed/camel/v1alpha1	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/clientset/versioned/typed/camel/v1alpha1/fake	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/informers/externalversions	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/informers/externalversions/camel	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/informers/externalversions/camel/v1	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/informers/externalversions/camel/v1alpha1	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/informers/externalversions/internalinterfaces	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/listers/camel/v1	[no test files]
?   	github.com/apache/camel-k/v2/pkg/client/camel/listers/camel/v1alpha1	[no test files]
cd pkg/kamelet/repository && go test ./...
ok  	github.com/apache/camel-k/v2/pkg/kamelet/repository	(cached)
####### Building kamel CLI...
go build -ldflags "-X github.com/apache/camel-k/v2/pkg/util/defaults.GitCommit=37e589dcdc91d47ead5f1eb7677ac9830373c032" -trimpath -o kamel ./cmd/kamel/*.go
go test -run nope -tags="integration" ./e2e/...
ok  	github.com/apache/camel-k/v2/e2e/builder	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/common/cli	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/common/config	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/common/languages	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/common/misc	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/common/support	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/common/traits	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/commonwithcustominstall	(cached) [no tests to run]
?   	github.com/apache/camel-k/v2/e2e/support	[no test files]
?   	github.com/apache/camel-k/v2/e2e/support/util	[no test files]
ok  	github.com/apache/camel-k/v2/e2e/install/cli	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/install/kustomize	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/install/olm	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/knative	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/knative/support	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/native	(cached) [no tests to run]
ok  	github.com/apache/camel-k/v2/e2e/telemetry	(cached) [no tests to run]
./script/build_submodules.sh
Building submodule pkg/apis/camel...
Building submodule pkg/client/camel...
Building submodule pkg/kamelet/repository...

Verify

Output
./kamel version
Camel K Client 2.0.0-SNAPSHOT

Push the image to my custom repository

Output
make STAGING_IMAGE_NAME='docker.io/mert1ozturk/camel-k-local' images-push-staging
image

Then I run the following command to install the operator into my personal cluster:

./kamel install \
--global \
--operator-image=docker.io/mert1ozturk/camel-k-local:2.0.0-SNAPSHOT \
--operator-image-pull-policy=Always \
--olm=false \
--registry gcr.io \
--build-publish-strategy=Kaniko \
--organization mert-personal-cluster \
--registry-secret kaniko-secret \
--maven-repository https://repo1.maven.org/maven2/ \
--operator-resources requests.memory=4096Mi \
--operator-resources limits.memory=4096Mi \
--monitoring=true \
--monitoring-port=8888 \
--force \
--namespace camel
Output
A persistent volume claim for "camel-k-pvc" already exist, reusing it
Warning: the operator won't be able to detect a local image registry via KEP-1755
Camel K installed in namespace camel  (global mode)

However, the operator pod keeps going into CrashLoopBackOff.

kubectl logs camel-k-operator-5dd5f7757-6lcqx:

exec /usr/local/bin/kamel: exec format error

kubectl describe pod camel-k-operator-5dd5f7757-6lcqx:

Name:             camel-k-operator-5dd5f7757-6lcqx
Namespace:        camel
Priority:         0
Service Account:  camel-k-operator
Node:             gk3-mert-personal-cluste-nap-bgf1df2c-6b37dc78-dspv/10.156.0.14
Start Time:       Thu, 30 Mar 2023 21:47:20 +0200
Labels:           app=camel-k
                  app.kubernetes.io/component=operator
                  app.kubernetes.io/name=camel-k
                  app.kubernetes.io/version=2.0.0-SNAPSHOT
                  camel.apache.org/component=operator
                  name=camel-k-operator
                  pod-template-hash=5dd5f7757
Annotations:      <none>
Status:           Running
IP:               10.90.0.141
IPs:
  IP:           10.90.0.141
Controlled By:  ReplicaSet/camel-k-operator-5dd5f7757
Containers:
  camel-k-operator:
    Container ID:  containerd://0d6f4e538b97beaa31b93f21b0266076a80b4cc88b6f2ba88f36e19684365716
    Image:         docker.io/mert1ozturk/camel-k-local:2.0.0-SNAPSHOT
    Image ID:      docker.io/mert1ozturk/camel-k-local@sha256:956f88f89e25d29fe111894bca908dae437ffef5264686302fe6adc5f27555ff
    Port:          8888/TCP
    Host Port:     0/TCP
    Command:
      kamel
      operator
    Args:
      --monitoring-port=8888
      --health-port=8081
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 30 Mar 2023 21:48:19 +0200
      Finished:     Thu, 30 Mar 2023 21:48:19 +0200
    Ready:          False
    Restart Count:  3
    Limits:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             4Gi
    Requests:
      cpu:                750m
      ephemeral-storage:  1Gi
      memory:             4Gi
    Liveness:             http-get http://:8081/healthz delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      WATCH_NAMESPACE:
      OPERATOR_NAME:      camel-k
      OPERATOR_ID:        camel-k
      POD_NAME:           camel-k-operator-5dd5f7757-6lcqx (v1:metadata.name)
      NAMESPACE:          camel (v1:metadata.namespace)
      KAMEL_OPERATOR_ID:  camel-k
      LOG_LEVEL:          info
    Mounts:
      /etc/maven/m2 from camel-k-pvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-55ql2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  camel-k-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  camel-k-pvc
    ReadOnly:   false
  kube-api-access-55ql2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 kubernetes.io/arch=amd64:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                From                                   Message
  ----     ------                  ----               ----                                   -------
  Normal   Scheduled               98s                gke.io/optimize-utilization-scheduler  Successfully assigned camel/camel-k-operator-5dd5f7757-6lcqx to gk3-mert-personal-cluste-nap-bgf1df2c-6b37dc78-dspv
  Normal   SuccessfulAttachVolume  94s                attachdetach-controller                AttachVolume.Attach succeeded for volume "pvc-eae32660-8e5b-4e7f-95da-617ff3b548f8"
  Normal   Pulled                  90s                kubelet                                Successfully pulled image "docker.io/mert1ozturk/camel-k-local:2.0.0-SNAPSHOT" in 981.31173ms (981.329471ms including waiting)
  Normal   Pulled                  89s                kubelet                                Successfully pulled image "docker.io/mert1ozturk/camel-k-local:2.0.0-SNAPSHOT" in 942.970187ms (942.980089ms including waiting)
  Normal   Pulled                  69s                kubelet                                Successfully pulled image "docker.io/mert1ozturk/camel-k-local:2.0.0-SNAPSHOT" in 992.0494ms (992.106293ms including waiting)
  Normal   Pulling                 41s (x4 over 91s)  kubelet                                Pulling image "docker.io/mert1ozturk/camel-k-local:2.0.0-SNAPSHOT"
  Normal   Created                 40s (x4 over 90s)  kubelet                                Created container camel-k-operator
  Normal   Pulled                  40s                kubelet                                Successfully pulled image "docker.io/mert1ozturk/camel-k-local:2.0.0-SNAPSHOT" in 943.56844ms (943.594041ms including waiting)
  Normal   Started                 39s (x4 over 90s)  kubelet                                Started container camel-k-operator
  Warning  BackOff                 22s (x9 over 87s)  kubelet                                Back-off restarting failed container

kubectl describe ip camel-k:

Name:         camel-k
Namespace:    camel
Labels:       app=camel-k
Annotations:  camel.apache.org/operator.id: camel-k
API Version:  camel.apache.org/v1
Kind:         IntegrationPlatform
Metadata:
  Creation Timestamp:  2023-03-30T19:47:21Z
  Generation:          1
  Managed Fields:
    API Version:  camel.apache.org/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:camel.apache.org/operator.id:
        f:labels:
          .:
          f:app:
      f:spec:
        .:
        f:build:
          .:
          f:maven:
            .:
            f:settings:
              .:
              f:configMapKeyRef:
                .:
                f:key:
                f:name:
            f:settingsSecurity:
          f:publishStrategy:
          f:registry:
            .:
            f:address:
            f:organization:
            f:secret:
        f:kamelet:
        f:traits:
    Manager:         kamel
    Operation:       Update
    Time:            2023-03-30T19:47:21Z
  Resource Version:  1020826
  UID:               f57a87d1-dae7-4b74-86df-5ef6ca9e714a
Spec:
  Build:
    Maven:
      Settings:
        Config Map Key Ref:
          Key:   settings.xml
          Name:  camel-k-maven-settings
      Settings Security:
    Publish Strategy:  Kaniko
    Registry:
      Address:       gcr.io
      Organization:  mert-personal-cluster
      Secret:        kaniko-secret
  Kamelet:
  Traits:
Events:  <none>

I uninstall everything with ./kamel uninstall --all:

Camel K Integration Platform removed from namespace camel
Camel K Config Maps removed from namespace camel
Camel K Registry Secret removed from namespace camel
Camel K Platform Kamelets removed from namespace camel
Camel K Operator removed from namespace camel
Camel K Role Bindings removed from namespace camel
Camel K Roles removed from namespace camel
Camel K Service Accounts removed from namespace camel
Camel K Custom Resource Definitions removed from cluster
Camel K Cluster Role Bindings removed from cluster
Camel K Cluster Roles removed from cluster

Since there weren't any useful logs or events (not useful to me, at least), I got frustrated after a couple of tries and decided to proceed with the e2e tests by running make test-common.

As you said, I changed my Makefile's test-common command from:

test-common: do-build
	FAILED=0; STAGING_RUNTIME_REPO="$(STAGING_RUNTIME_REPO)"; \
	go test -timeout 30m -v ./e2e/common/support/startup_test.go -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/languages -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/cli -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/config -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/misc -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/traits -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/support/teardown_test.go -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	exit $${FAILED}

to:

test-common: do-build
	FAILED=0; STAGING_RUNTIME_REPO="$(STAGING_RUNTIME_REPO)"; \
	go test -timeout 30m -v ./e2e/common/support/startup_test.go -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/traits/health_test.go -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/support/teardown_test.go -tags=integration $(TEST_INTEGRATION_COMMON_LANG_RUN) $(GOTESTFMT) || FAILED=1; \
	exit $${FAILED}

Then I ran make test-common. (I think the problem I am facing is the cannot find a registry where to push images error, but I am not sure how to configure my own GKE image registry settings for test-common the way I do with the custom kamel install command I shared above.)

Output
go install github.com/gotesttools/gotestfmt/v2/cmd/gotestfmt@latest
FAILED=0; STAGING_RUNTIME_REPO=""; \
	go test -timeout 30m -v ./e2e/common/support/startup_test.go -tags=integration   || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/languages -tags=integration   || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/cli -tags=integration   || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/config -tags=integration   || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/misc -tags=integration   || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/traits -tags=integration   || FAILED=1; \
	go test -timeout 30m -v ./e2e/common/support/teardown_test.go -tags=integration   || FAILED=1; \
	exit ${FAILED}
=== RUN   TestCommonCamelKInstallStartup
OLM is not available in the cluster. Fallback to regular installation.
Using storage class "standard-rwo" to create "camel-k-pvc" volume for the operator
Warning: the operator won't be able to detect a local image registry via KEP-1755
Error: cannot find a registry where to push images
    startup_test.go:46:
        Expected success, but got an error:
            <*errors.fundamental | 0x14001212c60>:
            cannot find a registry where to push images
            {
                msg: "cannot find a registry where to push images",
                stack: [0x10217fec5, 0x1022d1adc, 0x1022d102c, 0x1022d05cc, 0x1022d0344, 0x101986384, 0x101986ac0, 0x10232b004, 0x10232af9d, 0x100c26294, 0x100b70544],
            }
--- FAIL: TestCommonCamelKInstallStartup (4.67s)
FAIL
FAIL	command-line-arguments	5.163s
FAIL
=== RUN   TestRunSimpleGroovyExamples
=== RUN   TestRunSimpleGroovyExamples/run_groovy
No IntegrationPlatform resource in test-269de0af-266f-4b6d-9b47-8a430bc68615 namespace
Error: unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
=== NAME  TestRunSimpleGroovyExamples
    groovy_test.go:40:
        Expected success, but got an error:
            <*errors.errorString | 0x14000e4a0e0>:
            unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
            {
                s: "unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state",
            }
=== NAME  TestRunSimpleGroovyExamples/run_groovy
    testing.go:1471: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
--- FAIL: TestRunSimpleGroovyExamples (0.47s)
    --- FAIL: TestRunSimpleGroovyExamples/run_groovy (0.47s)
=== RUN   TestRunSimpleJavaExamples
=== RUN   TestRunSimpleJavaExamples/run_java
No IntegrationPlatform resource in test-269de0af-266f-4b6d-9b47-8a430bc68615 namespace
Error: unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
=== NAME  TestRunSimpleJavaExamples
    java_test.go:40:
        Expected success, but got an error:
            <*errors.errorString | 0x140003f7860>:
            unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
            {
                s: "unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state",
            }
=== NAME  TestRunSimpleJavaExamples/run_java
    testing.go:1471: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
--- FAIL: TestRunSimpleJavaExamples (0.26s)
    --- FAIL: TestRunSimpleJavaExamples/run_java (0.26s)
=== RUN   TestRunSimpleJavaScriptExamples
=== RUN   TestRunSimpleJavaScriptExamples/run_js
No IntegrationPlatform resource in test-269de0af-266f-4b6d-9b47-8a430bc68615 namespace
Error: unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
=== NAME  TestRunSimpleJavaScriptExamples
    js_test.go:40:
        Expected success, but got an error:
            <*errors.errorString | 0x140006e46f0>:
            unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
            {
                s: "unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state",
            }
=== NAME  TestRunSimpleJavaScriptExamples/run_js
    testing.go:1471: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
--- FAIL: TestRunSimpleJavaScriptExamples (0.26s)
    --- FAIL: TestRunSimpleJavaScriptExamples/run_js (0.26s)
=== RUN   TestRunSimpleKotlinExamples
=== RUN   TestRunSimpleKotlinExamples/run_kotlin
No IntegrationPlatform resource in test-269de0af-266f-4b6d-9b47-8a430bc68615 namespace
Error: unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
=== NAME  TestRunSimpleKotlinExamples
    kotlin_test.go:40:
        Expected success, but got an error:
            <*errors.errorString | 0x14000f3c540>:
            unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
            {
                s: "unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state",
            }
=== NAME  TestRunSimpleKotlinExamples/run_kotlin
    testing.go:1471: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
--- FAIL: TestRunSimpleKotlinExamples (0.25s)
    --- FAIL: TestRunSimpleKotlinExamples/run_kotlin (0.25s)
=== RUN   TestRunPolyglotExamples
=== RUN   TestRunPolyglotExamples/run_polyglot
No IntegrationPlatform resource in test-269de0af-266f-4b6d-9b47-8a430bc68615 namespace
Error: unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
=== NAME  TestRunPolyglotExamples
    polyglot_test.go:40:
        Expected success, but got an error:
            <*errors.errorString | 0x14000cec950>:
            unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
            {
                s: "unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state",
            }
=== NAME  TestRunPolyglotExamples/run_polyglot
    testing.go:1471: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
--- FAIL: TestRunPolyglotExamples (0.25s)
    --- FAIL: TestRunPolyglotExamples/run_polyglot (0.25s)
=== RUN   TestRunSimpleXmlExamples
=== RUN   TestRunSimpleXmlExamples/run_xml
No IntegrationPlatform resource in test-269de0af-266f-4b6d-9b47-8a430bc68615 namespace
Error: unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
=== NAME  TestRunSimpleXmlExamples
    xml_test.go:40:
        Expected success, but got an error:
            <*errors.errorString | 0x1400060e450>:
            unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
            {
                s: "unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state",
            }
=== NAME  TestRunSimpleXmlExamples/run_xml
    testing.go:1471: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
--- FAIL: TestRunSimpleXmlExamples (0.24s)
    --- FAIL: TestRunSimpleXmlExamples/run_xml (0.24s)
=== RUN   TestRunSimpleYamlExamples
=== RUN   TestRunSimpleYamlExamples/run_yaml
No IntegrationPlatform resource in test-269de0af-266f-4b6d-9b47-8a430bc68615 namespace
Error: unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
=== NAME  TestRunSimpleYamlExamples
    yaml_test.go:40:
        Expected success, but got an error:
            <*errors.errorString | 0x14000cd4a00>:
            unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state
            {
                s: "unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state",
            }
=== NAME  TestRunSimpleYamlExamples/run_yaml
    testing.go:1471: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
--- FAIL: TestRunSimpleYamlExamples (0.24s)
    --- FAIL: TestRunSimpleYamlExamples/run_yaml (0.24s)
FAIL
FAIL	github.com/apache/camel-k/v2/e2e/common/languages	2.433s
FAIL
=== RUN   TestKamelCLIBind
=== RUN   TestKamelCLIBind/bind_timer_to_log
No IntegrationPlatform resource in test-269de0af-266f-4b6d-9b47-8a430bc68615 namespace
unable to find operator with given id [test-269de0af-266f-4b6d-9b47-8a430bc68615] - resource may not be reconciled and get stuck in waiting state

My findings from the created (and failed) test operator:

kubectl get pods -l "app=camel-k" --all-namespaces:

NAMESPACE                                   NAME                               READY   STATUS             RESTARTS   AGE
test-269de0af-266f-4b6d-9b47-8a430bc68615   camel-k-operator-8c4c9fb7c-rtwtc   0/1     ImagePullBackOff   0          4m7s

kubectl get pod camel-k-operator-8c4c9fb7c-rtwtc -o yaml:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-03-30T19:54:26Z"
  generateName: camel-k-operator-8c4c9fb7c-
  labels:
    app: camel-k
    app.kubernetes.io/component: operator
    app.kubernetes.io/name: camel-k
    app.kubernetes.io/version: 2.0.0-SNAPSHOT
    camel.apache.org/component: operator
    name: camel-k-operator
    pod-template-hash: 8c4c9fb7c
  name: camel-k-operator-8c4c9fb7c-rtwtc
  namespace: test-269de0af-266f-4b6d-9b47-8a430bc68615
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: camel-k-operator-8c4c9fb7c
    uid: 54d02711-ce48-4f4e-8c19-a42719e500f1
  resourceVersion: "1027858"
  uid: aacf0833-fa4d-432f-b729-552e3b26c634
spec:
  containers:
  - args:
    - --monitoring-port=8080
    - --health-port=8081
    command:
    - kamel
    - operator
    env:
    - name: WATCH_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: OPERATOR_NAME
      value: camel-k
    - name: OPERATOR_ID
      value: camel-k
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: KAMEL_OPERATOR_ID
      value: test-269de0af-266f-4b6d-9b47-8a430bc68615
    - name: LOG_LEVEL
      value: info
    image: docker.io/apache/camel-k:2.0.0-SNAPSHOT
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 8081
        scheme: HTTP
      initialDelaySeconds: 20
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: camel-k-operator
    ports:
    - containerPort: 8080
      name: metrics
      protocol: TCP
    resources:
      limits:
        cpu: 500m
        ephemeral-storage: 1Gi
        memory: 2Gi
      requests:
        cpu: 500m
        ephemeral-storage: 1Gi
        memory: 2Gi
    securityContext:
      capabilities:
        drop:
        - NET_RAW
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/maven/m2
      name: camel-k-pvc
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-bkldt
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: gk3-mert-personal-cluste-nap-bgf1df2c-6b37dc78-dspv
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: gke.io/optimize-utilization-scheduler
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: camel-k-operator
  serviceAccountName: camel-k-operator
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: kubernetes.io/arch
    operator: Equal
    value: amd64
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: camel-k-pvc
    persistentVolumeClaim:
      claimName: camel-k-pvc
  - name: kube-api-access-bkldt
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-03-30T19:54:30Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-03-30T19:54:30Z"
    message: 'containers with unready status: [camel-k-operator]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-03-30T19:54:30Z"
    message: 'containers with unready status: [camel-k-operator]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-03-30T19:54:30Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: docker.io/apache/camel-k:2.0.0-SNAPSHOT
    imageID: ""
    lastState: {}
    name: camel-k-operator
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        message: Back-off pulling image "docker.io/apache/camel-k:2.0.0-SNAPSHOT"
        reason: ImagePullBackOff
  hostIP: 10.156.0.14
  phase: Pending
  podIP: 10.90.0.142
  podIPs:
  - ip: 10.90.0.142
  qosClass: Guaranteed
  startTime: "2023-03-30T19:54:30Z"

It's trying to pull the upstream Apache image (`Back-off pulling image "docker.io/apache/camel-k:2.0.0-SNAPSHOT"`). How do I configure the tests to use my own image?

Sorry for the dreadfully long comment. 😬

@squakez
Contributor

squakez commented Mar 31, 2023

@mertdotcc yeah, the main problem is that the E2E tests are designed to run against a local cluster. Basically, they take care of installing everything the test needs using a local registry. It would be possible to tweak them to run remotely the way you're trying to, but I don't think it's worth going in that direction. Local development and testing with Minikube or Kind would be much easier.
IMO, you should try running Minikube locally and see how the development experience compares. Basically it boils down to the following steps:

```shell
minikube start
minikube addons enable registry          # only the first time, after installing minikube
eval $(minikube -p minikube docker-env)  # once per command shell
make images                              # pushes your local operator image where it is expected to be
make test-common
```

If you're planning to work on Camel K, I think in the long run you'll see a lot of benefits compared to deploying directly to a remote cluster. Feel free to reach out for any more advice.

@mertdotcc
Contributor Author

My reasons for choosing a remote cluster over minikube were:

  1. I already have a personal cluster with enough nodes and resources dedicated to this kind of testing and building.
  2. I am using an M1 Mac with 16GB of RAM, and even without Docker Engine or minikube running I am already at around 80% memory utilization. 😕
  3. In the past (over 2 years ago) I saw different behaviour when deploying the exact same manifests to a real cluster vs a minikube cluster, and since then I have been a bit hesitant to work with minikube. (That might well be due to my lack of experience at the time, or maybe minikube has come a long way since.) I feel like when I run full-fledged tests with Prometheus and Jaeger enabled, doing smoke tests and sending 100,000 requests using iter8 or similar, minikube won't hold up, and I would rather work in a real cluster.

That being said, I will follow the steps you mentioned for minikube and switch my workflow over to it.

Do you have any idea what might have been wrong with my cluster, though? Even minor suspicions would give me something to follow and explore in my free time.

Thanks.

@squakez
Contributor

squakez commented Mar 31, 2023

@mertdotcc yeah, I see your points. And it's true: with a limited-resource machine, you have no option other than delegating that work to a remote cluster. So I am thinking we could introduce a couple of environment variables in the E2E suite's operator installation [1] to control the following parameters:

      --operator-image string                       Set the operator Image used for the operator deployment
      --operator-image-pull-policy string           Set the operator ImagePullPolicy used for the operator deployment

so the end user can point to a custom image (as in your case) or control the pull policy. I am opening an issue to track this separately; feel free to contribute to it as well. In the meantime, if you are unable to run things locally, I suggest pushing the changes and looking at the CI check results. Fortunately a full test cycle now completes in less than an hour.

[1]

cmdArgs = []string{command, "-n", namespace, "--operator-id", operatorID}

@mertdotcc
Contributor Author

Those parameters you mentioned, especially `--operator-image`, would come in super handy for me, and I can't be the only one. Thanks!

I will first try minikube; if that doesn't work (due to the limited resources on my machine), I will push my changes to this PR and start looking at the issue you just created.
