
Error creating Integration with Kaniko build strategy and Docker Desktop #4158

Closed
fvcortes opened this issue Mar 23, 2023 · 6 comments · Fixed by #4746

Comments


fvcortes commented Mar 23, 2023

Actual behaviour

When creating an Integration with the Kaniko build strategy and Docker Desktop, the kaniko container exits with:
kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue

Expected behaviour
The camel-k-kit builder pod should reach STATUS Completed.

Steps to reproduce
First, have a Docker Desktop installation running on a Linux machine, installed as described here.

  • Check docker context
$ docker context ls
NAME            DESCRIPTION                               DOCKER ENDPOINT                                  ...
default         Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                      ...
desktop-linux *                                           unix:///home/<user>/.docker/desktop/docker.sock  ...
  • Create a GCP Service Account and configure the gcr.io registry correctly, as described here

Note: before installing Camel K on the cluster, create a docker-registry secret with the downloaded JSON key:

$ kubectl create secret docker-registry gcr-json-key \
 --docker-server=gcr.io \
 --docker-username=_json_key \
 --docker-password="$(cat <downloaded-key-name>.json)" \
 [email protected]
  • Wait for IntegrationPlatform to be Ready
    $ kubectl get IntegrationPlatform

  • Edit spec traits to include registry secret
    $ kubectl edit IntegrationPlatform camel-k

Add the traits spec:

  traits:
    pull-secret:
      secretName: gcr-json-key
  • Create a custom integration file with basic Integration template
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: test  
spec:
  sources:
  - name: main.groovy
    content: |-
      rest("/test")
          .post()
          .to("direct:start")

      from("direct:start").to("log:info")
  • Apply integration
    $ kubectl apply -f integration.yaml

  • Wait for camel-k-kit builder container to finish building

{"level":"info","ts":1679596880.5453982,"logger":"camel-k.builder","msg":"base image: docker.io/eclipse-temurin:11"}
{"level":"info","ts":1679596880.545404,"logger":"camel-k.builder","msg":"resolved base image: gcr.io/registry-ipaas-testing/camel-k-kit-cgab3vp5ive1pd2uvvng:15039554"}
  • Get kaniko container logs on builder pod
    Get the builder pod name:
    $ kubectl get pods
NAME                      
camel-k-kit-cgec2trs3c7ebgv37nd0-builder
camel-k-operator-9977dd584-8g9v4 

Get the pod logs:
$ kubectl logs camel-k-kit-cgec2trs3c7ebgv37nd0-builder -f -c kaniko

Output should be:
kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue

Problem Resolution

Changing the docker context to use Docker Engine resolved the problem:
$ docker context use default

Then recreate Integration.

I know this is a very specific issue. I struggled for some time to discover that the root of the problem was the Docker Desktop context, but it is worth recording here for the sake of documentation, in case someone runs into a similar error.
I don't know whether this is a Kaniko-specific error or something related to the Camel K build strategy configuration.
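A possible explanation (an assumption on my part, not confirmed in this thread): kaniko's safeguard looks for container-runtime hints, such as keywords in /proc/self/cgroup, and on hosts using the cgroup v2 unified hierarchy (as newer Docker Desktop setups do) that file may contain only `0::/`, so the detection fails even inside a real container. A minimal shell sketch of that style of check (the keywords and logic are illustrative, not kaniko's actual code):

```shell
#!/bin/sh
# Hypothetical sketch of a cgroup-based "am I in a container?" check,
# similar in spirit to kaniko's safeguard (exact keywords are an assumption).
detect_runtime() {
  # $1: the contents of /proc/self/cgroup
  case "$1" in
    *docker*|*kubepods*|*containerd*) echo "container" ;;
    *) echo "not-a-container" ;;
  esac
}

# cgroup v1: the runtime name appears in the hierarchy path
detect_runtime "12:cpuset:/docker/abc123"   # container
# cgroup v2 unified hierarchy: no runtime hint at all
detect_runtime "0::/"                       # not-a-container
```

Under cgroup v2 the check above has nothing to match, which would produce exactly the "should only be run inside of a container" refusal seen in the logs.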


squakez commented Mar 24, 2023

Thanks for reporting. Yes, it looks like something specific to Kaniko. This behavior is probably something we should eventually document, as requested in #3336.

@pintomic

Looks like we're running into the same issue on AKS after upgrading to Kubernetes version 1.25. We're running Camel K in version 1.10.4. Upgrading the operator to 1.12.0 didn't solve the problem.
Looking forward to any advice on how to fix the issue.


squakez commented Apr 24, 2023

> Looks like we're running into the same issue on AKS after upgrading to kubernetes version 1.25. We're running camel k in version 1.10.4. Upgrading the operator to 1.12.0 didn't solve the problem. Looking forward for any advice on how to fix the issue.

Have you looked at the workaround provided in the previous comment?

@pintomic

@squakez I'm not sure how I could apply the mentioned problem resolution on an Azure Kubernetes Service. We were able to solve the issue by using Spectrum as the publishStrategy, which re-enabled the push to the registry.
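For reference, a minimal sketch of how that workaround can be expressed on the IntegrationPlatform (field names are as I understand the Camel K v1 resources; the platform name `camel-k` matches the one edited earlier in this thread, so verify against your installation):

```yaml
apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k
spec:
  build:
    # Switch away from the default Kaniko strategy (assumption: Spectrum is
    # available in the installed operator version, as reported above for 1.10.4)
    publishStrategy: Spectrum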


squakez commented Apr 24, 2023

Okay, it seems it's probably something related to Kaniko and the new version of Kubernetes. I'm not sure how to work around it. I have seen there is a new version of Kaniko (1.9.2), but I'm not sure whether it is going to fix this kind of problem. I suggest you try to execute a sample Kaniko build in the same cluster to verify whether the behavior is the same.
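One way to run such a sample build is a throwaway Pod that invokes the Kaniko executor directly with --no-push, just to see whether the executor starts on the cluster at all. This is a hedged sketch, not a tested manifest: the executor image tag, the init-container trick for providing a Dockerfile, and the minimal `FROM scratch` Dockerfile are all assumptions to be adjusted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-smoke-test
spec:
  restartPolicy: Never
  initContainers:
  # Write a trivial Dockerfile into the shared build context
  - name: prepare-context
    image: busybox
    command: ["sh", "-c", "echo 'FROM scratch' > /workspace/Dockerfile"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.9.2
    args:
    - --dockerfile=/workspace/Dockerfile
    - --context=dir:///workspace
    # Build only; skip registry credentials entirely for this smoke test
    - --no-push
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  volumes:
  - name: workspace
    emptyDir: {}
```

If this pod fails with the same "kaniko should only be run inside of a container" message, the problem is in Kaniko's environment detection rather than in the Camel K builder configuration.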

@github-actions

This issue has been automatically marked as stale due to 90 days of inactivity.
It will be closed if no further activity occurs within 15 days.
If you think that's incorrect or the issue should never go stale, please simply write any comment.
Thanks for your contributions!
