
Failed to pull image that is pushed with minikube image build #16036

Closed
VasilisManol opened this issue Mar 13, 2023 · 12 comments · Fixed by #16214
Labels
area/image (Issues/PRs related to the minikube image subcommand), co/runtime/containerd, co/runtime/crio (CRIO related issues), kind/bug (Categorizes issue or PR as related to a bug)

Comments

@VasilisManol

What Happened?

I am running the latest minikube on Ubuntu 22.04 with rootless Docker, started with:

 $> minikube start --container-runtime=containerd --driver=docker 
😄  minikube v1.29.0 on Ubuntu 22.04
✨  Using the docker driver based on user configuration
📌  Using rootless Docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
📦  Preparing Kubernetes v1.26.1 on containerd 1.6.15 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

I tried to build a local image directly inside minikube with the image build command, following the guide: https://minikube.sigs.k8s.io/docs/handbook/pushing/

The image is built correctly and it is listed when I run:
minikube image ls
However, when I try to create a pod with it:
minikube kubectl -- run testapp2 --image=testapp2 --image-pull-policy=Never --restart=Never
I get:
"PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/testapp2:latest\": failed to resolve reference \"docker.io/library/testapp2:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="testapp2:latest"

The same image, built on the host with docker build and then loaded with minikube image load, works exactly as expected for pods started with the same command. The whole scenario is in the attached log file.
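
For comparison, a condensed sketch of the two workflows (the build context "." is a placeholder; the full transcript is in the attached log file):

# works: image built on the host and loaded into minikube
docker build -t testapp2 .
minikube image load testapp2
minikube kubectl -- run testapp2 --image=testapp2 --image-pull-policy=Never --restart=Never

# fails: image built directly inside minikube
minikube image build -t testapp2 .
minikube kubectl -- run testapp2 --image=testapp2 --image-pull-policy=Never --restart=Never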

An interesting, possibly related, observation is that the image built with minikube image build cannot be removed with minikube image rm:

registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/testapp2:latest
docker.io/kindest/kindnetd:v20221004-44d545d1
test>minikube image rm testapp2
❗  Failed to remove images for profile minikube error removing images: crictl: sudo /usr/bin/crictl rmi testapp2: Process exited with status 1
stdout:

stderr:
time="2023-03-13T10:41:39Z" level=error msg="no such image testapp2"
time="2023-03-13T10:41:39Z" level=fatal msg="unable to remove the image(s)"```
The exact same command works fine with images loaded via 'minikube image load'.

### Attach the log file

[minikube.log](https://github.com/kubernetes/minikube/files/10956082/minikube.log)


### Operating System

Ubuntu

### Driver

Docker
@afbjorklund
Collaborator

The removal failure is a bug; I am not sure if it has a separate ticket, but crictl rmi needs to use the full image ID for it to work.
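
For example, a rough workaround sketch (my own commands, assuming the image is visible to containerd inside the node):

minikube ssh -- sudo crictl images
minikube ssh -- sudo crictl rmi <full-image-id>

The first command lists the full image IDs known to containerd; passing that ID (or the fully qualified reference, e.g. docker.io/library/testapp2:latest) to crictl rmi removes the image where the bare short name does not.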

@afbjorklund
Collaborator

I wonder if it is a mismatch between testapp2 and docker.io/library/testapp2:latest, even though they are the same image?

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. co/runtime/containerd labels Mar 13, 2023
@afbjorklund
Collaborator

afbjorklund commented Mar 13, 2023

It seems like both the build and the run need to use the full image name, otherwise containerd can't find it.

minikube image build -t testapp2 testbuild
naming to testapp2:latest done

minikube kubectl -- run testapp2 --image=testapp2:latest --image-pull-policy=Never --restart=Never
Container image "testapp2:latest" is not present with pull policy of Never

"docker.io/library/testapp2:latest"

minikube image build -t docker.io/library/testapp2:latest testbuild

minikube kubectl -- run testapp2 --image=docker.io/library/testapp2:latest --image-pull-policy=Never --restart=Never


Otherwise buildkitd tags the image as "testapp2:latest", but containerd applies a magic docker.io/library prefix.

The prefix then gets inconsistently hidden or added by different kube commands, because there is no standard.

Maybe it should add a fake registry prefix, like podman does?

"localhost/testapp2:latest"

@afbjorklund
Collaborator

afbjorklund commented Mar 13, 2023

Then again, the deployment fails on podman/CRI-O too, mostly for the same reason: magic prefixes.

minikube image build -t testapp2 testbuild
Successfully tagged localhost/testapp2:latest

minikube kubectl -- run testapp2 --image=testapp2:latest --image-pull-policy=Never --restart=Never
Container image "testapp2:latest" is not present with pull policy of Never

registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
localhost/testapp2:latest

So that is not a solution. 😭

@afbjorklund afbjorklund added the area/image Issues/PRs related to the minikube image subcommand label Mar 13, 2023
@VasilisManol
Author

Thanks @afbjorklund! You are right, using the full image name worked, and then the rm command worked as well. Unfortunately, I don't have enough knowledge of containerd or minikube to explain why :)

@afbjorklund
Collaborator

afbjorklund commented Mar 13, 2023

Once upon a time, everything just used docker.io and amd64; it was the default, and it was hardcoded everywhere.

@afbjorklund afbjorklund added the co/runtime/crio CRIO related issues label Mar 13, 2023
@afbjorklund
Collaborator

Kubernetes calls this function on all image references:

https://pkg.go.dev/github.com/distribution/distribution/reference#ParseNormalizedNamed

But then it still keeps using the short name internally:

// applyDefaultImageTag parses a docker image string, if it doesn't contain any tag or digest,
// a default tag will be applied.
func applyDefaultImageTag(image string) (string, error) {
        named, err := dockerref.ParseNormalizedNamed(image)
        if err != nil {
                return "", fmt.Errorf("couldn't parse image reference %q: %v", image, err)
        }
        _, isTagged := named.(dockerref.Tagged)
        _, isDigested := named.(dockerref.Digested)
        if !isTagged && !isDigested {
                // we just concatenate the image name with the default tag here instead
                // of using dockerref.WithTag(named, ...) because that would cause the
                // image to be fully qualified as docker.io/$name if it's a short name
                // (e.g. just busybox). We don't want that to happen to keep the CRI
                // agnostic wrt image names and default hostnames.
                image = image + ":latest"
        }
        return image, nil
}
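
A minimal illustration of what that normalization does (my own sketch, not kubelet code; the import path follows the pkg.go.dev link above, and in newer versions the package lives at github.com/distribution/reference):

package main

import (
        "fmt"

        dockerref "github.com/distribution/distribution/reference"
)

func main() {
        // The short name is normalized to a fully qualified reference ...
        named, err := dockerref.ParseNormalizedNamed("testapp2")
        if err != nil {
                panic(err)
        }
        fmt.Println(named.String())                // docker.io/library/testapp2
        fmt.Println(dockerref.FamiliarName(named)) // testapp2

        // ... but applyDefaultImageTag above only appends ":latest" to the original
        // short string, so whether the docker.io/library/ prefix ends up applied
        // depends on the component (buildkitd tags "testapp2:latest", containerd
        // resolves "docker.io/library/testapp2:latest"), which is the mismatch
        // discussed in this issue.
}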

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 11, 2023
@vaibhav2107
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 4, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 25, 2024
@vaibhav2107
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 23, 2024