CIDR insecure registries not supported for containerd #15597

Closed
cvila84 opened this issue Jan 5, 2023 · 10 comments
Labels
co/runtime/containerd kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

cvila84 commented Jan 5, 2023

What Happened?

While testing the replacement of docker by containerd as the container runtime, we found that an insecure registry cannot be declared in CIDR format:

minikube start --driver virtualbox --container-runtime containerd --insecure-registry 10.10.0.0/16

The VM is created, but any subsequent pull from an insecure registry whose address resolves into the declared CIDR range fails as if containerd were attempting a secure pull (the usual message of the kind x509: certificate signed by unknown authority):

❯ kubectl describe pod cassandra-0
Name:         cassandra-0
Namespace:    default
Priority:     0
Node:         minikube/192.168.99.100
Start Time:   Thu, 05 Jan 2023 18:31:39 +0100
Labels:       app=cassandra
              controller-revision-hash=cassandra-86f44bb8f4
              statefulset.kubernetes.io/pod-name=cassandra-0
Annotations:  <none>
Status:       Pending
IP:           10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  StatefulSet/cassandra
Containers:
  cassandra:
    Container ID:
    Image:          dockerhub.xxx.com/cassandra:3.11.12
    Image ID:
    Ports:          7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:
      MAX_HEAP_SIZE:  256M
      HEAP_NEWSIZE:   96M
    Mounts:
      /var/lib/cassandra from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cczvm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cassandra-data
    ReadOnly:   false
  kube-api-access-cczvm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m39s                 default-scheduler  Successfully assigned default/cassandra-0 to minikube
  Normal   Pulling    103s (x4 over 3m38s)  kubelet            Pulling image "dockerhub.xxx.com/cassandra:3.11.12"
  Warning  Failed     103s (x4 over 3m8s)   kubelet            Error: ErrImagePull
  Warning  Failed     103s (x3 over 2m54s)  kubelet            Failed to pull image "dockerhub.xxx.com/cassandra:3.11.12": rpc error: code = Unknown desc = failed to pull and unpack image "dockerhub.xxx.com/cassandra:3.11.12": failed to resolve reference "dockerhub.xxx.com/cassandra:3.11.12": failed to do request: Head "https://dockerhub.xxx.com/v2/cassandra/manifests/3.11.12": x509: certificate signed by unknown authority
  Warning  Failed     92s (x6 over 3m7s)    kubelet            Error: ImagePullBackOff
  Normal   BackOff    77s (x7 over 3m7s)    kubelet            Back-off pulling image "dockerhub.xxx.com/cassandra:3.11.12"

The containerd insecure-registry configuration is taken into account, but the directory hierarchy that gets created is invalid because of the CIDR format (the slash becomes a nested directory), as the listing below shows; a sketch of the layout containerd actually expects follows it.

❯ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ ls -lR /etc/containerd/certs.d/
/etc/containerd/certs.d/:
total 0
drwxr-xr-x 3 root root 60 Jan  5 17:19 10.10.0.0
drwxr-xr-x 2 root root 60 Oct 28 21:53 docker.io

/etc/containerd/certs.d/10.10.0.0:
total 0
drwxr-xr-x 2 root root 60 Jan  5 17:19 16

/etc/containerd/certs.d/10.10.0.0/16:
total 4
-rw-r--r-- 1 root root 82 Jan  5 17:19 hosts.toml

/etc/containerd/certs.d/docker.io:
total 4
-rw-r--r-- 1 root root 39 Oct 28 21:53 hosts.toml
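
For comparison, containerd's certs.d layout is keyed by registry host name, one directory per host, each holding a hosts.toml. A minimal sketch of a working entry for the registry from the logs above, assuming the intent is only to skip TLS verification for dockerhub.xxx.com (skip_verify as documented by containerd):

# /etc/containerd/certs.d/dockerhub.xxx.com/hosts.toml
server = "https://dockerhub.xxx.com"

[host."https://dockerhub.xxx.com"]
  capabilities = ["pull", "resolve"]
  # accept the registry's untrusted or self-signed certificate
  skip_verify = true

Since the lookup is keyed by host name, a CIDR value such as 10.10.0.0/16 cannot be expressed directly in this layout; every host in the range would need its own directory.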

On top of that, I'm not sure address resolution will work the way it does with docker: if I declare 10.10.0.0/16 with docker and dockerhub.xxx.com resolves to 10.10.0.1, then any pull from this registry is done insecurely.

With docker, the insecure-registry configuration is done this way (in /lib/systemd/system/docker.service):

ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.96.0.0/12 --insecure-registry 10.10.0.0/16
ExecReload=/bin/kill -s HUP $MAINPID

Attach the log file

--- No minikube logs attached, as the VM is created normally; the error only appears during its use ---

Operating System

Windows

Driver

VirtualBox

cvila84 commented Jan 15, 2023

@afbjorklund hello! Do you need any other information to consider this an issue? Thanks!

cvila84 commented Mar 2, 2023

For clarity: CIDR insecure registries work well with docker as the container runtime, but not with containerd (this issue).

IMO this could become a wider problem, as almost everybody will move to containerd sooner or later (because of the deprecation).

@afbjorklund, what do you think?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 31, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 30, 2023

vholer commented Aug 15, 2023

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 15, 2023

afbjorklund commented Aug 15, 2023

I don't think this is implemented; the minikube registry still uses the localhost proxy hack.

EDIT: Actually it does look implemented, but only for the hosts.toml

https://github.com/containerd/containerd/blob/main/docs/hosts.md

The code that writes /etc/containerd/certs.d would need to parse the value into either a hostname or a CIDR range, and error out on CIDR ranges rather than writing an invalid directory.
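
A rough sketch (hypothetical helper, not the actual minikube code) of how that split could look, using Go's net.ParseCIDR to tell a CIDR range apart from a plain host entry:

package main

import (
	"fmt"
	"net"
)

// isCIDRRegistry is a hypothetical helper: it reports whether an
// --insecure-registry value is a CIDR range, which the host-keyed
// /etc/containerd/certs.d layout cannot represent directly.
func isCIDRRegistry(value string) bool {
	_, _, err := net.ParseCIDR(value)
	return err == nil
}

func main() {
	for _, v := range []string{"10.10.0.0/16", "dockerhub.xxx.com", "10.10.0.1:5000"} {
		if isCIDRRegistry(v) {
			fmt.Printf("%s: CIDR range, cannot be mapped to a certs.d host directory\n", v)
		} else {
			fmt.Printf("%s: plain host entry, maps to /etc/containerd/certs.d/%s/hosts.toml\n", v, v)
		}
	}
}

Docker applies its CIDR match at pull time against the resolved registry address (as described above), so matching that behaviour with the host-keyed certs.d layout would need either name resolution at configuration time or support in containerd itself.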

@afbjorklund afbjorklund added kind/feature Categorizes issue or PR as related to a new feature. co/runtime/containerd priority/backlog Higher priority than priority/awaiting-more-evidence. labels Aug 15, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 25, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Mar 26, 2024