interested in integrating with minikube #1
Added an issue for it in minikube; the lack of releases and packages is a problem: kubernetes/minikube#12103
For minikube, we would like a static binary (for the ISO) and an Ubuntu 20.04 LTS deb package (for the KIC).
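For anyone checking such a binary, a quick way to confirm it is actually statically linked (a sketch; `cri-dockerd` here stands for the binary built from this repo):

```
# A static build should report "statically linked" (file)
# or "not a dynamic executable" (ldd).
file cri-dockerd
ldd cri-dockerd
```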
Tested with Kubernetes v1.23.0-alpha.0, but there are a lot of blocking bugs in the current cri-dockerd master.
Releases and packages will be up on Monday.
Must have missed the announcement; dockershim will not be removed until Kubernetes 1.24: https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed
The dockershim was still present in …
I think we will have to change minikube to "containerd", since "docker" is not tested anymore.
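For reference, switching minikube's runtime is a single flag (a sketch using minikube's documented `--container-runtime` option):

```
# Start a cluster on containerd instead of docker.
minikube start --container-runtime=containerd
```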
Hey Anders - My apologies here, I've been on holiday for a bit and just got back. In the meantime, my GH account was re-created, and I'm still fighting with getting it to appropriately send me notifications.

It was completely expected that upstream would remove support for dockershim. A more or less complete refactor is underway so we do not depend on upstream modules for more or less static code. Admittedly, with the announcement that it would be delayed until 1.24, some urgency was lost. Automated releases are coming this week once the Github workflows are re-engineered a little and re-enabled (they were previously stubs taken from upstream). We'd love to have cooperation with the larger community, including minikube.

As for the parts of the CRI spec which dockershim does not implement: it's honestly fairly minor, though there isn't (currently) a pressing demand to add support for this. With development picking back up to get a release out, what can we do to support you?
It seems like k8s … Until there are releases available, we are building from source. But binaries and testing (end-to-end) are missed.

```
FROM golang:1.16
RUN git clone -n https://github.com/Mirantis/cri-dockerd && \
    cd cri-dockerd && git checkout 542e27dee12db61d6e96d2a83a20359474a5efa2 && \
    cd src && env CGO_ENABLED=0 go build -o cri-dockerd
```

The discussion in minikube is more about which one should be the default; all three container runtimes are supported: docker, containerd and cri-o.

Previously it was about changing the default, but the new scenario is that there is no default (only the CRI). Users will have to bring their own CRI and their own CNI, and Kubernetes is more of a framework than a product?
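To use that Dockerfile, one way to get the static binary out of the build image is via a throwaway container (a sketch; the image tag, container name, and in-image path are assumptions based on the Dockerfile above):

```
# Build the image, then copy the resulting static binary out of it.
docker build -t cri-dockerd-build .
docker create --name extract cri-dockerd-build
docker cp extract:/go/cri-dockerd/src/cri-dockerd ./cri-dockerd
docker rm extract
```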
Removing support for dockershim was expected; removing support (like documentation and testing) for docker (via CRI) was not? I don't see that the two are related; there are reasons for standards (API) and there are reasons for choosing implementations...
Binaries, debs, rpm's, and E2E will land tomorrow afternoon US time. Testing complex GH actions is slow.

The short answer is that, yeah, upstream k8s has, for a long time, tried to shrink and limit core. Particularly as more feature gates make it in, supporting every runtime in core does not scale. Then tack on weave, flannel, calico, rook, etc… So it's: follow CNI/CRI/CSI. Users will have to bring their own.

Sure, I think it's likely that minikube users will probably have docker as part of their development workflow, and that means they'll also have containerd. The advantage to docker (and cri-dockerd) for dev workflows is that the docker user experience is friendlier and more familiar to many than containerd's own tooling.

Part of what slowed me down today in workflows was actually sussing out which parts of the audit subsystem work in GH actions to enable E2E. The next step here is documenting commandline options, getting a workable README, and so forth. But it is coming quickly.
I don't think that is true; docker disables the cri plugin in its bundled containerd: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd

I think it is mostly because of the poor packaging and documentation of containerd/buildkitd? The development workflow in minikube involves interacting with the cluster, so changing container runtime is a disruption. The main tool will still be kubectl.

Kubernetes will not have any default CRI and not have any default CNI.
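For context, the change being referred to looks roughly like this (a sketch; the exact file contents vary by Docker/containerd packaging):

```
# Docker's packaged containerd typically ships /etc/containerd/config.toml with:
#   disabled_plugins = ["cri"]
# To use that containerd directly as a CRI runtime, drop "cri" from the
# list and restart the daemon.
sudo sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
sudo systemctl restart containerd
```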
To clarify, I meant that they will have containerd installed. Buildkitd (or Docker buildkit) is a plugin to Docker itself which is pretty much an overhaul of docker build on top of containerd.

I want to be explicit about my statement: it is not that minikube users will be interacting with k8s via the Docker CLI, but that minikube users (rather than k0s/microk8s/k3s/etc) are very likely to be on their workstation/laptop, and already have docker (and therefore containerd) installed. The main tool to interact with k8s will of course be kubectl.

No, k8s will not have any default CRI or CNI. As you said earlier, k8s is a framework.

Releases are now published. Outside of a patch for the README, what else can I help with?
Sounds clear!
The majority of users would never have to use either, but can use "minikube" directly - including `minikube kubectl`. It helps them with the os, arch and version problems - and fills in the holes of the CRI abstraction (like load and build).
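As an illustration of those gap-fillers, minikube wraps both image handling and kubectl itself (a sketch using minikube's documented subcommands; `myapp:dev` is a placeholder tag):

```
# Build an image directly into the cluster's runtime.
minikube image build -t myapp:dev .

# Or load a locally built image into the cluster.
minikube image load myapp:dev

# Run a kubectl matching the cluster version, downloaded on demand.
minikube kubectl -- get pods -A
```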
Some users of minikube will prefer to avoid having to run two VMs (one for Docker, one for Kubernetes). Even Linux users prefer not to have to transfer the images between two daemons, but to have them instantly available to the single-node cluster on build. There are even users who run minikube without kubernetes, which is somewhat ironic - but fully possible. Some of these have been running Docker Machine before, which has now been abandoned. And they can run minikube now.
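A sketch of that somewhat ironic use case (the `--no-kubernetes` flag is from newer minikube releases, and is an assumption about how these users run it):

```
# Bring up the minikube VM/container with its docker daemon, but no cluster.
minikube start --no-kubernetes --container-runtime=docker

# Point the local docker CLI at that daemon, docker-machine style.
eval $(minikube docker-env)
docker ps
```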
Will update.
Let me ask this differently: I haven't used Windows or MacOS in over a decade. The idea of having different VMs for Docker and Minikube is pretty alien to me. But using … We're interested in supporting minikube use cases as much as we can, so what can I do to help you?
I think that once we get the cri-dockerd sockets sorted out, we're good to go for the next minikube release. It will check the kubernetes version parameter, and use dockershim for 1.23 and earlier, and cri-dockerd for 1.24 and later. I have done updated packaging for 0.2.0. After that I think we can leave "docker" as the default runtime for minikube, and leave the rest up to the user. Thanks for the support!
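From the user's side that version gate is invisible; the same start command works either way (a sketch using minikube's documented flags):

```
# minikube picks dockershim or cri-dockerd based on the requested version.
minikube start --container-runtime=docker --kubernetes-version=v1.23.3
minikube start --container-runtime=docker --kubernetes-version=v1.24.0
```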
Unfortunately it was broken in minikube 1.25.1 with Kubernetes 1.24.0-alpha.2, due to our cri-dockerd not being updated in time. Hopefully it (the "docker" runtime) will work out of the box with the next minikube release and the Kubernetes 1.24.0 beta releases? We aim to fix it already in minikube 1.26.0, though.
After the latest PR (bumping cri-dockerd to …): still using dockershim with v1.23.3. I'm not sure which version of Kubernetes is the earliest one supported by cri-dockerd? The user command is always: …
This has been integrated.

/close
Hello, in minikube we support the docker, containerd and cri-o runtimes. We would like to keep supporting the docker runtime. I would like to know if work has already been started on a separate dockershim that we could use in minikube?