Azure cncf-upstream:capi missing PlatformImage (Quick Start issue) #2375
Comments
/assign @mboersma
Hi @lmcdasm, the latest batch of images were named differently in the cncf-upstream offer, which required #2302 to be able to use the latest versions released (v1.24.1, v1.24.0, v1.23.7, v1.22.10, v1.21.13). In the meantime, you can use an older Kubernetes version as a workaround (such as v1.23.6). v1.3.2 will be released today, which will fix the issue and make the quickstart work out of the box again. Apologies for the inconvenience.
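For later readers, a minimal sketch of that workaround, assuming the Azure environment variables from the quickstart are already exported (the cluster name and machine counts are placeholders):

```shell
# Regenerate the workload cluster manifest pinned to a Kubernetes version
# whose node image still resolves in the cncf-upstream Marketplace offer,
# then re-apply it to the management cluster.
clusterctl generate cluster workload-cluster-3 \
  --kubernetes-version v1.23.6 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > workload-cluster-3.yaml
kubectl apply -f workload-cluster-3.yaml
```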
Regarding your other questions:
A workload cluster != a node pool. It is a full Kubernetes cluster, including a control plane. AKS is a managed Kubernetes service offered by Azure. Cluster API, on the other hand, is a tool for managing the lifecycle of Kubernetes clusters declaratively. When you build a workload cluster with CAPZ using the "default" flavor, it builds a full Kubernetes cluster that you as a user manage yourself. There are very good reasons for using AKS, and we actually recommend it for most users when it fits their needs, as it takes away a lot of the hassle of managing your own k8s clusters. CAPZ is an alternative for users who have requirements to manage their own k8s clusters. If you are interested in using AKS but still want the declarative lifecycle management that Cluster API offers, check out https://capz.sigs.k8s.io/topics/managedcluster.html, which is an experimental feature of CAPZ to create/manage AKS clusters.
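As a pointer for that last option, a sketch of what the managed-cluster path looks like with clusterctl; the `aks` flavor name comes from the CAPZ templates, and the feature flags it needs are described on the linked page, so treat the details as assumptions to verify there:

```shell
# Generate a manifest for an AKS-backed workload cluster using CAPZ's
# experimental managed-cluster (AKS) support. This assumes the relevant
# experimental feature flags were enabled when the management cluster
# was initialized, per the CAPZ docs.
export AZURE_LOCATION=eastus
clusterctl generate cluster my-aks-cluster \
  --kubernetes-version v1.23.6 \
  --flavor aks \
  > my-aks-cluster.yaml
kubectl apply -f my-aks-cluster.yaml
```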
Hello Cecile,
Thanks for taking the time to answer my question. You're right to point out that a node pool is not a workload cluster; my apologies, that was a poor description on my side. From going through the quickstart and picking through the docs, what I was perhaps alluding to (or expecting) was that (and I believe this is covered by the link you provided - will check it out tonight for sure, thanks!) if I have deployed the management cluster on an AKS cluster with VMSS autoscaling set up, one of a couple of things would happen:
1 - A new "node pool" - either on the existing CNI underlay network or, now that Azure supports it, on a new discrete subnet - would be created to "host" this workload cluster, thus still encapsulated within the view/context of the "host AKS". The workloads would then "live" on top of discrete, isolated node pools (which you could then straddle across availability zones/availability sets).
or
2 - A complete new AKS cluster would be created, and then a peering from the "workload AKS clusters" to the "management AKS cluster" would be established (VNet peering, an internal VNG, or something maybe more prosaic). Thus you would have the potential for fan-out (AKS over AKSes).
Thanks for setting me straight on the AKS approach to cluster management and how the management cluster setup/deployment on AKS (in the quickstart) is used as a seeder for the other "VM-based K8s cluster" deployments. Much appreciated.
I'm very much interested in CAPI and CAPZ for different use cases; in this question I was playing around with AKS specifically as a "first go" (I caught the release announcements) since I had some Azure setup at hand, and I will be working through the rest in more detail. I'm very much of the opinion that the declarative way is the way to go.
Thanks again for the time and resources and info as I start digging around.
Daniel
Hello Cecile.
Thanks for the tip/info!
Cheers,
Daniel
/close
@CecileRobertMichon: Closing this issue. In response to this: /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hey there. Confirmed that I was able to roll out v1.24.0 Azure IaaS-based clusters and finish my deployment. Thanks again!
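For anyone hitting this later, a sketch of how picking up the fixed release looks from the management cluster (the provider namespace below is the default CAPZ install location; adjust to your setup, and check the plan output first):

```shell
# See which provider upgrades clusterctl thinks are available...
clusterctl upgrade plan

# ...then move the Azure infrastructure provider to the release carrying
# the fix (v1.3.2, per the comment above).
clusterctl upgrade apply --infrastructure capz-system/azure:v1.3.2
```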
/kind bug
[Before submitting an issue, have you checked the Troubleshooting Guide?]
What steps did you take and what happened:
Following the Quick Start Guide ("Using Azure Provider") steps and documentation.
After generating the workload cluster configuration (workload-cluster-3 in this case), I was able to run "kubectl apply -f workload-cluster-3.yaml", and "kubectl get cluster" comes back as "Provisioned".
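For concreteness, the steps referenced above, plus a condensed status view that is handy at this point:

```shell
# Apply the generated workload cluster manifest to the management cluster,
# then watch the Cluster resource reach the Provisioned phase.
kubectl apply -f workload-cluster-3.yaml
kubectl get cluster

# Tree view of the cluster's resources and their readiness conditions.
clusterctl describe cluster workload-cluster-3
```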
I can see that the Azure LB, VNet, NSG, and other objects are being created; however, the VMs fail to create, complaining of a missing image.
ERROR MESSAGE IN AZURE ACTIVITY LOGS / RESOURCE MANAGER: (screenshot; the ARM error reports the cncf-upstream:capi platform image as not found, per this issue's title)
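A sketch of surfacing that failure from the CLI instead of the portal (the resource group name is a placeholder; CAPZ names it after the cluster by default):

```shell
# The CAPZ controllers copy the ARM provisioning error onto the
# AzureMachine resources in the management cluster.
kubectl describe azuremachines

# The same failure as Azure records it, from the activity log.
az monitor activity-log list \
  --resource-group workload-cluster-3 \
  --status Failed \
  --output table
```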
I have tried this with v1.24.0 (what the AKS cluster hosting my management cluster is running), and also tried v1.23.7 (the last 1.23 release from AKS) and v1.23.5.
I would imagine that I'm missing something simple. Reading the documentation, it seems to indicate that the Azure Marketplace would have the images; however, I can only find the Windows one there (from cncf-upstream) and a CAPI TKG VMware one, not an Ubuntu one.
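One way to check what the publisher actually has in the Marketplace (publisher/offer names per this issue's title; the output will vary over time):

```shell
# List the CAPI node images published by cncf-upstream under the capi offer.
az vm image list \
  --publisher cncf-upstream \
  --offer capi \
  --all \
  --output table
```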
What did you expect to happen:
I would have expected the cluster deployment to start up the VM using an image that is available from within Azure, without having to add additional configuration. Following the Quick Start guide, we cannot connect using the workload-cluster-specific KUBECONFIG or apply the Calico workaround until the VM is up (and the workload cluster's control-plane API is able to respond behind the LB that was created).
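For reference, the blocked quickstart steps look roughly like this (the Calico manifest URL/version is an assumption; use whatever the quickstart currently points at):

```shell
# Fetch the workload cluster's kubeconfig from the management cluster...
clusterctl get kubeconfig workload-cluster-3 > workload-cluster-3.kubeconfig

# ...then install a CNI so the nodes go Ready. This only succeeds once the
# control-plane VM is up and answering behind the load balancer.
kubectl --kubeconfig=./workload-cluster-3.kubeconfig apply \
  -f https://raw.githubusercontent.com/projectcalico/calico/v3.23.0/manifests/calico.yaml
```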
Anything else you would like to add:
It's very possible that I missed something here / in the Quick Start, so apologies. Do I need to pull the image from source and drop it into a local repo, etc.? I couldn't find the image locations in the docs (again, sorry if this is a silly/newbie question). :)
As a side question, are the workload clusters simply "VMs with Ubuntu and K8s installed after the fact"? I was thinking/wondering the following:
- A new node pool within the existing AKS cluster that is used as a "User Pool" (from the context of the management cluster) and then runs the workload cluster "there". AKS supports having node pools on different VNets now (well, subnets anyway).
- If not within the reach of the management (host AKS) cluster, would it not be better to have the Azure provider spin up AKS instances with VMSS (and have autoscaling and all that jazz with it)?
Sorry, maybe the above is counter to the cloud-native idea, so you don't want to go too far into Azure AKS land?
Environment:
- cluster-api-provider-azure version: PROVIDER = "infrastructure-azure", version = v1.3.1
- Kubernetes version (use kubectl version): cluster version = 1.24.1, server version = 1.24.0
- OS (e.g. from /etc/os-release): CentOS 7.9.2009