
Integration with Proxmox (paas-proxmox bundle) #69

Open
5 of 8 tasks
kvaps opened this issue Apr 9, 2024 · 12 comments · May be fixed by #107

Comments

@kvaps (Member) commented Apr 9, 2024

Phase 1: adapting the management cluster to work on Proxmox VMs

We need to add the following components:

  • proxmox-csi
  • proxmox-ccm
  • Hybrid LINSTOR inside k8s + based on Proxmox - in progress
  • Disable kube-ovn (leave only Cilium)

Phase 1.5: how to achieve L2 connectivity with Proxmox?

  • Internal VLAN within one DC

Phase 2: adapting tenant clusters to work on Proxmox VMs:

  • Modify Cluster API to provision VMs in Proxmox (proxmox-infrastructure-provider) - in progress
  • Load balancers - what if we use MetalLB instead of kubevirt-ccm?
  • Storage - proxmox-csi instead of kubevirt-csi (a rough StorageClass sketch is below)
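
For the storage item above, a rough sketch of what a proxmox-csi StorageClass could look like, based on my reading of the sergelogvinov/proxmox-csi-plugin docs (the provisioner name and the "data" storage pool are assumptions, adjust to your Proxmox setup):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: proxmox-data
provisioner: csi.proxmox.sinextra.dev   # provisioner name as published by the plugin
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  csi.storage.k8s.io/fstype: ext4
  storage: data                         # Proxmox storage pool to carve PVs from
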
kvaps changed the title from "Integration with Proxmox (paas-proxmox)" to "Integration with Proxmox (paas-proxmox bundle)" on Apr 9, 2024
@themoriarti (Collaborator)

I plan to implement LINSTOR directly in Proxmox itself, but integrating LINSTOR and Proxmox disk management into Cozystack using the Cluster API (and possibly by creating an operator) still needs to be discussed, and most likely it will be a separate task.
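
For context, wiring LINSTOR into Proxmox itself usually goes through the linstor-proxmox plugin configured in /etc/pve/storage.cfg; a minimal sketch, with the controller address and resource group as placeholders:

drbd: linstor-storage
    content images, rootdir
    controller 192.168.100.10
    resourcegroup defaultpool

How this is then surfaced to Cozystack (Cluster API, an operator, or something else) is exactly the open question above.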

@kvaps (Member, Author) commented Apr 12, 2024

I see two options for how you can utilize Proxmox with Cozystack.

Option one, where you create the management Cozystack cluster inside Proxmox VMs:

[Screenshot: architecture diagram for option one]

This is safer, as it isolates Cozystack from the hardware nodes, but it still runs databases and tenant Kubernetes clusters for multiple users in the same virtual machines of the management cluster.

There is another option, where Proxmox nodes are used the same way as Kubernetes nodes, and we just replace KubeVirt virtualization with Proxmox virtualization:

[Screenshot: architecture diagram for option two]

This setup looks more interesting, as it is more native to the Cozystack approach, and it also simplifies management of the hypervisor by providing the opportunity to install LINSTOR and extra things on it.

Personally, I like the second option more.


Another question is how to provide stable Kubernetes on Proxmox nodes. I was doing the following steps:

  • Create three small VMs on Proxmox and install Talos Linux on them - they will be used as control-plane nodes for our hardware cluster
  • Join the Proxmox nodes as workers to these VMs:
VIP="192.168.100.5"

mkdir -p /etc/kubernetes/pki
talosctl -n "$VIP" cat /etc/kubernetes/kubeconfig-kubelet > /etc/kubernetes/kubelet.conf
talosctl -n "$VIP" cat /etc/kubernetes/bootstrap-kubeconfig > /etc/kubernetes/bootstrap-kubelet.conf
talosctl -n "$VIP" cat /etc/kubernetes/pki/ca.crt > /etc/kubernetes/pki/ca.crt

sed -i "/server:/ s|:.*|: https://${VIP}:6443|g" \
  /etc/kubernetes/kubelet.conf \
  /etc/kubernetes/bootstrap-kubelet.conf


clusterDomain=$(talosctl -n "$VIP" get kubeletconfig -o jsonpath="{.spec.clusterDomain}")
clusterDNS=$(talosctl -n "$VIP" get kubeletconfig -o jsonpath="{.spec.clusterDNS}")
cat > /var/lib/kubelet/config.yaml <<EOT
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
clusterDomain: "$clusterDomain"
clusterDNS: $clusterDNS
runtimeRequestTimeout: "0s"
cgroupDriver: systemd
EOT

systemctl restart kubelet

I was using this setup for a long time, and it works quite well. Another option is to use k3s or something like that.
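
For reference, a quick way to verify that the Proxmox nodes have joined after the kubelet restart (node names are whatever your Proxmox hosts are called):

kubectl get nodes -o wide
kubectl get csr    # approve any pending kubelet CSRs if your cluster does not auto-approve them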

@themoriarti (Collaborator) commented Apr 12, 2024

@kvaps You presented a pretty good idea, but I would like to give customers the opportunity to determine what type of isolation is used for Database-as-a-Service and other services that run in cluster mode or have two replica instances. That is, they should be able to deploy them in either LXC or KVM.

Tenant k8s workers will be in VMs only.

For now the minimum installation will be on two servers, but ideally on three.

[Diagram: cozystack-proxmox architecture (draw.io)]

@kvaps (Member, Author) commented Apr 13, 2024

So you want to make Cozystack manage LXC containers and run DBs inside of them?

I guess this would be really challenging, as you would need to replace the operators with custom logic.

@themoriarti (Collaborator) commented Apr 15, 2024

Integration process for Proxmox

  • Prepare an Ansible role to install 3 Proxmox servers - done
  • Install LINSTOR as shared storage on Proxmox - will use the current default Cozystack solution
  • Prepare a setup script for Cozystack in VMs - in progress, 95% done
  • Integrate Proxmox servers into Cozystack as workers in the management k8s
  • Integrate Proxmox CSI - in progress, 99% done, writing tests
  • Integrate Proxmox CSI node - assessing the complexity of integration - testing
  • Integrate Proxmox CCM - testing
  • Use an internal network for Proxmox and for LINSTOR based on VLAN - minimum requirement: DRBD 9.2.9
  • Investigate Kubemox for managing LXC - not suitable for use
  • Integrate Cluster API - partly implemented by Remi, currently being revised
  • Integrate MetalLB or HAProxy - the simple method is MetalLB (see the sketch below)
  • Changes in service packages for the ability to run on local disks - use LINSTOR
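
A minimal MetalLB layer-2 sketch for the load-balancer item above (the address range is a placeholder):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: proxmox-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.100.200-192.168.100.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: proxmox-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - proxmox-pool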

@themoriarti (Collaborator)

> So you want to make Cozystack manage LXC containers and run DBs inside of them?

@kvaps The idea is to give the user the opportunity to choose either LXC or VM. If the implementation turns out to be difficult, then it will not be done for now, but it is worth including such an option in the architecture.

@remipcomaite

If it can help:

A Proxmox CCM project:
https://github.com/sergelogvinov/proxmox-cloud-controller-manager

A Proxmox CSI project:
https://github.com/sergelogvinov/proxmox-csi-plugin

themoriarti linked a pull request on Apr 25, 2024 that will close this issue
@themoriarti (Collaborator)

> If it can help:
>
> A Proxmox CCM project: https://github.com/sergelogvinov/proxmox-cloud-controller-manager
>
> A Proxmox CSI project: https://github.com/sergelogvinov/proxmox-csi-plugin

Thanks, we know about these projects; the integration for them is already ready.

@remipcomaite

@themoriarti Can I help you with the integration of Proxmox into Cozystack? And if yes, how?

@themoriarti (Collaborator)

> @themoriarti Can I help you with the integration of Proxmox into Cozystack? And if yes, how?

Sure, we are always open to cooperation. In this thread there is a high-level architectural design #69 (comment), and there is a checklist for the integration process #69 (comment) and #69 (comment); you can take any of the parts and start preparation, the branch is tied to this issue. If some kind of discussion is needed, there is either a Slack or Telegram channel, or we can create a separate channel for the integration, for example in Telegram. We are open to any suggestions and help.

@remipcomaite commented Apr 30, 2024

Regarding the high-level architectural design, I would like to make the suggestions below:

  • Kube-OVN: keep it, and use hookscripts in Proxmox to add the necessary information to the tap/veth interface of the VM/container to make it compatible with OVN.
  • LINSTOR: give the possibility to choose between LINSTOR and Ceph. For our part, we prefer Ceph; proxmox-csi will handle this without any problem.
  • L2 connectivity: we also think that using VLAN is very good.
  • Cluster-API: it seems that a project already exists: https://image-builder.sigs.k8s.io/capi/providers/proxmox. Isn't it enough to change the infrastructure provider to Proxmox? I'm sorry, I lack knowledge on the subject.
  • Load balancers: do you want to set up layer-2/layer-3 load balancers? If so, MetalLB will be best. For layer-4 load balancers, we prefer HAProxy.

I can work on integrating OVN into Proxmox. In my opinion, we should be able to manage this with a hookscript that calls the Kube-OVN API to retrieve the iface-id of the VM and applies it to the tap/veth interface (a rough sketch of such a hookscript is below).
This would keep the tenants isolated using Kube-OVN. We could even create layer-3 load balancers in Kube-OVN; MetalLB would therefore only be used to provide a pool of IPs.
I can also try working on Cozystack's CAPI.
Could you tell me which files contain the code that needs to be adapted?
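
To make the hookscript idea concrete, here is a rough, untested sketch; the iface-id lookup against Kube-OVN is a placeholder, and the tap interface naming assumes the VM's first NIC:

#!/bin/sh
# Proxmox calls hookscripts as: <script> <vmid> <phase>
vmid="$1"
phase="$2"

if [ "$phase" = "post-start" ]; then
    tap="tap${vmid}i0"            # tap interface of the VM's first NIC
    # Placeholder: the real lookup of the logical switch port name
    # (iface-id) from Kube-OVN still needs to be designed.
    iface_id="vm-${vmid}-net0"
    ovs-vsctl set Interface "$tap" external_ids:iface-id="$iface_id"
fi
exit 0

It would be attached to a VM with something like: qm set <vmid> --hookscript local:snippets/ovn-hook.sh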

@themoriarti (Collaborator)

@remipcomaite Cozystack Telegram chat: https://t.me/cozystack, you can discuss the details there, or come to the meeting on Thursday: https://meet.google.com/swr-urij-hde (meeting document: https://docs.google.com/document/d/18OtrmgeiRHGhufRAuWHZuZOOSNBZagouNvULDmeJ2F4/edit)

  1. We don't need Kube-OVN; Proxmox capabilities are enough, i.e. SDN plus Cilium inside the cluster;

  2. Storage depends on the amount of data you need. If it is large, then Ceph may be suitable, but from practice I will say that for stable distributed+replicated storage you need at least 12 servers; with 3-6 servers, LINSTOR is the more optimal solution, so we will implement LINSTOR first. I also already have automation to set up Ceph at the Proxmox level and integrate it into Cozystack (k8s);

  3. VLAN or VXLAN at the Proxmox level (SDN);

  4. Cluster API - https://github.com/ionos-cloud/cluster-api-provider-proxmox - you can try to start the implementation (a minimal bootstrap sketch is below);

  5. I haven't looked at LB yet; integration with MetalLB or HAProxy would suit us, and I even prefer HAProxy. Do you have a desire to work on this?
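
Regarding point 4, a minimal sketch of bootstrapping the Proxmox infrastructure provider with clusterctl; the environment variable names follow the ionos-cloud provider's README as I understand it, so treat them as assumptions:

# Proxmox API endpoint and API token for the provider (names assumed from the provider docs)
export PROXMOX_URL="https://pve.example.org:8006"
export PROXMOX_TOKEN="capmox@pve!capi"
export PROXMOX_SECRET="<token-secret>"

# Install the core Cluster API components plus the Proxmox provider
clusterctl init --infrastructure proxmox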
