
[Idea] Support multi-zone K8S Cluster #253

Open
md2k opened this issue Feb 20, 2019 · 3 comments
md2k (Contributor) commented Feb 20, 2019

Not sure if this is already supported, but maybe it is a good idea to think about a multi-zone K8S cluster configuration, where we can roll out the control plane and workers across 2-3 different DCs, which would increase high availability and redundancy?

@md2k md2k changed the title [Iidea] Support multi-zone K8S Cluster [Idea] Support multi-zone K8S Cluster Feb 20, 2019
xetys (Owner) commented Feb 21, 2019

AFAIK we already do two things here:

  1. By default we use at least 2 DCs to provision hetzner-kube nodes. The DCs are picked round-robin, so both workers and controller nodes are spread across them. This means that in an HA setup the control plane spans at least 2 DCs by default.
  2. You may choose to install the hcloud-controller-manager addon, which automatically adds failure domains to these nodes, which k8s can use for failover. I am not sure whether this addon still works with the current version; this should be checked.
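For reference, the "failure domains" from point 2 end up as topology labels on the Node objects, which the scheduler and controllers can use for zone-aware spreading and failover. A rough sketch of what a labeled node would look like (the label keys are the beta topology labels in use around this Kubernetes era; the region/zone values are illustrative Hetzner names, not verified output):

```yaml
# Illustrative sketch only: a Node as a cloud-controller-manager
# of that era would label it. Values below are assumptions.
apiVersion: v1
kind: Node
metadata:
  name: worker-01
  labels:
    # region/zone labels consumed by k8s for failure-domain-aware scheduling
    failure-domain.beta.kubernetes.io/region: fsn1
    failure-domain.beta.kubernetes.io/zone: fsn1-dc8
```

With labels like these in place, spreading workloads across DCs is handled by standard Kubernetes mechanisms (e.g. anti-affinity on the zone label).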

md2k (Contributor, Author) commented Feb 21, 2019

Hm, maybe that's because I explicitly set the DC to deploy to... my bad :)

When I tried to deploy hcloud-controller-manager, it failed. I don't remember anymore what the problem was; I'll check it again. If I'm not wrong, it was something about flannel, but I'm not sure what exactly.

md2k (Contributor, Author) commented Feb 28, 2019

Yeah, here is another problem with multi-zone: a floating IP is bound to a DC, so balancing traffic will be complicated, and this can create complications for ingress controllers :(
Unless we add an additional (optional) parameter to enable a floating IP per master or per worker, so that we have static addresses we can then use for DNS and so on.
But yeah, this part is not easy with Hetzner at the moment. The only option I can see is some kind of 2-3 instance pool rolled out with Terraform, with Traefik inside as a traffic balancer pointing statically to all workers.
Not sure what to do with this puzzle.
If everything is in a single DC, it's all fine: we can use keepalived with a floating IP.
But with multi-DC I'm out of ideas :( (only the costly Dyn Traffic Director or AWS Route 53 Traffic Flow as DNS balancers with endpoint health checks)
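To illustrate the single-DC option mentioned above: the usual pattern is a keepalived VRRP group where the node that becomes MASTER runs a notify script that reassigns the Hetzner floating IP to itself via the cloud API. A minimal sketch of the keepalived side (the interface name, password, and the notify script path are assumptions; the script itself, not shown, would call the Hetzner Cloud API's floating-IP assign action):

```
# /etc/keepalived/keepalived.conf -- sketch, single-DC failover only.
# All nodes run this; the elected MASTER claims the floating IP.
vrrp_instance VI_1 {
    state BACKUP            # let priority + election decide the MASTER
    interface eth0          # assumption: private/public NIC name
    virtual_router_id 51
    priority 100            # give one node a higher priority if preferred
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme  # assumption: shared VRRP secret
    }
    # Hypothetical script: calls the Hetzner Cloud API to assign
    # the floating IP to this server when it becomes MASTER.
    notify_master /etc/keepalived/assign-floating-ip.sh
}
```

This only works while all nodes sit in the same DC, since the floating IP cannot follow a node into another DC, which is exactly the limitation described above.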
