Issue with k8s.io/docs/setup/independent/high-availability/ #14258

Closed
apoliakevitch opened this issue May 10, 2019 · 10 comments
Labels
kind/support Categorizes issue or PR as a support question. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.

Comments

@apoliakevitch

This is a Bug Report

Problem:
kubeadm init failed with the following error:

[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
error execution phase upload-config/kubeadm: error uploading the kubeadm ClusterConfiguration: configmaps "kubeadm-config" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-system"

Proposed Solution:

Page to Update:
https://kubernetes.io/docs/setup/independent/high-availability/

Kubernetes version: 1.14.1
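
For context, the error above appears to come from the first-control-plane init step on that page. A minimal sketch of that step for kubeadm 1.14, using the page's placeholder load-balancer values rather than the reporter's actual setup:

# kubeadm-config.yaml with the HA control-plane endpoint (placeholder values)
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
EOF

# Initialize the first control-plane node; the failing upload-config phase runs near the end
sudo kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs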

@DanyC97
Contributor

DanyC97 commented May 14, 2019

@apoliakevitch thanks for raising the issue. However, from the info provided I understand this is an issue with kubeadm and not the docs per se, is that correct?

If so, the best place to raise it is either to open an issue at https://github.com/kubernetes/kubeadm or to have a quick chat on Slack.

/cc @kubernetes/kubeadm-maintainers in case they can provide some input.

@DanyC97
Contributor

DanyC97 commented May 14, 2019

/triage support
/sig cluster-lifecycle

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. labels May 14, 2019
@neolit123
Member

/close
duplicate of:
kubernetes/kubernetes#77646

although even in the related issue it's unclear why it happens.

@k8s-ci-robot
Contributor

@neolit123: Closing this issue.

In response to this:

/close
duplicate of:
kubernetes/kubernetes#77646

although even in the related issue it's unclear why it happens.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@RockyFu267

RockyFu267 commented Jun 25, 2019

@neolit123
@SataQiu
If I configure "controlPlaneEndpoint", I run into problems with a layer-4 SLB when I run kubeadm init on my first master: kubelet reports errors like "node xxx not found".
The reason is that my first master cannot reach SLB4:6443, even though the apiserver is in fact running; a telnet to my first master's IP:6443 succeeds.
Currently, in Aliyun, a load-balancing back-end ECS instance cannot consume the load balancer's service itself while also acting as a back-end server for it.

I guess the same problem happens on AWS, since the layer-4 SLB principle is the same.
So I think I can't use a layer-4 SLB for my apiserver, right?
If I need to use an SLB for my apiserver:6443, can I remove controlPlaneEndpoint?
If I want to keep controlPlaneEndpoint, how can I run kubeadm init on a cloud service?
Can I configure it like: visitor --> SLB4 --> SLB7 --> my first apiserver:6443?

For now I use that last approach, and I put apiserver.crt and its key on my SLB7.

The logs look like:
"k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope"
"k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope"
"k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "ec2l-devops-k8smaster-p01" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope"
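
A rough way to see what those "system:anonymous" errors imply, assuming the layer-7 SLB terminates TLS (the paths and addresses below are hypothetical placeholders): the kubelet authenticates with a client certificate, and a TLS-terminating proxy drops that certificate, so the apiserver sees an anonymous user.

CERT=/var/lib/kubelet/pki/kubelet-client-current.pem
# Direct to the first master's apiserver: the client certificate is presented and the request is authorized
curl -sk --cert "$CERT" --key "$CERT" https://FIRST_MASTER_IP:6443/api/v1/nodes
# Through the TLS-terminating layer-7 SLB: the client certificate never reaches the
# apiserver, so the request is treated as "system:anonymous" and gets 403 Forbidden
curl -sk --cert "$CERT" --key "$CERT" https://SLB7_ADDRESS:6443/api/v1/nodes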

I am looking forward to your reply :)

@neolit123
Member

@RockyFu267
I think you might get better support on Stack Overflow or reddit/kubernetes; ask the questions there.
I personally have no experience with ECS / SLB whatsoever.

@RockyFu267

RockyFu267 commented Jun 26, 2019

@neolit123
I'm sorry, I didn't express my point clearly.
My focus is on how to use the "controlPlaneEndpoint" configuration parameter.
Using "controlPlaneEndpoint" is currently buggy.
https://kubernetes.io/docs/setup/independent/high-availability/

I really hope you can take the time to look at my actual situation.

First, the "controlPlaneEndpoint" parameter is supposed to be a VIP or a load balancer, right?
So I configured "controlPlaneEndpoint" with a layer-4 load balancer; I tried both AWS and Aliyun.
In every case there were intermittent timeouts during use, and "node xxx not found" appeared 100% of the time during installation with kubeadm.

Why is this happening?
If I use a layer-4 load balancer as "controlPlaneEndpoint", there are network problems.
For example, I have three masters: ServerA, ServerB and ServerC. If I run "kubectl get pod" on ServerA, there is a 33 percent chance of a timeout.
Everything is fine when ServerA's request is directed to either ServerB or ServerC through the layer-4 load balancer.
If the request is directed back to ServerA itself through the layer-4 load balancer, a timeout always occurs.

That is because a layer-4 load balancer cannot be used when ServerA is the back-end server as well as the requester; this is a network characteristic of layer-4 load balancing.
For the same reason, when I create a new cluster with kubeadm and my first master is ServerA, kubelet checks the layer-4 load balancer's IP:port from "controlPlaneEndpoint", even though ServerA's apiserver is already running in Docker and I can telnet to ServerA-IP:6443 successfully. So "node xxx not found" appears 100% of the time during installation with kubeadm when I configure "controlPlaneEndpoint", as the check below illustrates.
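
A rough illustration of the check I mean, run on ServerA while it is a back-end of the layer-4 load balancer (all addresses are placeholders):

# Direct connections to the apiservers work from ServerA
telnet SERVER_A_IP 6443   # connects: the local apiserver is listening
telnet SERVER_B_IP 6443   # connects: another master's apiserver is reachable
# Through the layer-4 load balancer it only works when the LB picks ServerB or ServerC;
# when the connection hairpins back to ServerA itself, it times out
telnet SLB4_IP 6443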

In a public cloud environment such as Aliyun, I can't use keepalived + haproxy.
This means I have to use a layer-7 load balancer for the k8s apiserver if I want to use "controlPlaneEndpoint", right?

What do you think about that?
Is this design reasonable? Should internal communication from a k8s master really go through the address in "controlPlaneEndpoint"?

By the way,
how do I configure the kubeadm config for a layer-7 load balancer? It is HTTPS, and I ran into a certificate problem with kubeadm. Is there any documentation? My guess at the config is sketched below.
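
My guess, assuming the load balancer's address simply needs to be an extra SAN on the apiserver certificate (SLB7_DNS_NAME is a placeholder, not something I have verified):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
controlPlaneEndpoint: "SLB7_DNS_NAME:443"
apiServer:
  certSANs:
  - "SLB7_DNS_NAME"
EOF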

I am looking forward to your reply :)

@neolit123
Member

hi,

If I use a layer-4 load balancer as "controlPlaneEndpoint", there are network problems.
For example, I have three masters: ServerA, ServerB and ServerC. If I run "kubectl get pod" on ServerA, there is a 33 percent chance of a timeout.

The choice of a load balancer is left to the user. If a load balancer has problems then it's up to the user to resolve them, but I have no explanation for why you are seeing these problems
(possibly related to something we saw at some point, see below).

In a public cloud environment such as Aliyun, I can't use keepalived + haproxy.
This means I have to use a layer-7 load balancer for the k8s apiserver if I want to use "controlPlaneEndpoint", right?

haproxy is known to work, and it also supports HTTPS health checks, which are quite important.
This comment might help you:
kubernetes-sigs/kind#588 (comment)

Maybe you are seeing the same problem: early TCP redirects and no TLS support.
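
As a sketch of what I mean, a TCP-passthrough haproxy fragment with HTTPS health checks against each apiserver's /healthz endpoint (master addresses are placeholders, and this assumes an existing haproxy.cfg with sane global/defaults sections):

cat >> /etc/haproxy/haproxy.cfg <<EOF
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    balance roundrobin
    # HTTPS health check against each apiserver's /healthz
    option httpchk GET /healthz
    http-check expect status 200
    server master-a 10.0.0.11:6443 check check-ssl verify none
    server master-b 10.0.0.12:6443 check check-ssl verify none
    server master-c 10.0.0.13:6443 check check-ssl verify none
EOF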

@RockyFu267

@neolit123
It's not just an isolated case.
I think the people who design this part of Kubernetes should rethink this area.

https://stackoverflow.com/questions/56768956/how-to-use-kubeadm-init-configuration-parameter-controlplaneendpoint/57121454#57121454

@neolit123
Member

@RockyFu267
I created this tracking issue in kubernetes/kubeadm:
kubernetes/kubeadm#1685
