
fail to mark edge-node #629

Closed
GitHubThinking opened this issue Nov 25, 2021 · 5 comments

@GitHubThinking

What happened:

The edge-node is not marked.

[root@k8smaster bin]# ./yurtctl convert --provider kubeadm --cloud-nodes k8smaster

I1125 10:47:21.752654   17882 convert.go:318] mark k8smaster as the cloud-node
I1125 10:48:01.793163   17882 util.go:542] servant job(yurtctl-disable-node-controller-k8smaster) has succeeded
I1125 10:48:01.793210   17882 convert.go:343] complete disabling node-controller
I1125 10:48:01.794493   17882 convert.go:443] kube-public/cluster-info configmap already exists, skip to prepare it
I1125 10:48:01.805102   17882 convert.go:408] deploying the yurt-hub and resetting the kubelet service on edge nodes...
E1125 10:48:01.808416   17882 util.go:539] fail to run servant job(yurtctl-servant-convert-k8snode2): jobs.batch "yurtctl-servant-convert-k8snode2" already exists
E1125 10:48:01.808802   17882 util.go:539] fail to run servant job(yurtctl-servant-convert-k8snode1): jobs.batch "yurtctl-servant-convert-k8snode1" already exists
I1125 10:48:01.808812   17882 convert.go:414] complete deploying yurt-hub on edge nodes
I1125 10:48:01.808817   17882 convert.go:417] deploying the yurt-hub and resetting the kubelet service on cloud nodes
E1125 10:48:01.811134   17882 util.go:539] fail to run servant job(yurtctl-servant-convert-k8smaster): jobs.batch "yurtctl-servant-convert-k8smaster" already exists
I1125 10:48:01.811145   17882 convert.go:423] complete deploying yurt-hub on cloud nodes
[root@k8smaster bin]# ./yurtctl markautonomous
W1125 10:57:39.404158   20585 markautonomous.go:118] there is no edge nodes, please label the edge node first

The kubelet log shows:

Nov 25 10:57:51 k8smaster kubelet[1660]: E1125 10:22:51.583366    1660 pod_workers.go:191] Error syncing pod 740f5710-694d-4c13-b251-1d8797403b17 ("yurtctl-servant-revert-k8smaster-4pgh6_kube-system(740f5710-694d-4c13-b251-1d8797403b17)"), skipping: failed to "StartContainer" for "yurtctl-servant" with CrashLoopBackOff: "back-off 2m40s restarting failed container=yurtctl-servant pod=yurtctl-servant-revert-k8smaster-4pgh6_kube-system(740f5710-694d-4c13-b251-1d8797403b17)"
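
For more detail on why the servant container keeps crashing, the pod itself can be inspected. A minimal sketch, reusing the pod name from the kubelet log above:

# list the yurtctl servant pods in kube-system
kubectl get pods -n kube-system | grep yurtctl-servant

# read the logs and events of the crash-looping pod
kubectl logs -n kube-system yurtctl-servant-revert-k8smaster-4pgh6
kubectl describe pod -n kube-system yurtctl-servant-revert-k8smaster-4pgh6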

What you expected to happen:

I expected to see the output
'mark k8smaster as the edge-node'

and for yurtctl markautonomous to succeed.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • OpenYurt version: v0.5.0
  • Kubernetes version (use kubectl version): v1.18.0
  • OS (e.g: cat /etc/os-release): centos7
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:


/kind bug

@GitHubThinking added the kind/bug label Nov 25, 2021
@rambohe-ch
Member

@GitHubThinking Thank you for raising this issue.

/assign @adamzhoul @Peeknut

@adamzhoul
Member

E1125 10:48:01.808416   17882 util.go:539] fail to run servant job(yurtctl-servant-convert-k8snode2): jobs.batch "yurtctl-servant-convert-k8snode2" already exists
E1125 10:48:01.808802   17882 util.go:539] fail to run servant job(yurtctl-servant-convert-k8snode1): jobs.batch "yurtctl-servant-convert-k8snode1" already exists
E1125 10:48:01.811134   17882 util.go:539] fail to run servant job(yurtctl-servant-convert-k8smaster): jobs.batch "yurtctl-servant-convert-k8smaster" already exists

The old jobs may be preventing yurtctl convert from labeling the nodes.
Please delete those jobs, then run yurtctl revert followed by yurtctl convert to redo the conversion.

This looks similar to issue #572.

@GitHubThinking
Author


Please tell me how to delete the old jobs. I'm a beginner, so I'd appreciate step-by-step instructions.

@Peeknut
Member

Peeknut commented Nov 26, 2021

You can list the jobs in the kube-system namespace, where you will see the convert jobs:

kubectl get job -n kube-system

Then delete a specific job by its namespace and name with:

kubectl delete job -n <ns> <job-name>

Here, you can run:

kubectl delete job -n kube-system yurtctl-servant-convert-k8snode2
kubectl delete job -n kube-system yurtctl-servant-convert-k8snode1
kubectl delete job -n kube-system yurtctl-servant-convert-k8smaster
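
Alternatively, all leftover convert jobs can be removed in one pass before redoing the conversion. A minimal sketch, assuming the job names share the yurtctl-servant-convert- prefix seen in the errors above:

# delete every leftover servant convert job at once
kubectl get jobs -n kube-system -o name | grep yurtctl-servant-convert | xargs -r kubectl delete -n kube-system

# then redo the conversion as suggested above
./yurtctl revert
./yurtctl convert --provider kubeadm --cloud-nodes k8smaster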

@stale

stale bot commented Feb 24, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Feb 24, 2022
stale bot closed this as completed Mar 3, 2022