sync status of pod #1913
Conversation
newStatus := &corev1.PodStatus{}
for _, item := range aggregatedStatusItems {
	if item.Status == nil {
		continue
	}
	if err = json.Unmarshal(item.Status.Raw, newStatus); err != nil {
		return nil, err
	}
	klog.V(3).Infof("Grab pod(%s/%s) status from cluster(%s), phase: %s, qosClass: %s", newStatus.Phase, newStatus.QOSClass)
}
This logic looks like it only unmarshals the latest object in the items.
Actually, we have not found a good method to aggregate the pod status. A single PodStatus cannot properly represent the status of a pod running in multiple clusters.
> This logic looks like it only unmarshals the latest object in the items.
> Actually, we have not found a good method to aggregate the pod status. A single PodStatus cannot properly represent the status of a pod running in multiple clusters.

This loop only executes once, and aggregating the pod status here does not involve multiple clusters. This is just an independent Pod resource, not a pod in a Deployment or something else.
When we propagate to more than one cluster, there will be more than one item.
> When we propagate to more than one cluster, there will be more than one item.

You are right. So can I set the pod status to Ready only when all pods are in that state?
I think it's maybe okay.
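For illustration, a minimal sketch of the rule discussed here, assuming a helper that sees the PodStatus reported by each member cluster (the package and function names are ours, not the actual Karmada code): the aggregated phase becomes Running only when every member cluster reports a Running pod.

```go
// Illustrative sketch only; the real logic lives in Karmada's resource interpreter.
package interpreter

import corev1 "k8s.io/api/core/v1"

// aggregatePhase returns Running only if every member cluster reports a
// Running pod; any missing or non-Running status keeps the result at Pending.
func aggregatePhase(memberStatuses []corev1.PodStatus) corev1.PodPhase {
	if len(memberStatuses) == 0 {
		return corev1.PodPending
	}
	for _, s := range memberStatuses {
		if s.Phase != corev1.PodRunning {
			return corev1.PodPending
		}
	}
	return corev1.PodRunning
}
```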
This is how I propagate a pod.
Remember to update the documents as well. Thanks.
Okay.
@xyz2277 thanks for your contribution. /assign
for _, containerStatus := range temp.ContainerStatuses {
	tempStatus := containerStatus
	newStatus.ContainerStatuses = append(newStatus.ContainerStatuses, tempStatus)
IMO, collecting the container status does not appear to be required.
> IMO, collecting the container status does not appear to be required.

The status of the containers needs to be collected; otherwise the READY column will always show 0/1.
> The status of the containers needs to be collected; otherwise the READY column will always show 0/1.

I'm not sure about it. I thought the Ready status is based on the .status.phase. @lonelyCZ might know it.
> The status of the containers needs to be collected; otherwise the READY column will always show 0/1.
> I'm not sure about it. I thought the Ready status is based on the .status.phase. @lonelyCZ might know it.

I've already tried: the Ready status is based on the status of the containers.
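For context, a simplified sketch of how the READY column is derived from container statuses, modeled on the kubectl printer logic referenced later in this thread (the function name and exact counting are our simplification): without aggregated ContainerStatuses the ready count stays at zero, hence the permanent 0/1.

```go
package interpreter

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// readyColumn mimics how kubectl builds the READY column: the number of
// containers whose status is Ready (and actually Running) over the number
// of containers declared in the spec.
func readyColumn(pod *corev1.Pod) string {
	ready := 0
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Ready && cs.State.Running != nil {
			ready++
		}
	}
	return fmt.Sprintf("%d/%d", ready, len(pod.Spec.Containers))
}
```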
Hi @xyz2277, how about only collecting the fields we need?
> Hi @xyz2277, how about only collecting the fields we need?

I tried that too. When you view the pod details, a lot of empty fields will be found.
It will be ok.
if reflect.DeepEqual(pod.Status, *newStatus) {
	klog.V(3).Infof("ignore update pod(%s/%s) status as up to date", pod.Namespace, pod.Name)
	return object, nil
}
This logic may need to be moved back to after the newStatus.Phase assignment.
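To make the suggested ordering concrete, a hedged sketch (variable names are taken from the quoted hunks; the helper name and signature are ours, not Karmada's):

```go
package interpreter

import (
	"reflect"

	corev1 "k8s.io/api/core/v1"
)

// finishAggregation assigns the aggregated phase first and only then checks
// whether the status is already up to date; comparing before the Phase
// assignment could skip an update whose only change is the phase itself.
// It returns true when the update can be skipped.
func finishAggregation(pod *corev1.Pod, newStatus *corev1.PodStatus, allRunning bool) bool {
	if allRunning {
		newStatus.Phase = corev1.PodRunning
	}
	return reflect.DeepEqual(pod.Status, *newStatus)
}
```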
The kubectl output format for the Pod resource looks like:
# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx 2/1 Running 0 9m19s
for _, containerStatus := range temp.ContainerStatuses {
	tempStatus := containerStatus
	newStatus.ContainerStatuses = append(newStatus.ContainerStatuses, tempStatus)
	if containerStatus.Ready {
The judgment here can refer to: https://github.com/kubernetes/kubernetes/blob/8415ae647d2c433c89910a0e677094e3a20ffb2b/pkg/printers/internalversion/printers.go#L822-L824
Do you mean that when a container is ready, we can collect that containerStatus?
The judgment here may not be strong enough.
container.Ready && container.State.Running != nil
> The judgment here may not be strong enough.
> container.Ready && container.State.Running != nil

Got it.
Just a reminder, please don't forget to add a unit test for it and update the documents I mentioned above.
Got it!
/cc @RainbowMango
Can you share the test report with the new patch? For example, propagating a Pod to at least 2 clusters.
How about modifying it like this:
	newStatus := &corev1.PodStatus{}
	newStatus.ContainerStatuses = make([]corev1.ContainerStatus, 0)
-	readySum := 0
-	containerSum := 0
+	runningFlag := true
	for _, item := range aggregatedStatusItems {
		if item.Status == nil {
+			runningFlag = false
			continue
		}
@@ -286,19 +286,22 @@ func aggregatePodStatus(object *unstructured.Unstructured, aggregatedStatusItems
			return nil, err
		}
+		if temp.Phase != corev1.PodRunning {
+			runningFlag = false
+		}
+
		for _, containerStatus := range temp.ContainerStatuses {
-			tempStatus := containerStatus
-			newStatus.ContainerStatuses = append(newStatus.ContainerStatuses, tempStatus)
-			if containerStatus.Ready && containerStatus.State.Running != nil {
-				readySum++
+			tempStatus := corev1.ContainerStatus{
+				Ready: containerStatus.Ready,
+				State: containerStatus.State,
			}
-			containerSum++
+			newStatus.ContainerStatuses = append(newStatus.ContainerStatuses, tempStatus)
		}
		klog.V(3).Infof("Grab pod(%s/%s) status from cluster(%s), phase: %s", pod.Namespace,
			pod.Name, item.ClusterName, temp.Phase)
	}
-	if containerSum == readySum {
+	if runningFlag {
		newStatus.Phase = corev1.PodRunning
Yeah, it looks cleaner and more concise.
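For reference, a self-contained sketch of the approach agreed on above. The statusItem type is an illustrative stand-in for Karmada's aggregated status item (only the fields used in the diff are modeled), and the function name mirrors the diff but is not the actual implementation.

```go
package interpreter

import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
)

// statusItem is a minimal stand-in for an aggregated status item: the member
// cluster's name plus the raw PodStatus it reported (nil if not reported yet).
type statusItem struct {
	ClusterName string
	RawStatus   []byte
}

// aggregatePodStatus mirrors the diff above: collect only the Ready/State
// fields of each container, and mark the pod Running only when every member
// cluster reports a Running pod.
func aggregatePodStatus(items []statusItem) (*corev1.PodStatus, error) {
	newStatus := &corev1.PodStatus{}
	newStatus.ContainerStatuses = make([]corev1.ContainerStatus, 0)
	runningFlag := true

	for _, item := range items {
		if item.RawStatus == nil {
			// A cluster without reported status keeps the pod out of Running.
			runningFlag = false
			continue
		}
		temp := &corev1.PodStatus{}
		if err := json.Unmarshal(item.RawStatus, temp); err != nil {
			return nil, err
		}
		if temp.Phase != corev1.PodRunning {
			runningFlag = false
		}
		for _, cs := range temp.ContainerStatuses {
			newStatus.ContainerStatuses = append(newStatus.ContainerStatuses, corev1.ContainerStatus{
				Ready: cs.Ready,
				State: cs.State,
			})
		}
	}

	if runningFlag {
		newStatus.Phase = corev1.PodRunning
	}
	return newStatus, nil
}
```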
kindly ping @xyz2277
Why not collect all the information? Isn't Karmada's goal to operate multiple clusters the same way you operate a single cluster? If this information is not collected, it may not be friendly to some visualization pages.
Not all information can be aggregated into the resource template. Take HostIP, for example: if the Pod is distributed across multiple clusters, multiple HostIPs cannot be reflected in the resource template at the same time. Status aggregation has its limitations; which status fields are needed should be decided based on the actual application scenario.
Got it
Signed-off-by: bruce <[email protected]>
Thanks for your contribution!
/lgtm
/approve
Note: I updated the PR description (kind and release notes).
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: RainbowMango
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Signed-off-by: bruce [email protected]
What type of PR is this?
/kind feature
What this PR does / why we need it:
When I tried to propagate or promote a pod, I found that the status of the pod was always Pending and never changed.
Which issue(s) this PR fixes:
Fixes #1988
Special notes for your reviewer:
Does this PR introduce a user-facing change?: