
MachineDeployment status does not show the number of ReadyReplicas in a Management cluster #930

Closed
DheerajSShetty opened this issue May 3, 2019 · 10 comments · Fixed by #1052
Assignees
Labels
area/machine: Issues or PRs related to machine lifecycle management
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/active: Indicates that an issue or PR is actively being worked on by a contributor.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@DheerajSShetty

Create a MachineDeployment in a management cluster with n replicas.
n nodes are created.
Check the status of the MachineDeployment: the readyReplicas field won't be set to n.
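
A quick way to see the missing field (a sketch; <name> is a placeholder for the MachineDeployment name, and the field names follow the v1alpha1 API):

$ kubectl get machinedeployment <name> -o jsonpath='{.spec.replicas} {.status.readyReplicas}{"\n"}'

The first number is the desired replica count; the second comes back empty instead of matching it.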

@vincepri vincepri changed the title MachineDeployment status does not show the number of ReadyReplicas in a Management cl; MachineDeployment status does not show the number of ReadyReplicas in a Management cluster May 7, 2019
@vincepri
Member

vincepri commented May 7, 2019

/kind bug

This issue is related to the AWS provider not setting the NodeRef field when running in a Management cluster. The nodeRef was being set before, but the logic was revisited in kubernetes-sigs/cluster-api-provider-aws#744 because Cluster API wouldn't properly delete the cluster in a management scenario.

Going to close this one here, given it's specific to the AWS provider.
/close
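
To confirm this is the cause in a given cluster, one can check whether each Machine has a nodeRef set (a sketch, assuming the Machine objects live in the current namespace):

$ kubectl get machines -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeRef.name}{"\n"}{end}'

Machines without a nodeRef print only their name, and those are the ones that don't get counted as ready.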

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label May 7, 2019
@k8s-ci-robot
Contributor

@vincepri: Closing this issue.


@detiber
Member

detiber commented May 7, 2019

@vincepri This would also affect other providers that do not set NodeRef when running in a management cluster (which I believe is all of them).

/reopen

@k8s-ci-robot k8s-ci-robot reopened this May 7, 2019
@k8s-ci-robot
Contributor

@detiber: Reopened this issue.


@vincepri vincepri added this to the Next milestone May 7, 2019
@vincepri vincepri added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label May 7, 2019
@liztio
Contributor

liztio commented May 22, 2019

/assign

@liztio
Contributor

liztio commented May 23, 2019

Reproduction steps:

  1. Create a new management cluster
  2. Use the example YAML and generate.sh scripts as per the CAPA setup instructions
  3. Wait for the machines to be running
  4. Observe the lack of readyReplicas:
$ kubectl get machinedeployments   -o=jsonpath='{.items[0].status}'
map[observedGeneration:2 replicas:1 unavailableReplicas:1 updatedReplicas:1]%
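
For comparison, once readyReplicas is populated the same query should report it alongside the other counters, along the lines of (hypothetical output, assuming one ready replica):

map[observedGeneration:2 readyReplicas:1 replicas:1 updatedReplicas:1]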

@liztio
Contributor

liztio commented May 24, 2019

I'm curious how we can make this work in the general case. The only status information reported by the machine is provider-specific:

$ kubectl get machine brew-69b776f86c-8s2tl -o json | jq '.status'
{
  "providerStatus": {
    "instanceID": "i-0d833b1e56938a3bb",
    "instanceState": "running",
    "metadata": {
      "creationTimestamp": null
    }
  }
}

If we look at the structure itself, we see that there's a Phase field, currently unpopulated. But it's a free-form text field, not an enum.

There's also the Conditions field, which includes a ready status. It isn't currently populated by CAPA either, and I don't know whether other providers populate it.

I think perhaps the issue here is that the MachineSet doesn't get updated with ReadyReplicas either. If it did, it'd be trivial to propagate that up to the MachineDeployment.
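
A quick way to see the same gap at the MachineSet level (a sketch; readyReplicas is the v1alpha1 status field name):

$ kubectl get machinesets -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.readyReplicas}{"\n"}{end}'

If that column were populated, the MachineDeployment status could be derived by summing it across the MachineSets it owns.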

@ncdc
Contributor

ncdc commented Jun 10, 2019

@liztio FYI @vincepri is working on a POC to handle setting remote node references. If that is approved, this should be achievable as a follow-up.

@ncdc
Contributor

ncdc commented Jun 10, 2019

/area machine

@k8s-ci-robot k8s-ci-robot added the area/machine Issues or PRs related to machine lifecycle management label Jun 10, 2019
@vincepri
Member

/lifecycle active

@k8s-ci-robot k8s-ci-robot added the lifecycle/active Indicates that an issue or PR is actively being worked on by a contributor. label Jun 20, 2019
@timothysc timothysc modified the milestones: Next, v1alpha2 Jun 21, 2019