Configurable topology labels #962
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules: stale after 90 days of inactivity, rotten 30 days later, and closed 30 days after that. You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle rotten or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages issues according to its lifecycle rules. You can reopen this issue with /reopen or mark it as fresh with /remove-lifecycle rotten. Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The current status is that nodes have both labels, but PVs select only on the old non-standardised label. I have written a patch to provide a command-line option, disabled by default (as I'm not sure whether enabling it by default would break some setups), to make PVs select on the standardised label instead. Can this issue be re-opened?
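For context, here is a rough sketch of what such a flag-gated choice of topology key could look like in a CSI driver's CreateVolume path. It is only an illustration of the idea, not the actual patch: the flag name, the key constants, and the volumeTopology helper are all assumptions.

```go
// Sketch only: a flag-gated choice of which topology key is stamped onto
// newly created volumes. Flag name and key constants are hypothetical.
package driver

import (
	"flag"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

const (
	// Assumed key names: the driver-specific (legacy) zone key and the
	// Kubernetes well-known zone key.
	driverZoneKey   = "topology.ebs.csi.aws.com/zone"
	standardZoneKey = "topology.kubernetes.io/zone"
)

// Hypothetical command-line option, disabled by default so existing setups
// that rely on the driver-specific key keep working unchanged.
var useStandardTopology = flag.Bool("use-standard-topology-label", false,
	"advertise topology.kubernetes.io/zone on created volumes instead of the driver-specific key")

// volumeTopology builds the accessible topology returned in a
// CreateVolumeResponse. The external-provisioner converts this into the PV's
// node-affinity selector, so the key chosen here is what the PV selects on.
func volumeTopology(zone string) []*csi.Topology {
	key := driverZoneKey
	if *useStandardTopology {
		key = standardZoneKey
	}
	return []*csi.Topology{
		{Segments: map[string]string{key: zone}},
	}
}
```

Keeping such a flag off by default matches the caution above: PVs that already select on the old key, and nodes that carry only that key, would be unaffected until an operator opts in.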
/reopen
@AndrewSirenko: Reopened this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/close
The original reason for customizing the topology labels (to use the k8s standardized label as the default) is no longer necessary as of more recent releases. If you have a specific need for customization other than using the standardized label, please open a new issue so that it can be evaluated.
@ConnorJC3: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/kind bug
What happened?
#729 <- the autoscaler reads the in-tree topology labels
#899 <- CSI migration translation reads both the in-tree topology labels and our EBS-specific topology label and translates both into the standard topology label, which results in the PV having duplicate labels
To solve #899 we started to put only the EBS-specific topology label on the CSINode and on PVs.
But that breaks #729 when migration is off, because the autoscaler expects to read the in-tree topology labels.
Moreover, if PVs exist with both labels and users upgrade to a version that applies only one label, the scheduler won't be able to schedule those PVs, which carry both labels, onto nodes that have just one label. We may already have users who ran into this bug.
To fix this, we might need to default to having both labels on the CSINode again, but still put just one label on the PV. The CSI translation duplicate-label issue will then be fixed, but it's unclear at this time whether we should be using the in-tree topology label on the CSINode at all, whether that label, being Kubernetes-specific, should be reserved for in-tree Kubernetes usage, or whether topology labels should be configurable so that by default we include the Kubernetes-specific label but (totally hypothetical) non-Kubernetes users can omit it.
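To make the "both labels on CSINode, one label on the PV" idea concrete, here is a minimal sketch of how a node service could advertise both zone keys in NodeGetInfo (kubelet records those keys on the CSINode object and applies the matching node labels), while CreateVolume keeps stamping a single key onto each new volume. This is an illustration only, not the driver's actual code; the struct, field names, and key values are assumptions.

```go
// Sketch only: advertise both the driver-specific and the Kubernetes
// well-known zone keys from the node plugin, so PVs whose node affinity uses
// either key can still be matched against the node's topology.
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeService is a trimmed-down stand-in for the driver's node server.
type nodeService struct {
	instanceID string // node ID, e.g. the EC2 instance ID (placeholder)
	zone       string // availability zone discovered from instance metadata
}

// NodeGetInfo reports both zone keys; kubelet copies them into
// CSINode.spec.drivers[].topologyKeys and the corresponding node labels.
func (s *nodeService) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId: s.instanceID,
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{
				"topology.ebs.csi.aws.com/zone": s.zone, // assumed driver-specific key
				"topology.kubernetes.io/zone":   s.zone, // Kubernetes well-known key
			},
		},
	}, nil
}
```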
What you expected to happen?
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?:
Environment
Kubernetes version (use kubectl version):