CreateVolume respects the topology requirements of the node #200
Conversation
Hi @alexanderKhaustov. Thanks for your PR. I'm waiting for a kubernetes-csi or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Unknown CLA label state. Rechecking for CLA labels. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
```go
var accReqs *csi.TopologyRequirement
if ni.AccessibleTopology != nil {
	accReqs = &csi.TopologyRequirement{
		Requisite: []*csi.Topology{ni.AccessibleTopology},
	}
}
```
This assumes that the storage backend can support a one-node topology, but that may not be the case for something that requires, say, a minimum of 3 nodes for replication.
I'm not sure what would be a good way to support that in the test, though. This is probably OK for now, but it might be good to add a comment about the assumption.
csi-sanity essentially simulates a CO with a single node, so I think this code change is consistent with the rest of the testing.
@alexanderKhaustov can you add a short comment here that explains that?
Sorry for the delay.
I'm not sure I understand the potentially problematic scenario.
The SP returns topology information for a node, but then doesn't support provisioning a volume with that topology, because of a requirement (existing only externally to the CSI plugin/spec?) that a volume can only be created if there are at least 3 nodes with some specific topology?
If that is the case, then such an external peculiarity is hardly within the scope of csi-test, is it not? (csi-test aside, it seems that such a plugin would also behave unexpectedly for a CO end user, would it not?)
@alexanderKhaustov: I agree. Let me try to propose a comment that you can put above the new code and then if @msau42 has no additional comments we can merge it:
csi-sanity testing emulates the behavior of a CO with a single node. The name and topology of that node are provided by the driver itself. Here we ensure that the new volume gets provisioned on that node, because otherwise staging it on the node later in the test might fail.
I think it would be good to add a note that this test does not support storage backends that require more than 1 node.
I've checked the spec more thoroughly; it seems there are no requirements on the setups that an SP MUST support, so I added comments about the test assumptions.
As an aside, with regard to the topology returned by NodeGetInfo, the spec surprisingly states that
> COs MAY use this [topology] information ... when scheduling workloads
(https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo)
rather than MUST use, which seems more reasonable. Do you know of any reason for that?
I don't know why topology information is allowed to be ignored.
/lgtm
I'll let this sit for a few more days in case @msau42 has further comments, then approve.
/approve
Thanks!
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: alexanderKhaustov, msau42
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
…olumes-with-topology CreateVolume respects the topology requirements of the node
d24254f Merge pull request kubernetes-csi#202 from xing-yang/kind_0.14.0
0faa3fc Update to Kind v0.14.0 images
ef4e1b2 Merge pull request kubernetes-csi#201 from xing-yang/add_1.24_image
4ddce25 Add 1.24 Kind image
7fe5149 Merge pull request kubernetes-csi#200 from pohly/bump-kubernetes-version
70915a8 prow.sh: update snapshotter version
31a3f38 Merge pull request kubernetes-csi#199 from pohly/bump-kubernetes-version
7577454 prow.sh: bump Kubernetes to v1.22.0
d29a2e7 Merge pull request kubernetes-csi#198 from pohly/csi-test-5.0.0
41cb70d prow.sh: sanity testing with csi-test v5.0.0
c85a63f Merge pull request kubernetes-csi#197 from pohly/fix-alpha-testing
b86d8e9 support Kubernetes 1.25 + Ginkgo v2
ab0b0a3 Merge pull request kubernetes-csi#192 from andyzhangx/patch-1
7bbab24 Merge pull request kubernetes-csi#196 from humblec/non-alpha
e51ff2c introduce control variable for non alpha feature gate configuration
ca19ef5 Merge pull request kubernetes-csi#195 from pohly/fix-alpha-testing
3948331 fix testing with latest Kubernetes
9a0260c fix boilerplate header
git-subtree-dir: release-tools
git-subtree-split: d24254f
prow.sh: update snapshotter version
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #194
Special notes for your reviewer:
Does this PR introduce a user-facing change?: