
Add documentation for Azure availability zones #57

Merged
merged 1 commit into from
Sep 11, 2018

Conversation

feiskyer
Member

@feiskyer feiskyer commented Sep 6, 2018

Add docs for AZ: kubernetes/enhancements#586.

/assign @justaugustus

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 6, 2018

If the topology-aware provisioning feature is used, the feature gates `VolumeScheduling` and `DynamicProvisioningScheduling` should be enabled on the master components (e.g. kube-apiserver, kube-controller-manager, and kube-scheduler).
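As a minimal sketch, the gates can be turned on via the `--feature-gates` flag on each component. The snippet below assumes a kubeadm-style static pod manifest for kube-scheduler; the file layout and surrounding fields are illustrative:

```yaml
# Illustrative fragment of /etc/kubernetes/manifests/kube-scheduler.yaml;
# only the container command is shown.
spec:
  containers:
  - name: kube-scheduler
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --feature-gates=VolumeScheduling=true,DynamicProvisioningScheduling=true
```

The same `--feature-gates` value would be added to kube-apiserver and kube-controller-manager.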

We've removed the DynamicProvisioningScheduling feature gate in 1.12. So only VolumeScheduling is required.

Member Author


Yep, thanks for the tip.

Zone-aware and topology-aware provisioning are supported for Azure managed disks. To support these features, a few options are added to the AzureDisk storage class:

- **zoned**: indicates whether new disks are provisioned with availability zones. Defaults to true.
- **zone** and **zones**: indicate which zones should be used to provision new disks (zone-aware provisioning). Can only be set if `zoned` is not false and `allowedTopologies` is not set.
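As a sketch, a storage class pinning new disks to specific zones might look like the following; the class name and zone values are illustrative:

```yaml
# Illustrative AzureDisk storage class using zone-aware provisioning.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-disk-zoned   # hypothetical name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
  zoned: "true"
  zones: eastus2-1,eastus2-2  # example zone list; use your region's zones
```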

Considering that we are deprecating the zone/zones parameters for aws/gce in 1.12, replacing it with allowedTopologies which is beta, and this is just newly being introduced for azure as alpha, do you want to consider leaving it out?

Member Author


Yep, let me remove it from this usage guide.


Zone-aware and topology-aware provisioning are supported for Azure managed disks. To support these features, a few options are added to the AzureDisk storage class:

- **zoned**: indicates whether new disks are provisioned with availability zones. Defaults to true.
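A class that opts out of AZ-aware provisioning (e.g. for clusters that still run unzoned nodes) could be sketched as follows; the class name is illustrative:

```yaml
# Illustrative AzureDisk storage class that disables AZ-aware provisioning.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-disk-unzoned   # hypothetical name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  zoned: "false"   # provision disks without an availability zone
```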

Will this cause backwards compatibility issues for users that have StorageClasses before this change?

Member Author


No, Azure supports both zoned and unzoned nodes in the same cluster. If there are no zoned nodes in the cluster, disks will still be provisioned without a zone.

@@ -0,0 +1,220 @@
# Availability Zones

**Feature Status:** Alpha since v1.12.

Is there a feature gate that users have to set to enable this?

Member Author


Nope, this is enabled by default in v1.12, so that new zoned nodes can join easily after upgrading an existing cluster.

Zone-aware and topology-aware provisioning are supported for Azure managed disks. To support these features, a few options are added to the AzureDisk storage class:

- **zoned**: indicates whether new disks are provisioned with availability zones. Defaults to true.
- **allowedTopologies**: indicates which topologies are allowed for topology-aware provisioning. Can only be set if `zoned` is not false and `zone`/`zones` are not set.
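A topology-aware class could be sketched as follows, assuming the 1.12-era zone label `failure-domain.beta.kubernetes.io/zone`; the class name and zone values are illustrative:

```yaml
# Illustrative AzureDisk storage class using topology-aware provisioning.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-disk-topology   # hypothetical name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
volumeBindingMode: WaitForFirstConsumer  # delay binding until a pod is scheduled
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - eastus2-1   # example zones
    - eastus2-2
```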

this needs to be updated

Member Author


oops, fixed now

Member

@andyzhangx andyzhangx left a comment


/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 11, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andyzhangx, feiskyer

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [andyzhangx,feiskyer]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit a51ab9e into kubernetes-sigs:master Sep 11, 2018
@feiskyer feiskyer deleted the az branch September 11, 2018 08:56
JoelSpeed pushed a commit to JoelSpeed/cloud-provider-azure that referenced this pull request Apr 3, 2023
…-leader-election-lost-openshift

OCPBUGS-8474: CCM should not panic when losing leader election lease