
Issue with k8s.io/docs/concepts/cluster-administration/cloud-providers/ #8304

Closed · 1 of 2 tasks
edqallen opened this issue May 3, 2018 · 15 comments


edqallen commented May 3, 2018

This is a...

  • Feature Request
  • Bug Report

Problem:
Only the AWS and OpenStack configurations are documented. There is no documentation on how to set up clusters with other providers such as Azure, GCE, or vSphere.

Proposed Solution:
Create the documentation, even rough engineering-level notes from the people who tested those cloud providers before they were enabled.

Page to Update:
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
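For context on what such documentation would cover: a provider is enabled by starting the control-plane components with `--cloud-provider=<name>` and `--cloud-config=<file>`, where the name resolves to a provider registered in code. Below is a minimal sketch, assuming the `k8s.io/cloud-provider` package; the provider name `examplecloud` and the stub type `exampleCloud` are hypothetical, and the exact `Interface` method set varies across Kubernetes versions.

```go
// Hypothetical sketch: how a cloud provider registers itself so that
// --cloud-provider=examplecloud can find it. Assumes the k8s.io/cloud-provider
// package; "examplecloud" and exampleCloud are invented names.
package examplecloud

import (
	"io"

	cloudprovider "k8s.io/cloud-provider"
)

// providerName is what operators would pass via --cloud-provider.
const providerName = "examplecloud"

// exampleCloud is a stub implementation of cloudprovider.Interface; a real
// provider would back these methods with its cloud API client.
type exampleCloud struct{}

func init() {
	// Register a factory so components started with
	//   --cloud-provider=examplecloud --cloud-config=/etc/kubernetes/cloud.conf
	// can construct the provider from the supplied config file.
	cloudprovider.RegisterCloudProvider(providerName,
		func(config io.Reader) (cloudprovider.Interface, error) {
			// A real provider would parse its settings from config here.
			return &exampleCloud{}, nil
		})
}

// The stubs below report every optional capability as unsupported; a real
// provider returns its implementation and true for what it supports.
func (c *exampleCloud) Initialize(cb cloudprovider.ControllerClientBuilder, stop <-chan struct{}) {}
func (c *exampleCloud) LoadBalancer() (cloudprovider.LoadBalancer, bool) { return nil, false }
func (c *exampleCloud) Instances() (cloudprovider.Instances, bool)       { return nil, false }
func (c *exampleCloud) InstancesV2() (cloudprovider.InstancesV2, bool)   { return nil, false }
func (c *exampleCloud) Zones() (cloudprovider.Zones, bool)               { return nil, false }
func (c *exampleCloud) Clusters() (cloudprovider.Clusters, bool)         { return nil, false }
func (c *exampleCloud) Routes() (cloudprovider.Routes, bool)             { return nil, false }
func (c *exampleCloud) ProviderName() string                             { return providerName }
func (c *exampleCloud) HasClusterID() bool                               { return false }
```

The requested docs would pair this registration surface with per-provider instructions for the flags above and the provider's cloud-config file format.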

@zacharysarah zacharysarah added the P2 label May 7, 2018
@zacharysarah
Contributor

/cc @hogepodge

@andrewsykim
Member

DigitalOcean will be adding docs for this soon.

@hogepodge
Contributor

wg-cloud-provider, in collaboration with sig-docs, is formulating a general way to handle this issue. The cloud-provider group is working on setting documentation standards for the providers that will do a few things:

  • Create a minimum set of required documentation, including "getting started" and "configuration" guides.
  • Establish a known location for documents to reside, so that sig-docs can automatically generate documentation from each wg-cloud-provider-approved set of documents.
  • Make wg-cloud-provider the owner of the documentation standards, with clear guidelines for adding and removing documents.

Currently wg-cloud-provider is investigating transitioning to a SIG and converting the existing provider-specific SIGs into working groups. In the meantime, individual cloud-provider additions should be caveated with those governance changes in mind.

The end goal is to provide a positive and consistent user experience for identifying, installing, and using any particular cloud provider.

Initial work is in this PR: kubernetes/community#1942

@karataliu
Contributor

Please consider making the known location the docs dir under each cloud provider repository; otherwise every document would have two copies.

Also posted the question here: kubernetes/enhancements#88 (comment)

There would be a docs dir in every single provider:
https://github.com/kubernetes/community/blob/master/wg-cloud-provider/cloud-provider-requirements.md

What's the relationship with the Kubernetes website? Should the documents be periodically synced?
https://github.com/kubernetes/website/blob/master/docs/concepts/cluster-administration/cloud-providers.md

@jagosan

jagosan commented May 9, 2018

/cc @chenopis
/cc @jaredbhatti
FYI

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 7, 2018
@edqallen
Author

edqallen commented Aug 7, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 7, 2018
@timothysc
Member

xref kubernetes/kubeadm#526

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2018
@andrewsykim
Member

/remove-lifecycle stale

cc @hogepodge @chenopis for updates on some progress we made during KubeCon :)

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2018
@hogepodge
Contributor

At the developer summit at KubeCon Seattle, we made plans to refocus efforts between /sig cloud-provider and /sig cluster-lifecycle to consolidate the getting-started pages. The plan is for SIG Docs to act as a high-level project manager, setting goals and tasks for the corresponding SIGs to take back to their teams to accomplish. @zacharysarah did I capture that right?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 20, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 19, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
