Add markdown CI (kubernetes-sigs#5380)
Miouge1 authored and k8s-ci-robot committed Dec 4, 2019
1 parent b1fbead commit a9b67d5
Showing 41 changed files with 572 additions and 512 deletions.
8 changes: 8 additions & 0 deletions .gitlab-ci/lint.yml
@@ -47,3 +47,11 @@ tox-inventory-builder:
    - cd contrib/inventory_builder && tox
  when: manual
  except: ['triggers', 'master']

markdownlint:
  stage: unit-tests
  image: node
  before_script:
    - npm install -g markdownlint-cli
  script:
    - markdownlint README.md docs --ignore docs/_sidebar.md
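
For contributors, the same check can be reproduced locally before pushing. A minimal sketch, assuming Node.js/npm are available and that markdownlint-cli picks up the repository's `.markdownlint.yaml` on its own:

```ShellSession
# install the linter once, then run it with the same arguments as the CI job
npm install -g markdownlint-cli
markdownlint README.md docs --ignore docs/_sidebar.md
```
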
2 changes: 2 additions & 0 deletions .markdownlint.yaml
@@ -0,0 +1,2 @@
---
MD013: false
259 changes: 129 additions & 130 deletions README.md

Large diffs are not rendered by default.

39 changes: 20 additions & 19 deletions docs/ansible.md
@@ -1,9 +1,7 @@
Ansible variables
===============
# Ansible variables

## Inventory

Inventory
-------------
The inventory is composed of 3 groups:

* **kube-node** : list of kubernetes nodes where the pods will run.
@@ -14,7 +12,7 @@ Note: do not modify the children of _k8s-cluster_, like putting
the _etcd_ group into the _k8s-cluster_, unless you are certain
you want to do that and you have it fully contained in the latter:

```
```ShellSession
k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
```

@@ -32,7 +30,7 @@ There are also two special groups:

Below is a complete inventory example:

```
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
@@ -63,8 +61,7 @@ kube-node
kube-master
```

Group vars and overriding variables precedence
----------------------------------------------
## Group vars and overriding variables precedence

The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
@@ -73,7 +70,7 @@ Mandatory variables that are common for at least one role (or a node group) can
There are also role vars for docker, kubernetes preinstall and master roles.
According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override, one should use
the `-e ` runtime flags (most simple way) or other layers described in the docs.
the `-e` runtime flags (most simple way) or other layers described in the docs.

Kubespray uses only a few layers to override things (or expect them to
be overridden for roles):
@@ -97,8 +94,8 @@ block vars (only for tasks in block) | Kubespray overrides for internal roles' l
task vars (only for the task) | Unused for roles, but only for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
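
As an illustration of the highest-precedence layer (extra vars), a hedged sketch: the file name `foo.yml` is arbitrary, and `kube_network_plugin` stands in for any group var you might want to override.

```ShellSession
# put the override into an extra-vars file and pass it at runtime;
# extra vars take precedence over group_vars and role defaults
echo 'kube_network_plugin: flannel' > foo.yml
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e @foo.yml
```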

Ansible tags
------------
## Ansible tags

The following tags are defined in playbooks:

| Tag name | Used for
@@ -145,36 +142,40 @@ Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with the empty "Used for"
field.
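
Ansible itself can also print the tags available for a playbook run; a quick illustration using a standard `ansible-playbook` flag (not Kubespray-specific):

```ShellSession
# list the tags defined in cluster.yml without executing any task
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --list-tags
```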

Example commands
----------------
## Example commands

Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:

```
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```

And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:
```

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```

And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload containers to K8s cluster nodes:
```

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
-e download_run_once=true -e download_localhost=true \
--tags download --skip-tags upload,upgrade
```

Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.

Bastion host
--------------
## Bastion host

If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called *bastion* host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.

```
```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```
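
If the bastion needs a different SSH login than the cluster nodes, standard Ansible host variables can be appended on the same inventory line; a hedged example, with a hypothetical user name:

```ShellSession
[bastion]
bastion ansible_host=x.x.x.x ansible_user=ubuntu
```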
3 changes: 2 additions & 1 deletion docs/arch.md
@@ -1,6 +1,7 @@
## Architecture compatibility
# Architecture compatibility

The following table shows the impact of the CPU architecture on compatible features:

- amd64: Cluster using only x86/amd64 CPUs
- arm64: Cluster using only arm64 CPUs
- amd64 + arm64: Cluster with a mix of x86/amd64 and arm64 CPUs
13 changes: 6 additions & 7 deletions docs/atomic.md
@@ -1,23 +1,22 @@
Atomic host bootstrap
=====================
# Atomic host bootstrap

Atomic host testing has been done with the network plugin flannel. Change the inventory var `kube_network_plugin: flannel`.

Note: Flannel is the only plugin that has currently been tested with atomic

### Vagrant
## Vagrant

* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
* Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
* Update `vm_memory = 2048` and `vm_cpus = 2`
* Networking on vagrant hosts has to be brought up manually once they are booted.

```
```ShellSession
vagrant ssh
sudo /sbin/ifup enp0s8
```

* For users of vagrant-libvirt download centos/atomic-host qcow2 format from https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from https://dl.fedoraproject.org/pub/alt/atomic/stable/
* For users of vagrant-libvirt download centos/atomic-host qcow2 format from <https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/>
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from <https://dl.fedoraproject.org/pub/alt/atomic/stable/>

Then you can proceed to [cluster deployment](#run-deployment)
15 changes: 9 additions & 6 deletions docs/aws.md
@@ -1,5 +1,4 @@
AWS
===============
# AWS

To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.

@@ -13,11 +12,13 @@ The next step is to make sure the hostnames in your `inventory` file are identic

You can now create your cluster!

### Dynamic Inventory ###
## Dynamic Inventory

There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions welcome.

This will produce an inventory that is passed into Ansible that looks like the following:
```

```json
{
  "_meta": {
    "hostvars": {
@@ -48,15 +49,18 @@ This will produce an inventory that is passed into Ansible that looks like the f
```

Guide:

- Create instances in AWS as needed.
- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:
```

```ShellSession
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```

- We will now create our cluster. There will be one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public-facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`
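
Putting those two pieces together, a hedged illustration of the resulting command (additional flags such as `-b` or `-u` depend on your environment and are omitted here):

```ShellSession
# dynamic inventory script as the inventory source, public addressing enabled
VPC_VISIBILITY="public" ansible-playbook -i inventory/kubespray-aws-inventory.py cluster.yml
```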

## Kubespray configuration
@@ -75,4 +79,3 @@ aws_kubernetes_cluster_id|string|KubernetesClusterID is the cluster id we'll use
aws_disable_security_group_ingress|bool|The aws provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has set up a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it). E.g. 10.82.0.0/16 30000-32000.
aws_elb_security_group|string|Only in Kubelet version >= 1.7: AWS has a hard limit of 500 security groups. For large clusters, creating a security group for each ELB can cause the maximum number of security groups to be reached. If this is set, this security group will be used instead of creating a new security group for each ELB.
aws_disable_strict_zone_check|bool|During the instantiation of a new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS-like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true will disable the check and provide a warning that the check was skipped. Please note that this is an experimental feature and a work in progress for the moment.

52 changes: 30 additions & 22 deletions docs/azure.md
@@ -1,46 +1,50 @@
Azure
===============
# Azure

To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.

All your instances are required to run in a resource group and a routing table has to be attached to the subnet your instances are in.

Not all features are supported yet; for the current status, have a look [here](https://github.com/colemickens/azure-kubernetes-status)

### Parameters
## Parameters

Before creating the instances you must first set the `azure_` variables in the `group_vars/all.yml` file.

All of the values can be retrieved using the azure cli tool which can be downloaded here: https://docs.microsoft.com/en-gb/azure/xplat-cli-install
All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-gb/azure/xplat-cli-install>
After installation you have to run `azure login` to get access to your account.

### azure\_tenant\_id + azure\_subscription\_id

#### azure\_tenant\_id + azure\_subscription\_id
run `azure account show` to retrieve your subscription id and tenant id:
`azure_tenant_id` -> Tenant ID field
`azure_subscription_id` -> ID field

### azure\_location

#### azure\_location
The region your instances are located, can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `azure location list`

### azure\_resource\_group

#### azure\_resource\_group
The name of the resource group your instances are in, can be retrieved via `azure group list`

#### azure\_vnet\_name
### azure\_vnet\_name

The name of the virtual network your instances are in, can be retrieved via `azure network vnet list`

#### azure\_subnet\_name
### azure\_subnet\_name

The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`

#### azure\_security\_group\_name
### azure\_security\_group\_name

The name of the network security group your instances are in, can be retrieved via `azure network nsg list`

#### azure\_aad\_client\_id + azure\_aad\_client\_secret
### azure\_aad\_client\_id + azure\_aad\_client\_secret

These will have to be generated first:

- Create an Azure AD Application with:
`azure ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
display name, identifier-uri, homepage and the password can be chosen
Note the AppId in the output.
- Create Service principal for the application with:
@@ -51,24 +55,28 @@ This is the AppId from the last command

azure\_aad\_client\_id must be set to the AppId; azure\_aad\_client\_secret is your chosen secret.

#### azure\_loadbalancer\_sku
### azure\_loadbalancer\_sku

Sku of Load Balancer and Public IP. Candidate values are: basic and standard.

#### azure\_exclude\_master\_from\_standard\_lb
### azure\_exclude\_master\_from\_standard\_lb

azure\_exclude\_master\_from\_standard\_lb excludes master nodes from `standard` load balancer.

#### azure\_disable\_outbound\_snat
### azure\_disable\_outbound\_snat

azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_exclude\_master\_from\_standard\_lb is `standard`.

#### azure\_primary\_availability\_set\_name
(Optional) The name of the availability set that should be used as the load balancer backend .If this is set, the Azure
cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend

### azure\_primary\_availability\_set\_name

(Optional) The name of the availability set that should be used as the load balancer backend. If this is set, the Azure
cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
pool which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.

#### azure\_use\_instance\_metadata
Use instance metadata service where possible
### azure\_use\_instance\_metadata

Use instance metadata service where possible

## Provisioning Azure with Resource Group Templates
