Development Guide

Development Prerequisites

  1. go
  2. git
  3. kubectl
  4. ko
  5. kustomize
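
A quick way to confirm that these tools are installed and on your PATH is to print their versions (exact output will vary with the versions you have installed):

go version
git --version
kubectl version --client
ko version
kustomize version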

Getting started

  1. Ramp up on Kubernetes and CRDs
  2. Ramp up on Tekton Pipelines
  3. Create a GitHub account
  4. Set up GitHub access via SSH
  5. Create and check out a repo fork
  6. Set up your shell environment
  7. Install requirements
  8. Set up a Kubernetes cluster
  9. Configure kubectl to use your cluster
  10. Set up a docker repository you can push to
  11. Install Tekton Operator
  12. Iterate!
  13. Running Codegen
  14. Running Operator
  15. Running Tests

Ramp up

Welcome to the project!! You may find these resources helpful to ramp up on some of the technology this project is built on.

Ramp up on CRDs

This project extends Kubernetes (aka k8s) with Custom Resource Definitions (CRDs). To find out more:

Ramp up on Tekton Pipelines

Ramp up on Kubernetes Operators

Check out your fork

The Go tools require that you clone the repository to the src/github.com/tektoncd/operator directory in your GOPATH.

To check out this repository:

  1. Create your own fork of this repo
  2. Clone it to your machine:
mkdir -p ${GOPATH}/src/github.com/tektoncd
cd ${GOPATH}/src/github.com/tektoncd
git clone git@github.com:${YOUR_GITHUB_USERNAME}/operator.git
cd operator
git remote add upstream git@github.com:tektoncd/operator.git
git remote set-url --push upstream no_push

Adding the upstream remote sets you up nicely for regularly syncing your fork.
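
For example, a routine sync of your fork might look like the following (this assumes your default branch is named main; adjust the branch name if your fork uses a different one):

git fetch upstream
git checkout main
git rebase upstream/main
git push origin main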

Requirements

You must install these tools:

  1. go: The language the Tekton Operator is built in
  2. git: For source control
  3. dep: For managing external Go dependencies. Please install dep v0.5.0 or greater.
  4. kubectl: For interacting with your kube cluster

Your $GOPATH setting is critical for go to function properly.

Kubernetes cluster

Docker for Desktop using an edge version has been proven to work for both developing and running Pipelines. The recommended configuration is:

  • Kubernetes version 1.11 or later
  • 4 vCPU nodes (n1-standard-4)
  • Node autoscaling, up to 3 nodes
  • API scopes for cloud-platform

To set up a cluster with GKE:

  1. Install required tools and set up a GCP project. (You may find it useful to save the ID of the project in an environment variable, e.g. PROJECT_ID.)

  2. Create a GKE cluster (the example uses --cluster-version=latest, but you can use any version 1.11 or later):

    export PROJECT_ID=my-gcp-project
    export CLUSTER_NAME=mycoolcluster
    
    gcloud container clusters create $CLUSTER_NAME \
     --enable-autoscaling \
     --min-nodes=1 \
     --max-nodes=3 \
     --scopes=cloud-platform \
     --enable-basic-auth \
     --no-issue-client-certificate \
     --project=$PROJECT_ID \
     --region=us-central1 \
     --machine-type=n1-standard-4 \
     --image-type=cos \
     --num-nodes=1 \
     --cluster-version=latest

    Note that the --scopes argument to gcloud container clusters create controls what GCP resources the cluster's default service account has access to; for example, to give the default service account full access to your GCR registry, you can add storage-full to your --scopes arg.

  3. Grant cluster-admin permissions to the current user:

    kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)
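
To confirm that the binding took effect, you can ask the API server whether your account is now allowed to perform any action on any resource:

    kubectl auth can-i '*' '*' --all-namespaces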

Environment Setup

To run/test your operator you'll need to set these environment variables (we recommend adding them to your .bashrc):

  1. GOPATH: If you don't have one, simply pick a directory and add export GOPATH=...
  2. $GOPATH/bin on PATH: This is so that tooling installed via go get will work properly.

.bashrc example:

export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
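
After reloading your shell, you can confirm the value Go will use:

go env GOPATH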

Iterating

While iterating on the project, you may need to:

  1. Install/Run Operator

  2. Verify it's working by looking at the logs

  3. Update your (external) dependencies with: ./hack/update-deps.sh.

    Running dep ensure manually will pull in a bunch of scripts that were deleted here

  4. Update your type definitions with: ./hack/update-codegen.sh.

  5. Add new CRD types

  6. Add and run tests

Install Operator

Note: this needs to be completed! We don't yet have any code or config to deploy; watch this space!

Accessing logs

Note: this needs to be completed! We don't yet have any code or config to deploy; watch this space!

Updating the clustertasks in OpenShift addons

You can update the clustertasks present in the codebase to the latest versions using the script at hack/openshift/update-tasks.sh.

You can edit the script to specify a particular version of a task or to add a new task.

Then all the tasks mentioned in the script can be added to the codebase by running:

./hack/openshift/fetch-tektoncd-catalog-tasks.sh cmd/openshift/operator/kodata/tekton-addon/addons/02-clustertasks/source_external

Running Codegen

If the files in pkg/apis are updated, we need to run the codegen scripts:

./hack/update-codegen.sh

Running Operator (Development)

Reset (Clean) Cluster

Target: Kubernetes

    make clean

Target: OpenShift

    make TARGET=openshift clean

Setup

  • Set the KO_DOCKER_REPO environment variable (ko#usage)
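
For example (the registry path below is a placeholder; use a registry and repository you can push to):

    export KO_DOCKER_REPO=quay.io/<your-username>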

Run operator

Target: Kubernetes

    make apply

Target: OpenShift

    make TARGET=openshift apply

Install Tekton components

The operator provides an option to choose which components need to be installed by specifying a profile.

profile is an optional field and the supported profiles are:

  • lite: installs only TektonPipeline
  • basic: installs TektonPipeline and TektonTrigger
  • all: installs all the Tekton components

To create the Tekton components, run:

make apply-cr
make CR=config/basic apply-cr
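
After the CR has been applied, you can check which profile is in effect. This sketch assumes the applied CR is a TektonConfig resource (the resource name may differ depending on which CR directory you used):

kubectl get tektonconfig
kubectl get tektonconfig -o jsonpath='{.items[*].spec.profile}'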

To delete the installed Tekton components, run:

make clean-cr
make CR=config/basic clean-cr

Running Tests

See the test docs.
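
As a minimal sketch (the authoritative commands, including e2e test setup, are in the test docs), the unit tests for a Go project laid out like this one can usually be run from the repository root with:

go test ./...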