diff --git a/.markdownlint-cli2.yaml b/.markdownlint-cli2.yaml index 0503c1fec..416fbcedb 100644 --- a/.markdownlint-cli2.yaml +++ b/.markdownlint-cli2.yaml @@ -5,10 +5,7 @@ config: style: dash no-hard-tabs: false no-multiple-blanks: false - line-length: - line_length: 120 - code_blocks: false - tables: false + line-length: false blanks-around-headers: false no-duplicate-heading: siblings_only: true diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 17777fa81..c4937f6bd 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -29,7 +29,7 @@ Reserve GitHub issues for feature requests and bugs rather than general question ## Getting Started -Follow our [Installation Instructions](/docs/installation.md) to get the NGINX Gateway Fabric up and running. +Follow our [Installation Instructions](https://docs.nginx.com/nginx-gateway-fabric/installation/) to get the NGINX Gateway Fabric up and running. ### Project Structure @@ -91,7 +91,7 @@ Before beginning development, familiarize yourself with the following documents: outlining guidelines and best practices to ensure smooth and efficient pull request processes. - [Go Style Guide](/docs/developer/go-style-guide.md): A coding style guide for Go. Contains best practices and conventions to follow when writing Go code for the project. -- [Architecture](/docs/architecture.md): A high-level overview of the project's architecture. +- [Architecture](https://docs.nginx.com/nginx-gateway-fabric/overview/gateway-architecture/): A high-level overview of the project's architecture. - [Design Principles](/docs/developer/design-principles.md): An overview of the project's design principles. ## Contributor License Agreement diff --git a/README.md b/README.md index cf92fdfc5..8d5f25818 100644 --- a/README.md +++ b/README.md @@ -10,19 +10,21 @@ and `UDPRoute` -- to configure an HTTP or TCP/UDP load balancer, reverse-proxy, on Kubernetes. NGINX Gateway Fabric supports a subset of the Gateway API. 
For a list of supported Gateway API resources and features, see -the [Gateway API Compatibility](docs/gateway-api-compatibility.md) doc. +the [Gateway API Compatibility](https://docs.nginx.com/nginx-gateway-fabric/gateway-api-compatibility/) doc. -Learn about our [design principles](/docs/developer/design-principles.md) and [architecture](/docs/architecture.md). +Learn about our [design principles](/docs/developer/design-principles.md) and [architecture](https://docs.nginx.com/nginx-gateway-fabric/overview/gateway-architecture/). ## Getting Started -1. [Quick Start on a kind cluster](docs/running-on-kind.md). -2. [Install](docs/installation.md) NGINX Gateway Fabric. -3. [Build](docs/building-the-images.md) an NGINX Gateway Fabric container image from source or use a pre-built image +1. [Quick Start on a kind cluster](https://docs.nginx.com/nginx-gateway-fabric/installation/running-on-kind/). +2. [Install](https://docs.nginx.com/nginx-gateway-fabric/installation/) NGINX Gateway Fabric. +3. [Build](https://docs.nginx.com/nginx-gateway-fabric/installation/building-the-images/) an NGINX Gateway Fabric container image from source or use a pre-built image available on [GitHub Container Registry](https://github.com/nginxinc/nginx-gateway-fabric/pkgs/container/nginx-gateway-fabric). 4. Deploy various [examples](examples). -5. Read our [guides](/docs/guides). +5. Read our [How-to guides](https://docs.nginx.com/nginx-gateway-fabric/how-to/). + +You can find the comprehensive NGINX Gateway Fabric user documentation on the [NGINX Documentation](https://docs.nginx.com/nginx-gateway-fabric/) website. ## NGINX Gateway Fabric Releases @@ -99,7 +101,7 @@ docker buildx imagetools inspect ghcr.io/nginxinc/nginx-gateway-fabric:edge --fo ## Troubleshooting -For troubleshooting help, see the [Troubleshooting](/docs/troubleshooting.md) document. +For troubleshooting help, see the [Troubleshooting](https://docs.nginx.com/nginx-gateway-fabric/how-to/monitoring/troubleshooting/) document. 
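When something misbehaves, the control plane logs and the statuses NGF writes to the Gateway API resources are usually the quickest signals. A minimal first-pass sketch, assuming the default `nginx-gateway` namespace and Deployment name from the Helm chart (adjust both to match your installation):

```shell
# Tail the control plane container logs for errors
# (namespace, Deployment, and container names are the chart defaults).
kubectl -n nginx-gateway logs deploy/nginx-gateway -c nginx-gateway

# Inspect the statuses NGF has written to the Gateway API resources.
kubectl describe gateways.gateway.networking.k8s.io --all-namespaces
```

If the logs are clean but traffic still fails, the resource statuses typically show which Gateway or Route was rejected and why.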
## Contacts diff --git a/conformance/provisioner/README.md b/conformance/provisioner/README.md index 0eb42a9b3..ff1d0dfe2 100644 --- a/conformance/provisioner/README.md +++ b/conformance/provisioner/README.md @@ -26,7 +26,7 @@ manifest and **re-build** NGF. How to deploy: -1. Follow the [installation](/docs/installation.md) instructions up until the Deploy the NGINX Gateway Fabric step +1. Follow the [installation](https://docs.nginx.com/nginx-gateway-fabric/installation/) instructions up until the Deploy the NGINX Gateway Fabric step to deploy prerequisites for both the static mode Deployments and the provisioner. 1. Deploy provisioner: diff --git a/deploy/helm-chart/README.md b/deploy/helm-chart/README.md index c11358de0..2801de2a9 100644 --- a/deploy/helm-chart/README.md +++ b/deploy/helm-chart/README.md @@ -32,27 +32,34 @@ This chart deploys the NGINX Gateway Fabric in your Kubernetes cluster. > **Note** > -> The Gateway API resources from the standard channel must be installed -> before deploying NGINX Gateway Fabric. If they are already installed in your cluster, please ensure they are -> the correct version as supported by the NGINX Gateway Fabric - +> The [Gateway API resources](https://github.com/kubernetes-sigs/gateway-api) from the standard channel must be +> installed before deploying NGINX Gateway Fabric. If they are already installed in your cluster, please ensure +> they are the correct version as supported by the NGINX Gateway Fabric - > [see the Technical Specifications](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/README.md#technical-specifications). 
-To install the Gateway API CRDs from [the Gateway API repo](https://github.com/kubernetes-sigs/gateway-api), run: +If installing the latest stable release of NGINX Gateway Fabric, ensure you are deploying its supported version of +the Gateway API resources: -```shell -kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml -``` + ```shell + kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.1/standard-install.yaml + ``` -If you are running on Kubernetes 1.23 or 1.24, you also need to install the validating webhook. To do so, run: +If you are installing the edge version of NGINX Gateway Fabric: -```shell -kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml -``` + ```shell + kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml + ``` + + If you are running on Kubernetes 1.23 or 1.24, you also need to install the validating webhook. To do so, run: + + ```shell + kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml + ``` > **Important** > > The validating webhook is not needed if you are running Kubernetes 1.25+. Validation is done using CEL on the -> CRDs. See the [resource validation doc](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/docs/resource-validation.md) +> CRDs. See the [resource validation doc](https://docs.nginx.com/nginx-gateway-fabric/overview/resource-validation/) > for more information. ## Installing the Chart diff --git a/docs/README.md b/docs/README.md index d93ff1c5a..516d7c910 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,27 +1,10 @@ # NGINX Gateway Fabric Documentation -This directory contains all of the documentation relating to NGINX Gateway Fabric. +This directory contains the developer documentation and the enhancement proposals relating to NGINX Gateway Fabric. 
-## Contents
-
-- [Architecture](architecture.md): An overview of the architecture and design principles of NGINX Gateway Fabric.
-- [Gateway API Compatibility](gateway-api-compatibility.md): Describes which Gateway API resources NGINX Gateway
-Fabric supports and the extent of that support.
-- [Installation](installation.md): Walkthrough on how to install NGINX Gateway Fabric on a generic Kubernetes cluster.
-- [Resource Validation](resource-validation.md): Describes how NGINX Gateway Fabric validates Gateway API
-resources.
-- [Control Plane Configuration](control-plane-configuration.md): Describes how to dynamically update the NGINX
-Gateway Fabric control plane configuration.
-- [Building the Images](building-the-images.md): Steps on how to build the NGINX Gateway Fabric container images
-yourself.
-- [Running on Kind](running-on-kind.md): Walkthrough on how to run NGINX Gateway Fabric on a `kind` cluster.
-- [CLI Help](cli-help.md): Describes the commands available in the `gateway` binary of `nginx-gateway-fabric`
-container.
-- [Monitoring](monitoring.md): Information on monitoring NGINX Gateway Fabric using Prometheus metrics.
-- [Troubleshooting](troubleshooting.md): Troubleshooting guide for common or known issues.
+_Please note: You can find the user documentation for NGINX Gateway Fabric on the [NGINX Documentation](https://docs.nginx.com/nginx-gateway-fabric/) website._

-### Directories
+## Contents

-- [Guides](guides): Guides about configuring NGINX Gateway Fabric for various use cases.
 - [Developer](developer/): Docs for developers of the project. Contains guides relating to processes and workflows.
 - [Proposals](proposals/): Enhancement proposals for new features.
diff --git a/docs/architecture.md b/docs/architecture.md deleted file mode 100644 index 86f1f2177..000000000 --- a/docs/architecture.md +++ /dev/null @@ -1,160 +0,0 @@ -# Architecture - -This document provides an overview of the architecture and design principles of the NGINX Gateway Fabric. The target -audience includes the following groups: - -- *Cluster Operators* who would like to know how the software works and also better understand how it can fail. -- *Developers* who would like to [contribute][contribute] to the project. - -We assume that the reader is familiar with core Kubernetes concepts, such as Pods, Deployments, Services, and Endpoints. -Additionally, we recommend reading [this blog post][blog] for an overview of the NGINX architecture. - -[contribute]: https://github.com/nginxinc/nginx-gateway-fabric/blob/main/CONTRIBUTING.md - -[blog]: https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/ - -## What is NGINX Gateway Fabric? - -The NGINX Gateway Fabric is a component in a Kubernetes cluster that configures an HTTP load balancer according to -Gateway API resources created by Cluster Operators and Application Developers. - -> If you’d like to read more about the Gateway API, refer to [Gateway API documentation][sig-gateway]. - -This document focuses specifically on the NGINX Gateway Fabric, also known as NGF, which uses NGINX as its data -plane. - -[sig-gateway]: https://gateway-api.sigs.k8s.io/ - -## NGINX Gateway Fabric at a High Level - -To start, let's take a high-level look at the NGINX Gateway Fabric (NGF). The accompanying diagram illustrates an -example scenario where NGF exposes two web applications hosted within a Kubernetes cluster to external clients on the -internet: - -![NGF High Level](/docs/images/ngf-high-level.png) - -The figure shows: - -- A *Kubernetes cluster*. -- Users *Cluster Operator*, *Application Developer A* and *Application Developer B*. 
These users interact with the -cluster through the Kubernetes API by creating Kubernetes objects. -- *Clients A* and *Clients B* connect to *Applications A* and *B*, respectively. This applications have been deployed by -the corresponding users. -- The *NGF Pod*, [deployed by *Cluster Operator*](/docs/installation.md) in the Namespace *nginx-gateway*. For -scalability and availability, you can have multiple replicas. This Pod consists of two containers: `NGINX` and `NGF`. -The *NGF* container interacts with the Kubernetes API to retrieve the most up-to-date Gateway API resources created -within the cluster. It then dynamically configures the *NGINX* container based on these resources, ensuring proper -alignment between the cluster state and the NGINX configuration. -- *Gateway AB*, created by *Cluster Operator*, requests a point where traffic can be translated to Services within the -cluster. This Gateway includes a listener with a hostname `*.example.com`. Application Developers have the ability to -attach their application's routes to this Gateway if their application's hostname matches `*.example.com`. -- *Application A* with two Pods deployed in the *applications* Namespace by *Application Developer A*. To expose the -application to its clients (*Clients A*) via the host `a.example.com`, *Application Developer A* creates *HTTPRoute A* -and attaches it to `Gateway AB`. -- *Application B* with one Pod deployed in the *applications* Namespace by *Application Developer B*. To expose the -application to its clients (*Clients B*) via the host `b.example.com`, *Application Developer B* creates *HTTPRoute B* -and attaches it to `Gateway AB`. -- *Public Endpoint*, which fronts the *NGF* Pod. This is typically a TCP load balancer (cloud, software, or hardware) -or a combination of such load balancer with a NodePort Service. *Clients A* and *B* connect to their applications via -the *Public Endpoint*. 
- -The connections related to client traffic are depicted by the yellow and purple arrows, while the black arrows represent -access to the Kubernetes API. The resources within the cluster are color-coded based on the user responsible for their -creation. For example, the Cluster Operator is denoted by the color green, indicating that they have created and manage -all the green resources. - -> Note: For simplicity, many necessary Kubernetes resources like Deployment and Services aren't shown, -> which the Cluster Operator and the Application Developers also need to create. - -Next, let's explore the NGF Pod. - -## The NGINX Gateway Fabric Pod - -The NGINX Gateway Fabric consists of two containers: - -1. `nginx`: the data plane. Consists of an NGINX master process and NGINX worker processes. The master process controls -the worker processes. The worker processes handle the client traffic and load balance the traffic to the backend -applications. -2. `nginx-gateway`: the control plane. Watches Kubernetes objects and configures NGINX. - -These containers are deployed in a single Pod as a Kubernetes Deployment. - -The `nginx-gateway`, or the control plane, is a [Kubernetes controller][controller], written with -the [controller-runtime][runtime] library. It watches Kubernetes objects (Services, Endpoints, Secrets, and Gateway API -CRDs), translates them to NGINX configuration, and configures NGINX. This configuration happens in two stages. First, -NGINX configuration files are written to the NGINX configuration volume shared by the `nginx-gateway` and `nginx` -containers. Next, the control plane reloads the NGINX process. This is possible because the two -containers [share a process namespace][share], which allows the NGF process to send signals to the NGINX master process. - -The diagram below provides a visual representation of the interactions between processes within the `nginx` and -`nginx-gateway` containers, as well as external processes/entities. 
It showcases the connections and relationships between -these components. - -![NGF pod](/docs/images/ngf-pod.png) - -The following list provides a description of each connection, along with its corresponding type indicated in -parentheses. To enhance readability, the suffix "process" has been omitted from the process descriptions below. - -1. (HTTPS) - - Read: *NGF* reads the *Kubernetes API* to get the latest versions of the resources in the cluster. - - Write: *NGF* writes to the *Kubernetes API* to update the handled resources' statuses and emit events. If there's - more than one replica of *NGF* and [leader election](/deploy/helm-chart/README.md#configuration) is enabled, only - the *NGF* Pod that is leading will write statuses to the *Kubernetes API*. -2. (HTTP, HTTPS) *Prometheus* fetches the `controller-runtime` and NGINX metrics via an HTTP endpoint that *NGF* exposes. - The default is :9113/metrics. Note: Prometheus is not required by NGF, the endpoint can be turned off. -3. (File I/O) - - Write: *NGF* generates NGINX *configuration* based on the cluster resources and writes them as `.conf` files to the - mounted `nginx-conf` volume, located at `/etc/nginx/conf.d`. It also writes *TLS certificates* and *keys* - from [TLS Secrets][secrets] referenced in the accepted Gateway resource to the `nginx-secrets` volume at the - path `/etc/nginx/secrets`. - - Read: *NGF* reads the PID file `nginx.pid` from the `nginx-run` volume, located at `/var/run/nginx`. *NGF* - extracts the PID of the nginx process from this file in order to send reload signals to *NGINX master*. -4. (File I/O) *NGF* writes logs to its *stdout* and *stderr*, which are collected by the container runtime. -5. (HTTP) *NGF* fetches the NGINX metrics via the unix:/var/run/nginx/nginx-status.sock UNIX socket and converts it to - *Prometheus* format used in #2. -6. (Signal) To reload NGINX, *NGF* sends the [reload signal][reload] to the **NGINX master**. -7. 
(File I/O) - - Write: The *NGINX master* writes its PID to the `nginx.pid` file stored in the `nginx-run` volume. - - Read: The *NGINX master* reads *configuration files* and the *TLS cert and keys* referenced in the configuration when - it starts or during a reload. These files, certificates, and keys are stored in the `nginx-conf` and `nginx-secrets` - volumes that are mounted to both the `nginx-gateway` and `nginx` containers. -8. (File I/O) - - Write: The *NGINX master* writes to the auxiliary Unix sockets folder, which is located in the `/var/lib/nginx` - directory. - - Read: The *NGINX master* reads the `nginx.conf` file from the `/etc/nginx` directory. This [file][conf-file] contains - the global and http configuration settings for NGINX. In addition, *NGINX master* - reads the NJS modules referenced in the configuration when it starts or during a reload. NJS modules are stored in - the `/usr/lib/nginx/modules` directory. -9. (File I/O) The *NGINX master* sends logs to its *stdout* and *stderr*, which are collected by the container runtime. -10. (File I/O) An *NGINX worker* writes logs to its *stdout* and *stderr*, which are collected by the container runtime. -11. (Signal) The *NGINX master* controls the [lifecycle of *NGINX workers*][lifecycle] it creates workers with the new - configuration and shutdowns workers with the old configuration. -12. (HTTP) To consider a configuration reload a success, *NGF* ensures that at least one NGINX worker has the new - configuration. To do that, *NGF* checks a particular endpoint via the unix:/var/run/nginx/nginx-config-version.sock - UNIX socket. -13. (HTTP,HTTPS) A *client* sends traffic to and receives traffic from any of the *NGINX workers* on ports 80 and 443. -14. (HTTP,HTTPS) An *NGINX worker* sends traffic to and receives traffic from the *backends*. 
- -[controller]: https://kubernetes.io/docs/concepts/architecture/controller/ - -[runtime]: https://github.com/kubernetes-sigs/controller-runtime - -[secrets]: https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets - -[reload]: https://nginx.org/en/docs/control.html - -[lifecycle]: https://nginx.org/en/docs/control.html#reconfiguration - -[conf-file]: https://github.com/nginxinc/nginx-gateway-fabric/blob/main/internal/mode/static/nginx/conf/nginx.conf - -[share]: https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/ - -## Pod Readiness - -The `nginx-gateway` container includes a readiness endpoint available via the `/readyz` path. This endpoint -is periodically checked by a [readiness probe][readiness] on startup, and returns a 200 OK response when the Pod is -ready to accept traffic for the data plane. The Pod will become Ready after the control plane successfully starts. -If there are relevant Gateway API resources in the cluster, the control plane will also generate the first NGINX -configuration and successfully reload NGINX before the Pod is considered Ready. - -[readiness]: (https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes) diff --git a/docs/cli-help.md b/docs/cli-help.md deleted file mode 100644 index e3b795ae2..000000000 --- a/docs/cli-help.md +++ /dev/null @@ -1,49 +0,0 @@ -# Command-line Help - -This document describes the commands available in the `gateway` binary of the `nginx-gateway` container. - -## Static Mode - -This command configures NGINX in the scope of a single Gateway resource. 
- -Usage: - -```text - gateway static-mode [flags] -``` - -Flags: - -| Name | Type | Description | -|------------------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `gateway-ctlr-name` | `string` | The name of the Gateway controller. The controller name must be of the form: `DOMAIN/PATH`. The controller's domain is `gateway.nginx.org`. | -| `gatewayclass` | `string` | The name of the GatewayClass resource. Every NGINX Gateway Fabric must have a unique corresponding GatewayClass resource. | -| `gateway` | `string` | The namespaced name of the Gateway resource to use. Must be of the form: `NAMESPACE/NAME`. If not specified, the control plane will process all Gateways for the configured GatewayClass. However, among them, it will choose the oldest resource by creation timestamp. If the timestamps are equal, it will choose the resource that appears first in alphabetical order by {namespace}/{name}. | -| `config` | `string` | The name of the NginxGateway resource to be used for this controller's dynamic configuration. Lives in the same Namespace as the controller. | -| `service` | `string` | The name of the Service that fronts this NGINX Gateway Fabric Pod. Lives in the same Namespace as the controller. | -| `metrics-disable` | `bool` | Disable exposing metrics in the Prometheus format. (default false) | -| `metrics-listen-port` | `int` | Sets the port where the Prometheus metrics are exposed. Format: `[1024 - 65535]` (default `9113`) | -| `metrics-secure-serving` | `bool` | Configures if the metrics endpoint should be secured using https. 
Please note that this endpoint will be secured with a self-signed certificate. (default false) | -| `update-gatewayclass-status` | `bool` | Update the status of the GatewayClass resource. (default true) | -| `health-disable` | `bool` | Disable running the health probe server. (default false) | -| `health-port` | `int` | Set the port where the health probe server is exposed. Format: `[1024 - 65535]` (default `8081`) | -| `leader-election-disable` | `bool` | Disable leader election. Leader election is used to avoid multiple replicas of the NGINX Gateway Fabric reporting the status of the Gateway API resources. If disabled, all replicas of NGINX Gateway Fabric will update the statuses of the Gateway API resources. (default false) | -| `leader-election-lock-name` | `string` | The name of the leader election lock. A Lease object with this name will be created in the same Namespace as the controller. (default "nginx-gateway-leader-election-lock") | - -## Sleep - -This command sleeps for specified duration and exits. - -Usage: - -```text -Usage: - gateway sleep [flags] -``` - -| Name | Type | Description | -|----------|-----------------|-------------------------------------------------------------------------------------------------------| -| duration | `time.Duration` | Set the duration of sleep. Must be parsable by [`time.ParseDuration`][parseDuration]. (default `30s`) | - - -[parseDuration]:https://pkg.go.dev/time#ParseDuration diff --git a/docs/control-plane-configuration.md b/docs/control-plane-configuration.md deleted file mode 100644 index 2d2a13d03..000000000 --- a/docs/control-plane-configuration.md +++ /dev/null @@ -1,57 +0,0 @@ -# Control Plane Configuration - -This document describes how to dynamically update the NGINX Gateway Fabric control plane configuration. - -## Overview - -NGINX Gateway Fabric offers a way to update the control plane configuration dynamically without the need for a -restart. 
The control plane configuration is stored in the NginxGateway custom resource. This resource is created -during the installation of NGINX Gateway Fabric. - -If using manifests, the default name of the resource is `nginx-gateway-config`. If using Helm, the default name -of the resource is `-config`. It is deployed in the same Namespace as the controller -(default `nginx-gateway`). - -The control plane only watches this single instance of the custom resource. If the resource is invalid per the OpenAPI -schema, the Kubernetes API server will reject the changes. If the resource is deleted or deemed invalid by NGINX -Gateway Fabric, a warning Event is created in the `nginx-gateway` Namespace, and the default values will be used by -the control plane for its configuration. Additionally, the control plane updates the status of the resource (if it exists) -to reflect whether it is valid or not. - -### Spec - -| name | description | type | required | -|---------|-----------------------------------------------------------------|--------------------------|----------| -| logging | Logging defines logging related settings for the control plane. | [logging](#speclogging) | no | - -### Spec.Logging - -| name | description | type | required | -|-------|------------------------------------------------------------------------|--------|----------| -| level | Level defines the logging level. Supported values: info, debug, error. | string | no | - -## Viewing and Updating the Configuration - -> For the following examples, the name `nginx-gateway-config` should be updated to the name of the resource that -> was created by your installation. - -To view the current configuration: - -```shell -kubectl -n nginx-gateway get nginxgateways nginx-gateway-config -o yaml -``` - -To update the configuration: - -```shell -kubectl -n nginx-gateway edit nginxgateways nginx-gateway-config -``` - -This will open the configuration in your default editor. 
You can then update and save the configuration, which is
-applied automatically to the control plane.
-
-To view the status of the configuration:
-
-```shell
-kubectl -n nginx-gateway describe nginxgateways nginx-gateway-config
-```
diff --git a/docs/developer/documentation.md b/docs/developer/documentation.md
new file mode 100644
index 000000000..f927fa59b
--- /dev/null
+++ b/docs/developer/documentation.md
@@ -0,0 +1,165 @@
+# NGINX Gateway Fabric Docs
+
+The `/site` directory contains the user documentation for NGINX Gateway Fabric and the requirements for linting, building, and publishing the docs. Run all the `hugo` commands below from this directory.
+
+We use [Hugo](https://gohugo.io/) to build the docs for NGINX, with the [nginx-hugo-theme](https://github.com/nginxinc/nginx-hugo-theme).
+
+Docs should be written in Markdown.
+
+In the `/site` directory, you will find the following files:
+
+- a [Netlify](https://netlify.com) configuration file;
+- configuration files for [markdownlint](https://github.com/DavidAnson/markdownlint/) and [markdown-link-check](https://github.com/tcort/markdown-link-check);
+- a `./config` directory that contains the [Hugo](https://gohugo.io) configuration.
+
+## Git Guidelines
+
+See the [Pull Request Guide](pull-request.md) for specific instructions on how to submit a pull request.
+
+### Branching and Workflow
+
+This repo uses a [forking workflow](https://www.atlassian.com/git/tutorials/comparing-workflows/forking-workflow). See our [Branching and Workflow](branching-and-workflow.md) documentation for more information.
+
+### Publishing Documentation Updates
+
+**`main`** is the default branch in this repo. All the latest content updates are merged into this branch.
+
+The documentation is published from the latest public release branch (for example, `release-4.0`). Work on your docs in a feature branch in your fork of the repo. Open pull requests into the `main` branch when you are ready to merge your work.
+If you are working on content for immediate publication in the docs site, cherry-pick your changes to the current public release branch.
+
+If you are working on content for a future release, make sure that you **do not** cherry-pick them to the current public release branch, as this will publish them automatically. See the [Release Process documentation](release-process.md) for more information.
+
+
+## Setup
+
+### Golang
+
+Follow the instructions here to install Go: https://golang.org/doc/install
+
+> To support the use of Hugo mods, you need to install Go v1.15 or newer.
+
+### Hugo
+
+Follow the instructions here to install Hugo: [Hugo Installation](https://gohugo.io/installation/)
+
+> **NOTE:** We are currently running [Hugo v0.115.3](https://github.com/gohugoio/hugo/releases/tag/v0.115.3) in production.
+
+### Markdownlint
+
+We use markdownlint to check that Markdown files are correctly formatted. You can use `npm` to install markdownlint-cli:
+
+```shell
+npm install -g markdownlint-cli
+```
+
+## How to write docs with Hugo
+
+### Add a new doc
+
+- To create a new doc that contains all of the pre-configured Hugo front-matter and the docs task template:
+
+  `hugo new /.`
+
+  e.g.,
+
+  hugo new install.md
+
+  > The default template -- task -- should be used in most docs.
+- To create other types of docs, you can add the `--kind` flag:
+  `hugo new tutorials/deploy.md --kind tutorial`
+
+
+The available kinds are:
+
+- Task: Enable the customer to achieve a specific goal, based on use case scenarios.
+- Concept: Help a customer learn about a specific feature or feature set.
+- Reference: Describes an API, command line tool, config options, etc.; should be generated automatically from source code.
+- Troubleshooting: Helps a customer solve a specific problem.
+- Tutorial: Walk a customer through an example use case scenario; results in a functional PoC environment.
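The scaffolding and linting steps above can be chained into a quick local check before opening a docs pull request. A sketch, assuming it is run from the `/site` directory and that the content lives under `content/` (the doc path `how-to/my-new-guide.md` is a hypothetical example):

```shell
# Scaffold a new doc using the default (task) archetype;
# the path is a hypothetical example.
hugo new how-to/my-new-guide.md

# Lint all Markdown content before committing.
markdownlint 'content/**/*.md'
```

Running the linter locally catches the same issues the CI check reports, so fixing them here shortens the review cycle.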
+### Format internal links
+
+Format links as [Hugo relrefs](https://gohugo.io/content-management/cross-references/).
+
+> Note: Using file extensions when linking to internal docs with `relref` is optional.
+
+- You can use relative paths or just the filename. We recommend using the filename.
+- Paths without a leading `/` are first resolved relative to the current page, then to the remainder of the site.
+- Anchors are supported.
+
+For example:
+
+```md
+To install NGINX Gateway Fabric, refer to the [installation instructions]({{< relref "/installation/install.md#section-1" >}}).
+```
+
+### Add images
+
+You can use the `img` [shortcode](#use-hugo-shortcodes) to insert images into your documentation.
+
+1. Add the image to the static/img directory.
+   DO NOT include a forward slash at the beginning of the file path. This will break the image when it's rendered.
+   See the docs for the [Hugo relURL Function](https://gohugo.io/functions/relurl/#input-begins-with-a-slash) to learn more.
+
+1. Add the img shortcode:
+
+   {{< img src="img/" >}}
+
+> Note: The shortcode accepts all of the same parameters as the [Hugo figure shortcode](https://gohugo.io/content-management/shortcodes/#figure).
+
+### Use Hugo shortcodes
+
+You can use Hugo [shortcodes](https://gohugo.io/content-management/shortcodes) to do things like format callouts, add images, and reuse content across different docs.
+
+For example, to use the note callout:
+
+```md
+{{< note >}}Provide the text of the note here. {{< /note >}}
+```
+
+The callout shortcodes also support multi-line blocks:
+
+```md
+{{< caution >}}
+You should probably never do this specific thing in a production environment. If you do, and things break, don't say we didn't warn you.
+{{< /caution >}} +``` + +Supported callouts: + +- caution +- important +- note +- see-also +- tip +- warning + +A few more useful shortcodes: + +- collapse: makes a section collapsible +- table: adds scrollbars to wide tables when viewed in small browser windows or mobile browsers +- fa: inserts a Font Awesome icon +- include: include the content of a file in another file (requires the included file to be in the /includes directory) +- link: makes it possible to link to a static file and prepend the path with the Hugo baseUrl +- openapi: loads an OpenAPI spec and renders as HTML using ReDoc +- raw-html: makes it possible to include a block of raw HTML +- readfile: includes the content of another file in the current file; useful for adding code examples + +## How to build docs locally + +To view the docs in a browser, run the Hugo server. This will reload the docs automatically so you can view updates as you work. + +> Note: The docs use build environments to control the baseURL that will be used for things like internal references and resource (CSS and JS) loading. +> You can view the config for each environment in the [config](./config) directory of this repo. +When running the Hugo server, you can specify the environment and baseURL if desired, but it's not necessary. + +For example: + +```shell +hugo server +``` + +```shell +hugo server -e development -b "http://127.0.0.1/nginx-gateway-fabric/" +``` diff --git a/docs/developer/implementing-a-feature.md b/docs/developer/implementing-a-feature.md index 262a93ef7..c56d5dd7e 100644 --- a/docs/developer/implementing-a-feature.md +++ b/docs/developer/implementing-a-feature.md @@ -32,18 +32,19 @@ practices to ensure a successful feature development process. the [testing](/docs/developer/testing.md#unit-test-guidelines) documentation. 9. 
**Manually verify your changes**: Refer to the [manual testing](/docs/developer/testing.md#manual-testing) section of the testing documentation for instructions on how to manually test your changes. -10. **Update any relevant documentation**: Here are some guidelines for updating documentation: +10. **Update any relevant documentation**: See the [documentation](/docs/developer/documentation.md) guide for in-depth information about the workflow to update the docs and how we publish them. + Here are some basic guidelines for updating documentation: - **Gateway API Feature**: If you are implementing a Gateway API feature, make sure to update - the [Gateway API Compatibility](/docs/gateway-api-compatibility.md) documentation. + the [Gateway API Compatibility](/site/content/concepts/gateway-api-compatibility.md) documentation. - **New Use Case:** If your feature introduces a new use case, add an example of how to use it in the [examples](/examples) directory. This example will help users understand how to leverage the new feature. > For security, a Docker image used in an example must be either managed by F5/NGINX or be an [official image](https://docs.docker.com/docker-hub/official_images/). - **Installation Changes**: If your feature involves changes to the installation process of NGF, update - the [installation](/docs/installation.md) documentation. + the [installation](/site/content/how-to/installation/installation.md) documentation. - **Helm Changes**: If your feature introduces or changes any values of the NGF Helm Chart, update the [Helm README](/deploy/helm-chart/README.md). - **Command-line Changes**: If your feature introduces or changes a command-line flag or subcommand, update - the [cli help](/docs/cli-help.md) documentation. + the [cli help](/site/content/reference/cli-help.md) documentation. 
- **Other Documentation Updates**: For any other changes that affect the behavior, usage, or configuration of NGF, review the existing documentation and update it as necessary. Ensure that the documentation remains accurate and up to date with the latest changes. diff --git a/docs/developer/release-process.md b/docs/developer/release-process.md index ddf8fe90d..966512eb8 100644 --- a/docs/developer/release-process.md +++ b/docs/developer/release-process.md @@ -36,26 +36,29 @@ To create a new release, follow these steps: URLs to point at `vX.Y.Z`, and bump the `version`. 2. Adjust the `VERSION` variable in the [Makefile](/Makefile) and the `TAG` in the [conformance tests Makefile](/conformance/Makefile) to `X.Y.Z`. - 3. Update the tag of NGF container images used in the Helm [values.yaml](/deploy/helm-chart/values.yaml) file, the - [provisioner manifest](/conformance/provisioner/provisioner.yaml), and all docs to `X.Y.Z`. + 3. Update the tag of NGF container images used in the Helm [values.yaml](/deploy/helm-chart/values.yaml) file, + the [provisioner manifest](/conformance/provisioner/provisioner.yaml), and all docs to `X.Y.Z`. 4. Ensure that the `imagePullPolicy` is `IfNotPresent` in the Helm [values.yaml](/deploy/helm-chart/values.yaml) file. 5. Generate the installation manifests by running `make generate-manifests`. 6. Modify any `git clone` instructions to use `vX.Y.Z` tag. 7. Modify any docs links that refer to `main` to instead refer to `vX.Y.Z`. - 8. Update the [README](/README.md) to include information about the release. - 9. Update the [changelog](/CHANGELOG.md). The changelog includes only important (from the user perspective) - changes to NGF. This is in contrast with the autogenerated full changelog, which is created in the next step. As - a starting point, copy the important features, bug fixes, and dependencies from the autogenerated draft of the - full changelog. 
This draft can be found under - the [GitHub releases](https://github.com/nginxinc/nginx-gateway-fabric/releases) after the release branch is - created. Use the previous changelog entries for formatting and content guidance. + 8. Update any installation instructions to ensure that the supported Gateway API and NGF versions are correct. + Specifically, helm README and `site/content/includes/installation/install-gateway-api-resources.md`. + 9. Update the [README](/README.md) to include information about the release. + 10. Update the [changelog](/CHANGELOG.md). The changelog includes only important (from the user perspective) + changes to NGF. This is in contrast with the autogenerated full changelog, which is created in the next + step. As a starting point, copy the important features, bug fixes, and dependencies from the autogenerated + draft of the full changelog. This draft can be found under + the [GitHub releases](https://github.com/nginxinc/nginx-gateway-fabric/releases) after the release branch is + created. Use the previous changelog entries for formatting and content guidance. 7. Create and push the release tag in the format `vX.Y.Z`. As a result, the CI/CD pipeline will: - Build NGF container images with the release tag `X.Y.Z` and push it to the registry. - Package and publish the Helm chart to the registry. - Create a GitHub release with an autogenerated changelog and attached release artifacts. -8. Prepare and merge a PR into the main branch to update the [README](/README.md) to include the information about the - latest release and also the [changelog](/CHANGELOG.md). +8. Prepare and merge a PR into the main branch to update the [README](/README.md) to include the information about + the latest release and also the [changelog](/CHANGELOG.md). Also update any installation instructions to ensure + that the supported Gateway API and NGF versions are correct. Specifically, helm README and `site/content/includes/installation/install-gateway-api-resources.md`. 
9. Close the issue created in Step 1. 10. Ensure that the [associated milestone](https://github.com/nginxinc/nginx-gateway-fabric/milestones) is closed. 11. Verify that published artifacts in the release can be installed properly. diff --git a/docs/guides/README.md b/docs/guides/README.md deleted file mode 100644 index 6869f9883..000000000 --- a/docs/guides/README.md +++ /dev/null @@ -1,14 +0,0 @@ -# Guides - -This directory contains guides for configuring NGINX Gateway Fabric for various use cases. - -## Contents - -- [Routing Traffic to Your Application](routing-traffic-to-your-app.md): How to use NGINX Gateway Fabric to route - all Ingress traffic to your Kubernetes application. -- [Routing to Applications Using HTTP Matching Conditions](advanced-routing.md): Guide on how to deploy multiple - applications and HTTPRoutes with request conditions such as paths, methods, headers, and query parameters. -- [Securing Traffic using Let's Encrypt and Cert-Manager](integrating-cert-manager.md): Shows how to secure - traffic from clients to NGINX Gateway Fabric with TLS using Let's Encrypt and Cert-Manager. -- [Using NGINX Gateway Fabric to Upgrade Applications without Downtime](upgrade-apps-without-downtime.md): - Explains how to use NGINX Gateway Fabric to upgrade applications without downtime. diff --git a/docs/guides/routing-traffic-to-your-app.md b/docs/guides/routing-traffic-to-your-app.md deleted file mode 100644 index 9842492a5..000000000 --- a/docs/guides/routing-traffic-to-your-app.md +++ /dev/null @@ -1,405 +0,0 @@ -# Routing Traffic to Your Application - -In this guide, you will learn how to route external traffic to your Kubernetes applications using the Gateway API and -NGINX Gateway Fabric. Whether you're managing a web application or a REST backend API, you can use NGINX Gateway -Fabric to expose your application outside the cluster. - -## Prerequisites - -- [Install](/docs/installation.md) NGINX Gateway Fabric. 
-- [Expose NGINX Gateway Fabric](/docs/installation.md#expose-nginx-gateway-fabric) and save the public IP - address and port of NGINX Gateway Fabric into shell variables: - - ```text - GW_IP=XXX.YYY.ZZZ.III - GW_PORT= - ``` - -## The Application - -The application we are going to use in this guide is a simple coffee application comprised of one Service and two Pods: - -![coffee app](/docs/images/route-all-traffic-app.png) - -With this architecture, the coffee application is not accessible outside the cluster. We want to expose this application -on the hostname `cafe.example.com` so that clients outside the cluster can access it. - -To do this, we will install NGINX Gateway Fabric and create two Gateway API resources: -a [Gateway](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.Gateway) and -an [HTTPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.HTTPRoute). -With these resources, we will configure a simple routing rule to match all HTTP traffic with the -hostname `cafe.example.com` and route it to the coffee Service. - -## Setup - -Create the coffee application in Kubernetes by copying and pasting the following into your terminal: - -```yaml -kubectl apply -f - < 80/TCP 77s -``` - -## Application Architecture with NGINX Gateway Fabric - -To route traffic to the coffee application, we will create a Gateway and HTTPRoute. The following diagram shows the -configuration we'll be creating in the next step: - -![Configuration](/docs/images/route-all-traffic-config.png) - -We need a Gateway to create an entry point for HTTP traffic coming into the cluster. The `cafe` Gateway we are going to -create will open an entry point to the cluster on port 80 for HTTP traffic. - -To route HTTP traffic from the Gateway to the coffee Service, we need to create an HTTPRoute named `coffee` and attach -to the Gateway. 
This HTTPRoute will have a single routing rule that routes all traffic to the -hostname `cafe.example.com` from the Gateway to the coffee Service. - -Once NGINX Gateway Fabric processes the `cafe` Gateway and `coffee` HTTPRoute, it will configure its dataplane, NGINX, -to route all HTTP requests to `cafe.example.com` to the Pods that the `coffee` Service targets: - -![Traffic Flow](/docs/images/route-all-traffic-flow.png) - -The coffee Service is omitted from the diagram above because the NGINX Gateway Fabric routes directly to the Pods -that the coffee Service targets. - -> **Note** -> In the diagrams above, all resources that are the responsibility of the cluster operator are shown in green. -> The orange resources are the responsibility of the application developers. -> See the [roles and personas](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/#roles-and-personas_1) -> Gateway API document for more information on these roles. - -## Create the Gateway API Resources - -To create the `cafe` Gateway, copy and paste the following into your terminal: - -```yaml -kubectl apply -f - < **Note** -> Your clients should be able to resolve the domain name `cafe.example.com` to the public IP of the -> NGINX Gateway Fabric. In this guide we will simulate that using curl's `--resolve` option. 
- - -First, let's send a request to the path `/`: - -```shell -curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/ -``` - -We should get a response from one of the coffee Pods: - -```text -Server address: 10.12.0.18:8080 -Server name: coffee-7dd75bc79b-cqvb7 -``` - -Since the `cafe` HTTPRoute routes all traffic on any path to the coffee application, the following requests should also -be handled by the coffee Pods: - -```shell -curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/some-path -``` - -```text -Server address: 10.12.0.18:8080 -Server name: coffee-7dd75bc79b-cqvb7 -``` - -```shell -curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/some/path -``` - -```text -Server address: 10.12.0.19:8080 -Server name: coffee-7dd75bc79b-dett3 -``` - -Requests to hostnames other than `cafe.example.com` should _not_ be routed to the coffee application, since the `cafe` -HTTPRoute only matches requests with the `cafe.example.com` hostname. To verify this, send a request to the hostname -`pub.example.com`: - -```shell -curl --resolve pub.example.com:$GW_PORT:$GW_IP http://pub.example.com:$GW_PORT/ -``` - -You should receive a 404 Not Found error: - -```text - -404 Not Found - -

404 Not Found

-
nginx/1.25.2
- - -``` - -## Troubleshooting - -If you have any issues while testing the configuration, try the following to debug your configuration and setup: - -- Make sure you set the shell variables $GW_IP and $GW_PORT to the public IP and port of the NGINX Gateway Fabric - Service. Instructions for finding those values are [here](/docs/installation.md#expose-nginx-gateway-fabric). - -- Check the status of the Gateway: - - ```shell - kubectl describe gateway cafe - ``` - - The Gateway status should look similar to this: - - ```text - Status: - Addresses: - Type: IPAddress - Value: 10.244.0.85 - Conditions: - Last Transition Time: 2023-08-15T20:57:21Z - Message: Gateway is accepted - Observed Generation: 1 - Reason: Accepted - Status: True - Type: Accepted - Last Transition Time: 2023-08-15T20:57:21Z - Message: Gateway is programmed - Observed Generation: 1 - Reason: Programmed - Status: True - Type: Programmed - Listeners: - Attached Routes: 1 - Conditions: - Last Transition Time: 2023-08-15T20:57:21Z - Message: Listener is accepted - Observed Generation: 1 - Reason: Accepted - Status: True - Type: Accepted - Last Transition Time: 2023-08-15T20:57:21Z - Message: Listener is programmed - Observed Generation: 1 - Reason: Programmed - Status: True - Type: Programmed - Last Transition Time: 2023-08-15T20:57:21Z - Message: All references are resolved - Observed Generation: 1 - Reason: ResolvedRefs - Status: True - Type: ResolvedRefs - Last Transition Time: 2023-08-15T20:57:21Z - Message: No conflicts - Observed Generation: 1 - Reason: NoConflicts - Status: False - Type: Conflicted - Name: http - ``` - - Check that the conditions match and that the attached routes for the `http` listener equals 1. If it is 0, there may - be an issue with the HTTPRoute. 
- -- Check the status of the HTTPRoute: - - ```shell - kubectl describe httproute coffee - ``` - - The HTTPRoute status should look similar to this: - - ```text - Status: - Parents: - Conditions: - Last Transition Time: 2023-08-15T20:57:21Z - Message: The route is accepted - Observed Generation: 1 - Reason: Accepted - Status: True - Type: Accepted - Last Transition Time: 2023-08-15T20:57:21Z - Message: All references are resolved - Observed Generation: 1 - Reason: ResolvedRefs - Status: True - Type: ResolvedRefs - Controller Name: gateway.nginx.org/nginx-gateway-controller - Parent Ref: - Group: gateway.networking.k8s.io - Kind: Gateway - Name: cafe - Namespace: default - ``` - - Check for any error messages in the conditions. - -- Check the generated nginx config: - - ```shell - kubectl exec -it -n nginx-gateway -c nginx -- nginx -T - ``` - - The config should contain a server block with the server name `cafe.example.com` that listens on port 80. This - server block should have a single location `/` that proxy passes to the coffee upstream: - - ```nginx configuration - server { - listen 80; - - server_name cafe.example.com; - - location / { - ... - proxy_pass http://default_coffee_80$request_uri; # the upstream is named default_coffee_80 - ... - } - } - ``` - - There should also be an upstream block with a name that matches the upstream in the `proxy_pass` directive. This - upstream block should contain the Pod IPs of the coffee Pods: - - ```nginx configuration - upstream default_coffee_80 { - ... - server 10.12.0.18:8080; # these should be the Pod IPs of the coffee Pods - server 10.12.0.19:8080; - ... - } - ``` - - > **Note** - > The entire configuration is not shown because it is subject to change. - > Ellipses indicate that there's configuration not shown. - -If your issue persists, [contact us](https://github.com/nginxinc/nginx-gateway-fabric#contacts). 
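The config checks above can also be scripted. The snippet below demonstrates the idea against a stub config file; in a real cluster you would generate `conf.txt` with `kubectl exec -n nginx-gateway <ngf-pod> -c nginx -- nginx -T > conf.txt` (the pod name and the `default_coffee_80` upstream name are assumptions taken from the example output above):

```shell
#!/bin/sh
# Sketch: check that the generated NGINX config contains the expected
# server and upstream blocks. A stub conf.txt is written here so the
# check itself is runnable; in a cluster, dump the live config instead.
cat > conf.txt <<'EOF'
server {
    listen 80;
    server_name cafe.example.com;
    location / {
        proxy_pass http://default_coffee_80$request_uri;
    }
}
upstream default_coffee_80 {
    server 10.12.0.18:8080;
    server 10.12.0.19:8080;
}
EOF

for pattern in 'server_name cafe.example.com' 'upstream default_coffee_80'; do
    if grep -q "$pattern" conf.txt; then
        echo "found: $pattern"
    else
        echo "MISSING: $pattern"
    fi
done
```

If either pattern is reported missing from the live config, the Gateway or HTTPRoute was likely not accepted; recheck their status conditions as described above.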
- -## Further Reading - -To learn more about the Gateway API and the resources we created in this guide, check out the following resources: - -- [Gateway API Overview](https://gateway-api.sigs.k8s.io/concepts/api-overview/) -- [Deploying a simple Gateway](https://gateway-api.sigs.k8s.io/guides/simple-gateway/) -- [HTTP Routing](https://gateway-api.sigs.k8s.io/guides/http-routing/) diff --git a/docs/guides/upgrade-apps-without-downtime.md b/docs/guides/upgrade-apps-without-downtime.md deleted file mode 100644 index bbc5646fe..000000000 --- a/docs/guides/upgrade-apps-without-downtime.md +++ /dev/null @@ -1,173 +0,0 @@ -# Using NGINX Gateway Fabric to Upgrade Applications without Downtime - -This guide explains how to use NGINX Gateway Fabric to upgrade applications without downtime. - -Multiple upgrade methods are mentioned, assuming existing familiarity: this guide focuses primarily on how to use NGINX -Gateway Fabric to accomplish them. - -> See the [Architecture document](../architecture.md) to learn more about NGINX Gateway Fabric architecture. - -## NGINX Gateway Fabric Functionality - -To understand the upgrade methods, you should be aware of the NGINX features that help prevent application downtime: -graceful configuration reloads and upstream server updates. - -### Graceful Configuration Reloads - -If a relevant Gateway API or built-in Kubernetes resource is changed, NGINX Gateway Fabric will update NGINX by -regenerating the NGINX configuration. NGINX Gateway Fabric then sends a reload signal to the master NGINX process to -apply the new configuration. - -We call such an operation a reload, during which client requests are not dropped - which defines it as a graceful reload. - -This process is further explained in [NGINX's documentation](https://nginx.org/en/docs/control.html?#reconfiguration). 
- -### Upstream Server Updates - -Endpoints frequently change during application upgrades: Kubernetes creates Pods for the new version of an application -and removes the old ones, creating and removing the respective Endpoints as well. - -NGINX Gateway Fabric detects changes to Endpoints by watching their corresponding [EndpointSlices][endpoint-slices]. - -[endpoint-slices]:https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/ - -In NGINX configuration, a Service is represented as an [upstream][upstream], and an Endpoint as an -[upstream server][upstream-server]. - -[upstream]:https://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream - -[upstream-server]:https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server - -Two common cases are adding and removing Endpoints: - -- If an Endpoint is added, NGINX Gateway Fabric adds an upstream server to NGINX that corresponds to the Endpoint, - then reload NGINX. After that, NGINX will start proxying traffic to that Endpoint. -- If an Endpoint is removed, NGINX Gateway Fabric removes the corresponding upstream server from NGINX. After - a reload, NGINX will stop proxying traffic to it. However, it will finish proxying any pending requests to that - server before switching to another Endpoint. - -As long as you have more than one ready Endpoint, the clients should not experience any downtime during upgrades. - -> It is good practice to configure a [Readiness probe][readiness-probe] in the Deployment so that a Pod can advertise -> when it is ready to receive traffic. Note that NGINX Gateway Fabric will not add any Endpoint to NGINX that is not -> ready. - -[readiness-probe]:https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ - -## Before You Begin - -For the upgrade methods covered in the next sections, we make the following assumptions: - -- You deploy your application as a [Deployment][deployment]. 
-- The Pods of the Deployment belong to a [Service][service] so that Kubernetes creates an [Endpoint][endpoints] for - each Pod. -- You expose the application to the clients via an [HTTPRoute][httproute] resource that references that Service. - -[deployment]:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ - -[service]:https://kubernetes.io/docs/concepts/services-networking/service/ - -[httproute]:https://gateway-api.sigs.k8s.io/api-types/httproute/ - -[endpoints]:https://kubernetes.io/docs/reference/kubernetes-api/service-resources/endpoints-v1/ - -For example, an application can be exposed using a routing rule like below: - -```yaml -- matches: - - path: - type: PathPrefix - value: / - backendRefs: - - name: my-app - port: 80 -``` - -> See the [Cafe example](../../examples/cafe-example) for a basic example. - -The upgrade methods in the next sections cover: - -- Rolling Deployment Upgrades -- Blue-green Deployments -- Canary Releases - -## Rolling Deployment Upgrade - -To start a [rolling Deployment upgrade][rolling-upgrade], you update the Deployment to use the new version tag of -the application. As a result, Kubernetes terminates the Pods with the old version and create new ones. By default, -Kubernetes also ensures that some number of Pods always stay available during the upgrade. - -[rolling-upgrade]:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment - -Such an upgrade will add new upstream servers to NGINX and remove the old ones. As long as the number -of Pods (ready Endpoints) during an upgrade does not reach zero, NGINX will be able to proxy traffic, and thus prevent -any downtime. - -This method does not require you to update the HTTPRoute. - -## Blue-Green Deployments - -With this method, you deploy a new version of the application (blue version) as a separate Deployment, -while the old version (green) keeps running and handling client traffic. 
Next, you switch the traffic from the -green version to the blue. If the blue works as expected, you terminate the green. Otherwise, you switch the traffic -back to the green. - -There are two ways to switch the traffic: - -- Update the Service selector to select the Pods of the blue version instead of the green. As a result, NGINX Gateway - Fabric removes the green upstream servers from NGINX and add the blue ones. With this approach, it is not - necessary to update the HTTPRoute. -- Create a separate Service for the blue version and update the backend reference in the HTTPRoute to reference this - Service, which leads to the same result as with the previous option. - -## Canary Releases - -To support canary releases, you can implement an approach with two Deployments behind the same Service (see -[Canary deployment][canary] in the Kubernetes documentation). However, this approach lacks precision for defining the -traffic split between the old and the canary version. You can greatly influence it by controlling the number of Pods -(for example, four Pods of the old version and one Pod of the canary). However, note that NGINX Gateway Fabric uses -[`random two least_conn`][random-method] load balancing method, which doesn't guarantee an exact split based on the -number of Pods (80/20 in the given example). - -[canary]:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#canary-deployment -[random-method]:https://nginx.org/en/docs/http/ngx_http_upstream_module.html#random - -A more flexible and precise way to implement canary releases is to configure a traffic split in an HTTPRoute. In this -case, you create a separate Deployment for the new version with a separate Service. For example, for the rule below, -NGINX will proxy 95% of the traffic to the old version Endpoints and only 5% to the new ones. 
- -```yaml -- matches: - - path: - type: PathPrefix - value: / - backendRefs: - - name: my-app-old - port: 80 - weight: 95 - - name: my-app-new - port: 80 - weight: 5 -``` - -> There is no stickiness for the requests from the same client. NGINX will independently split each request among -> the backend references. - -By updating the rule you can further increase the share of traffic the new version gets and finally completely switch -to the new version: - -```yaml -- matches: - - path: - type: PathPrefix - value: / - backendRefs: - - name: my-app-old - port: 80 - weight: 0 - - name: my-app-new - port: 80 - weight: 1 -``` - -See the [Traffic splitting example](/examples/traffic-splitting) from our repository. diff --git a/docs/images/route-all-traffic-config.png b/docs/images/route-all-traffic-config.png deleted file mode 100644 index 38c4e08a6..000000000 Binary files a/docs/images/route-all-traffic-config.png and /dev/null differ diff --git a/docs/images/route-all-traffic-flow.png b/docs/images/route-all-traffic-flow.png deleted file mode 100644 index 9fc2089c7..000000000 Binary files a/docs/images/route-all-traffic-flow.png and /dev/null differ diff --git a/docs/installation.md b/docs/installation.md deleted file mode 100644 index 09dfcd745..000000000 --- a/docs/installation.md +++ /dev/null @@ -1,297 +0,0 @@ -# Installation - -This guide walks you through how to install NGINX Gateway Fabric on a generic Kubernetes cluster. 
- -- [Installation](#installation) - - [Prerequisites](#prerequisites) - - [Deploy NGINX Gateway Fabric using Helm](#deploy-nginx-gateway-fabric-using-helm) - - [Deploy NGINX Gateway Fabric from Manifests](#deploy-nginx-gateway-fabric-from-manifests) - - [Expose NGINX Gateway Fabric](#expose-nginx-gateway-fabric) - - [Create a NodePort Service](#create-a-nodeport-service) - - [Create a LoadBalancer Service](#create-a-loadbalancer-service) - - [Upgrading NGINX Gateway Fabric](#upgrading-nginx-gateway-fabric) - - [Upgrade NGINX Gateway Fabric from Manifests](#upgrade-nginx-gateway-fabric-from-manifests) - - [Upgrade NGINX Gateway Fabric using Helm](#upgrade-nginx-gateway-fabric-using-helm) - - [Configure Delayed Termination for Zero Downtime Upgrades](#configure-delayed-termination-for-zero-downtime-upgrades) - - [Configure Delayed Termination Using Manifests](#configure-delayed-termination-using-manifests) - - [Configure Delayed Termination Using Helm](#configure-delayed-termination-using-helm) - - [Uninstalling NGINX Gateway Fabric](#uninstalling-nginx-gateway-fabric) - - [Uninstall NGINX Gateway Fabric from Manifests](#uninstall-nginx-gateway-fabric-from-manifests) - - [Uninstall NGINX Gateway Fabric using Helm](#uninstall-nginx-gateway-fabric-using-helm) - -## Prerequisites - -- [kubectl](https://kubernetes.io/docs/tasks/tools/) - -## Deploy NGINX Gateway Fabric using Helm - -To deploy NGINX Gateway Fabric using Helm, please follow the instructions on [this](/deploy/helm-chart/README.md) -page. - -## Deploy NGINX Gateway Fabric from Manifests - -> **Note** -> -> By default, NGINX Gateway Fabric (NGF) will be installed into the nginx-gateway Namespace. -> It is possible to run NGF in a different Namespace, although you'll need to make modifications to the installation -> manifests. - -1. 
To install the Gateway API CRDs from [the Gateway API repo](https://github.com/kubernetes-sigs/gateway-api), run: - - ```shell - kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml - ``` - - If you are running on Kubernetes 1.23 or 1.24, you also need to install the validating webhook. To do so, run: - - ```shell - kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml - ``` - - > **Important** - > - > The validating webhook is not needed if you are running Kubernetes 1.25+. Validation is done using CEL on the - > CRDs. See the [resource validation doc](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/docs/resource-validation.md) - > for more information. - -2. Deploy the NGINX Gateway Fabric CRDs: - - ```shell - kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/crds.yaml - ``` - -3. Deploy the NGINX Gateway Fabric: - - ```shell - kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/nginx-gateway.yaml - ``` - -4. Confirm the NGINX Gateway Fabric is running in `nginx-gateway` namespace: - - ```shell - kubectl get pods -n nginx-gateway - ``` - - ```text - NAME READY STATUS RESTARTS AGE - nginx-gateway-5d4f4c7db7-xk2kq 2/2 Running 0 112s - ``` - -## Expose NGINX Gateway Fabric - -You can gain access to NGINX Gateway Fabric by creating a `NodePort` Service or a `LoadBalancer` Service. -This Service must live in the same Namespace as the controller. The name of this Service is provided in -the `--service` argument to the controller. - -> **Important** -> -> The Service manifests expose NGINX Gateway Fabric on ports 80 and 443, which exposes any -> Gateway [Listener](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.Listener) -> configured for those ports. 
If you'd like to use different ports in your listeners, -> update the manifests accordingly. -> -> Additionally, NGINX Gateway Fabric will not listen on any ports until you configure a -[Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/#gateway) resource with a valid listener. - -NGINX Gateway Fabric will use this Service to set the Addresses field in the Gateway Status resource. A LoadBalancer -Service sets the status field to the IP address and/or Hostname. If no Service exists, the Pod IP address is used. - -### Create a NodePort Service - -Create a Service with type `NodePort`: - -```shell -kubectl apply -f https://raw.githubusercontent.com/nginxinc/nginx-gateway-fabric/v1.0.0/deploy/manifests/service/nodeport.yaml -``` - -A `NodePort` Service will randomly allocate one port on every Node of the cluster. To access NGINX Gateway Fabric, -use an IP address of any Node in the cluster along with the allocated port. - -### Create a LoadBalancer Service - -Create a Service with type `LoadBalancer` using the appropriate manifest for your cloud provider. - -- For GCP or Azure: - - ```shell - kubectl apply -f https://raw.githubusercontent.com/nginxinc/nginx-gateway-fabric/v1.0.0/deploy/manifests/service/loadbalancer.yaml - ``` - - Lookup the public IP of the load balancer, which is reported in the `EXTERNAL-IP` column in the output of the - following command: - - ```shell - kubectl get svc nginx-gateway -n nginx-gateway - ``` - - Use the public IP of the load balancer to access NGINX Gateway Fabric. - -- For AWS: - - ```shell - kubectl apply -f https://raw.githubusercontent.com/nginxinc/nginx-gateway-fabric/v1.0.0/deploy/manifests/service/loadbalancer-aws-nlb.yaml - ``` - - In AWS, the NLB DNS name will be reported by Kubernetes in lieu of a public IP in the `EXTERNAL-IP` column. 
To get the DNS name, run:

  ```shell
  kubectl get svc nginx-gateway -n nginx-gateway
  ```

  In general, you should rely on the NLB DNS name; however, for testing purposes you can resolve the DNS name to get
  the IP address of the load balancer:

  ```shell
  nslookup <nlb-dns-name>
  ```

## Upgrading NGINX Gateway Fabric

> **Note**
>
> See [below](#configure-delayed-termination-for-zero-downtime-upgrades) for instructions on how to configure delayed
> termination if required for zero downtime upgrades in your environment.

### Upgrade NGINX Gateway Fabric from Manifests

1. Upgrade the Gateway Resources

   Before you upgrade, ensure the Gateway API resources are at the version supported by NGINX Gateway Fabric
   ([see the Technical Specifications](/README.md#technical-specifications)).
   The [release notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v1.0.0) of the new version of the
   Gateway API might include important upgrade-specific notes and instructions. We advise checking the release notes
   of all versions between the one you're using and the new one.

   To upgrade the Gateway CRDs from [the Gateway API repo](https://github.com/kubernetes-sigs/gateway-api), run:

   ```shell
   kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
   ```

   If you are running on Kubernetes 1.23 or 1.24, you also need to update the validating webhook. To do so, run:

   ```shell
   kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml
   ```

   If you are running on Kubernetes 1.25 or newer and have the validating webhook installed, you should remove the
   webhook. To do so, run:

   ```shell
   kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml
   ```

2.
Upgrade the NGINX Gateway Fabric CRDs - - Run the following command to upgrade the NGINX Gateway Fabric CRDs: - - ```shell - kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/crds.yaml - ``` - -3. Upgrade NGINX Gateway Fabric Deployment - - Run the following command to upgrade NGINX Gateway Fabric: - - ```shell - kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/nginx-gateway.yaml - ``` - -### Upgrade NGINX Gateway Fabric using Helm - -To upgrade NGINX Gateway Fabric when the deployment method is Helm, please follow the instructions -[here](/deploy/helm-chart/README.md#upgrading-the-chart). - -### Configure Delayed Termination for Zero Downtime Upgrades - -To achieve zero downtime upgrades (meaning clients will not see any interruption in traffic while a rolling upgrade is -being performed on NGF), you may need to configure delayed termination on the NGF Pod, depending on your environment. - -> **Note** -> -> When proxying Websocket or any long-lived connections, NGINX will not terminate until that connection is closed -> by either the client or the backend. This means that unless all those connections are closed by clients/backends -> before or during an upgrade, NGINX will not terminate, which means Kubernetes will kill NGINX. As a result, the -> clients will see the connections abruptly closed and thus experience downtime. - -#### Configure Delayed Termination Using Manifests - -Edit the `nginx-gateway.yaml` to include the following: - -1. Add `lifecycle` prestop hooks to both the nginx and the nginx-gateway container definitions: - - ```yaml - <...> - name: nginx-gateway - <...> - lifecycle: - preStop: - exec: - command: - - /usr/bin/gateway - - sleep - - --duration=40s # This flag is optional, the default is 30s - <...> - name: nginx - <...> - lifecycle: - preStop: - exec: - command: - - /bin/sleep - - "40" - <...> - ``` - -2. 
Ensure the `terminationGracePeriodSeconds` matches or exceeds the `sleep` value from the `preStop` hook (the default
   is 30). This ensures Kubernetes does not terminate the Pod before the `preStop` hook is complete.

> **Note**
>
> More information on container lifecycle hooks can be found
> [here](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks) and a detailed
> description of Pod termination behavior can be found in
> [Termination of Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).

#### Configure Delayed Termination Using Helm

To configure delayed termination on the NGF Pod when the deployment method is Helm, follow the instructions
[here](/deploy/helm-chart/README.md#configure-delayed-termination-for-zero-downtime-upgrades).

## Uninstalling NGINX Gateway Fabric

### Uninstall NGINX Gateway Fabric from Manifests

1. Uninstall NGINX Gateway Fabric:

   ```shell
   kubectl delete -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/nginx-gateway.yaml
   ```

   ```shell
   kubectl delete -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/crds.yaml
   ```

2. Uninstall the Gateway API CRDs:

   > **Warning**
   >
   > This command will delete all the corresponding custom resources in your cluster across all namespaces!
   > Ensure there are no custom resources that you want to keep and that no other Gateway API
   > implementations are running in the cluster!

   ```shell
   kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
   ```

   If you are running on Kubernetes 1.23 or 1.24, you also need to delete the validating webhook.
To do so, run: - - ```shell - kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml - ``` - -### Uninstall NGINX Gateway Fabric using Helm - -To uninstall NGINX Gateway Fabric when the deployment method is Helm, please follow the instructions -[here](/deploy/helm-chart/README.md#uninstalling-the-chart). diff --git a/docs/monitoring.md b/docs/monitoring.md deleted file mode 100644 index e2dfe3c26..000000000 --- a/docs/monitoring.md +++ /dev/null @@ -1,106 +0,0 @@ -# Monitoring - -The NGINX Gateway Fabric exposes a number of metrics in the [Prometheus](https://prometheus.io/) format. Those -include NGINX and the controller-runtime metrics. These are delivered using a metrics server orchestrated by the -controller-runtime package. Metrics are enabled by default, and are served via http on port `9113`. - -> **Note** -> By default metrics are served via http. Please note that if serving metrics via https is enabled, this -> endpoint will be secured with a self-signed certificate. Since the metrics server is using a self-signed certificate, -> the Prometheus Pod scrape configuration will also require the `insecure_skip_verify` flag set. See -> [the Prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config). - -## Changing the default Metrics configuration - -### Using Helm - -If you're using *Helm* to install the NGINX Gateway Fabric, set the `metrics.*` parameters to the required values -for your environment. See the [Helm README](/deploy/helm-chart/README.md). - -### Using Manifests - -If you're using *Kubernetes manifests* to install NGINX Gateway Fabric, you can modify the -[manifest](/deploy/manifests/nginx-gateway.yaml) to change the default metrics configuration: - -#### Disabling metrics - -1. Set the `-metrics-disable` [command-line argument](/docs/cli-help.md) to `true` and remove the other `-metrics-*` - command line arguments. - -2. 
Remove the metrics port entry from the list of the ports of the NGINX Gateway Fabric container in the template
   of the NGINX Gateway Fabric Pod:

   ```yaml
   - name: metrics
     containerPort: 9113
   ```

3. Remove the following annotations from the template of the NGINX Gateway Fabric Pod:

   ```yaml
   annotations:
     prometheus.io/scrape: "true"
     prometheus.io/port: "9113"
   ```

#### Changing the default port

1. Set the `-metrics-port` [command-line argument](/docs/cli-help.md) to the required value.

2. Change the metrics port entry in the list of the ports of the NGINX Gateway Fabric container in the template
   of the NGINX Gateway Fabric Pod:

   ```yaml
   - name: metrics
     containerPort: <port>
   ```

3. Change the following annotation in the template of the NGINX Gateway Fabric Pod:

   ```yaml
   annotations:
     <...>
     prometheus.io/port: "<port>"
     <...>
   ```

#### Enable serving metrics via https

1. Set the `-metrics-secure-serving` [command-line argument](/docs/cli-help.md) to `true`.

2. Add the following annotation in the template of the NGINX Gateway Fabric Pod:

   ```yaml
   annotations:
     <...>
     prometheus.io/scheme: "https"
     <...>
   ```

## Available Metrics

NGINX Gateway Fabric exports the following metrics:

- NGINX metrics:
  - Exported by NGINX. Refer to the [NGINX Prometheus Exporter developer docs](https://github.com/nginxinc/nginx-prometheus-exporter#metrics-for-nginx-oss)
  - These metrics have the namespace `nginx_gateway_fabric`, and include the label `class`, which is set to the
    Gateway class of NGF. For example, `nginx_gateway_fabric_connections_accepted{class="nginx"}`.

- NGINX Gateway Fabric metrics:
  - nginx_reloads_total. Number of successful NGINX reloads.
  - nginx_reload_errors_total. Number of unsuccessful NGINX reloads.
  - nginx_stale_config. 1 means NGF failed to configure NGINX with the latest version of the configuration, which means
    NGINX is running with a stale version.
- - nginx_last_reload_milliseconds. Duration in milliseconds of NGINX reloads (histogram). - - event_batch_processing_milliseconds: Duration in milliseconds of event batch processing (histogram), which is the - time it takes NGF to process batches of Kubernetes events (changes to cluster resources). Note that NGF processes - events in batches, and while processing the current batch, it accumulates events for the next batch. - - These metrics have the namespace `nginx_gateway_fabric`, and include the label `class` which is set to the - Gateway class of NGF. For example, `nginx_gateway_fabric_nginx_reloads_total{class="nginx"}`. - -- [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) metrics. These include: - - Total number of reconciliation errors per controller - - Length of reconcile queue per controller - - Reconciliation latency - - Usual resource metrics such as CPU, memory usage, file descriptor usage - - Go runtime metrics such as number of Go routines, GC duration, and Go version information diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md deleted file mode 100644 index 4c97cd4ec..000000000 --- a/docs/troubleshooting.md +++ /dev/null @@ -1,11 +0,0 @@ -# Troubleshooting - -This document contains common or known issues and how to troubleshoot them. - -## failed to reload NGINX: failed to send the HUP signal to NGINX main: operation not permitted - -Depending on your environment's configuration, the control plane may not have the proper permissions to reload -NGINX. If NGINX configuration is not applied and you see the above error in the `nginx-gateway` logs, you will need -to set `allowPrivilegeEscalation` to `true`. If using Helm, you can set the -`nginxGateway.securityContext.allowPrivilegeEscalation` value. -If using the manifests directly, you can update this field under the `nginx-gateway` container's `securityContext`. 
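As an illustrative sketch of that manifest change (only the relevant fields are shown; other `securityContext` fields in your deployment should be left as they are):

```yaml
containers:
  - name: nginx-gateway
    securityContext:
      # permits the control plane to send the HUP (reload) signal to the NGINX main process
      allowPrivilegeEscalation: true
```

If you are using Helm instead, setting `nginxGateway.securityContext.allowPrivilegeEscalation=true` produces the equivalent result.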
diff --git a/examples/advanced-routing/README.md b/examples/advanced-routing/README.md index 1f7dc0a9b..fc1cb56e6 100644 --- a/examples/advanced-routing/README.md +++ b/examples/advanced-routing/README.md @@ -1,3 +1,3 @@ # Advanced Routing -This directory contains the YAML files used in the [Advanced Routing](/docs/guides/advanced-routing.md) guide. +This directory contains the YAML files used in the [Advanced Routing](https://docs.nginx.com/nginx-gateway-fabric/how-to/traffic-management/advanced-routing/) guide. diff --git a/examples/cafe-example/README.md b/examples/cafe-example/README.md index fba0e30db..e0254de81 100644 --- a/examples/cafe-example/README.md +++ b/examples/cafe-example/README.md @@ -7,7 +7,7 @@ to route traffic to that application using HTTPRoute resources. ## 1. Deploy NGINX Gateway Fabric -1. Follow the [installation instructions](/docs/installation.md) to deploy NGINX Gateway Fabric. +1. Follow the [installation instructions](https://docs.nginx.com/nginx-gateway-fabric/installation/) to deploy NGINX Gateway Fabric. 1. Save the public IP address of NGINX Gateway Fabric into a shell variable: diff --git a/examples/cross-namespace-routing/README.md b/examples/cross-namespace-routing/README.md index d8c2ba2e4..3e774cff4 100644 --- a/examples/cross-namespace-routing/README.md +++ b/examples/cross-namespace-routing/README.md @@ -7,7 +7,7 @@ in a different namespace from our HTTPRoutes. ## 1. Deploy NGINX Gateway Fabric -1. Follow the [installation instructions](/docs/installation.md) to deploy NGINX Gateway Fabric. +1. Follow the [installation instructions](https://docs.nginx.com/nginx-gateway-fabric/installation/) to deploy NGINX Gateway Fabric. 1. 
Save the public IP address of NGINX Gateway Fabric into a shell variable: diff --git a/examples/http-header-filter/README.md b/examples/http-header-filter/README.md index 6bc85d0f3..11b32c43e 100644 --- a/examples/http-header-filter/README.md +++ b/examples/http-header-filter/README.md @@ -3,11 +3,12 @@ In this example we will deploy NGINX Gateway Fabric and configure traffic routing for a simple echo server. We will use HTTPRoute resources to route traffic to the echo server, using the `RequestHeaderModifier` filter to modify headers to the request. + ## Running the Example ## 1. Deploy NGINX Gateway Fabric -1. Follow the [installation instructions](/docs/installation.md) to deploy NGINX Gateway Fabric. +1. Follow the [installation instructions](https://docs.nginx.com/nginx-gateway-fabric/installation/) to deploy NGINX Gateway Fabric. 1. Save the public IP address of NGINX Gateway Fabric into a shell variable: diff --git a/examples/traffic-splitting/README.md b/examples/traffic-splitting/README.md index 02c3d2fef..0479722cf 100644 --- a/examples/traffic-splitting/README.md +++ b/examples/traffic-splitting/README.md @@ -9,7 +9,7 @@ and `coffee-v2`. ## 1. Deploy NGINX Gateway Fabric -1. Follow the [installation instructions](/docs/installation.md) to deploy NGINX Gateway Fabric. +1. Follow the [installation instructions](https://docs.nginx.com/nginx-gateway-fabric/installation/) to deploy NGINX Gateway Fabric. 1. 
Save the public IP address of NGINX Gateway Fabric into a shell variable: diff --git a/site/.gitignore b/site/.gitignore new file mode 100644 index 000000000..919b131ed --- /dev/null +++ b/site/.gitignore @@ -0,0 +1,3 @@ +public +.hugo_build.lock +resources diff --git a/site/Makefile b/site/Makefile new file mode 100644 index 000000000..b5231e351 --- /dev/null +++ b/site/Makefile @@ -0,0 +1,92 @@ +HUGO?=hugo +# the officially recommended unofficial docker image +HUGO_IMG?=hugomods/hugo:0.115.3 + +THEME_MODULE = github.com/nginxinc/nginx-hugo-theme +## Pulls the current theme version from the Netlify settings +THEME_VERSION = $(NGINX_THEME_VERSION) +NETLIFY_DEPLOY_URL = ${DEPLOY_PRIME_URL} + +# if there's no local hugo, fallback to docker +ifeq (, $(shell ${HUGO} version 2> /dev/null)) +ifeq (, $(shell docker version 2> /dev/null)) + $(error Docker and Hugo are not installed. Hugo (<0.91) or Docker are required to build the local preview.) +else + HUGO=docker run --rm -it -v ${CURDIR}:/src -p 1313:1313 ${HUGO_IMG} hugo --bind 0.0.0.0 -p 1313 +endif +endif + +MARKDOWNLINT?=markdownlint +MARKDOWNLINT_IMG?=ghcr.io/igorshubovych/markdownlint-cli:latest + +# if there's no local markdownlint, fallback to docker +ifeq (, $(shell ${MARKDOWNLINT} version 2> /dev/null)) +ifeq (, $(shell docker version 2> /dev/null)) +ifneq (, $(shell $(NETLIFY) "true")) + $(error Docker and markdownlint are not installed. markdownlint or Docker are required to lint.) +endif +else + MARKDOWNLINT=docker run --rm -i -v ${CURDIR}:/src --workdir /src ${MARKDOWNLINT_IMG} +endif +endif + +MARKDOWNLINKCHECK?=markdown-link-check +MARKDOWNLINKCHECK_IMG?=ghcr.io/tcort/markdown-link-check:stable +# if there's no local markdown-link-check, fallback to docker +ifeq (, $(shell ${MARKDOWNLINKCHECK} --version 2> /dev/null)) +ifeq (, $(shell docker version 2> /dev/null)) +ifneq (, $(shell $(NETLIFY) "true")) + $(error Docker and markdown-link-check are not installed. 
markdown-link-check or Docker are required to check links.) +endif +else + MARKDOWNLINKCHECK=docker run --rm -it -v ${CURDIR}:/site --workdir /site ${MARKDOWNLINKCHECK_IMG} +endif +endif + + +.PHONY: all all-staging all-dev all-local clean hugo-mod build-production build-staging build-dev docs-drafts docs deploy-preview + +all: hugo-mod build-production + +all-staging: hugo-mod build-staging + +all-dev: hugo-mod build-dev + +all-local: clean hugo-mod build-production + +# Removes the public directory generated by the `hugo` command +clean: + if [[ -d ${PWD}/public ]] ; then rm -rf ${PWD}/public && echo "Removed public directory" ; else echo "Did not find a public directory to remove" ; fi + + +docs-drafts: + ${HUGO} server -D --disableFastRender + +docs-local: clean + ${HUGO} + +docs: + ${HUGO} server --disableFastRender + +lint-markdown: + ${MARKDOWNLINT} -c .markdownlint.yaml -- content + +link-check: + ${MARKDOWNLINKCHECK} $(shell find content -name '*.md') + + +## commands for use in Netlify CI +hugo-mod: + hugo mod get $(THEME_MODULE)@v$(THEME_VERSION) + +build-production: + hugo --gc -e production + +build-staging: + hugo --gc -e staging + +build-dev: + hugo --gc -e development + +deploy-preview: hugo-mod + hugo --gc -b ${NETLIFY_DEPLOY_URL} diff --git a/site/config/_default/config.toml b/site/config/_default/config.toml new file mode 100644 index 000000000..e269c1d8d --- /dev/null +++ b/site/config/_default/config.toml @@ -0,0 +1,68 @@ +title = "NGINX Gateway Fabric" +enableGitInfo = false +baseURL = "/" +publishDir = "public/nginx-gateway-fabric" +staticDir = ["static"] +languageCode = "en-us" +description = "NGINX Gateway Fabric." 
+refLinksErrorLevel = "ERROR" +enableRobotsTXT = "true" +#canonifyURLs = true +pluralizeListTitles = false +pygmentsCodeFences = true +pygmentsUseClasses = true + +[caches] + [caches.modules] + maxAge = -1 + +[module] +[[module.imports]] + path="github.com/nginxinc/nginx-hugo-theme" + +[markup] + [markup.highlight] + codeFences = true + guessSyntax = true + hl_Lines = "" + lineNoStart = 1 + lineNos = false + lineNumbersInTable = true + noClasses = true + style = "monokai" + tabWidth = 4 + [markup.goldmark] + [markup.goldmark.extensions] + definitionList = true + footnote = true + linkify = true + strikethrough = true + table = true + taskList = true + typographer = true + [markup.goldmark.parser] + attribute = true + autoHeadingID = true + autoHeadingIDType = "gitlab" + [markup.goldmark.renderer] + hardWraps = false + unsafe = true + xhtml = false + +[params] + useSectionPageLists = "false" + buildtype = "webdocs" + RSSLink = "/index.xml" + author = "NGINX Inc." # add your company name + github = "nginxinc" # add your github profile name + twitter = "@nginx" # add your twitter profile + #email = "" + noindex_kinds = [ + "taxonomy", + "taxonomyTerm" + ] + logo = "NGINX-product-icon.svg" + +sectionPagesMenu = "docs" + +ignoreFiles = [ "\\.sh$", "\\.DS_Store$", "\\.git.*$", "\\.txt$", "\\/config\\/.*", "README\\.*"] diff --git a/site/config/development/config.toml b/site/config/development/config.toml new file mode 100644 index 000000000..937119dff --- /dev/null +++ b/site/config/development/config.toml @@ -0,0 +1,3 @@ +baseURL = "https://docs-dev.nginx.com/nginx-gateway-fabric" +title = "DEV -- NGINX Gateway Fabric" +canonifyURLs = false diff --git a/site/config/production/config.toml b/site/config/production/config.toml new file mode 100644 index 000000000..f818400d2 --- /dev/null +++ b/site/config/production/config.toml @@ -0,0 +1,3 @@ +baseURL = "/nginx-gateway-fabric" +title = "NGINX Gateway Fabric" +canonifyURLs = false diff --git 
a/site/config/staging/config.toml b/site/config/staging/config.toml new file mode 100644 index 000000000..251ca5fdf --- /dev/null +++ b/site/config/staging/config.toml @@ -0,0 +1,3 @@ +baseURL = "https://docs-staging.nginx.com/nginx-gateway-fabric" +title = "STAGING -- NGINX Gateway Fabric" +canonifyURLs = false diff --git a/site/content/_index.md b/site/content/_index.md new file mode 100644 index 000000000..999aa3155 --- /dev/null +++ b/site/content/_index.md @@ -0,0 +1,7 @@ +--- +title: "Welcome to the NGINX Gateway Fabric documentation" +description: +weight: 300 +linkTitle: "NGINX Gateway Fabric" +menu: docs +--- diff --git a/site/content/changelog.md b/site/content/changelog.md new file mode 100644 index 000000000..cd22389b7 --- /dev/null +++ b/site/content/changelog.md @@ -0,0 +1,8 @@ +--- +title: "Changelog" +description: "No description" +weight: 10000 +toc: true +draft: true +docs: "DOCS-1358" +--- diff --git a/site/content/how-to/_index.md b/site/content/how-to/_index.md new file mode 100644 index 000000000..969045d4d --- /dev/null +++ b/site/content/how-to/_index.md @@ -0,0 +1,9 @@ +--- +title: "How-To Guides" +description: +weight: 300 +linkTitle: "Guides" +menu: + docs: + parent: NGINX Gateway Fabric +--- diff --git a/site/content/how-to/configuration/_index.md b/site/content/how-to/configuration/_index.md new file mode 100644 index 000000000..cd5e67c0e --- /dev/null +++ b/site/content/how-to/configuration/_index.md @@ -0,0 +1,9 @@ +--- +title: "Configuration" +description: +weight: 200 +linkTitle: "Configuration" +menu: + docs: + parent: How-To Guides +--- diff --git a/site/content/how-to/configuration/control-plane-configuration.md b/site/content/how-to/configuration/control-plane-configuration.md new file mode 100644 index 000000000..4be696711 --- /dev/null +++ b/site/content/how-to/configuration/control-plane-configuration.md @@ -0,0 +1,62 @@ +--- +title: "Control Plane Configuration" +description: "Learn how to dynamically update the Gateway 
Fabric control plane configuration."
weight: 100
toc: true
docs: "DOCS-000"
---

## Overview

NGINX Gateway Fabric can dynamically update the control plane configuration without restarting. The control plane configuration is stored in the NginxGateway custom resource, created during the installation of NGINX Gateway Fabric.

NginxGateway is deployed in the same namespace as the controller (default: `nginx-gateway`). The resource's default name is based on your [installation method]({{}}):

- Helm: `<release-name>-config`
- Manifests: `nginx-gateway-config`

The control plane only watches this single instance of the custom resource.

If the resource is invalid according to the OpenAPI schema, the Kubernetes API server will reject the changes. If the resource is deleted or deemed invalid by NGINX Gateway Fabric, a warning event is created in the `nginx-gateway` namespace, and the control plane falls back to its default configuration values.

Additionally, the control plane updates the status of the resource (if it exists) to reflect whether it is valid.

### Spec

{{< bootstrap-table "table table-striped table-bordered" >}}
| name | description | type | required |
|---------|-----------------------------------------------------------------|--------------------------|----------|
| logging | Logging defines logging related settings for the control plane. | [logging](#speclogging) | no |
{{< /bootstrap-table >}}

### Spec.Logging

{{< bootstrap-table "table table-striped table-bordered" >}}
| name | description | type | required |
|-------|------------------------------------------------------------------------|--------|----------|
| level | Level defines the logging level. Supported values: info, debug, error.
| string | no | +{{< /bootstrap-table >}} + +## Viewing and Updating the Configuration + +{{< note >}} For the following examples, the name `nginx-gateway-config` should be updated to the name of the resource created for your installation. {{< /note >}} + +To view the current configuration: + +```shell +kubectl -n nginx-gateway get nginxgateways nginx-gateway-config -o yaml +``` + +To update the configuration: + +```shell +kubectl -n nginx-gateway edit nginxgateways nginx-gateway-config +``` + +This will open the configuration in your default editor. You can then update and save the configuration, which is applied automatically to the control plane. + +To view the status of the configuration: + +```shell +kubectl -n nginx-gateway describe nginxgateways nginx-gateway-config +``` diff --git a/site/content/how-to/maintenance/_index.md b/site/content/how-to/maintenance/_index.md new file mode 100644 index 000000000..5c33e95bb --- /dev/null +++ b/site/content/how-to/maintenance/_index.md @@ -0,0 +1,9 @@ +--- +title: "Maintenance and Upgrades" +description: +weight: 400 +linkTitle: "Maintenance and Upgrades" +menu: + docs: + parent: How-To Guides +--- diff --git a/site/content/how-to/maintenance/upgrade-apps-without-downtime.md b/site/content/how-to/maintenance/upgrade-apps-without-downtime.md new file mode 100644 index 000000000..c39fb0e07 --- /dev/null +++ b/site/content/how-to/maintenance/upgrade-apps-without-downtime.md @@ -0,0 +1,125 @@ +--- +title: "Upgrade applications without downtime" +description: "Learn how to use NGINX Gateway Fabric to upgrade applications without downtime." +weight: 100 +toc: true +docs: "DOCS-000" +--- + +{{}} + +## Overview + +{{< note >}} See the [Architecture document]({{< relref "/overview/gateway-architecture.md" >}}) to learn more about NGINX Gateway Fabric architecture.{{< /note >}} + +NGINX Gateway Fabric allows upgrading applications without downtime. 
To understand the upgrade methods, you need to be familiar with the NGINX features that help prevent application downtime: graceful configuration reloads and upstream server updates.

### Graceful configuration reloads

If a relevant Gateway API or built-in Kubernetes resource is changed, NGINX Gateway Fabric will update NGINX by regenerating the NGINX configuration. NGINX Gateway Fabric then sends a reload signal to the master NGINX process to apply the new configuration.

We call such an operation a "reload". Client requests are not dropped during it, which makes the reload graceful.

This process is further explained in the [NGINX configuration documentation](https://nginx.org/en/docs/control.html?#reconfiguration).

### Upstream server updates

Endpoints frequently change during application upgrades: Kubernetes creates pods for the new version of an application and removes the old ones, creating and removing the respective endpoints as well.

NGINX Gateway Fabric detects changes to endpoints by watching their corresponding [EndpointSlices](https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/).

In an NGINX configuration, a service is represented as an [upstream](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream), and an endpoint as an [upstream server](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server).

Adding and removing endpoints are two of the most common cases:

- If an endpoint is added, NGINX Gateway Fabric adds an upstream server to NGINX that corresponds to the endpoint, then reloads NGINX. Next, NGINX will start proxying traffic to that endpoint.
- If an endpoint is removed, NGINX Gateway Fabric removes the corresponding upstream server from NGINX. After a reload, NGINX will stop proxying traffic to that server. However, it will finish proxying any pending requests to that server before switching to another endpoint.
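As an illustration of this mapping, a service with two ready endpoints might be rendered roughly like the following NGINX configuration fragment (the upstream name and addresses are hypothetical, not the exact format NGINX Gateway Fabric generates):

```nginx
upstream default_my-app_80 {
    # one upstream server per ready endpoint of the service
    server 10.244.0.10:8080;
    server 10.244.0.11:8080;
}
```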
As long as you have more than one endpoint ready, clients won't experience downtime during upgrades.

{{< note >}}It is good practice to configure a [Readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) in the deployment so that a pod can report when it is ready to receive traffic. Note that NGINX Gateway Fabric will not add any endpoint to NGINX that is not ready.{{< /note >}}

## Prerequisites

- You have deployed your application as a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
- The pods of the deployment belong to a [service](https://kubernetes.io/docs/concepts/services-networking/service/) so that Kubernetes creates an [endpoint](https://kubernetes.io/docs/reference/kubernetes-api/service-resources/endpoints-v1/) for each pod.
- You have exposed the application to clients via an [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) resource that references that service.

For example, an application can be exposed using a routing rule like the one below:

```yaml
- matches:
  - path:
      type: PathPrefix
      value: /
  backendRefs:
  - name: my-app
    port: 80
```

{{< note >}}See the [Cafe example](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/examples/cafe-example) for a basic example.{{< /note >}}

The upgrade methods in the next sections cover:

- Rolling deployment upgrades
- Blue-green deployments
- Canary releases

## Rolling deployment upgrade

To start a [rolling deployment upgrade](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment), you update the deployment to use the new version tag of the application. As a result, Kubernetes terminates the pods with the old version and creates new ones. By default, Kubernetes also ensures that some number of pods always stay available during the upgrade.
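That availability guarantee comes from the deployment's rolling update strategy, which you can tune. A minimal sketch (the values are examples, not recommendations):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # never remove a ready pod before its replacement is ready
      maxSurge: 1        # allow one extra pod during the rollout
```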
This upgrade will add new upstream servers to NGINX and remove the old ones. As long as the number of pods (ready endpoints) during an upgrade does not reach zero, NGINX will be able to proxy traffic and therefore prevent any downtime.

This method does not require you to update the **HTTPRoute**.

## Blue-green deployments

With this method, you deploy a new version of the application (blue version) as a separate deployment, while the old version (green) keeps running and handling client traffic. Next, you switch the traffic from the green version to the blue. If the blue version works as expected, you terminate the green one. Otherwise, you switch the traffic back to the green version.

There are two ways to switch the traffic:

- Update the service selector to select the pods of the blue version instead of the green. As a result, NGINX Gateway Fabric removes the green upstream servers from NGINX and adds the blue ones. With this approach, it is not necessary to update the **HTTPRoute**.
- Create a separate service for the blue version and update the backend reference in the **HTTPRoute** to reference this service, which leads to the same result as with the previous option.

## Canary releases

Canary releases involve gradually introducing a new version of your application to a subset of nodes in a controlled manner, splitting the traffic between the old and new (canary) releases. This allows for monitoring and testing the new release's performance and reliability before full deployment, helping to identify and address issues without impacting the entire user base.

To support canary releases, you can implement an approach with two deployments behind the same service (see [Canary deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#canary-deployment) in the Kubernetes documentation). However, this approach lacks precision for defining the traffic split between the old and the canary version.
You can roughly influence the split by controlling the number of pods (for example, four pods of the old version and one pod of the canary). However, note that NGINX Gateway Fabric uses the [`random two least_conn`](https://nginx.org/en/docs/http/ngx_http_upstream_module.html#random) load balancing method, which doesn't guarantee an exact split based on the number of pods (80/20 in the given example).

A more flexible and precise way to implement canary releases is to configure a traffic split in an **HTTPRoute**. In this case, you create a separate deployment for the new version with a separate service. For example, for the rule below, NGINX will proxy 95% of the traffic to the old version endpoints and only 5% to the new ones.

```yaml
- matches:
  - path:
      type: PathPrefix
      value: /
  backendRefs:
  - name: my-app-old
    port: 80
    weight: 95
  - name: my-app-new
    port: 80
    weight: 5
```

{{< note >}}Every request coming from the same client won't necessarily be sent to the same backend. NGINX will independently split each request among the backend references.{{< /note >}}

By updating the rule, you can gradually increase the new version's share of the traffic and eventually switch over to it completely:

```yaml
- matches:
  - path:
      type: PathPrefix
      value: /
  backendRefs:
  - name: my-app-old
    port: 80
    weight: 0
  - name: my-app-new
    port: 80
    weight: 1
```

See the [Traffic splitting example](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/examples/traffic-splitting) from our repository.
diff --git a/site/content/how-to/monitoring/_index.md b/site/content/how-to/monitoring/_index.md
new file mode 100644
index 000000000..0213ee98e
--- /dev/null
+++ b/site/content/how-to/monitoring/_index.md
@@ -0,0 +1,9 @@
+---
+title: "Monitoring and Troubleshooting"
+description:
+weight: 500
+linkTitle: "Monitoring and Troubleshooting"
+menu:
+  docs:
+    parent: How-To Guides
+---
diff --git a/site/content/how-to/monitoring/monitoring.md b/site/content/how-to/monitoring/monitoring.md
new file mode 100644
index 000000000..de75eef20
--- /dev/null
+++ b/site/content/how-to/monitoring/monitoring.md
@@ -0,0 +1,117 @@
+---
+title: "Monitoring NGINX Gateway Fabric"
+description: "Learn how to monitor your NGINX Gateway Fabric effectively. This guide provides easy steps for configuring monitoring settings and understanding key performance metrics."
+weight: 100
+toc: true
+docs: "DOCS-000"
+---
+
+{{< custom-styles >}}
+
+## Overview
+
+
+NGINX Gateway Fabric metrics are exposed in [Prometheus](https://prometheus.io/) format, simplifying monitoring. You can track NGINX and controller-runtime metrics through a metrics server orchestrated by the controller-runtime package. These metrics are enabled by default and can be accessed on HTTP port `9113`.
+
+
+{{< note >}}
+Metrics are served over HTTP by default. Enabling HTTPS will secure the metrics endpoint with a self-signed certificate. When using HTTPS, adjust the Prometheus Pod scrape settings by adding the `insecure_skip_verify` flag to handle the self-signed certificate. For further details, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config).
+{{< /note >}}
+
+## How to change the default metrics configuration
+
+Configuring NGINX Gateway Fabric for monitoring is straightforward. You can change metric settings using Helm or Kubernetes manifests, depending on your setup.
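+
+As noted above, if you enable HTTPS for the metrics endpoint, Prometheus must be told to accept the self-signed certificate. A minimal scrape job might look like the following sketch; the job name is illustrative.
+
+```yaml
+scrape_configs:
+- job_name: nginx-gateway-fabric
+  scheme: https
+  tls_config:
+    # The metrics endpoint presents a self-signed certificate, so skip verification.
+    insecure_skip_verify: true
+```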
+ +### Using Helm + +If you're setting up NGINX Gateway Fabric with Helm, you can adjust the `metrics.*` parameters to fit your needs. For detailed options and instructions, see the [Helm README](/deploy/helm-chart/README.md). + +### Using Kubernetes manifests + +For setups using Kubernetes manifests, change the metrics configuration by editing the [NGINX Gateway manifest](/deploy/manifests/nginx-gateway.yaml). + +#### Disabling metrics + +If you need to disable metrics: + +1. Set the `-metrics-disable` [command-line argument]({{< relref "reference/cli-help.md">}}) to `true` in the NGINX Gateway Fabric Pod's configuration. Remove any other `-metrics-*` arguments. +2. In the Pod template for NGINX Gateway Fabric, delete the metrics port entry from the container ports list: + + ```yaml + - name: metrics + containerPort: 9113 + ``` + +3. Also, remove the following annotations from the NGINX Gateway Fabric Pod template: + + ```yaml + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "9113" + ``` + +#### Changing the default port + +To change the default port for metrics: + +1. Update the `-metrics-port` [command-line argument]({{< relref "reference/cli-help.md">}}) in the NGINX Gateway Fabric Pod's configuration to your chosen port number. +2. In the Pod template, change the metrics port entry to reflect the new port: + + ```yaml + - name: metrics + containerPort: + ``` + +3. Modify the `prometheus.io/port` annotation in the Pod template to match the new port: + + ```yaml + annotations: + <...> + prometheus.io/port: "" + <...> + ``` + +#### Enabling HTTPS for metrics + +For enhanced security with HTTPS: + +1. Enable HTTPS security by setting the `-metrics-secure-serving` [command-line argument]({{< relref "reference/cli-help.md">}}) to `true` in the NGINX Gateway Fabric Pod's configuration. + +2. 
Add an HTTPS scheme annotation to the Pod template: + + ```yaml + annotations: + <...> + prometheus.io/scheme: "https" + <...> + ``` + +## Available metrics in NGINX Gateway Fabric + +NGINX Gateway Fabric provides a variety of metrics to assist in monitoring and analyzing performance. These metrics are categorized as follows: + +### NGINX metrics + +NGINX metrics, essential for monitoring specific NGINX operations, include details like the total number of accepted client connections. For a complete list of available NGINX metrics, refer to the [NGINX Prometheus Exporter developer docs](https://github.com/nginxinc/nginx-prometheus-exporter#metrics-for-nginx-oss). + +These metrics use the `nginx_gateway_fabric` namespace and include the `class` label, indicating the NGINX Gateway class. For example, `nginx_gateway_fabric_connections_accepted{class="nginx"}`. + +### NGINX Gateway Fabric metrics + +Metrics specific to the NGINX Gateway Fabric include: + +- `nginx_reloads_total`: Counts successful NGINX reloads. +- `nginx_reload_errors_total`: Counts NGINX reload failures. +- `nginx_stale_config`: Indicates if NGINX Gateway Fabric couldn't update NGINX with the latest configuration, resulting in a stale version. +- `nginx_last_reload_milliseconds`: Time in milliseconds for NGINX reloads. +- `event_batch_processing_milliseconds`: Time in milliseconds to process batches of Kubernetes events. + +All these metrics are under the `nginx_gateway_fabric` namespace and include a `class` label set to the Gateway class of NGINX Gateway Fabric. For example, `nginx_gateway_fabric_nginx_reloads_total{class="nginx"}`. + +### Controller-runtime metrics + +Provided by the [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) library, these metrics cover a range of aspects: + +- General resource usage like CPU and memory. +- Go runtime metrics such as the number of Go routines, garbage collection duration, and Go version. 
+- Controller-specific metrics, including reconciliation errors per controller, length of the reconcile queue, and reconciliation latency. diff --git a/site/content/how-to/monitoring/troubleshooting.md b/site/content/how-to/monitoring/troubleshooting.md new file mode 100644 index 000000000..5c68a3288 --- /dev/null +++ b/site/content/how-to/monitoring/troubleshooting.md @@ -0,0 +1,23 @@ +--- +title: "Troubleshooting" + +weight: 200 +toc: true +docs: "DOCS-000" +--- + +{{< custom-styles >}} + +This topic describes possible issues users might encounter when using NGINX Gateway Fabric. When possible, suggested workarounds are provided. + +### NGINX fails to reload + +#### Description + +Depending on your environment's configuration, the control plane may not have the proper permissions to reload NGINX. The NGINX configuration will not be applied and you will see the following error in the _nginx-gateway_ logs: `failed to reload NGINX: failed to send the HUP signal to NGINX main: operation not permitted` + +#### Resolution +To resolve this issue you will need to set `allowPrivilegeEscalation` to `true`. + +- If using Helm, you can set the `nginxGateway.securityContext.allowPrivilegeEscalation` value. +- If using the manifests directly, you can update this field under the `nginx-gateway` container's `securityContext`. 
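+
+A sketch of the change in the Pod template; only the fields relevant to this fix are shown:
+
+```yaml
+containers:
+- name: nginx-gateway
+  securityContext:
+    # Allows the control plane to send the HUP signal that reloads NGINX.
+    allowPrivilegeEscalation: true
+```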
diff --git a/site/content/how-to/traffic-management/_index.md b/site/content/how-to/traffic-management/_index.md new file mode 100644 index 000000000..8bf8679ea --- /dev/null +++ b/site/content/how-to/traffic-management/_index.md @@ -0,0 +1,9 @@ +--- +title: "Traffic Management" +description: +weight: 300 +linkTitle: "Traffic Management" +menu: + docs: + parent: How-To Guides +--- diff --git a/docs/guides/advanced-routing.md b/site/content/how-to/traffic-management/advanced-routing.md similarity index 69% rename from docs/guides/advanced-routing.md rename to site/content/how-to/traffic-management/advanced-routing.md index 81f67fdf0..b47ff1607 100644 --- a/docs/guides/advanced-routing.md +++ b/site/content/how-to/traffic-management/advanced-routing.md @@ -1,23 +1,23 @@ -# Routing to Applications Using HTTP Matching Conditions +--- +title: "Routing to Applications Using HTTP Matching Conditions" +description: "Learn how to deploy multiple applications and HTTPRoutes with request conditions such as paths, methods, headers, and query parameters" +weight: 200 +toc: true +docs: "DOCS-000" +--- -In this guide we will configure advanced routing rules for multiple applications. These rules will showcase request -matching by path, headers, query parameters, and method. For an introduction to exposing your application, it is -recommended to go through the [basic guide](/docs/guides/routing-traffic-to-your-app.md) first. +In this guide we will configure advanced routing rules for multiple applications. These rules will showcase request matching by path, headers, query parameters, and method. For an introduction to exposing your application, we recommend that you follow the [basic guide]({{< relref "/how-to/traffic-management/routing-traffic-to-your-app.md" >}}) first. The following image shows the traffic flow that we will be creating with these rules. 
-![Traffic Flow Diagram](/docs/images/advanced-routing.png) +{{Traffic Flow Diagram}} -The goal is to create a set of rules that will result in client requests being sent to specific backends based on -the request attributes. In this diagram, we have two versions of the `coffee` service. Traffic for v1 needs to be -directed to the old application, while traffic for v2 needs to be directed towards the new application. We also -have two `tea` services, one that handles GET operations and one that handles POST operations. Both the `tea` -and `coffee` applications share the same Gateway. +The goal is to create a set of rules that will result in client requests being sent to specific backends based on the request attributes. In this diagram, we have two versions of the `coffee` service. Traffic for v1 needs to be directed to the old application, while traffic for v2 needs to be directed towards the new application. We also have two `tea` services, one that handles GET operations and one that handles POST operations. Both the `tea` and `coffee` applications share the same Gateway. ## Prerequisites -- [Install](/docs/installation.md) NGINX Gateway Fabric. -- [Expose NGINX Gateway Fabric](/docs/installation.md#expose-nginx-gateway-fabric) and save the public IP +- [Install]({{< relref "/installation/" >}}) NGINX Gateway Fabric. +- [Expose NGINX Gateway Fabric]({{< relref "installation/expose-nginx-gateway-fabric.md" >}}) and save the public IP address and port of NGINX Gateway Fabric into shell variables: ```text @@ -25,9 +25,7 @@ and `coffee` applications share the same Gateway. GW_PORT= ``` -> **Note** -> In a production environment, you should have a DNS record for the external IP address that is exposed, -> and it should refer to the hostname that the gateway will forward for. 
+{{< note >}}In a production environment, you should have a DNS record for the external IP address that is exposed, and it should refer to the hostname that the gateway will forward for.{{< /note >}} ## Coffee Applications @@ -41,8 +39,7 @@ kubectl apply -f https://raw.githubusercontent.com/nginxinc/nginx-gateway-fabric ### Deploy the Gateway API Resources for the Coffee Applications -The [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) resource is typically deployed by the -[cluster operator][roles-and-personas]. To deploy the Gateway: +The [gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) resource is typically deployed by the [cluster operator](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/#roles-and-personas_1). To deploy the gateway: ```yaml kubectl apply -f - < **Note** -> If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that -> hostname, without needing to resolve. +{{< note >}}If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that hostname, without needing to resolve.{{< /note >}} ```shell curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/coffee @@ -138,8 +128,7 @@ Server address: 10.244.0.9:8080 Server name: coffee-v2-68bd55f798-s9z5q ``` -If we want our request to be routed to `coffee-v2`, then we need to meet the defined conditions. We can include -a header: +If we want our request to be routed to `coffee-v2`, then we need to meet the defined conditions. We can include a header: ```shell curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/coffee -H "version:v2" @@ -160,8 +149,7 @@ Server name: coffee-v2-68bd55f798-s9z5q ## Tea Applications -Let's deploy a different set of applications now called `tea` and `tea-post`. These applications will -have their own set of rules, but will still attach to the same Gateway listener as the `coffee` apps. 
+Let's deploy a different set of applications now called `tea` and `tea-post`. These applications will have their own set of rules, but will still attach to the same gateway listener as the `coffee` apps. ### Deploy the Tea Applications @@ -171,7 +159,7 @@ kubectl apply -f https://raw.githubusercontent.com/nginxinc/nginx-gateway-fabric ### Deploy the HTTPRoute for the Tea Services -We are reusing the previous Gateway for these applications, so all we need to create is the HTTPRoute. +We are reusing the previous gateway for these applications, so all we need to create is the HTTPRoute. ```yaml kubectl apply -f - < **Note** -> If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that -> hostname, without needing to resolve. +{{< note >}}If you have a DNS record allocated for `cafe.example.com`, you can send the request directly to that hostname, without needing to resolve.{{< /note >}} ```shell curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/tea @@ -242,16 +227,14 @@ Server address: 10.244.0.7:8080 Server name: tea-post-b59b8596b-g586r ``` -This request should receive a response from the `tea-post` Pod. Any other type of method, such as PATCH, will -result in a `404 Not Found` response. +This request should receive a response from the `tea-post` pod. Any other type of method, such as PATCH, will result in a `404 Not Found` response. ## Troubleshooting If you have any issues while sending traffic, try the following to debug your configuration and setup: -- Make sure you set the shell variables $GW_IP and $GW_PORT to the public IP and port of the NGINX Gateway Fabric - Service. Instructions for finding those values are [here](/docs/installation.md#expose-nginx-gateway-fabric). +- Make sure you set the shell variables $GW_IP and $GW_PORT to the public IP and port of the NGINX Gateway Fabric service. 
Refer to the topic [Expose NGINX Gateway Fabric]({{< relref "installation/expose-nginx-gateway-fabric.md" >}}) for instructions on finding those values.
 
 - Check the status of the Gateway:
 
@@ -309,8 +292,7 @@ If you have any issues while sending traffic, try the following to debug your co
       Name: http
 
-  Check that the conditions match and that the attached routes for the `http` listener equals 2. If it is less than
-  2, there may be an issue with the routes.
+  Check that the conditions match and that the attached routes for the `http` listener equals 2. If it is less than 2, there may be an issue with the routes.
 
 - Check the status of the HTTPRoutes:
 
@@ -352,7 +334,7 @@ If you have any issues while sending traffic, try the following to debug your co
 
 ## Further Reading
 
-To learn more about the Gateway API and the resources we created in this guide, check out the following resources:
+To learn more about the Gateway API and the resources we created in this guide, check out the following Kubernetes documentation resources:
 
 - [Gateway API Overview](https://gateway-api.sigs.k8s.io/concepts/api-overview/)
 - [Deploying a simple Gateway](https://gateway-api.sigs.k8s.io/guides/simple-gateway/)
diff --git a/docs/guides/integrating-cert-manager.md b/site/content/how-to/traffic-management/integrating-cert-manager.md
similarity index 50%
rename from docs/guides/integrating-cert-manager.md
rename to site/content/how-to/traffic-management/integrating-cert-manager.md
index 78b5b78b7..d417edec3
--- a/docs/guides/integrating-cert-manager.md
+++ b/site/content/how-to/traffic-management/integrating-cert-manager.md
@@ -1,60 +1,44 @@
-# Securing Traffic using Let's Encrypt and Cert-Manager
+---
+title: "Securing Traffic using Let's Encrypt and Cert-Manager"
+description: "Learn how to issue and manage certificates using Let's Encrypt and cert-manager."
+weight: 300 +toc: true +docs: "DOCS-000" +--- -Securing client server communication is a crucial part of modern application architectures. One of the most important -steps in this process is implementing HTTPS (HTTP over TLS/SSL) for all communications. This encrypts the data -transmitted between the client and server, preventing eavesdropping and tampering. To do this, you need an SSL/TLS -certificate from a trusted Certificate Authority (CA). However, issuing and managing certificates can be a complicated -manual process. Luckily, there are many services and tools available to simplify and automate certificate issuance and -management. +Securing client server communication is a crucial part of modern application architectures. One of the most important steps in this process is implementing HTTPS (HTTP over TLS/SSL) for all communications. This encrypts the data transmitted between the client and server, preventing eavesdropping and tampering. To do this, you need an SSL/TLS certificate from a trusted Certificate Authority (CA). However, issuing and managing certificates can be a complicated manual process. Luckily, there are many services and tools available to simplify and automate certificate issuance and management. -This guide will demonstrate how to: +Follow the steps in this guide to: -- Configure HTTPS for your application using a [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/). +- Configure HTTPS for your application using a [gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/). - Use [Let’s Encrypt](https://letsencrypt.org) as the Certificate Authority (CA) issuing the TLS certificate. - Use [cert-manager](https://cert-manager.io) to automate the provisioning and management of the certificate. ## Prerequisities -1. Administrator access to a Kubernetes cluster. -2. [Helm](https://helm.sh) and [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) must be installed locally. -3. 
Deploy NGINX Gateway Fabric (NGF) following the [deployment instructions](/docs/installation.md).
-4. A DNS resolvable domain name is required. It must resolve to the public endpoint of the NGF deployment, and this
-   public endpoint must be an external IP address or alias accessible over the internet. The process here will depend
-   on your DNS provider. This DNS name will need to be resolvable from the Let’s Encrypt servers, which may require
-   that you wait for the record to propagate before it will work.
+- Administrator access to a Kubernetes cluster.
+- [Helm](https://helm.sh) and [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) must be installed locally.
+- [NGINX Gateway Fabric deployed]({{< relref "/installation/" >}}) in the Kubernetes cluster.
+- A DNS-resolvable domain name is required. It must resolve to the public endpoint of the NGINX Gateway Fabric deployment, and this public endpoint must be an external IP address or alias accessible over the internet. The process here will depend on your DNS provider. This DNS name will need to be resolvable from the Let’s Encrypt servers, which may require that you wait for the record to propagate before it will work.
 
 ## Overview
 
+{{cert-manager ACME challenge and certificate management with Gateway API}}
+
-![cert-manager ACME Challenge and certificate management with Gateway API](/docs/images/cert-manager-gateway-workflow.png)
-
-The diagram above shows a simplified representation of the cert-manager ACME Challenge and certificate issuance process
-using Gateway API. Please note that not all of the Kubernetes objects created in this process are represented in
-this diagram.
+The diagram above shows a simplified representation of the cert-manager ACME challenge and certificate issuance process using Gateway API. Please note that not all of the Kubernetes objects created in this process are represented in this diagram.
 
 At a high level, the process looks like this:
 
-1. 
We deploy cert-manager and create a ClusterIssuer which specifies Let’s Encrypt as our CA and Gateway as our ACME - HTTP01 Challenge solver. -2. We create a Gateway resource for our domain (cafe.example.com) and configure cert-manager integration using an - annotation. -3. This kicks off the certificate issuance process – cert-manager contacts Let’s Encrypt to obtain a certificate, and - Let’s Encrypt starts the ACME challenge. As part of this challenge, a temporary HTTPRoute resource is created by - cert-manager which directs the traffic through NGF to verify we control the domain name in the certificate request. -4. Once the domain has been verified, the certificate is issued. Cert-manager stores the keypair in a Kubernetes secret - that is referenced by the Gateway resource. As a result, NGINX is configured to terminate HTTPS traffic from clients - using this signed keypair. -5. We deploy our application and our HTTPRoute which defines our routing rules. The routing rules defined configure - NGINX to direct requests to https://cafe.example.com/coffee to our coffee-app application, and to use the https - Listener defined in our Gateway resource. -6. When the client connects to https://cafe.example.com/coffee, the request is routed to the coffee-app application - and the communication is secured using the signed keypair contained in the cafe-secret Secret. -7. The certificate will be automatically renewed when it is close to expiry, the Secret will be updated using the new - Certificate, and NGF will dynamically update the keypair on the filesystem used by NGINX for HTTPS termination once - the Secret is updated. - -## Details - -### Step 1 – Deploy cert-manager +1. We deploy cert-manager and create a ClusterIssuer which specifies Let’s Encrypt as our CA and gateway as our ACME HTTP01 challenge solver. +1. We create a gateway resource for our domain (cafe.example.com) and configure cert-manager integration using an annotation. +1. 
This starts the certificate issuance process – cert-manager contacts Let’s Encrypt to obtain a certificate, and Let’s Encrypt starts the ACME challenge. As part of this challenge, cert-manager creates a temporary HTTPRoute resource which directs the traffic through NGINX Gateway Fabric to verify we control the domain name in the certificate request. +1. Once the domain has been verified, the certificate is issued. Cert-manager stores the keypair in a Kubernetes secret that is referenced by the gateway resource. As a result, NGINX is configured to terminate HTTPS traffic from clients using this signed keypair. +1. We deploy our application and our HTTPRoute which defines our routing rules. The routing rules defined configure NGINX to direct requests to https://cafe.example.com/coffee to our coffee-app application, and to use the HTTPS listener defined in our gateway resource. +1. When the client connects to https://cafe.example.com/coffee, the request is routed to the coffee-app application and the communication is secured using the signed keypair contained in the cafe-secret secret. +1. The certificate will be automatically renewed when it is close to expiry, the secret will be updated using the new certificate, and NGINX Gateway Fabric will dynamically update the keypair on the filesystem used by NGINX for HTTPS termination once the secret is updated. + +## Securing Traffic + +### Deploy cert-manager The first step is to deploy cert-manager onto the cluster. @@ -77,18 +61,11 @@ The first step is to deploy cert-manager onto the cluster. --set "extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}" ``` -### Step 2 – Create a ClusterIssuer +### Create a ClusterIssuer -Next we need to create a [ClusterIssuer](https://cert-manager.io/docs/concepts/issuer/), a Kubernetes resource that -represents the certificate authority (CA) that will generate the signed certificates by honouring certificate signing -requests. 
+Next we need to create a [ClusterIssuer](https://cert-manager.io/docs/concepts/issuer/), a Kubernetes resource that represents the certificate authority (CA) that will generate the signed certificates by honouring certificate signing requests.
 
-We are using the ACME Issuer type, and Let's Encrypt as the CA server. In order for Let's Encypt to verify that we own
-the domain a certificate is being requested for, we must complete "challenges". This is to ensure clients are
-unable to request certificates for domains they do not own. We will configure the Issuer to use a HTTP01 challenge, and
-our Gateway resource that we will create in the next step as the solver. To read more about HTTP01 challenges, see
-[here](https://cert-manager.io/docs/configuration/acme/http01/). Use the following YAML definition to create the
-resource, but please note the `email` field must be updated to your own email address.
+We are using the ACME Issuer type, and Let's Encrypt as the CA server. In order for Let's Encrypt to verify that we own the domain a certificate is being requested for, we must complete "challenges". This is to ensure clients are unable to request certificates for domains they do not own. We will configure the issuer to use an HTTP01 challenge, and our gateway resource that we will create in the next step as the solver. To read more about HTTP01 challenges, see the [cert-manager documentation](https://cert-manager.io/docs/configuration/acme/http01/). Use the following YAML definition to create the resource, but please note the `email` field must be updated to your own email address.
 
 ```yaml
 apiVersion: cert-manager.io/v1
@@ -105,7 +82,7 @@ spec:
     privateKeySecretRef:
       # Secret resource that will be used to store the account's private key.
name: issuer-account-key - # Add a single challenge solver, HTTP01 using NGF + # Add a single challenge solver, HTTP01 using NGINX Gateway Fabric solvers: - http01: gatewayHTTPRoute: @@ -115,10 +92,9 @@ spec: kind: Gateway ``` -### Step 3 – Deploy our Gateway with the cert-manager annotation +### Deploy our Gateway with the cert-manager annotation -Next we need to deploy our Gateway. Use can use the below YAML manifest, updating the `spec.listeners[1].hostname` -field to the required value for your environment. +Next we need to deploy our gateway. You can use the YAML manifest below, updating the `spec.listeners[1].hostname` field to the required value for your environment. ```yaml apiVersion: gateway.networking.k8s.io/v1 @@ -147,22 +123,13 @@ spec: It's worth noting a couple of key details in this manifest: -- The cert-manager annotation is present in the metadata – this enables the cert-manager integration, and tells - cert-manager which ClusterIssuer configuration it should use for the certificates. -- There are two Listeners configured, an HTTP Listener on port 80, and an HTTPS Listener on port 443. - - The http Listener on port 80 is required for the HTTP01 ACME challenge to work. This is because as part of the - HTTP01 Challenge, a temporary HTTPRoute will be created by cert-manager to solve the ACME challenge, and this - HTTPRoute requires a Listener on port 80. See the [HTTP01 Gateway API solver documentation](https://cert-manager.io/docs/configuration/acme/http01/#configuring-the-http-01-gateway-api-solver) - for more information. - - The https Listener on port 443 is the Listener we will use in our HTTPRoute in the next step. Cert-manager will - create a Certificate for this Listener block. -- The hostname needs to set to the required value. A new certificate will be issued from the `letsencrypt-prod` - ClusterIssuer for the domain, e.g. "cafe.example.com", once the ACME challenge is successful. 
-
-Once the certificate has been issued, cert-manager will create a Certificate resource on the cluster and the
-`cafe-secret` Secret containing the signed keypair in the same Namespace as the Gateway. We can verify the Secret has
-been created successfully using `kubectl`. Note it will take a little bit of time for the Challenge to complete and the
-Secret to be created:
+- The cert-manager annotation is present in the metadata – this enables the cert-manager integration, and tells cert-manager which ClusterIssuer configuration it should use for the certificates.
+- There are two listeners configured, an HTTP listener on port 80, and an HTTPS listener on port 443.
+  - The HTTP listener on port 80 is required for the HTTP01 ACME challenge to work. This is because as part of the HTTP01 challenge, a temporary HTTPRoute will be created by cert-manager to solve the ACME challenge, and this HTTPRoute requires a listener on port 80. See the [HTTP01 Gateway API solver documentation](https://cert-manager.io/docs/configuration/acme/http01/#configuring-the-http-01-gateway-api-solver) for more information.
+  - The HTTPS listener on port 443 is the listener we will use in our HTTPRoute in the next step. Cert-manager will create a certificate for this listener block.
+- The hostname needs to be set to the required value. A new certificate will be issued from the `letsencrypt-prod` ClusterIssuer for the domain, e.g. "cafe.example.com", once the ACME challenge is successful.
+
+Once the certificate has been issued, cert-manager will create a certificate resource on the cluster and the `cafe-secret` Secret containing the signed keypair in the same Namespace as the gateway. We can verify the secret has been created successfully using `kubectl`. Note it will take a little bit of time for the challenge to complete and the secret to be created:
 
 ```shell
 kubectl get secret cafe-secret
@@ -173,9 +140,8 @@ NAME TYPE DATA AGE
 cafe-secret   kubernetes.io/tls   2      20s
 ```
 
-### Step 4 – Deploy our application and HTTPRoute
-Now we can create our coffee Deployment and Service, and configure the routing rules. You can use the following manifest
-to create the Deployment and Service:
+### Deploy our application and HTTPRoute
+Now we can create our coffee deployment and service, and configure the routing rules. You can use the following manifest to create the deployment and service:
 
 ```yaml
 apiVersion: apps/v1
@@ -212,8 +178,7 @@ spec:
     app: coffee
 ```
 
-Deploy our HTTPRoute to configure our routing rules for the coffee application. Note the `parentRefs` section in the
-spec refers to the Listener configured in the previous step.
+Deploy our HTTPRoute to configure our routing rules for the coffee application. Note the `parentRefs` section in the spec refers to the listener configured in the previous step.
 
 ```yaml
 apiVersion: gateway.networking.k8s.io/v1
@@ -238,14 +203,14 @@ spec:
 
 ## Testing
 
-To test everything has worked correctly, we can use curl to the navigate to our endpoint, e.g.
-https://cafe.example.com/coffee. To verify using curl, we can use the `-v` option to increase verbosity and inspect the
-presented certificate. The output will look something like this:
+To test everything has worked correctly, we can use curl to navigate to our endpoint, for example, https://cafe.example.com/coffee. To verify using curl, we can use the `-v` option to increase verbosity and inspect the presented certificate.
 
 ```shell
 curl https://cafe.example.com/coffee -v
 ```
 
+The output will look similar to this:
+
 ```text
 * Trying 54.195.47.105:443...
* Connected to cafe.example.com (54.195.47.105) port 443 (#0) @@ -293,14 +258,10 @@ Request ID: e64c54a2ac253375ac085d48980f000a ## Troubleshooting -- For troubeshooting anything related to the cert-manager installation or Issuer setup, see - [the cert-manager troubleshooting guide](https://cert-manager.io/docs/troubleshooting/). -- For troubleshooting the HTTP01 ACME Challenge, please see the cert-manager - [ACME troubleshooting guide](https://cert-manager.io/docs/troubleshooting/acme/). - - Note that for the HTTP01 Challenge to work using the Gateway resource, HTTPS redirect must not be configured. - - The temporary HTTPRoute created by cert-manager routes the traffic between cert-manager and the Let's Encrypt server - through NGF. If the Challenge is not successful, it may be useful to inspect the NGINX logs to see the ACME - Challenge requests. You should see something like the following: +- To troubleshoot any issues related to the cert-manager installation or issuer setup, see [the cert-manager troubleshooting guide](https://cert-manager.io/docs/troubleshooting/). +- To troubleshoot the HTTP01 ACME challenge, please see the cert-manager [ACME troubleshooting guide](https://cert-manager.io/docs/troubleshooting/acme/). + - Note that for the HTTP01 challenge to work using the gateway resource, HTTPS redirect must not be configured. + - The temporary HTTPRoute created by cert-manager routes the traffic between cert-manager and the Let's Encrypt server through NGINX Gateway Fabric. If the challenge is not successful, it may be useful to inspect the NGINX logs to see the ACME challenge requests. 
You should see something like the following: ```shell kubectl logs -n nginx-gateway -c nginx @@ -318,8 +279,8 @@ Request ID: e64c54a2ac253375ac085d48980f000a ## Links -- Gateway docs: https://gateway-api.sigs.k8s.io -- Cert-manager Gateway usage: https://cert-manager.io/docs/usage/gateway/ -- Cert-manager ACME: https://cert-manager.io/docs/configuration/acme/ -- Let’s Encrypt: https://letsencrypt.org -- NGINX HTTPS docs: https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-http/ +- [Gateway docs](https://gateway-api.sigs.k8s.io) +- [Cert-manager gateway usage](https://cert-manager.io/docs/usage/gateway/) +- [Cert-manager ACME](https://cert-manager.io/docs/configuration/acme/) +- [Let’s Encrypt](https://letsencrypt.org) +- [NGINX HTTPS docs](https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-http/) diff --git a/site/content/how-to/traffic-management/routing-traffic-to-your-app.md b/site/content/how-to/traffic-management/routing-traffic-to-your-app.md new file mode 100644 index 000000000..21c0c3f94 --- /dev/null +++ b/site/content/how-to/traffic-management/routing-traffic-to-your-app.md @@ -0,0 +1,371 @@ +--- +title: "Routing Traffic to Your Application" +description: "Learn how to route external traffic to your Kubernetes applications using NGINX Gateway Fabric." +weight: 100 +toc: true +docs: "DOCS-000" +--- + +{{}} + +## Overview + +You can route traffic to your Kubernetes applications using the Gateway API and NGINX Gateway Fabric. Whether you're managing a web application or a REST backend API, you can use NGINX Gateway Fabric to expose your application outside the cluster. + +## Prerequisites + +- [Install]({{< relref "installation/" >}}) NGINX Gateway Fabric. 
+- [Expose NGINX Gateway Fabric]({{< relref "installation/expose-nginx-gateway-fabric.md" >}}) and save the public IP address and port of NGINX Gateway Fabric into shell variables: + + ```text + GW_IP=XXX.YYY.ZZZ.III + GW_PORT= + ``` + +## Example application + +The application we are going to use in this guide is a simple **coffee** application comprised of one service and two pods: + +{{coffee app}} + +Using this architecture, the **coffee** application is not accessible outside the cluster. We want to expose this application on the hostname "cafe.example.com" so that clients outside the cluster can access it. + +Install NGINX Gateway Fabric and create two Gateway API resources: a [gateway](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.Gateway) and an [HTTPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.HTTPRoute). + +Using these resources we will configure a simple routing rule to match all HTTP traffic with the hostname "cafe.example.com" and route it to the **coffee** service. + +## Set up + +Create the **coffee** application in Kubernetes by copying and pasting the following block into your terminal: + +```yaml +kubectl apply -f - < 80/TCP 77s +``` + +## Application architecture with NGINX Gateway Fabric + +To route traffic to the **coffee** application, we will create a gateway and HTTPRoute. The following diagram shows the configuration we are creating in the next step: + +{{Configuration}} + +We need a gateway to create an entry point for HTTP traffic coming into the cluster. The **cafe** gateway we are going to create will open an entry point to the cluster on port 80 for HTTP traffic. + +To route HTTP traffic from the gateway to the **coffee** service, we need to create an HTTPRoute named **coffee** and attach it to the gateway. This HTTPRoute will have a single routing rule that routes all traffic to the hostname "cafe.example.com" from the gateway to the **coffee** service. 
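The gateway and HTTPRoute just described can be sketched as two small manifests. This is only an orientation sketch — the resource names, the **http** listener on port 80, the "cafe.example.com" hostname, and the **coffee** service port come from this guide, while the remaining fields assume Gateway API `v1` defaults:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx   # the GatewayClass created by the default installation
  listeners:
  - name: http              # entry point for HTTP traffic on port 80
    port: 80
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: cafe              # attaches this route to the cafe gateway
  hostnames:
  - "cafe.example.com"
  rules:
  - backendRefs:
    - name: coffee          # the coffee service
      port: 80
```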
+ +Once NGINX Gateway Fabric processes the **cafe** gateway and **coffee** HTTPRoute, it will configure its data plane (NGINX) to route all HTTP requests sent to "cafe.example.com" to the pods that the **coffee** service targets: + +{{Traffic Flow}} + +The **coffee** service is omitted from the diagram above because the NGINX Gateway Fabric routes directly to the pods that the **coffee** service targets. + +{{< note >}}In the diagrams above, all resources that are the responsibility of the cluster operator are shown in blue. The orange resources are the responsibility of the application developers. + +See the [roles and personas](https://gateway-api.sigs.k8s.io/concepts/roles-and-personas/#roles-and-personas_1) Gateway API document for more information on these roles.{{< /note >}} + +## Create the Gateway API resources + +To create the **cafe** gateway, copy and paste the following into your terminal: + +```yaml +kubectl apply -f - <}}Your clients should be able to resolve the domain name "cafe.example.com" to the public IP of the NGINX Gateway Fabric. In this guide we will simulate that using curl's `--resolve` option. 
{{< /note >}} + + +First, let's send a request to the path "/": + +```shell +curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/ +``` + +We should get a response from one of the **coffee** pods: + +```text +Server address: 10.12.0.18:8080 +Server name: coffee-7dd75bc79b-cqvb7 +``` + +Since the **cafe** HTTPRoute routes all traffic on any path to the **coffee** application, the following requests should also be handled by the **coffee** pods: + +```shell +curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/some-path +``` + +```text +Server address: 10.12.0.18:8080 +Server name: coffee-7dd75bc79b-cqvb7 +``` + +```shell +curl --resolve cafe.example.com:$GW_PORT:$GW_IP http://cafe.example.com:$GW_PORT/some/path +``` + +```text +Server address: 10.12.0.19:8080 +Server name: coffee-7dd75bc79b-dett3 +``` + +Requests to hostnames other than "cafe.example.com" should _not_ be routed to the coffee application, since the **cafe** HTTPRoute only matches requests with the "cafe.example.com" hostname. To verify this, send a request to the hostname "pub.example.com": + +```shell +curl --resolve pub.example.com:$GW_PORT:$GW_IP http://pub.example.com:$GW_PORT/ +``` + +You should receive a 404 Not Found error: + +```text + +404 Not Found +

404 Not Found

+
nginx/1.25.2
+ + +``` + +## Troubleshooting + +If you have any issues while testing the configuration, try the following to debug your configuration and setup: + +- Make sure you set the shell variables $GW_IP and $GW_PORT to the public IP and port of the NGINX Gateway Fabric Service. Instructions for finding those values are in the [Expose NGINX Gateway Fabric]({{< relref "installation/expose-nginx-gateway-fabric.md" >}}) guide. + +- Check the status of the gateway: + + ```shell + kubectl describe gateway cafe + ``` + + The gateway status should look similar to this: + + ```text + Status: + Addresses: + Type: IPAddress + Value: 10.244.0.85 + Conditions: + Last Transition Time: 2023-08-15T20:57:21Z + Message: Gateway is accepted + Observed Generation: 1 + Reason: Accepted + Status: True + Type: Accepted + Last Transition Time: 2023-08-15T20:57:21Z + Message: Gateway is programmed + Observed Generation: 1 + Reason: Programmed + Status: True + Type: Programmed + Listeners: + Attached Routes: 1 + Conditions: + Last Transition Time: 2023-08-15T20:57:21Z + Message: Listener is accepted + Observed Generation: 1 + Reason: Accepted + Status: True + Type: Accepted + Last Transition Time: 2023-08-15T20:57:21Z + Message: Listener is programmed + Observed Generation: 1 + Reason: Programmed + Status: True + Type: Programmed + Last Transition Time: 2023-08-15T20:57:21Z + Message: All references are resolved + Observed Generation: 1 + Reason: ResolvedRefs + Status: True + Type: ResolvedRefs + Last Transition Time: 2023-08-15T20:57:21Z + Message: No conflicts + Observed Generation: 1 + Reason: NoConflicts + Status: False + Type: Conflicted + Name: http + ``` + + Check that the conditions match and that the attached routes for the **http** listener equals 1. If it is 0, there may be an issue with the HTTPRoute. 
+ +- Check the status of the HTTPRoute: + + ```shell + kubectl describe httproute coffee + ``` + + The HTTPRoute status should look similar to this: + + ```text + Status: + Parents: + Conditions: + Last Transition Time: 2023-08-15T20:57:21Z + Message: The route is accepted + Observed Generation: 1 + Reason: Accepted + Status: True + Type: Accepted + Last Transition Time: 2023-08-15T20:57:21Z + Message: All references are resolved + Observed Generation: 1 + Reason: ResolvedRefs + Status: True + Type: ResolvedRefs + Controller Name: gateway.nginx.org/nginx-gateway-controller + Parent Ref: + Group: gateway.networking.k8s.io + Kind: Gateway + Name: cafe + Namespace: default + ``` + + Check for any error messages in the conditions. + +- Check the generated nginx config: + + ```shell + kubectl exec -it -n nginx-gateway -c nginx -- nginx -T + ``` + + The config should contain a server block with the server name "cafe.example.com" that listens on port 80. This server block should have a single location `/` that proxy passes to the coffee upstream: + + ```nginx configuration + server { + listen 80; + + server_name cafe.example.com; + + location / { + ... + proxy_pass http://default_coffee_80$request_uri; # the upstream is named default_coffee_80 + ... + } + } + ``` + + There should also be an upstream block with a name that matches the upstream in the **proxy_pass** directive. This upstream block should contain the pod IPs of the **coffee** pods: + + ```nginx configuration + upstream default_coffee_80 { + ... + server 10.12.0.18:8080; # these should be the pod IPs of the coffee pods + server 10.12.0.19:8080; + ... + } + ``` + +{{< note >}}The entire configuration is not shown because it is subject to change. Ellipses indicate that there's configuration not shown.{{< /note >}} + +If your issue persists, [contact us](https://github.com/nginxinc/nginx-gateway-fabric#contacts). 
+ +## Further Reading + +To learn more about the Gateway API and the resources we created in this guide, check out the following resources: + +- [Gateway API Overview](https://gateway-api.sigs.k8s.io/concepts/api-overview/) +- [Deploying a simple Gateway](https://gateway-api.sigs.k8s.io/guides/simple-gateway/) +- [HTTP Routing](https://gateway-api.sigs.k8s.io/guides/http-routing/) diff --git a/site/content/includes/index.md b/site/content/includes/index.md new file mode 100644 index 000000000..ca03031f1 --- /dev/null +++ b/site/content/includes/index.md @@ -0,0 +1,3 @@ +--- +headless: true +--- diff --git a/site/content/includes/installation/delay-pod-termination/delay-pod-termination-overview.md b/site/content/includes/installation/delay-pod-termination/delay-pod-termination-overview.md new file mode 100644 index 000000000..4da3c375a --- /dev/null +++ b/site/content/includes/installation/delay-pod-termination/delay-pod-termination-overview.md @@ -0,0 +1,9 @@ +--- +docs: +--- + +To avoid client service interruptions when upgrading NGINX Gateway Fabric, you can configure [`PreStop` hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) to delay terminating the NGINX Gateway Fabric pod, allowing the pod to complete certain actions before shutting down. This ensures a smooth upgrade without any downtime, also known as a zero downtime upgrade. + +For an in-depth explanation of how Kubernetes handles pod termination, see the [Termination of Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) topic on their official website. + +{{}}Keep in mind that NGINX won't shut down while WebSocket or other long-lived connections are open. NGINX will only stop when these connections are closed by the client or the backend. If these connections stay open during an upgrade, Kubernetes might need to shut down NGINX forcefully. 
This sudden shutdown could interrupt service for clients.{{}} diff --git a/site/content/includes/installation/delay-pod-termination/termination-grace-period.md b/site/content/includes/installation/delay-pod-termination/termination-grace-period.md new file mode 100644 index 000000000..0d45f6691 --- /dev/null +++ b/site/content/includes/installation/delay-pod-termination/termination-grace-period.md @@ -0,0 +1,9 @@ +--- +docs: +--- + +Set `terminationGracePeriodSeconds` to a value that is equal to or greater than the `sleep` duration specified in the `preStop` hook (default is `30`). This setting prevents Kubernetes from terminating the pod before the `preStop` hook has completed running. + + ```yaml + terminationGracePeriodSeconds: 50 + ``` diff --git a/site/content/includes/installation/helm/pulling-the-chart.md b/site/content/includes/installation/helm/pulling-the-chart.md new file mode 100644 index 000000000..a2495ef06 --- /dev/null +++ b/site/content/includes/installation/helm/pulling-the-chart.md @@ -0,0 +1,12 @@ +--- +docs: +--- + +Pull the latest stable release of the NGINX Gateway Fabric chart: + + ```shell + helm pull oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --untar + cd nginx-gateway-fabric + ``` + + If you want the latest version from the **main** branch, add `--version 0.0.0-edge` to your pull command. diff --git a/site/content/includes/installation/install-gateway-api-resources.md b/site/content/includes/installation/install-gateway-api-resources.md new file mode 100644 index 000000000..8ceb6195f --- /dev/null +++ b/site/content/includes/installation/install-gateway-api-resources.md @@ -0,0 +1,30 @@ +--- +docs: +--- + +{{}}The [Gateway API resources](https://github.com/kubernetes-sigs/gateway-api) from the standard channel must be installed before deploying NGINX Gateway Fabric.
If they are already installed in your cluster, please ensure they are the correct version as supported by the NGINX Gateway Fabric - [see the Technical Specifications](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/README.md#technical-specifications).{{}} + +**Stable release** + +If installing the latest stable release of NGINX Gateway Fabric, ensure you are deploying its supported version of +the Gateway API resources: + +```shell +kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.1/standard-install.yaml +``` + +**Edge version** + +If installing the edge version of NGINX Gateway Fabric from the **main** branch: + +```shell +kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml +``` + +If you are running on Kubernetes 1.23 or 1.24, you also need to install the validating webhook. To do so, run: + +```shell +kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml +``` + +{{< important >}}The validating webhook is not needed if you are running Kubernetes 1.25+. Validation is done using CEL on the CRDs. See the [resource validation doc]({{< relref "/overview/resource-validation.md" >}}) for more information.{{< /important >}} diff --git a/site/content/includes/installation/next-step-expose-fabric.md b/site/content/includes/installation/next-step-expose-fabric.md new file mode 100644 index 000000000..eecadbb83 --- /dev/null +++ b/site/content/includes/installation/next-step-expose-fabric.md @@ -0,0 +1,5 @@ +--- +docs: +--- + +After installing NGINX Gateway Fabric, the next step is to make it accessible. Detailed instructions can be found in [Expose the NGINX Gateway Fabric]({{< relref "installation/expose-nginx-gateway-fabric.md" >}}). 
diff --git a/site/content/includes/installation/uninstall-gateway-api-resources.md b/site/content/includes/installation/uninstall-gateway-api-resources.md new file mode 100644 index 000000000..d0f878670 --- /dev/null +++ b/site/content/includes/installation/uninstall-gateway-api-resources.md @@ -0,0 +1,30 @@ +--- +docs: +--- + + {{}}This will remove all corresponding custom resources in your entire cluster, across all namespaces. Double-check to make sure you don't have any custom resources you need to keep, and confirm that there are no other Gateway API implementations active in your cluster.{{}} + + To uninstall the Gateway API resources, including the CRDs and the validating webhook, run the following: + + **Stable release** + + If you were running the latest stable release version of NGINX Gateway Fabric: + + ```shell + kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.1/standard-install.yaml + ``` + + **Edge version** + + If you were running the edge version of NGINX Gateway Fabric from the **main** branch: + + ```shell + kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml + ``` + + + If you are running on Kubernetes 1.23 or 1.24, you also need to delete the validating webhook. 
To do so, run: + + ```shell + kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml + ``` diff --git a/site/content/installation/_index.md b/site/content/installation/_index.md new file mode 100644 index 000000000..3144876a5 --- /dev/null +++ b/site/content/installation/_index.md @@ -0,0 +1,9 @@ +--- +title: "Installation" +description: +weight: 200 +linkTitle: "Installation" +menu: + docs: + parent: NGINX Gateway Fabric +--- diff --git a/docs/building-the-images.md b/site/content/installation/building-the-images.md similarity index 70% rename from docs/building-the-images.md rename to site/content/installation/building-the-images.md index c43f471cb..5c8614722 100644 --- a/docs/building-the-images.md +++ b/site/content/installation/building-the-images.md @@ -1,4 +1,15 @@ -# Building the Images +--- +title: "Building NGINX Gateway Fabric and NGINX Images" +weight: 300 +toc: true +docs: "DOCS-000" +--- + +{{}} + +## Overview + +While most users will install NGINX Gateway Fabric [with Helm]({{< relref "/installation/installing-ngf/helm.md" >}}) or [Kubernetes manifests]({{< relref "/installation/installing-ngf/manifests.md" >}}), manually building the [NGINX Gateway Fabric and NGINX images]({{< relref "/overview/gateway-architecture.md#the-nginx-gateway-fabric-pod" >}}) can be helpful for testing and development purposes. Follow the steps in this document to build the NGINX Gateway Fabric and NGINX images. 
## Prerequisites diff --git a/site/content/installation/expose-nginx-gateway-fabric.md b/site/content/installation/expose-nginx-gateway-fabric.md new file mode 100644 index 000000000..7e92c1de2 --- /dev/null +++ b/site/content/installation/expose-nginx-gateway-fabric.md @@ -0,0 +1,71 @@ +--- +title: "Expose NGINX Gateway Fabric" +description: "" +weight: 300 +toc: true +docs: "DOCS-000" +--- + +{{}} + +## Overview + +Gain access to NGINX Gateway Fabric by creating either a **NodePort** service or a **LoadBalancer** service in the same namespace as the controller. The service name is specified in the `--service` argument of the controller. + +{{}}The service manifests configure NGINX Gateway Fabric on ports `80` and `443`, affecting any gateway [listeners](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.Listener) on these ports. To use different ports, update the manifests. NGINX Gateway Fabric requires a configured [gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/#gateway) resource with a valid listener to listen on any ports.{{}} + +NGINX Gateway Fabric uses the created service to update the **Addresses** field in the **Gateway Status** resource. Using a **LoadBalancer** service sets this field to the IP address and/or hostname of that service. Without a service, the pod IP address is used. + +This gateway is associated with the NGINX Gateway Fabric through the **gatewayClassName** field. The default installation of NGINX Gateway Fabric creates a **GatewayClass** with the name **nginx**. NGINX Gateway Fabric will only configure gateways with a **gatewayClassName** of **nginx** unless you change the name via the `--gatewayclass` [command-line flag](/docs/cli-help.md#static-mode). 
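As a sketch of that linkage, the **nginx** GatewayClass would look roughly like the following. The `controllerName` value matches the controller name reported in status output elsewhere in these docs; treat the exact manifest as an assumption rather than the shipped resource:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: nginx   # gateways reference this name in their gatewayClassName field
spec:
  controllerName: gateway.nginx.org/nginx-gateway-controller
```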
+ +## Create a NodePort service + +To create a **NodePort** service, run the following command: + +```shell +kubectl apply -f https://raw.githubusercontent.com/nginxinc/nginx-gateway-fabric/v1.0.0/deploy/manifests/service/nodeport.yaml +``` + +A **NodePort** service allocates a port on every cluster node. Access NGINX Gateway Fabric using any node's IP address and the allocated port. + +## Create a LoadBalancer service + +To create a **LoadBalancer** service, use the appropriate manifest for your cloud provider: + +### GCP (Google Cloud Platform) and Azure + +1. Run the following command: + + ```shell + kubectl apply -f https://raw.githubusercontent.com/nginxinc/nginx-gateway-fabric/v1.0.0/deploy/manifests/service/loadbalancer.yaml + ``` + +2. Look up the public IP of the load balancer, which is reported in the `EXTERNAL-IP` column in the output of the following command: + + ```shell + kubectl get svc nginx-gateway -n nginx-gateway + ``` + +3. Use the public IP of the load balancer to access NGINX Gateway Fabric. + +### AWS (Amazon Web Services) + +1. Run the following command: + + ```shell + kubectl apply -f https://raw.githubusercontent.com/nginxinc/nginx-gateway-fabric/v1.0.0/deploy/manifests/service/loadbalancer-aws-nlb.yaml + ``` + +2. In AWS, the NLB (Network Load Balancer) DNS (Domain Name System) name will be reported by Kubernetes instead of a public IP in the `EXTERNAL-IP` column.
To get the DNS name, run: + + ```shell + kubectl get svc nginx-gateway -n nginx-gateway + ``` + + {{< note >}} We recommend using the NLB DNS whenever possible, but for testing purposes, you can resolve the DNS name to get the IP address of the load balancer: + + ```shell + nslookup + ``` + + {{< /note >}} diff --git a/site/content/installation/installing-ngf/_index.md b/site/content/installation/installing-ngf/_index.md new file mode 100644 index 000000000..e37f66847 --- /dev/null +++ b/site/content/installation/installing-ngf/_index.md @@ -0,0 +1,9 @@ +--- +title: "Installing NGINX Gateway Fabric" +description: +weight: 200 +linkTitle: "Installing NGINX Gateway Fabric" +menu: + docs: + parent: Installation +--- diff --git a/site/content/installation/installing-ngf/helm.md b/site/content/installation/installing-ngf/helm.md new file mode 100644 index 000000000..33ebb0162 --- /dev/null +++ b/site/content/installation/installing-ngf/helm.md @@ -0,0 +1,188 @@ +--- +title: "Installation with Helm" +description: "Learn how to install, upgrade, and uninstall NGINX Gateway Fabric in a Kubernetes cluster with Helm." +weight: 100 +toc: true +docs: "DOCS-000" +--- + +{{}} + +## Prerequisites + +To complete this guide, you'll need to install: + +- [kubectl](https://kubernetes.io/docs/tasks/tools/), a command-line tool for managing Kubernetes clusters. +- [Helm 3.0 or later](https://helm.sh/docs/intro/install/), for deploying and managing applications on Kubernetes. + + +## Deploy NGINX Gateway Fabric + +### Installing the Gateway API resources + +{{}} + +### Install from the OCI registry + +- To install the latest stable release of NGINX Gateway Fabric in the **nginx-gateway** namespace, run the following command: + + ```shell + helm install oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric --create-namespace --wait -n nginx-gateway + ``` + + Change `` to the name you want for your release. If the namespace already exists, you can omit the optional `--create-namespace` flag. 
If you want the latest version from the **main** branch, add `--version 0.0.0-edge` to your install command. + +### Install from sources {#install-from-sources} + +1. {{}} + +2. To install the chart into the **nginx-gateway** namespace, run the following command. + + ```shell + helm install . --create-namespace --wait -n nginx-gateway + ``` + + Change `` to the name you want for your release. If the namespace already exists, you can omit the optional `--create-namespace` flag. + +## Upgrade NGINX Gateway Fabric + +{{}}For guidance on zero downtime upgrades, see the [Delay Pod Termination](#configure-delayed-pod-termination-for-zero-downtime-upgrades) section below.{{}} + +To upgrade NGINX Gateway Fabric and get the latest features and improvements, take the following steps: + +### Upgrade Gateway resources + +To upgrade your Gateway API resources, take the following steps: + +- Verify the Gateway API resources are compatible with your NGINX Gateway Fabric version. Refer to the [Technical Specifications]({{< relref "reference/technical-specifications.md" >}}) for details. +- Review the [release notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v1.0.0) for any important upgrade-specific information. +- To upgrade the Gateway API resources, run: + + ```shell + kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml + ``` + +### Upgrade NGINX Gateway Fabric CRDs + +Helm's upgrade process does not automatically upgrade the NGINX Gateway Fabric CRDs (Custom Resource Definitions). + +To upgrade the CRDs, take the following steps: + +1. {{}} + +2. Upgrade the CRDs: + + ```shell + kubectl apply -f crds/ + ``` + + {{}}Ignore the following warning, as it is expected.{{}} + + ``` text + Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply. 
+ + ``` + +### Upgrade NGINX Gateway Fabric release + +#### Upgrade from the OCI registry + +- To upgrade to the latest stable release of NGINX Gateway Fabric, run: + + ```shell + helm upgrade oci://ghcr.io/nginxinc/charts/nginx-gateway-fabric -n nginx-gateway + ``` + + Replace `` with your chosen release name. + +#### Upgrade from sources + +1. {{}} + +1. To upgrade, run the following command: + + ```shell + helm upgrade . -n nginx-gateway + ``` + + Replace `` with your chosen release name. + +## Delay pod termination for zero downtime upgrades {#configure-delayed-pod-termination-for-zero-downtime-upgrades} + +{{< include "installation/delay-pod-termination/delay-pod-termination-overview.md" >}} + +Follow these steps to configure delayed pod termination: + +1. Open the `values.yaml` for editing. + +1. **Add delayed shutdown hooks**: + + - In the `values.yaml` file, add `lifecycle: preStop` hooks to both the `nginx` and `nginx-gateway` container definitions. These hooks instruct the containers to delay their shutdown process, allowing time for connections to close gracefully. Update the `sleep` value to what works for your environment. + + ```yaml + nginxGateway: + <...> + lifecycle: + preStop: + exec: + command: + - /usr/bin/gateway + - sleep + - --duration=40s # This flag is optional, the default is 30s + + nginx: + <...> + lifecycle: + preStop: + exec: + command: + - /bin/sleep + - "40" + ``` + +1. **Set the termination grace period**: + + - {{}} + +1. Save the changes.
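Taken together, the grace period and the `preStop` hooks end up in the rendered pod spec roughly as shown below. This is a sketch only — field placement follows the standard Kubernetes pod spec, and the durations are the example values used above:

```yaml
spec:
  terminationGracePeriodSeconds: 50      # must be >= the preStop sleep durations below
  containers:
  - name: nginx-gateway
    lifecycle:
      preStop:
        exec:
          command: ["/usr/bin/gateway", "sleep", "--duration=40s"]
  - name: nginx
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sleep", "40"]
```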
+ +{{}} +For additional information on configuring and understanding the behavior of containers and pods during their lifecycle, refer to the following Kubernetes documentation: + +- [Container Lifecycle Hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks) +- [Pod Lifecycle](https://kubernetes.io/docs/concepts/workloads/Pods/Pod-lifecycle/#Pod-termination) + +{{}} + + +## Uninstall NGINX Gateway Fabric + +Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your Kubernetes cluster: + +1. **Uninstall NGINX Gateway Fabric:** + + - To uninstall NGINX Gateway Fabric, run: + + ```shell + helm uninstall -n nginx-gateway + ``` + + Replace `` with your chosen release name. + +2. **Remove namespace and CRDs:** + + - To remove the **nginx-gateway** namespace and its custom resource definitions (CRDs), run: + + ```shell + kubectl delete ns nginx-gateway + kubectl delete crd nginxgateways.gateway.nginx.org + ``` + +3. **Remove the Gateway API resources:** + + - {{}} + +## Next steps + +### Expose NGINX Gateway Fabric + +{{}} diff --git a/site/content/installation/installing-ngf/manifests.md b/site/content/installation/installing-ngf/manifests.md new file mode 100644 index 000000000..772edf3a1 --- /dev/null +++ b/site/content/installation/installing-ngf/manifests.md @@ -0,0 +1,194 @@ +--- +title: "Installation with Kubernetes manifests" +description: "Learn how to install, upgrade, and uninstall NGINX Gateway Fabric using Kubernetes manifests." +weight: 200 +toc: true +docs: "DOCS-000" +--- + +{{}} + +## Prerequisites + +To complete this guide, you'll need to install: + +- [kubectl](https://kubernetes.io/docs/tasks/tools/), a command-line interface for managing Kubernetes clusters. + + +## Deploy NGINX Gateway Fabric + +Deploying NGINX Gateway Fabric with Kubernetes manifests takes only a few steps. With manifests, you can configure your deployment exactly how you want. 
Manifests also make it easy to replicate deployments across environments or clusters, ensuring consistency. + +### 1. Install the Gateway API resources + +{{}} + +### 2. Deploy the NGINX Gateway Fabric CRDs + +#### Stable release + + ```shell + kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/crds.yaml + ``` + +#### Edge version + + ```shell + git clone https://github.com/nginxinc/nginx-gateway-fabric.git + cd nginx-gateway-fabric + ``` + + ```shell + kubectl apply -f deploy/manifests/crds + ``` + +### 3. Deploy NGINX Gateway Fabric + + {{}}By default, NGINX Gateway Fabric is installed in the **nginx-gateway** namespace. You can deploy in another namespace by modifying the manifest files.{{}} + +#### Stable release + + ```shell + kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/nginx-gateway.yaml + ``` + +#### Edge version + + ```shell + kubectl apply -f deploy/manifests/nginx-gateway.yaml + ``` + +### 4. Verify the Deployment + +To confirm that NGINX Gateway Fabric is running, check the pods in the `nginx-gateway` namespace: + + ```shell + kubectl get pods -n nginx-gateway + ``` + + The output should look similar to this (note that the pod name will include a unique string): + + ```text + NAME READY STATUS RESTARTS AGE + nginx-gateway-5d4f4c7db7-xk2kq 2/2 Running 0 112s + ``` + + +## Upgrade NGINX Gateway Fabric + +{{}}For guidance on zero downtime upgrades, see the [Delay Pod Termination](#configure-delayed-pod-termination-for-zero-downtime-upgrades) section below.{{}} + +To upgrade NGINX Gateway Fabric and get the latest features and improvements, take the following steps: + +1. **Upgrade Gateway API resources:** + + - Verify that your NGINX Gateway Fabric version is compatible with the Gateway API resources. Refer to the [Technical Specifications]({{< relref "reference/technical-specifications.md" >}}) for details. 
+ - Review the [release notes](https://github.com/kubernetes-sigs/gateway-api/releases) for any important upgrade-specific information. + - To upgrade the Gateway API resources, run: + + ```shell + kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml + ``` + + - If you are running on Kubernetes 1.23 or 1.24, you also need to update the validating webhook: + + ```shell + kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml + ``` + + - If you are running on Kubernetes 1.25 or newer and have the validating webhook installed, you should remove the + webhook: + + ```shell + kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/webhook-install.yaml + ``` + +1. **Upgrade NGINX Gateway Fabric CRDs:** + - To upgrade the Custom Resource Definitions (CRDs), run: + + ```shell + kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/crds.yaml + ``` + +1. **Upgrade NGINX Gateway Fabric deployment:** + - To upgrade the deployment, run: + + ```shell + kubectl apply -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/nginx-gateway.yaml + ``` + +## Delay pod termination for zero downtime upgrades {#configure-delayed-pod-termination-for-zero-downtime-upgrades} + +{{< include "installation/delay-pod-termination/delay-pod-termination-overview.md" >}} + +Follow these steps to configure delayed pod termination: + +1. Open the `nginx-gateway.yaml` for editing. + +1. **Add delayed shutdown hooks**: + + - In the `nginx-gateway.yaml` file, add `lifecycle: preStop` hooks to both the `nginx` and `nginx-gateway` container definitions. These hooks instruct the containers to delay their shutdown process, allowing time for connections to close gracefully. Update the `sleep` value to what works for your environment. 
+ + ```yaml + <...> + name: nginx-gateway + <...> + lifecycle: + preStop: + exec: + command: + - /usr/bin/gateway + - sleep + - --duration=40s # This flag is optional, the default is 30s + <...> + name: nginx + <...> + lifecycle: + preStop: + exec: + command: + - /bin/sleep + - "40" + <...> + ``` + +1. **Set the termination grace period**: + + - {{}} + +1. Save the changes. + +{{}} +For additional information on configuring and understanding the behavior of containers and pods during their lifecycle, refer to the following Kubernetes documentation: + +- [Container Lifecycle Hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks) +- [Pod Lifecycle](https://kubernetes.io/docs/concepts/workloads/Pods/Pod-lifecycle/#Pod-termination) + +{{}} + + +## Uninstall NGINX Gateway Fabric + +Follow these steps to uninstall NGINX Gateway Fabric and Gateway API from your Kubernetes cluster: + +1. **Uninstall NGINX Gateway Fabric:** + + - To remove NGINX Gateway Fabric and its custom resource definitions (CRDs), run: + + ```shell + kubectl delete -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/nginx-gateway.yaml + ``` + + ```shell + kubectl delete -f https://github.com/nginxinc/nginx-gateway-fabric/releases/download/v1.0.0/crds.yaml + ``` + +1. **Remove the Gateway API resources:** + + - {{}} + +## Next steps + +### Expose NGINX Gateway Fabric + +{{}} diff --git a/docs/running-on-kind.md b/site/content/installation/running-on-kind.md similarity index 76% rename from docs/running-on-kind.md rename to site/content/installation/running-on-kind.md index db181b3d1..01e6cf734 100644 --- a/docs/running-on-kind.md +++ b/site/content/installation/running-on-kind.md @@ -1,6 +1,10 @@ -# Running on `kind` - -This guide walks you through how to run NGINX Gateway Fabric on a [kind](https://kind.sigs.k8s.io/) cluster. +--- +title: "Running on kind" +description: "Learn how to run NGINX Gateway Fabric on a kind cluster." 
+weight: 300 +toc: true +docs: "DOCS-000" +--- ## Prerequisites @@ -19,7 +23,7 @@ make create-kind-cluster ## Deploy NGINX Gateway Fabric -Follow the [installation](./installation.md) instructions to deploy NGINX Gateway Fabric on your Kind cluster. +Follow the [installation](./how-to/installation/installation.md) instructions to deploy NGINX Gateway Fabric on your Kind cluster. ## Access NGINX Gateway Fabric diff --git a/site/content/overview/_index.md b/site/content/overview/_index.md new file mode 100644 index 000000000..fb6a7d7d8 --- /dev/null +++ b/site/content/overview/_index.md @@ -0,0 +1,9 @@ +--- +title: "Overview" +description: +weight: 100 +linkTitle: "Overview" +menu: + docs: + parent: NGINX Gateway Fabric +--- diff --git a/docs/gateway-api-compatibility.md b/site/content/overview/gateway-api-compatibility.md similarity index 97% rename from docs/gateway-api-compatibility.md rename to site/content/overview/gateway-api-compatibility.md index 32380fbaa..c748ab37f 100644 --- a/docs/gateway-api-compatibility.md +++ b/site/content/overview/gateway-api-compatibility.md @@ -1,9 +1,14 @@ -# Gateway API Compatibility - -This document describes which Gateway API resources NGINX Gateway Fabric supports and the extent of that support. +--- +title: "Gateway API Compatibility" +description: "Learn which Gateway API resources NGINX Gateway Fabric supports and the extent of that support." 
+weight: 700
+toc: true
+docs: "DOCS-000"
+---

## Summary

+{{< bootstrap-table "table table-striped table-bordered" >}}
| Resource | Core Support Level | Extended Support Level | Implementation-Specific Support Level | API Version |
|-------------------------------------|--------------------|------------------------|---------------------------------------|-------------|
| [GatewayClass](#gatewayclass) | Supported | Not supported | Not Supported | v1 |
@@ -14,6 +19,7 @@ This document describes which Gateway API resources NGINX Gateway Fabric support
| [TLSRoute](#tlsroute) | Not supported | Not supported | Not Supported | N/A |
| [TCPRoute](#tcproute) | Not supported | Not supported | Not Supported | N/A |
| [UDPRoute](#udproute) | Not supported | Not supported | Not Supported | N/A |
+{{< /bootstrap-table >}}

## Terminology

diff --git a/site/content/overview/gateway-architecture.md b/site/content/overview/gateway-architecture.md
new file mode 100644
index 000000000..cc44836d1
--- /dev/null
+++ b/site/content/overview/gateway-architecture.md
@@ -0,0 +1,99 @@
+---
+title: "Gateway Architecture"
+description: "Learn about the architecture and design principles of NGINX Gateway Fabric."
+weight: 100
+toc: true
+docs: "DOCS-000"
+---
+
+The intended audience for this information is primarily the following two groups:
+
+- _Cluster Operators_ who would like to know how the software works and understand how it can fail.
+- _Developers_ who would like to [contribute](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/CONTRIBUTING.md) to the project.
+
+The reader needs to be familiar with core Kubernetes concepts, such as pods, deployments, services, and endpoints. For an understanding of how NGINX itself works, you can read the ["Inside NGINX: How We Designed for Performance & Scale"](https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/) blog post.
+
+## NGINX Gateway Fabric Overview
+
+NGINX Gateway Fabric is an open source project that provides an implementation of the [Gateway API](https://gateway-api.sigs.k8s.io/) using [NGINX](https://nginx.org/) as the data plane. The goal of this project is to implement the core Gateway APIs -- _Gateway_, _GatewayClass_, _HTTPRoute_, _TCPRoute_, _TLSRoute_, and _UDPRoute_ -- to configure an HTTP or TCP/UDP load balancer, reverse proxy, or API gateway for applications running on Kubernetes. NGINX Gateway Fabric supports a subset of the Gateway API.
+
+For a list of supported Gateway API resources and features, see the [Gateway API Compatibility]({{< relref "/overview/gateway-api-compatibility.md" >}}) documentation.
+
+We have more information regarding our [design principles](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/docs/developer/design-principles.md) in the project's GitHub repository.
+
+## NGINX Gateway Fabric at a high level
+
+This figure depicts an example of NGINX Gateway Fabric exposing two web applications within a Kubernetes cluster to clients on the internet:
+
+{{}}
+
+{{< note >}} The figure does not show many of the necessary Kubernetes resources the Cluster Operators and Application Developers need to create, such as deployments and services. {{< /note >}}
+
+The figure shows:
+
+- A _Kubernetes cluster_.
+- Users _Cluster Operator_, _Application Developer A_, and _Application Developer B_. These users interact with the cluster through the Kubernetes API by creating Kubernetes objects.
+- _Clients A_ and _Clients B_ connect to _Applications A_ and _B_, respectively, which they have deployed.
+- The _NGF Pod_, [deployed by _Cluster Operator_]({{< relref "installation">}}) in the namespace _nginx-gateway_. For scalability and availability, you can have multiple replicas. This pod consists of two containers: `NGINX` and `NGF`.
The _NGF_ container interacts with the Kubernetes API to retrieve the most up-to-date Gateway API resources created within the cluster. It then dynamically configures the _NGINX_ container based on these resources, ensuring proper alignment between the cluster state and the NGINX configuration.
+- _Gateway AB_, created by _Cluster Operator_, requests a point where traffic can be translated to Services within the cluster. This Gateway includes a listener with a hostname `*.example.com`. Application Developers can attach their application's routes to this Gateway if their application's hostname matches `*.example.com`.
+- _Application A_ with two pods deployed in the _applications_ namespace by _Application Developer A_. To expose the application to its clients (_Clients A_) via the host `a.example.com`, _Application Developer A_ creates _HTTPRoute A_ and attaches it to `Gateway AB`.
+- _Application B_ with one pod deployed in the _applications_ namespace by _Application Developer B_. To expose the application to its clients (_Clients B_) via the host `b.example.com`, _Application Developer B_ creates _HTTPRoute B_ and attaches it to `Gateway AB`.
+- _Public Endpoint_, which fronts the _NGF_ pod. This is typically a TCP load balancer (cloud, software, or hardware) or a combination of such a load balancer with a NodePort service. _Clients A_ and _B_ connect to their applications via the _Public Endpoint_.
+
+The yellow and purple arrows represent connections related to the client traffic, and the black arrows represent access to the Kubernetes API. The resources within the cluster are color-coded based on the user responsible for their creation.
+
+For example, the Cluster Operator is denoted by the color green, indicating they create and manage all the green resources.
+
+## The NGINX Gateway Fabric pod
+
+NGINX Gateway Fabric consists of two containers:
+
+1. `nginx`: the data plane. Consists of an NGINX master process and NGINX worker processes.
The master process controls the worker processes. The worker processes handle the client traffic and load balance traffic to the backend applications.
+1. `nginx-gateway`: the control plane. Watches Kubernetes objects and configures NGINX.
+
+These containers are deployed in a single pod as a Kubernetes Deployment.
+
+The `nginx-gateway`, or the control plane, is a [Kubernetes controller](https://kubernetes.io/docs/concepts/architecture/controller/), written with the [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) library. It watches Kubernetes objects (services, endpoints, secrets, and Gateway API CRDs), translates them to NGINX configuration, and configures NGINX.
+
+This configuration happens in two stages:
+
+1. NGINX configuration files are written to the NGINX configuration volume shared by the `nginx-gateway` and `nginx` containers.
+1. The control plane reloads the NGINX process.
+
+This is possible because the two containers [share a process namespace](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/), allowing the NGF process to send signals to the NGINX main process.
+
+The following diagram represents the connections, relationships, and interactions between processes within the `nginx` and `nginx-gateway` containers, as well as external processes/entities.
+
+{{}}
+
+The following list describes the connections, preceded by their types in parentheses. For brevity, the suffix "process" has been omitted from the process descriptions.
+
+1. (HTTPS)
+   - Read: _NGF_ reads the _Kubernetes API_ to get the latest versions of the resources in the cluster.
+   - Write: _NGF_ writes to the _Kubernetes API_ to update the handled resources' statuses and emit events. If there's more than one replica of _NGF_ and [leader election](https://github.com/nginxinc/nginx-gateway-fabric/tree/main/deploy/helm-chart#configuration) is enabled, only the _NGF_ pod that is leading will write statuses to the _Kubernetes API_.
+1. (HTTP, HTTPS) _Prometheus_ fetches the `controller-runtime` and NGINX metrics via an HTTP endpoint that _NGF_ exposes (`:9113/metrics` by default). Prometheus is **not** required by NGF, and its endpoint can be turned off.
+1. (File I/O)
+   - Write: _NGF_ generates NGINX _configuration_ based on the cluster resources and writes them as `.conf` files to the mounted `nginx-conf` volume, located at `/etc/nginx/conf.d`. It also writes _TLS certificates_ and _keys_ from [TLS secrets](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) referenced in the accepted Gateway resource to the `nginx-secrets` volume at the path `/etc/nginx/secrets`.
+   - Read: _NGF_ reads the PID file `nginx.pid` from the `nginx-run` volume, located at `/var/run/nginx`. _NGF_ extracts the PID of the nginx process from this file in order to send reload signals to _NGINX master_.
+1. (File I/O) _NGF_ writes logs to its _stdout_ and _stderr_, which are collected by the container runtime.
+1. (HTTP) _NGF_ fetches the NGINX metrics via the unix:/var/run/nginx/nginx-status.sock UNIX socket and converts them to the _Prometheus_ format used in #2.
+1. (Signal) To reload NGINX, _NGF_ sends the [reload signal](https://nginx.org/en/docs/control.html) to the **NGINX master**.
+1. (File I/O)
+   - Write: The _NGINX master_ writes its PID to the `nginx.pid` file stored in the `nginx-run` volume.
+   - Read: The _NGINX master_ reads _configuration files_ and the _TLS cert and keys_ referenced in the configuration when it starts or during a reload. These files, certificates, and keys are stored in the `nginx-conf` and `nginx-secrets` volumes that are mounted to both the `nginx-gateway` and `nginx` containers.
+1. (File I/O)
+   - Write: The _NGINX master_ writes to the auxiliary Unix sockets folder, which is located in the `/var/lib/nginx` directory.
+   - Read: The _NGINX master_ reads the `nginx.conf` file from the `/etc/nginx` directory.
This [file](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/internal/mode/static/nginx/conf/nginx.conf) contains the global and http configuration settings for NGINX. In addition, _NGINX master_ reads the NJS modules referenced in the configuration when it starts or during a reload. NJS modules are stored in the `/usr/lib/nginx/modules` directory.
+1. (File I/O) The _NGINX master_ sends logs to its _stdout_ and _stderr_, which are collected by the container runtime.
+1. (File I/O) An _NGINX worker_ writes logs to its _stdout_ and _stderr_, which are collected by the container runtime.
+1. (Signal) The _NGINX master_ controls the [lifecycle of _NGINX workers_](https://nginx.org/en/docs/control.html#reconfiguration): it creates workers with the new configuration and shuts down workers with the old configuration.
+1. (HTTP) To consider a configuration reload a success, _NGF_ ensures that at least one NGINX worker has the new configuration. To do that, _NGF_ checks a particular endpoint via the unix:/var/run/nginx/nginx-config-version.sock UNIX socket.
+1. (HTTP, HTTPS) A _client_ sends traffic to and receives traffic from any of the _NGINX workers_ on ports 80 and 443.
+1. (HTTP, HTTPS) An _NGINX worker_ sends traffic to and receives traffic from the _backends_.
+
+## Pod readiness
+
+The `nginx-gateway` container includes a readiness endpoint available through the path `/readyz`. A [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes) periodically checks the endpoint on startup, returning a `200 OK` response when the pod can accept traffic for the data plane. Once the control plane successfully starts, the pod becomes ready.
+
+If there are relevant Gateway API resources in the cluster, the control plane will generate the first NGINX configuration and successfully reload NGINX before the pod is considered ready.
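When debugging readiness issues, the endpoint can also be queried by hand. A minimal sketch, assuming the default health port of `8081` (the `health-port` default from the command-line reference) and the default `nginx-gateway` namespace and deployment name:

```shell
# Forward the control plane's health port to the local machine.
kubectl port-forward -n nginx-gateway deployment/nginx-gateway 8081:8081 &

# A ready pod answers 200 OK on the readiness endpoint.
curl -i http://localhost:8081/readyz

# Stop the port-forward when finished.
kill %1
```

If the pod never becomes ready, `kubectl describe pod` on the NGF pod shows the probe failures reported by the kubelet.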
diff --git a/docs/resource-validation.md b/site/content/overview/resource-validation.md similarity index 97% rename from docs/resource-validation.md rename to site/content/overview/resource-validation.md index 6c8881ee6..26407c431 100644 --- a/docs/resource-validation.md +++ b/site/content/overview/resource-validation.md @@ -1,6 +1,10 @@ -# Gateway API Resource Validation - -This document describes how NGINX Gateway Fabric (NGF) validates Gateway API resources. +--- +title: "Gateway API Resource Validation" +description: "Learn how NGINX Gateway Fabric validates Gateway API resources." +weight: 800 +toc: true +docs: "DOCS-000" +--- ## Overview diff --git a/site/content/reference/_index.md b/site/content/reference/_index.md new file mode 100644 index 000000000..e2262acf2 --- /dev/null +++ b/site/content/reference/_index.md @@ -0,0 +1,9 @@ +--- +title: "Reference" +description: +weight: 400 +linkTitle: "Reference" +menu: + docs: + parent: NGINX Gateway Fabric +--- diff --git a/site/content/reference/cli-help.md b/site/content/reference/cli-help.md new file mode 100644 index 000000000..5bc5ae1c9 --- /dev/null +++ b/site/content/reference/cli-help.md @@ -0,0 +1,53 @@ +--- +title: "Command-line Reference Guide" +description: "Learn about the commands available for the executable file of the NGINX Gateway Fabric container." +weight: 100 +toc: true +docs: "DOCS-000" +--- + +## Static Mode + +This command configures NGINX for a single NGINX Gateway Fabric resource. + +_Usage_: + +```shell + gateway static-mode [flags] +``` + +### Flags + +{{< bootstrap-table "table table-bordered table-striped table-responsive" >}} +| Name | Type | Description | +|------------------------------|----------|-------------| +| _gateway-ctlr-name_ | _string_ | The name of the Gateway controller. The controller name must be in the form: `DOMAIN/PATH`. The controller's domain is `gateway.nginx.org`. | +| _gatewayclass_ | _string_ | The name of the GatewayClass resource. 
Every NGINX Gateway Fabric must have a unique corresponding GatewayClass resource. | +| _gateway_ | _string_ | The namespaced name of the Gateway resource to use. Must be of the form: `NAMESPACE/NAME`. If not specified, the control plane will process all Gateways for the configured GatewayClass. Among them, it will choose the oldest resource by creation timestamp. If the timestamps are equal, it will choose the resource that appears first in alphabetical order by {namespace}/{name}. | +| _config_ | _string_ | The name of the NginxGateway resource to be used for this controller's dynamic configuration. Lives in the same namespace as the controller. | +| _service_ | _string_ | The name of the service that fronts this NGINX Gateway Fabric pod. Lives in the same namespace as the controller. | +| _metrics-disable_ | _bool_ | Disable exposing metrics in the Prometheus format (Default: `false`). | +| _metrics-listen-port_ | _int_ | Sets the port where the Prometheus metrics are exposed. An integer between 1024 - 65535 (Default: `9113`) | +| _metrics-secure-serving_ | _bool_ | Configures if the metrics endpoint should be secured using https. Note that this endpoint will be secured with a self-signed certificate (Default `false`). | +| _update-gatewayclass-status_ | _bool_ | Update the status of the GatewayClass resource (Default: `true`). | +| _health-disable_ | _bool_ | Disable running the health probe server (Default: `false`). | +| _health-port_ | _int_ | Set the port where the health probe server is exposed. An integer between 1024 - 65535 (Default: `8081`). | +| _leader-election-disable_ | _bool_ | Disable leader election, which is used to avoid multiple replicas of the NGINX Gateway Fabric reporting the status of the Gateway API resources. If disabled, all replicas of NGINX Gateway Fabric will update the statuses of the Gateway API resources (Default: `false`). | +| _leader-election-lock-name_ | _string_ | The name of the leader election lock. 
A lease object with this name will be created in the same namespace as the controller (Default: `"nginx-gateway-leader-election-lock"`). |
+{{< /bootstrap-table >}}
+
+## Sleep
+
+This command sleeps for the specified duration, then exits.
+
+_Usage_:
+
+```shell
+  gateway sleep [flags]
+```
+
+{{< bootstrap-table "table table-bordered table-striped table-responsive" >}}
+| Name | Type | Description |
+|----------|-----------------|-------------------------------------------------------------------------------------------------------|
+| duration | `time.Duration` | Set the duration of sleep. Must be parsable by [`time.ParseDuration`](https://pkg.go.dev/time#ParseDuration). (Default: `30s`). |
+{{< /bootstrap-table >}}
diff --git a/site/content/reference/technical-specifications.md b/site/content/reference/technical-specifications.md
new file mode 100644
index 000000000..7a31f83dc
--- /dev/null
+++ b/site/content/reference/technical-specifications.md
@@ -0,0 +1,13 @@
+---
+title: "Technical Specifications"
+draft: false
+description: "NGINX Gateway Fabric technical specifications."
+weight: 200
+toc: true
+tags: [ "docs" ]
+docs: "DOCS-000"
+---
+
+See the NGINX Gateway Fabric technical specifications page:
+
+https://github.com/nginxinc/nginx-gateway-fabric#technical-specifications
diff --git a/site/content/releases.md b/site/content/releases.md
new file mode 100644
index 000000000..92f630639
--- /dev/null
+++ b/site/content/releases.md
@@ -0,0 +1,13 @@
+---
+title: "Releases"
+draft: false
+description: "NGINX Gateway Fabric releases."
+weight: 1200 +toc: true +tags: [ "docs" ] +docs: "DOCS-1359" +--- + +See the NGINX Gateway Fabric changelog page: + +https://github.com/nginxinc/nginx-gateway-fabric/blob/main/CHANGELOG.md diff --git a/site/go.mod b/site/go.mod new file mode 100644 index 000000000..36179c78d --- /dev/null +++ b/site/go.mod @@ -0,0 +1,5 @@ +module github.com/nginxinc/nginx-gateway-fabric/site + +go 1.21 + +require github.com/nginxinc/nginx-hugo-theme v0.40.0 // indirect diff --git a/site/go.sum b/site/go.sum new file mode 100644 index 000000000..ef95ed80d --- /dev/null +++ b/site/go.sum @@ -0,0 +1,6 @@ +github.com/nginxinc/nginx-hugo-theme v0.35.0 h1:7XB2GMy6qeJgKEJy9wOS3SYKYpfvLW3/H+UHRPLM4FU= +github.com/nginxinc/nginx-hugo-theme v0.35.0/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= +github.com/nginxinc/nginx-hugo-theme v0.39.0 h1:P1hOPpityVUOM5OyIpQZa1UJyuUunGSmz0oZh/GYSJM= +github.com/nginxinc/nginx-hugo-theme v0.39.0/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= +github.com/nginxinc/nginx-hugo-theme v0.40.0 h1:YP0I0+bRKcJ5WEb1s/OWcnlcvNvIcKscagJkCzsa+Vs= +github.com/nginxinc/nginx-hugo-theme v0.40.0/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= diff --git a/site/layouts/shortcodes/call-out.html b/site/layouts/shortcodes/call-out.html new file mode 100644 index 000000000..d8e591d83 --- /dev/null +++ b/site/layouts/shortcodes/call-out.html @@ -0,0 +1,3 @@ +
+
{{ .Get 1 }}
{{ .Inner | markdownify }}
+
diff --git a/site/layouts/shortcodes/custom-styles.html b/site/layouts/shortcodes/custom-styles.html new file mode 100644 index 000000000..8ec2a0bbf --- /dev/null +++ b/site/layouts/shortcodes/custom-styles.html @@ -0,0 +1,43 @@ + diff --git a/site/md-linkcheck-config.json b/site/md-linkcheck-config.json new file mode 100644 index 000000000..aff372717 --- /dev/null +++ b/site/md-linkcheck-config.json @@ -0,0 +1,13 @@ +{ + "replacementPatterns": [ + { + "pattern": "^/", + "replacement": "/" + } + ], + "ignorePatterns": [ + { + "pattern": "^.+localhost.+$|/.+yaml" + } + ] +} diff --git a/site/mdlint_conf.json b/site/mdlint_conf.json new file mode 100644 index 000000000..c35d58405 --- /dev/null +++ b/site/mdlint_conf.json @@ -0,0 +1,19 @@ +{ + "MD009": false, + "MD012": false, + "MD010": false, + "MD013": false, + "MD004": { + "style": "dash" + }, + "MD022": false, + "MD033": false, + "MD041": false, + "MD003": false, + "MD002": false, + "MD024": { + "siblings_only": true + }, + "MD046": false, + "MD001": false +} diff --git a/site/netlify.toml b/site/netlify.toml new file mode 100644 index 000000000..480908355 --- /dev/null +++ b/site/netlify.toml @@ -0,0 +1,34 @@ +[build] + base = "site/" + publish = "public" + +[context.production] + command = "make all" + +[context.docs-development] + command = "make all-dev" + +[context.docs-staging] + command = "make all-staging" + +[context.branch-deploy] + command = "make deploy-preview" + +[context.deploy-preview] + command = "make deploy-preview" + +[[headers]] + for = "/*" + [headers.values] + Access-Control-Allow-Origin = "https://docs.nginx.com" + +[[redirects]] + from = "/" + to = "/nginx-gateway-fabric/" + status = 301 + force = true + +[[redirects]] + from = "/nginx-gateway-fabric/*" + to = "/nginx-gateway-fabric/404.html" + status = 404 diff --git a/docs/images/advanced-routing.png b/site/static/img/advanced-routing.png similarity index 100% rename from docs/images/advanced-routing.png rename to 
site/static/img/advanced-routing.png diff --git a/docs/images/cert-manager-gateway-workflow.png b/site/static/img/cert-manager-gateway-workflow.png similarity index 100% rename from docs/images/cert-manager-gateway-workflow.png rename to site/static/img/cert-manager-gateway-workflow.png diff --git a/docs/images/ngf-high-level.png b/site/static/img/ngf-high-level.png similarity index 100% rename from docs/images/ngf-high-level.png rename to site/static/img/ngf-high-level.png diff --git a/docs/images/ngf-pod.png b/site/static/img/ngf-pod.png similarity index 100% rename from docs/images/ngf-pod.png rename to site/static/img/ngf-pod.png diff --git a/docs/images/route-all-traffic-app.png b/site/static/img/route-all-traffic-app.png similarity index 100% rename from docs/images/route-all-traffic-app.png rename to site/static/img/route-all-traffic-app.png diff --git a/site/static/img/route-all-traffic-config.png b/site/static/img/route-all-traffic-config.png new file mode 100644 index 000000000..01ae22103 Binary files /dev/null and b/site/static/img/route-all-traffic-config.png differ diff --git a/site/static/img/route-all-traffic-flow.png b/site/static/img/route-all-traffic-flow.png new file mode 100644 index 000000000..04a09c94c Binary files /dev/null and b/site/static/img/route-all-traffic-flow.png differ diff --git a/docs/images/src/README.md b/site/static/img/src/README.md similarity index 100% rename from docs/images/src/README.md rename to site/static/img/src/README.md diff --git a/docs/images/src/advanced-routing.mermaid b/site/static/img/src/advanced-routing.mermaid similarity index 100% rename from docs/images/src/advanced-routing.mermaid rename to site/static/img/src/advanced-routing.mermaid diff --git a/docs/images/src/route-all-traffic-app.mermaid b/site/static/img/src/route-all-traffic-app.mermaid similarity index 100% rename from docs/images/src/route-all-traffic-app.mermaid rename to site/static/img/src/route-all-traffic-app.mermaid diff --git 
a/docs/images/src/route-all-traffic-config.mermaid b/site/static/img/src/route-all-traffic-config.mermaid similarity index 100% rename from docs/images/src/route-all-traffic-config.mermaid rename to site/static/img/src/route-all-traffic-config.mermaid diff --git a/docs/images/src/route-all-traffic-flow.mermaid b/site/static/img/src/route-all-traffic-flow.mermaid similarity index 100% rename from docs/images/src/route-all-traffic-flow.mermaid rename to site/static/img/src/route-all-traffic-flow.mermaid diff --git a/tests/graceful-recovery/graceful-recovery.md b/tests/graceful-recovery/graceful-recovery.md index b99ad303d..feeab637e 100644 --- a/tests/graceful-recovery/graceful-recovery.md +++ b/tests/graceful-recovery/graceful-recovery.md @@ -136,7 +136,7 @@ if the configuration and version were correctly updated. 1. Switch over to a one-Node Kind cluster. Can run `make create-kind-cluster` from main directory. 2. Run steps 4-11 of the [Setup](#setup) section above using -[this guide](https://github.com/nginxinc/nginx-gateway-fabric/blob/main/docs/running-on-kind.md) for running on Kind. +[this guide](https://docs.nginx.com/nginx-gateway-fabric/installation/running-on-kind/) for running on Kind. 3. Ensure NGF and NGINX container logs are set up and traffic flows through the example application correctly. 4. Drain the Node of its resources. diff --git a/tests/zero-downtime-upgrades/zero-downtime-upgrades.md b/tests/zero-downtime-upgrades/zero-downtime-upgrades.md index c37d50c31..4dd57e757 100644 --- a/tests/zero-downtime-upgrades/zero-downtime-upgrades.md +++ b/tests/zero-downtime-upgrades/zero-downtime-upgrades.md @@ -118,7 +118,7 @@ Notes: ### Upgrade -1. Follow the [upgrade instructions](/docs/installation.md#upgrade-nginx-gateway-fabric-from-manifests) to: +1. Follow the [upgrade instructions](https://docs.nginx.com/nginx-gateway-fabric/installation/installing-ngf/manifests/) to: 1. 
Upgrade Gateway API version to the one that matches the supported version of the new release.
    2. Upgrade NGF CRDs.
2. Start sending traffic using wrk from tester VMs for 1 minute:
@@ -149,7 +149,7 @@ Notes:
    ```
3. **Immediately** upgrade NGF manifests by
-   following [upgrade instructions](/docs/installation.md#upgrade-nginx-gateway-fabric-from-manifests).
+   following [upgrade instructions](https://docs.nginx.com/nginx-gateway-fabric/installation/installing-ngf/manifests/).
   > Don't forget to modify the manifests to have 2 replicas and Pod affinity.
4. Ensure the new Pods are running and the old ones terminate.
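The final step above can be watched rather than eyeballed. A minimal sketch, assuming the deployment keeps the default `nginx-gateway` name and namespace from the installation instructions:

```shell
# Block until the new ReplicaSet has fully rolled out (or the deadline is exceeded).
kubectl rollout status deployment/nginx-gateway -n nginx-gateway

# Confirm the new Pods are Running and the old ones have terminated.
kubectl get pods -n nginx-gateway
```

Because `kubectl rollout status` exits non-zero on a failed rollout, it is also convenient for scripting this check in the test harness.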