diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
index 8837b7b0d4..32ae04676e 100644
--- a/.github/workflows/lint.yml
+++ b/.github/workflows/lint.yml
@@ -77,7 +77,7 @@ jobs:
       - name: Checkout Repository
         uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
-      - uses: DavidAnson/markdownlint-cli2-action@ed4dec634fd2ef689c7061d5647371d8248064f1 # v13.0.0
+      - uses: DavidAnson/markdownlint-cli2-action@455b6612a7b7a80f28be9e019b70abdd11696e4e # v14.0.0
         with:
           config: ${{ github.workspace }}/.markdownlint-cli2.yaml
           globs: "**/*.md"
diff --git a/conformance/README.md b/conformance/README.md
index 7dfc984117..52c97d86f5 100644
--- a/conformance/README.md
+++ b/conformance/README.md
@@ -88,6 +88,7 @@ make install-ngf-local-build
 ```

 #### *Option 2* Install NGINX Gateway Fabric from local already built image to configured kind cluster
+
 You can optionally skip the actual *build* step.

 ```makefile
@@ -101,6 +102,7 @@ make update-ngf-manifest PREFIX= TAG=
 ```

 #### *Option 3* Install NGINX Gateway Fabric from edge to configured kind cluster
+
 You can also skip the build NGF image step and prepare the environment to instead use the `edge` image

 ```makefile
@@ -148,6 +150,7 @@ make uninstall-ngf
 ```

 ### Step 6 - Revert changes to Go modules
+
 **Optional** Not required if you aren't running the `main` Gateway API tests.

 ```makefile
diff --git a/site/content/how-to/monitoring/troubleshooting.md b/site/content/how-to/monitoring/troubleshooting.md
index 5c68a3288b..ffce351540 100644
--- a/site/content/how-to/monitoring/troubleshooting.md
+++ b/site/content/how-to/monitoring/troubleshooting.md
@@ -17,6 +17,7 @@ This topic describes possible issues users might encounter when using NGINX Gate
 Depending on your environment's configuration, the control plane may not have the proper permissions to reload NGINX. The NGINX configuration will not be applied and you will see the following error in the _nginx-gateway_ logs: `failed to reload NGINX: failed to send the HUP signal to NGINX main: operation not permitted`

 #### Resolution
+
 To resolve this issue you will need to set `allowPrivilegeEscalation` to `true`.

 - If using Helm, you can set the `nginxGateway.securityContext.allowPrivilegeEscalation` value.
diff --git a/site/content/how-to/traffic-management/integrating-cert-manager.md b/site/content/how-to/traffic-management/integrating-cert-manager.md
index d417edec36..4b1bcdc9cb 100644
--- a/site/content/how-to/traffic-management/integrating-cert-manager.md
+++ b/site/content/how-to/traffic-management/integrating-cert-manager.md
@@ -22,6 +22,7 @@ Follow the steps in this guide to:
 - A DNS-resolvable domain name is required. It must resolve to the public endpoint of the NGINX Gateway Fabric deployment, and this public endpoint must be an external IP address or alias accessible over the internet. The process here will depend on your DNS provider. This DNS name will need to be resolvable from the Let’s Encrypt servers, which may require that you wait for the record to propagate before it will work.

 ## Overview
+
 {{cert-manager ACME challenge and certificate management with Gateway API}}

 The diagram above shows a simplified representation of the cert-manager ACME challenge and certificate issuance process using Gateway API. Please note that not all of the kubernetes objects created in this process are represented in this diagram.
@@ -141,6 +142,7 @@ cafe-secret kubernetes.io/tls 2 20s
 ```

 ### Deploy our application and HTTPRoute
+
 Now we can create our coffee deployment and service, and configure the routing rules. You can use the following manifest to create the deployment and service:

 ```yaml
diff --git a/site/content/overview/resource-validation.md b/site/content/overview/resource-validation.md
index 26407c4314..5dbea20740 100644
--- a/site/content/overview/resource-validation.md
+++ b/site/content/overview/resource-validation.md
@@ -114,6 +114,7 @@ Error from server: error when creating "some-gateway.yaml": admission webhook "v
 > If this happens, Step 3 will reject the invalid values.

 ### Step 3 - Webhook validation by NGF
+
 To ensure that the resources are validated with the webhook validation rules, even if the webhook is not running, NGF performs the same validation. However, NGF performs the validation *after* the Kubernetes API server accepts the resource.
diff --git a/tests/graceful-recovery/results/1.0.0/1.0.0.md b/tests/graceful-recovery/results/1.0.0/1.0.0.md
index 2fe3791e65..3ab75d33e3 100644
--- a/tests/graceful-recovery/results/1.0.0/1.0.0.md
+++ b/tests/graceful-recovery/results/1.0.0/1.0.0.md
@@ -45,9 +45,11 @@ Platform:"linux/arm64"}
 ## Tests

 ### Restart nginx-gateway container
+
 Passes test with no errors.

 ### Restart NGINX container
+
 The NGF Pod was unable to recover after sending a SIGKILL signal to the NGINX master process.
 The following appeared in the NGINX logs:
@@ -84,9 +86,11 @@ Issue Filed: https://github.com/nginxinc/nginx-gateway-fabric/issues/1108

 ### Restart Node with draining
+
 Passes test with no errors.

 ### Restart Node without draining
+
 The NGF Pod was unable to recover the majority of times after running `docker restart kind-control-plane`.
 The following appeared in the NGINX logs:
diff --git a/tests/reconfig/results/1.0.0/1.0.0.md b/tests/reconfig/results/1.0.0/1.0.0.md
index 30524405a1..89b9ceeb47 100644
--- a/tests/reconfig/results/1.0.0/1.0.0.md
+++ b/tests/reconfig/results/1.0.0/1.0.0.md
@@ -54,6 +54,7 @@ NGF deployment:

 ## NumResources -> Total Resources
+
 | NumResources | Gateways | Secrets | ReferenceGrants | Namespaces | application Pods | application Services | HTTPRoutes | Total Resources |
 | ------------ | -------- | ------- | --------------- | ---------- | ---------------- | -------------------- | ---------- | --------------- |
 | x            | 1        | 1       | 1               | x+1        | 2x               | 2x                   | 3x         |                 |
diff --git a/tests/zero-downtime-scaling/results/1.0.0/1.0.0.md b/tests/zero-downtime-scaling/results/1.0.0/1.0.0.md
index 66fdd4e113..8e444e4ace 100644
--- a/tests/zero-downtime-scaling/results/1.0.0/1.0.0.md
+++ b/tests/zero-downtime-scaling/results/1.0.0/1.0.0.md
@@ -330,6 +330,7 @@ Logs:
 - 288,528 200s

 ## 10 Node Cluster
+
 ### Scale Up Gradually

 HTTP wrk output:
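For reference, the troubleshooting resolution patched above points at the Helm value `nginxGateway.securityContext.allowPrivilegeEscalation`. A minimal sketch of a values override that applies it follows; the file name `values-override.yaml` and any release or chart names are illustrative assumptions, not taken from the diff.

```yaml
# Hypothetical Helm values override (values-override.yaml) illustrating the
# resolution described in the troubleshooting.md hunk above: permit the control
# plane to send the HUP signal to the NGINX master process so reloads succeed.
nginxGateway:
  securityContext:
    allowPrivilegeEscalation: true
```

The same setting could equivalently be passed on the command line with `--set nginxGateway.securityContext.allowPrivilegeEscalation=true` when installing or upgrading the release.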