Kong for Kubernetes is an open-source Ingress Controller for Kubernetes that offers API management capabilities with a plugin architecture.
This chart bootstraps all the components needed to run Kong on a Kubernetes cluster using the Helm package manager.
$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong/kong --generate-name
- Prerequisites
- Install
- Uninstall
- FAQs
- Kong Enterprise
- Deployment Options
- Configuration
- Kong Enterprise Parameters
- Prometheus Operator integration
- Changelog
- Upgrading
- Seeking help
- Kubernetes 1.17+. Older chart releases support older Kubernetes versions. Refer to the supported version matrix and the chart changelog for information about the default chart controller versions and Kubernetes versions supported by controller releases.
- PV provisioner support in the underlying infrastructure if persistence is needed for the Kong datastore.
To install Kong:
$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong/kong --generate-name --set ingressController.installCRDs=false
To uninstall/delete a Helm release `my-release`:
$ helm delete my-release
The command removes all the Kubernetes components associated with the chart and deletes the release.
Tip: List all releases using `helm list`
Please read the FAQs document.
If using Kong Enterprise, several additional steps are necessary before installing the chart:
- Set `enterprise.enabled` to `true` in the `values.yaml` file.
- Update values.yaml to use a Kong Enterprise image.
- Satisfy the two prerequisites below for Enterprise License and Enterprise Docker Registry.
- (Optional) Set a `password` environment variable to create the initial super-admin. Though not required, this is recommended for users that wish to use RBAC, as it cannot be done after initial setup.
Once you have these set, it is possible to install Kong Enterprise.
Please read through Kong Enterprise considerations to understand all settings that are enterprise specific.
Kong is a highly configurable piece of software that can be deployed in a number of different ways, depending on your use-case.
All combinations of various runtimes, databases and configuration methods are supported by this Helm chart. The recommended approach is to use the Ingress Controller based configuration along with DB-less mode.
The following sections detail the various high-level architecture options available:
Kong can run with or without a database (DB-less). By default, this chart installs Kong without a database.
You can control which database Kong uses via the `env.database` parameter. For more details, please read the env section.
When deploying Kong in DB-less mode (`env.database: "off"`) and without the Ingress Controller (`ingressController.enabled: false`), you have to provide a declarative configuration for Kong to run. You can provide an existing ConfigMap (`dblessConfig.configMap`) or place the whole configuration into the `dblessConfig.config` parameter in `values.yaml`. See the example configuration in the default values.yaml for more details. You can use `--set-file dblessConfig.config=/path/to/declarative-config.yaml` in Helm commands to substitute in a complete declarative config file.
Note that externally supplied ConfigMaps are not hashed or tracked in deployment annotations. Subsequent ConfigMap updates require a user-initiated deployment rollout to apply the new configuration: run `kubectl rollout restart deploy` after updating externally supplied ConfigMap content.
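For illustration, a minimal declarative configuration might look like the sketch below; the service name, upstream URL, route path, and `_format_version` are hypothetical and should be adjusted to your Kong version and services.
_format_version: "2.1"
services:
  - name: example-service          # hypothetical service
    url: http://example.internal:80
    routes:
      - name: example-route
        paths:
          - /example
You could then supply such a file to Helm with `--set-file dblessConfig.config=./kong.yaml`, as described above.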
The chart can optionally spawn a Postgres instance using Bitnami's Postgres chart as a sub-chart. Set `postgresql.enabled=true` to enable the sub-chart. Enabling this will auto-populate Postgres connection settings in Kong's environment.
The Postgres sub-chart is best used to quickly provision temporary environments without installing and configuring your database separately. For longer-lived environments, we recommend you manage your database outside the Kong Helm release.
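A rough values.yaml sketch for enabling the sub-chart is shown below; the keys under `postgresql` are passed to the Bitnami sub-chart, so their exact names depend on the sub-chart version bundled with your chart release.
postgresql:
  enabled: true
  auth:                              # Bitnami sub-chart values; key names vary by sub-chart version
    username: kong
    database: kong
env:
  database: postgres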
There are three different packages of Kong that are available:
- Kong Gateway: This is the open-source offering. It is a full-blown API Gateway and Ingress solution with a wide array of functionality. When Kong Gateway is combined with the Ingress based configuration method, you get Kong for Kubernetes. This is the default deployment for this Helm chart.
- Kong Enterprise K8S: This package builds on top of the open-source Gateway and bundles in all the Enterprise-only plugins as well. When Kong Enterprise K8S is combined with the Ingress based configuration method, you get Kong for Kubernetes Enterprise. This package also comes with 24x7 support from Kong Inc.
- Kong Enterprise: This is the full-blown Enterprise package, which packs all the Enterprise functionality like Manager, Portal, Vitals, etc. This package can't be run in DB-less mode.
The package to run can be changed via the `image.repository` and `image.tag` parameters. If you would like to run the Enterprise package, please read the Kong Enterprise Parameters section.
Kong can be configured via two methods:
- Ingress and CRDs: The configuration for Kong is done via `kubectl` and Kubernetes-native APIs. This is also known as Kong Ingress Controller or Kong for Kubernetes and is the default deployment pattern for this Helm chart. The configuration for Kong is managed via Ingress and a few Custom Resources. For more details, please read the documentation on Kong Ingress Controller. To configure and fine-tune the controller, please read the Ingress Controller Parameters section.
- Admin API: This is the traditional method of running and configuring Kong. By default, the Admin API of Kong is not exposed as a Service. This can be controlled via the `admin.enabled` and `env.admin_listen` parameters.
Note: although this section is titled "Separate admin and proxy nodes", this split release technique is generally applicable to any deployment with different types of Kong nodes. Separating Admin API and proxy nodes is one of the more common use cases for splitting across multiple releases, but you can also split releases for hybrid mode CP/DP nodes, split proxy and Developer Portal nodes, etc.
Users may wish to split their Kong deployment into multiple instances that only run some of Kong's services (i.e. you run `helm install` once for every instance type you wish to create).
To disable Kong services on an instance, you should set `SVC.enabled`, `SVC.http.enabled`, `SVC.tls.enabled`, and `SVC.ingress.enabled` all to `false`, where `SVC` is `proxy`, `admin`, `manager`, `portal`, or `portalapi`.
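For example, a values.yaml overlay for a proxy-only instance might disable the admin service entirely; this is a sketch, and the other services can be disabled in the same way.
admin:
  enabled: false
  http:
    enabled: false
  tls:
    enabled: false
  ingress:
    enabled: false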
The standard chart upgrade automation process assumes that there is only a single Kong release in the Kong cluster, and runs both `migrations up` and `migrations finish` jobs. To handle clusters split across multiple releases, you should:
- Upgrade one of the releases with `helm upgrade RELEASENAME -f values.yaml --set migrations.preUpgrade=true --set migrations.postUpgrade=false`.
- Upgrade all but one of the remaining releases with `helm upgrade RELEASENAME -f values.yaml --set migrations.preUpgrade=false --set migrations.postUpgrade=false`.
- Upgrade the final release with `helm upgrade RELEASENAME -f values.yaml --set migrations.preUpgrade=false --set migrations.postUpgrade=true`.
This ensures that all instances are using the new Kong package before running `kong migrations finish`.
Users should note that Helm supports supplying multiple values.yaml files, allowing you to separate shared configuration from instance-specific configuration. For example, you may have a shared values.yaml that contains environment variables and other common settings, and then several instance-specific values.yamls that contain service configuration only. You can then create releases with:
helm install proxy-only -f shared-values.yaml -f only-proxy.yaml kong/kong
helm install admin-only -f shared-values.yaml -f only-admin.yaml kong/kong
The chart can deploy releases that contain the controller only, with no Kong container, by setting `deployment.kong.enabled: false` in values.yaml. There are several controller settings that must be populated manually in this scenario and several settings that are useful when using multiple controllers:
- `ingressController.env.kong_admin_url` must be set to the Kong Admin API URL. If the Admin API is exposed by a service in the cluster, this should look something like `https://my-release-kong-admin.kong-namespace.svc:8444`.
- `ingressController.env.publish_service` must be set to the Kong proxy service, e.g. `namespace/my-release-kong-proxy`.
- `ingressController.ingressClass` should be set to a different value for each instance of the controller.
- `ingressController.env.admin_filter_tag` should be set to a different value for each instance of the controller.
- If using Kong Enterprise, `ingressController.env.kong_workspace` can optionally create configuration in a workspace other than `default`.
Standalone controllers require a database-backed Kong instance, as DB-less mode requires that a single controller generate a complete Kong configuration.
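Putting these settings together, a controller-only release's values.yaml might look like the following sketch; the release name, namespace, ingress class, and tag values are placeholders.
deployment:
  kong:
    enabled: false
ingressController:
  enabled: true
  ingressClass: kong-internal                  # unique per controller instance
  env:
    kong_admin_url: https://my-release-kong-admin.kong-namespace.svc:8444
    publish_service: kong-namespace/my-release-kong-proxy
    admin_filter_tag: kong-internal            # unique per controller instance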
Kong supports hybrid mode deployments as of Kong 2.0.0 and Kong Enterprise 2.1.0. These deployments split Kong nodes into control plane (CP) nodes, which provide the admin API and interact with the database, and data plane (DP) nodes, which provide the proxy and receive configuration from control plane nodes.
You can deploy hybrid mode Kong clusters by creating separate releases for each node type, i.e. use separate control and data plane values.yamls that are then installed separately. The control plane and data plane configuration sections below cover the values.yaml specifics for each.
Cluster certificates are not generated automatically. You must create a certificate and key pair for intra-cluster communication.
Hybrid mode uses TLS to secure the CP/DP node communication channel, and requires certificates for it. You can generate these either using `kong hybrid gen_cert` on a local Kong installation or using OpenSSL:
openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp384r1) \
-keyout /tmp/cluster.key -out /tmp/cluster.crt \
-days 1095 -subj "/CN=kong_clustering"
You must then place these certificates in a Secret:
kubectl create secret tls kong-cluster-cert --cert=/tmp/cluster.crt --key=/tmp/cluster.key
You must configure the control plane nodes to mount the certificate secret on the container filesystem and serve it from the cluster listen. In values.yaml:
secretVolumes:
  - kong-cluster-cert
env:
  role: control_plane
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
Furthermore, you must enable the cluster listen and Kubernetes Service, and should typically disable the proxy:
cluster:
  enabled: true
  tls:
    enabled: true
    servicePort: 8005
    containerPort: 8005

proxy:
  enabled: false
Enterprise users with Vitals enabled must also enable the cluster telemetry service:
clustertelemetry:
  enabled: true
  tls:
    enabled: true
    servicePort: 8006
    containerPort: 8006
If using the ingress controller, you must also specify the DP proxy service as its publish target to keep Ingress status information up to date:
ingressController:
  env:
    publish_service: hybrid/example-release-data-kong-proxy
Replace `hybrid` with your DP nodes' namespace and `example-release-data` with the name of the DP release.
Data plane configuration also requires the certificate and `role` configuration, and the database should always be set to `off`. You must also trust the cluster certificate and indicate what hostname/port Kong should use to find control plane nodes.
Though not strictly required, you should disable the admin service (it will not work on DP nodes anyway, but should be disabled to avoid creating an invalid Service resource).
secretVolumes:
  - kong-cluster-cert
admin:
  enabled: false
env:
  role: data_plane
  database: "off"
  cluster_cert: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_cert_key: /etc/secrets/kong-cluster-cert/tls.key
  lua_ssl_trusted_certificate: /etc/secrets/kong-cluster-cert/tls.crt
  cluster_control_plane: control-plane-release-name-kong-cluster.hybrid.svc.cluster.local:8005
  cluster_telemetry_endpoint: control-plane-release-name-kong-clustertelemetry.hybrid.svc.cluster.local:8006 # Enterprise-only
Note that the `cluster_control_plane` value will differ depending on your environment. `control-plane-release-name` will change to your CP release name, and `hybrid` will change to whatever namespace it resides in. See Kubernetes' documentation on Service DNS for more detail.
Earlier versions of this chart (<2.0) created CRDs associated with the ingress controller as part of the release. This raised two challenges:
- Multiple releases of the chart would conflict with one another, as each would attempt to create its own set of CRDs.
- Because deleting a CRD also deletes any custom resources associated with it, deleting a release of the chart could destroy user configuration without providing any means to restore it.
Helm 3 introduced a simplified CRD management method that is safer, but requires some manual work when a chart adds or modifies CRDs: CRDs are created on install if they are not already present, but are not modified during release upgrades or deletes. Our chart release upgrade instructions call out when manual action is necessary to update CRDs. This CRD handling strategy is recommended for most users.
Some users may wish to manage their CRDs automatically. If you manage your CRDs this way, we strongly recommend that you back up all associated custom resources in the event you need to recover from unintended CRD deletion.
While Helm 3's CRD management system is recommended, there is no simple means of migrating away from release-managed CRDs if you previously installed your release with the old system (you would need to back up your existing custom resources, delete your release, reinstall, and restore your custom resources after). As such, the chart detects if you currently use release-managed CRDs and continues to use the old CRD templates when using chart version 2.0+. If you do (your resources will have a `meta.helm.sh/release-name` annotation), we strongly recommend that you back up all associated custom resources in the event you need to recover from unintended CRD deletion.
The chart is able to deploy initContainers along with Kong. This can be very useful when you need to set up additional custom initialization. The `deployment.initContainers` field in values.yaml takes an array of objects that get appended as-is to the existing `spec.template.spec.initContainers` array in the Kong deployment resource.
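As a hedged example, an init container that waits for a hypothetical dependency before Kong starts; the container name, image, and endpoint are placeholders.
deployment:
  initContainers:
    - name: wait-for-dependency              # hypothetical init container
      image: busybox:1.36
      command:
        - sh
        - -c
        - until wget -q -O /dev/null http://example-dependency:8080/healthz; do sleep 2; done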
The chart is able to inject host aliases into containers. This can be very useful when you need to resolve an additional domain name that can't be looked up directly from the DNS server. The `deployment.hostAliases` field in values.yaml takes an array of objects that is set as the `spec.template.spec.hostAliases` field in the Kong deployment resource.
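A sketch of a host alias entry, using a hypothetical internal hostname and address:
deployment:
  hostAliases:
    - ip: "10.0.0.10"                        # hypothetical address
      hostnames:
        - "internal.example.com"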
The chart can deploy additional containers along with the Kong and Ingress Controller containers, sometimes referred to as "sidecar containers". This can be useful to include network proxies or logging services along with Kong. The `deployment.sidecarContainers` field in values.yaml takes an array of objects that get appended as-is to the existing `spec.template.spec.containers` array in the Kong deployment resource.
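As an illustration, a minimal sidecar could be added like this; the name, image, and command are placeholders.
deployment:
  sidecarContainers:
    - name: example-sidecar                        # hypothetical sidecar container
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]   # placeholder command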
In the same way that sidecar containers are attached to the Kong and Ingress Controller containers, the chart can add sidecars to the containers that run the migrations. The `migrations.sidecarContainers` field in values.yaml takes an array of objects that get appended as-is to the existing `spec.template.spec.containers` array in the pre-upgrade-migrations, post-upgrade-migrations, and migrations resources. Keep in mind that these containers should be finite and should terminate along with the migration containers; otherwise the migration itself may finish while the job never completes, and the chart deployment will reach its timeout.
The chart can deploy additional volumes along with Kong. This can be useful to include additional volumes required during the initialization phase (InitContainer). The `deployment.userDefinedVolumes` field in values.yaml takes an array of objects that get appended as-is to the existing `spec.template.spec.volumes` array in the Kong deployment resource.
The chart can mount volumes defined in the user-defined volumes section or elsewhere. The `deployment.userDefinedVolumeMounts` field in values.yaml takes an array of objects that get appended as-is to the existing `spec.template.spec.containers[].volumeMounts` and `spec.template.spec.initContainers[].volumeMounts` arrays in the Kong deployment resource.
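Taken together, a sketch of defining a volume and mounting it; the ConfigMap name and mount path are hypothetical.
deployment:
  userDefinedVolumes:
    - name: example-config                   # hypothetical volume backed by a ConfigMap
      configMap:
        name: example-config
  userDefinedVolumeMounts:
    - name: example-config
      mountPath: /opt/example                # hypothetical mount path
      readOnly: true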
Setting `deployment.daemonset: true` deploys Kong using a DaemonSet controller instead of a Deployment controller. This runs a Kong Pod on every kubelet in the Kubernetes cluster.
The chart is able to inject custom DNS configuration into containers. This can be useful when you have an EKS cluster with NodeLocal DNSCache configured and attach AWS security groups directly to pods using the security groups for pods feature.
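A sketch using the `dnsPolicy` and `dnsConfig` values listed in the table below; the nameserver address is a placeholder for a NodeLocal DNSCache address.
dnsPolicy: "None"
dnsConfig:
  nameservers:
    - 169.254.20.10                          # placeholder NodeLocal DNSCache address
  searches:
    - svc.cluster.local
    - cluster.local
  options:
    - name: ndots
      value: "5"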
Several example values.yaml are available in the example-values directory.
Parameter | Description | Default |
---|---|---|
image.repository | Kong image | kong |
image.tag | Kong image version | 2.5 |
image.pullPolicy | Image pull policy | IfNotPresent |
image.pullSecrets | Image pull secrets | null |
replicaCount | Kong instance count. It has no effect when autoscaling.enabled is set to true | 1 |
plugins | Install custom plugins into Kong via ConfigMaps or Secrets | {} |
env | Additional Kong configurations | |
customEnv | Custom Environment variables without KONG_ prefix | |
migrations.preUpgrade | Run "kong migrations up" jobs | true |
migrations.postUpgrade | Run "kong migrations finish" jobs | true |
migrations.annotations | Annotations for migration job pods | {"sidecar.istio.io/inject": "false"} |
migrations.jobAnnotations | Additional annotations for migration jobs | {} |
migrations.backoffLimit | Override the system backoffLimit | {} |
waitImage.enabled | Spawn init containers that wait for the database before starting Kong | true |
waitImage.repository | Image used to wait for database to become ready. Uses the Kong image if none set | |
waitImage.tag | Tag for image used to wait for database to become ready | |
waitImage.pullPolicy | Wait image pull policy | IfNotPresent |
postgresql.enabled | Spin up a new postgres instance for Kong | false |
dblessConfig.configMap | Name of an existing ConfigMap containing the kong.yml file. This must have the key kong.yml. | `` |
dblessConfig.config | Yaml configuration file for the dbless (declarative) configuration of Kong | see in values.yaml |
The various `SVC.*` parameters below are common to the various Kong services (the admin API, proxy, Kong Manager, the Developer Portal, and the Developer Portal API) and define their listener configuration, K8S Service properties, and K8S Ingress properties. Defaults are listed only if consistent across the individual services: see values.yaml for their individual default values.
`SVC` below can be substituted with each of:
- `proxy`
- `udpProxy`
- `admin`
- `manager`
- `portal`
- `portalapi`
- `cluster`
- `clustertelemetry`
- `status`
`status` is intended for internal use within the cluster. Unlike other services it cannot be exposed externally, and cannot create a Kubernetes service or ingress. It supports the settings under `SVC.http` and `SVC.tls` only.
`cluster` is used on hybrid mode control plane nodes. It does not support the `SVC.http.*` settings (cluster communications must be TLS-only) or the `SVC.ingress.*` settings (cluster communication requires TLS client authentication, which cannot pass through an ingress proxy). `clustertelemetry` is similar, and is used when Vitals is enabled on Kong Enterprise control plane nodes.
`udpProxy` is used for UDP stream listens (Kubernetes does not yet support mixed TCP/UDP LoadBalancer Services). It does not support the `http`, `tls`, or `ingress` sections, as it is used only for stream listens.
Parameter | Description | Default |
---|---|---|
SVC.enabled | Create Service resource for SVC (admin, proxy, manager, etc.) | |
SVC.http.enabled | Enables http on the service | |
SVC.http.servicePort | Service port to use for http | |
SVC.http.containerPort | Container port to use for http | |
SVC.http.nodePort | Node port to use for http | |
SVC.http.hostPort | Host port to use for http | |
SVC.http.parameters | Array of additional listen parameters | [] |
SVC.tls.enabled | Enables TLS on the service | |
SVC.tls.containerPort | Container port to use for TLS | |
SVC.tls.servicePort | Service port to use for TLS | |
SVC.tls.nodePort | Node port to use for TLS | |
SVC.tls.hostPort | Host port to use for TLS | |
SVC.tls.overrideServiceTargetPort | Override service port to use for TLS without touching Kong containerPort | |
SVC.tls.parameters | Array of additional listen parameters | ["http2"] |
SVC.type | k8s service type. Options: NodePort, ClusterIP, LoadBalancer | |
SVC.clusterIP | k8s service clusterIP | |
SVC.loadBalancerSourceRanges | Limit service access to CIDRs if set and service type is LoadBalancer | [] |
SVC.loadBalancerIP | Reuse an existing ingress static IP for the service | |
SVC.externalIPs | IPs for which nodes in the cluster will also accept traffic for the service | [] |
SVC.externalTrafficPolicy | k8s service's externalTrafficPolicy. Options: Cluster, Local | |
SVC.ingress.enabled | Enable ingress resource creation (works with SVC.type=ClusterIP) | false |
SVC.ingress.ingressClassName | Set the ingressClassName to associate this Ingress with an IngressClass | |
SVC.ingress.tls | Name of secret resource, containing TLS secret | |
SVC.ingress.hostname | Ingress hostname | "" |
SVC.ingress.path | Ingress path. | / |
SVC.ingress.annotations | Ingress annotations. See documentation for your ingress controller for details | {} |
SVC.annotations | Service annotations | {} |
SVC.labels | Service labels | {} |
The proxy configuration additionally supports creating stream listens. These are configured using an array of objects under `proxy.stream` and `udpProxy.stream`, as sketched after the table below:
Parameter | Description | Default |
---|---|---|
protocol | The listen protocol, either "TCP" or "UDP" | |
containerPort | Container port to use for a stream listen | |
servicePort | Service port to use for a stream listen | |
nodePort | Node port to use for a stream listen | |
hostPort | Host port to use for a stream listen | |
parameters | Array of additional listen parameters | [] |
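For example, a single TCP stream listen could be sketched as follows; the port numbers are illustrative.
proxy:
  stream:
    - protocol: TCP
      containerPort: 9000
      servicePort: 9000
      parameters: []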
All of the following properties are nested under the `ingressController` section of the `values.yaml` file:
Parameter | Description | Default |
---|---|---|
enabled | Deploy the ingress controller, rbac and crd | true |
image.repository | Docker image with the ingress controller | kong/kubernetes-ingress-controller |
image.tag | Version of the ingress controller | 2.0 |
image.effectiveSemver | Version of the ingress controller used for version-specific features when image.tag is not a valid semantic version | |
readinessProbe | Kong ingress controller's readiness probe | |
livenessProbe | Kong ingress controller's liveness probe | |
installCRDs | Creates managed CRDs. | false |
env | Specify Kong Ingress Controller configuration via environment variables | |
ingressClass | The name of this controller's ingressClass | kong |
ingressClassAnnotations | The ingress-class value for controller | kong |
args | List of ingress-controller cli arguments | [] |
watchNamespaces | List of namespaces to watch. Watches all namespaces if empty | [] |
admissionWebhook.enabled | Whether to enable the validating admission webhook | false |
admissionWebhook.failurePolicy | How unrecognized errors from the admission endpoint are handled (Ignore or Fail) | Fail |
admissionWebhook.port | The port the ingress controller will listen on for admission webhooks | 8080 |
admissionWebhook.certificate.provided | Whether to generate the admission webhook certificate if not provided | false |
admissionWebhook.certificate.secretName | Name of the TLS secret for the provided webhook certificate | |
admissionWebhook.certificate.caBundle | PEM encoded CA bundle which will be used to validate the provided webhook certificate |
For a complete list of all configuration values you can set in the `env` section, please read the Kong Ingress Controller's configuration document.
Parameter | Description | Default |
---|---|---|
namespace | Namespace to deploy chart resources | |
deployment.kong.enabled | Enable or disable deploying Kong | true |
deployment.initContainers | Create initContainers. Please go to Kubernetes doc for the spec of the initContainers | |
deployment.daemonset | Use a DaemonSet instead of a Deployment | false |
deployment.hostNetwork | Enable hostNetwork, which binds the ports to the host | false |
deployment.userDefinedVolumes | Create volumes. Please go to Kubernetes doc for the spec of the volumes | |
deployment.userDefinedVolumeMounts | Create volumeMounts. Please go to Kubernetes doc for the spec of the volumeMounts | |
deployment.serviceAccount.create | Create Service Account for the Deployment / Daemonset and the migrations | true |
deployment.serviceAccount.name | Name of the Service Account, a default one will be generated if left blank. | "" |
deployment.serviceAccount.annotations | Annotations for the Service Account | {} |
autoscaling.enabled | Set this to true to enable autoscaling | false |
autoscaling.minReplicas | Set minimum number of replicas | 2 |
autoscaling.maxReplicas | Set maximum number of replicas | 5 |
autoscaling.targetCPUUtilizationPercentage | Target percentage for when autoscaling takes effect. Only used if the cluster doesn't support autoscaling/v2beta2 | 80 |
autoscaling.metrics | Metrics used for autoscaling for clusters that support autoscaling/v2beta2 | See values.yaml |
updateStrategy | update strategy for deployment | {} |
readinessProbe | Kong readiness probe | |
livenessProbe | Kong liveness probe | |
lifecycle | Proxy container lifecycle hooks | see values.yaml |
terminationGracePeriodSeconds | Sets the termination grace period for Deployment pods | 30 |
affinity | Node/pod affinities | |
topologySpreadConstraints | Control how Pods are spread across cluster among failure-domains | |
nodeSelector | Node labels for pod assignment | {} |
deploymentAnnotations | Annotations to add to deployment | see values.yaml |
podAnnotations | Annotations to add to each pod | {} |
podLabels | Labels to add to each pod | {} |
resources | Pod resource requests & limits | {} |
tolerations | List of node taints to tolerate | [] |
dnsPolicy | Pod dnsPolicy | |
dnsConfig | Pod dnsConfig | |
podDisruptionBudget.enabled | Enable PodDisruptionBudget for Kong | false |
podDisruptionBudget.maxUnavailable | Represents the maximum number of Pods that can be unavailable (integer or percentage) | 50% |
podDisruptionBudget.minAvailable | Represents the number of Pods that must be available (integer or percentage) | |
podSecurityPolicy.enabled | Enable podSecurityPolicy for Kong | false |
podSecurityPolicy.spec | Collection of PodSecurityPolicy settings | |
priorityClassName | Set pod scheduling priority class for Kong pods | "" |
secretVolumes | Mount given secrets as a volume in Kong container to override default certs and keys. | [] |
securityContext | Set the securityContext for Kong Pods | {} |
containerSecurityContext | Set the securityContext for Containers | {} |
serviceMonitor.enabled | Create ServiceMonitor for Prometheus Operator | false |
serviceMonitor.interval | Scraping interval | 30s |
serviceMonitor.namespace | Where to create ServiceMonitor | |
serviceMonitor.labels | ServiceMonitor labels | {} |
serviceMonitor.targetLabels | ServiceMonitor targetLabels | {} |
serviceMonitor.honorLabels | ServiceMonitor honorLabels | {} |
serviceMonitor.metricRelabelings | ServiceMonitor metricRelabelings | {} |
extraConfigMaps | ConfigMaps to add to mounted volumes | [] |
extraSecrets | Secrets to add to mounted volumes | [] |
Note: If you are using `deployment.hostNetwork` to bind to lower ports (< 1024), which may be the desired option (ports 80 and 443), you also need to tweak the `containerSecurityContext` configuration as in the example:
containerSecurityContext: # run as root to bind to lower ports
  capabilities:
    add: [NET_BIND_SERVICE]
  runAsGroup: 0
  runAsNonRoot: false
  runAsUser: 0
The `env` section can be used to configure all properties of Kong. Any key-value pair put under this section translates to environment variables used to control Kong's configuration. Every key is prefixed with `KONG_` and upper-cased before setting the environment variable.
Furthermore, all `kong.env` parameters can also accept a mapping instead of a value to ensure the parameters can be set through ConfigMaps and Secrets. An example:
kong:
  env: # load PG password from a secret dynamically
    pg_user: kong
    pg_password:
      valueFrom:
        secretKeyRef:
          key: kong
          name: postgres
    nginx_worker_processes: "2"
For complete list of Kong configurations please check the Kong configuration docs.
Tip: You can use the default values.yaml
The `customEnv` section can be used to configure custom properties other than Kong's own configuration settings. Any key-value pair placed under this section translates to environment variables that can be used in Kong's plugin configurations. Every key is upper-cased before setting the environment variable.
An example:
kong:
  customEnv:
    api_token:
      valueFrom:
        secretKeyRef:
          key: token
          name: api_key
    client_name: testClient
The `extraLabels` section can be used to configure some extra labels that will be added to each Kubernetes object generated. For example, you can add the `acme.com/some-key: some-value` label to each Kubernetes object by putting the following in your Helm values:
extraLabels:
  acme.com/some-key: some-value
Kong Enterprise requires some additional configuration not needed when using Kong Open-Source. To use Kong Enterprise, at the minimum, you need to do the following:
- Set `enterprise.enabled` to `true` in the `values.yaml` file.
- Update values.yaml to use a Kong Enterprise image.
- Satisfy the two prerequisites below for Enterprise License and Enterprise Docker Registry.
- (Optional) Set a `password` environment variable to create the initial super-admin. Though not required, this is recommended for users that wish to use RBAC, as it cannot be done after initial setup.
Once you have these set, it is possible to install Kong Enterprise, but please make sure to review the below sections for other settings that you should consider configuring before installing Kong.
Some of the more important configuration is grouped in sections under the `.enterprise` key in values.yaml, though most enterprise-specific configuration can be placed under the `.env` key.
Kong Enterprise 2.3+ can run with or without a license. If you wish to run 2.3+ without a license, you can skip this step and leave `enterprise.license_secret` unset. In this case only a limited subset of features will be available. Earlier versions require a license.
If you have paid for a license, but you do not have a copy of yours, please contact Kong Support. Once you have it, you will need to store it in a Secret:
$ kubectl create secret generic kong-enterprise-license --from-file=license=./license.json
Set the secret name in `values.yaml`, in the `.enterprise.license_secret` key.
Please ensure the above secret is created in the same namespace in which
Kong is going to be deployed.
Kong Enterprise versions 2.2 and earlier use a private Docker registry and require a pull secret. If you use 2.3 or newer, you can skip this step.
You should have received credentials to log into Docker Hub after purchasing Kong Enterprise. After logging in, you can retrieve your API key from <your username> > Edit Profile > API Key. Use this to create registry secrets:
$ kubectl create secret docker-registry kong-enterprise-edition-docker \
--docker-server=hub.docker.io \
--docker-username=<username-provided-to-you> \
--docker-password=<password-provided-to-you>
secret/kong-enterprise-edition-docker created
Set the secret names in `values.yaml` in the `image.pullSecrets` section.
Again, please ensure the above secret is created in the same namespace in which
Kong is going to be deployed.
Kong Enterprise adds two GUIs, Kong Manager and the Kong Developer Portal, that must know where other Kong services (namely the admin and files APIs) can be accessed in order to function properly. Kong's default behavior for locating these services, absent explicit configuration, is unlikely to work in common Kubernetes environments. Because of this, you should set each of `admin_gui_url`, `admin_api_uri`, `proxy_url`, `portal_api_url`, `portal_gui_host`, and `portal_gui_protocol` under the `.env` key in values.yaml to locations where each of their respective services can be accessed, to ensure that Kong services can locate one another and properly set CORS headers. See the Property Reference documentation for more details on these settings.
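A hedged sketch of these settings under the `env` key, using placeholder hostnames:
env:
  admin_gui_url: https://manager.example.com       # placeholder hostnames throughout
  admin_api_uri: https://admin.example.com
  proxy_url: https://proxy.example.com
  portal_api_url: https://portal-api.example.com
  portal_gui_host: portal.example.com
  portal_gui_protocol: https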
You can create a default RBAC superuser when initially running `helm install` by setting a `password` environment variable under `env` in values.yaml. It should be a reference to a secret key containing your desired password. This will create a `kong_admin` admin whose token and basic-auth password match the value in the secret. For example:
env:
  password:
    valueFrom:
      secretKeyRef:
        name: kong-enterprise-superuser-password
        key: password
If using the ingress controller, it needs access to the token as well, by specifying `kong_admin_token` in its environment variables:
ingressController:
  env:
    kong_admin_token:
      valueFrom:
        secretKeyRef:
          name: kong-enterprise-superuser-password
          key: password
Although the above examples both use the initial super-admin, we recommend creating a less-privileged RBAC user for the controller after installing. It needs at least workspace admin privileges in its workspace (`default` by default, settable by adding a `workspace` variable under `ingressController.env`). Once you create the controller user, add its token to a secret and update your `kong_admin_token` variable to use it. Remove the `password` variable from Kong's environment variables and the secret containing the super-admin token after.
Login sessions for Kong Manager and the Developer Portal make use of the Kong Sessions plugin. When configured via values.yaml, their configuration must be stored in Secrets, as it contains an HMAC key.
Kong Manager's session configuration must be configured via values.yaml, whereas this is optional for the Developer Portal on versions 0.36+. Providing Portal session configuration in values.yaml provides the default session configuration, which can be overridden on a per-workspace basis.
$ cat admin_gui_session_conf
{"cookie_name":"admin_session","cookie_samesite":"off","secret":"admin-secret-CHANGEME","cookie_secure":true,"storage":"kong"}
$ cat portal_session_conf
{"cookie_name":"portal_session","cookie_samesite":"off","secret":"portal-secret-CHANGEME","cookie_secure":true,"storage":"kong"}
$ kubectl create secret generic kong-session-config --from-file=admin_gui_session_conf --from-file=portal_session_conf
secret/kong-session-config created
The exact plugin settings may vary in your environment. The `secret` should always be changed for both configurations.
After creating your secret, set its name in values.yaml in `.enterprise.rbac.session_conf_secret`. If you create a Portal configuration, add it at `env.portal_session_conf` using a secretKeyRef.
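A sketch of the corresponding values.yaml settings, assuming the `kong-session-config` Secret created above:
enterprise:
  rbac:
    session_conf_secret: kong-session-config
env:
  portal_session_conf:
    valueFrom:
      secretKeyRef:
        name: kong-session-config
        key: portal_session_conf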
Email is used to send invitations for Kong Admins and Developers.
Email invitations rely on setting a number of SMTP settings at once. For convenience, these are grouped under the `.enterprise.smtp` key in values.yaml.
Setting `.enterprise.smtp.disabled: true` will set `KONG_SMTP_MOCK=on` and allow Admin/Developer invites to proceed without sending email. Note, however, that these have limited functionality without sending email.
If your SMTP server requires authentication, you must provide the `username` and `smtp_password_secret` keys under `.enterprise.smtp.auth`. `smtp_password_secret` must be a Secret containing an `smtp_password` key whose value is your SMTP password.
By default, SMTP uses AUTH PLAIN when you provide credentials. If your provider requires AUTH LOGIN, set `smtp_auth_type: login`.
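A sketch of SMTP settings using the keys described above; the username and Secret name are placeholders, and additional SMTP keys (server host, port, etc.) live under `.enterprise.smtp` in values.yaml.
enterprise:
  smtp:
    disabled: false
    auth:
      username: smtp-user                        # placeholder SMTP username
      smtp_password_secret: kong-smtp-password   # Secret containing an smtp_password key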
The chart can configure a ServiceMonitor resource to instruct the Prometheus Operator to collect metrics from Kong Pods. To enable this, set `serviceMonitor.enabled=true` in `values.yaml`.
Kong exposes memory usage and connection counts by default. You can enable traffic metrics for routes and services by configuring the Prometheus plugin.
The ServiceMonitor requires an `enable-metrics: "true"` label on one of the chart's Services to collect data. By default, this label is set on the proxy Service. It should only be set on a single chart Service to avoid duplicate data. If you disable the proxy Service (e.g. on a hybrid control plane instance or Portal-only instance) and still wish to collect memory usage metrics, add this label to another Service, e.g. on the admin API Service:
admin:
  labels:
    enable-metrics: "true"
If you run into an issue, bug or have a question, please reach out to the Kong community via Kong Nation. Please do not open issues in this repository as the maintainers will not be notified and won't respond.