feat(doc): Create a separate page dedicated to Integration scaling
astefanutti committed Nov 18, 2020
1 parent f581116 commit 671d62a
Showing 3 changed files with 69 additions and 34 deletions.
2 changes: 2 additions & 0 deletions docs/modules/ROOT/nav.adoc
@@ -26,6 +26,8 @@
** xref:observability/monitoring.adoc[Monitoring]
*** xref:observability/operator.adoc[Operator Monitoring]
*** xref:observability/integration.adoc[Integration Monitoring]
* Scaling
** xref:scaling/integration.adoc[Integration Scaling]
* xref:traits:traits.adoc[Traits]
// Start of autogenerated code - DO NOT EDIT! (trait-nav)
** xref:traits:3scale.adoc[3scale]
34 changes: 0 additions & 34 deletions docs/modules/ROOT/pages/observability/integration.adoc
@@ -106,37 +106,3 @@ EOF

More information can be found in the Prometheus Operator https://github.com/coreos/prometheus-operator/blob/v0.38.0/Documentation/user-guides/alerting.md[Alerting] user guide.
You can also find more details in https://docs.openshift.com/container-platform/4.4/monitoring/monitoring-your-own-services.html#creating-alerting-rules_monitoring-your-own-services[Creating alerting rules] from the OpenShift documentation.

== Autoscaling

Integration metrics can be exported for horizontal pod autoscaling (HPA), using the https://github.com/DirectXMan12/k8s-prometheus-adapter[custom metrics Prometheus adapter].
If you have an OpenShift cluster, you can follow https://docs.openshift.com/container-platform/4.4/monitoring/exposing-custom-application-metrics-for-autoscaling.html[Exposing custom application metrics for autoscaling] to set it up.

Assuming you have the Prometheus adapter up and running, you can create a `HorizontalPodAutoscaler` resource, e.g.:

[source,sh]
----
$ cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: camel-k-autoscaler
spec:
scaleTargetRef:
apiVersion: camel.apache.org/v1
kind: Integration
name: example
minReplicas: 1
maxReplicas: 10
metrics:
- type: Pods
pods:
metric:
name: application_camel_context_exchanges_inflight_count
target:
type: AverageValue
averageValue: 1k
EOF
----

More information can be found in https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler] from the Kubernetes documentation.
67 changes: 67 additions & 0 deletions docs/modules/ROOT/pages/scaling/integration.adoc
@@ -0,0 +1,67 @@
[[integration-scaling]]
= Camel K Integration Scaling

== Manual Scaling

An Integration can be scaled using the `kubectl scale` command, e.g.:

[source,sh]
----
$ kubectl scale it <integration_name> --replicas <number_of_replicas>
----

This can also be achieved by editing the Integration resource directly, e.g.:

[source,sh]
----
$ kubectl patch it <integration_name> -p '{"spec":{"replicas":<number_of_replicas>}}'
----
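
The same field can also be changed interactively, e.g. with `kubectl edit` (shown as a sketch; it opens the resource in your default editor so you can adjust `.spec.replicas` by hand):

[source,sh]
----
$ kubectl edit it <integration_name>
----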

The Integration also reports its number of replicas in the `.status.replicas` field, e.g.:

[source,sh]
----
$ kubectl get it <integration_name> -o jsonpath='{.status.replicas}'
----
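
Behind the scenes, the replica count is applied to the workload the operator manages for the Integration. Assuming the default deployment strategy, where the operator creates a `Deployment` named after the Integration, the effect can be checked with, e.g.:

[source,sh]
----
$ kubectl get deployment <integration_name> -o jsonpath='{.spec.replicas}'
----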

== Autoscaling with HPA

An Integration can automatically scale based on its CPU utilization and custom metrics using https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[horizontal pod autoscaling (HPA)].

For example, executing the following command creates an _autoscaler_ for the Integration, with target CPU utilization set to 80%, and the number of replicas between 2 and 5:

[source,sh]
----
$ kubectl autoscale it <integration_name> --min=2 --max=5 --cpu-percent=80
----
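
The command above creates a `HorizontalPodAutoscaler` resource targeting the Integration. Assuming the default naming, where the autoscaler takes the name of the scaled resource, its state can be inspected with, e.g.:

[source,sh]
----
$ kubectl get hpa <integration_name>
----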

xref:observability/integration.adoc[Integration metrics] can also be exported for horizontal pod autoscaling, using the https://github.com/DirectXMan12/k8s-prometheus-adapter[custom metrics Prometheus adapter], so that the Integration can scale automatically based on its own metrics.

If you have an OpenShift cluster, you can follow https://docs.openshift.com/container-platform/4.4/monitoring/exposing-custom-application-metrics-for-autoscaling.html[Exposing custom application metrics for autoscaling] to set it up.
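
Once the adapter is deployed, you can check that the Integration metrics are actually served before creating an autoscaler, e.g. by querying the custom metrics API (assuming the adapter registers the usual `custom.metrics.k8s.io/v1beta1` API group):

[source,sh]
----
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
----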

Assuming you have the Prometheus adapter up and running, you can create a `HorizontalPodAutoscaler` resource based on a particular Integration metric, e.g.:

[source,yaml]
----
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: camel-k-autoscaler
spec:
scaleTargetRef:
apiVersion: camel.apache.org/v1
kind: Integration
name: example
minReplicas: 1
maxReplicas: 10
metrics:
- type: Pods
pods:
metric:
name: application_camel_context_exchanges_inflight_count
target:
type: AverageValue
averageValue: 1k
----
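
As a usage sketch, assuming the manifest above is saved as `hpa.yaml`, it can be applied and the autoscaler observed while it adjusts the number of replicas, e.g.:

[source,sh]
----
$ kubectl apply -f hpa.yaml
$ kubectl get hpa camel-k-autoscaler --watch
----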

More information can be found in https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/[Horizontal Pod Autoscaler] from the Kubernetes documentation.
