Commit
Merge remote-tracking branch 'origin/v2' into feature/kafka-scaler-offset-reset-policy
Showing 134 changed files with 9,438 additions and 2,495 deletions.
@@ -0,0 +1,46 @@
name: master build
on:
  push:
    branches:
      - v2
jobs:
  validate:
    name: Validate
    runs-on: ubuntu-latest
    # build-tools is built from ../../tools/build-tools.Dockerfile
    container: kedacore/build-tools:v2
    steps:
      - name: Check out code
        uses: actions/checkout@v1
        with:
          fetch-depth: 1

      - name: Go modules cache
        uses: actions/cache@v1
        with:
          path: /go/pkg
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-

      - name: Test
        run: make test

      - name: Login to Docker Hub
        env:
          DOCKER_HUB_ACCESS_TOKEN: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
          DOCKER_HUB_USERNAME: ${{ secrets.DOCKER_HUB_USERNAME }}
        run: echo $DOCKER_HUB_ACCESS_TOKEN | docker login -u $DOCKER_HUB_USERNAME --password-stdin

      - name: Publish
        run: make publish

      - name: Run end to end tests
        env:
          AZURE_SUBSCRIPTION: ${{ secrets.AZURE_SUBSCRIPTION }}
          AZURE_RESOURCE_GROUP: ${{ secrets.AZURE_RESOURCE_GROUP }}
          AZURE_SP_ID: ${{ secrets.AZURE_SP_ID }}
          AZURE_SP_KEY: ${{ secrets.AZURE_SP_KEY }}
          AZURE_SP_TENANT: ${{ secrets.AZURE_SP_TENANT }}
          TEST_STORAGE_CONNECTION_STRING: ${{ secrets.TEST_STORAGE_CONNECTION_STRING }}
        run: make e2e-test
@@ -8,14 +8,14 @@ In order to develop a scaler, a developer should do the following:
2. Create the new scaler struct under the `pkg/scalers` folder.
3. Implement the methods defined in the [scaler interface](#scaler-interface) section.
4. Create a constructor according to [this](#constructor).
5. Change the `getScaler` function in `pkg/scaling/scale_handler.go` by adding another switch case that matches your scaler (see the sketch after this list).
6. Run `make build` from the root of KEDA and your scaler is ready.
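A rough sketch of step 5 is shown below. Everything named here is an assumption for illustration (the "my-queue" trigger, `NewMyQueueScaler`, the simplified `buildScaler` wrapper); the real `getScaler` in `pkg/scaling/scale_handler.go` takes more arguments, so copy the shape from its existing cases rather than from this sketch.

```go
package scaling

import (
	"fmt"

	"github.com/kedacore/keda/pkg/scalers"
)

// buildScaler is a simplified stand-in for the real getScaler dispatch:
// the new scaler gets one extra case keyed on its trigger type.
func buildScaler(triggerType string, resolvedEnv, metadata map[string]string) (scalers.Scaler, error) {
	switch triggerType {
	case "my-queue": // <- the new switch case for the new (hypothetical) scaler
		return scalers.NewMyQueueScaler(resolvedEnv, metadata)
	// ... existing cases for the built-in scalers stay as they are ...
	default:
		return nil, fmt.Errorf("no scaler found for type: %s", triggerType)
	}
}
```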
If you want to deploy locally:
1. Run `export VERSION=local`
2. Open the terminal and go to the root of the source code
3. Run `make build`
4. If you haven't done it yet, clone the charts repository: `git clone git@github.com:kedacore/charts.git`
5. In the terminal, navigate to the `chart/keda` folder (the charts cloned in step 4), and run the following command (don't forget to replace the placeholder text in the command): `helm install . --set image.keda=kedacore/keda:[tag used in step 1],image.pullPolicy=IfNotPresent`.
The last step assumes that you have `helm` already installed in the cluster. In this step we install the helm chart, and we substitute the image with the image we built in step 3. Notice that we are also overriding the image `pullPolicy` to `IfNotPresent` since this is a local cluster.
@@ -29,12 +29,12 @@ The scalers in KEDA are implementations of a KEDA `Scaler` Go interface declared…
This is the key function of a scaler; it returns a value that represents the current state of an external metric (e.g. the length of a queue). The return type is an `ExternalMetricValue` struct which has the following fields:
- `MetricName`: the name of the metric that we are returning.
- `Timestamp`: indicates the time at which the metrics were produced.
- `WindowSeconds`: //TODO
- `Value`: a numerical value that represents the state of the metric. It could be the length of a queue, the amount of lag in a stream, or any other numeric representation of the metric's state.
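To make the shape of `GetMetrics` concrete, here is a minimal sketch for a made-up scaler. `myQueueScaler`, `myQueueClient`, and the queue-length lookup are assumptions used purely for illustration; only `external_metrics.ExternalMetricValue` and its fields come from the Kubernetes external metrics API, and the exact method signature should be taken from KEDA's `Scaler` interface in `pkg/scalers`.

```go
package scalers

import (
	"context"

	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/metrics/pkg/apis/external_metrics"
)

// myQueueScaler and myQueueClient are hypothetical types used only to
// illustrate the shape of a GetMetrics implementation.
type myQueueClient struct{}

// GetLength stands in for a real call to the external system.
func (c *myQueueClient) GetLength(ctx context.Context) (int64, error) { return 0, nil }

// Close stands in for releasing whatever connections the client holds.
func (c *myQueueClient) Close() error { return nil }

type myQueueScaler struct {
	metricName string
	client     *myQueueClient
}

// GetMetrics reports the current queue length as a single ExternalMetricValue.
func (s *myQueueScaler) GetMetrics(ctx context.Context, metricName string, metricSelector labels.Selector) ([]external_metrics.ExternalMetricValue, error) {
	length, err := s.client.GetLength(ctx)
	if err != nil {
		return nil, err
	}
	metric := external_metrics.ExternalMetricValue{
		MetricName: metricName,
		Value:      *resource.NewQuantity(length, resource.DecimalSI),
		Timestamp:  metav1.Now(),
	}
	return []external_metrics.ExternalMetricValue{metric}, nil
}
```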
Kubernetes HPA (Horizontal Pod Autoscaler) will poll `GetMetrics` regularly through KEDA's metrics server (as long as there is at least one pod), and compare the returned value to the value configured in the ScaledObject. Kubernetes will use the following formula to decide whether to scale the pods up or down:
`desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]`.
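Plugging illustrative numbers into that formula: if there are currently 2 replicas, the metric value returned is 400 and the value configured in the ScaledObject is 100, then `desiredReplicas = ceil[2 * (400 / 100)] = 8`, so the workload is scaled out to 8 pods.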
For more details check [Kubernetes HPA documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
@@ -50,15 +50,15 @@ The return type of this function is `MetricSpec`, but in KEDA's case we will mos…
### IsActive

In some situations the scaler might need to declare itself as inactive, and the way it can do this is through implementing the function `IsActive`.
KEDA polls the ScaledObject according to the `pollingInterval` configured in the ScaledObject; it checks the last time it was polled, whether the number of replicas is greater than 0, and whether the scaler itself is active. So if the scaler returns `false` for `IsActive`, the current number of replicas is greater than 0, and there is no configured minimum pod count, then KEDA scales down to 0.
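Continuing the hypothetical `myQueueScaler` from the `GetMetrics` sketch above, an `IsActive` implementation might simply report whether there is any work queued (the client and its `GetLength` method remain assumptions):

```go
// IsActive reports the hypothetical scaler as active only while the queue
// holds at least one item, which lets KEDA scale the workload to zero otherwise.
func (s *myQueueScaler) IsActive(ctx context.Context) (bool, error) {
	length, err := s.client.GetLength(ctx)
	if err != nil {
		return false, err
	}
	return length > 0, nil
}
```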
### Close
After each poll on the scaler to retrieve the metrics, KEDA calls this function for each scaler to give it the opportunity to close any resources it holds, such as HTTP clients.
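A sketch of `Close` for the same hypothetical scaler, assuming its client exposes a `Close` method; the only point is that whatever the scaler opened gets released here:

```go
// Close gives the hypothetical scaler a chance to release its client's
// resources after KEDA has finished polling it.
func (s *myQueueScaler) Close() error {
	return s.client.Close()
}
```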
### Constructor
What is missing from the `Scaler` interface is a function that constructs the scaler itself. As of the time of writing, KEDA does not have a dynamic way to load scalers (at least not officially)[***]; instead, scalers are part of KEDA's code base and are shipped with KEDA's binary.
Thus, each scaler should have a constructor function; KEDA will [explicitly invoke](https://github.com/kedacore/keda/blob/4d0cf5ef09ef348cf3a158634910f00741ae5258/pkg/handler/scale_handler.go#L565) the constructor based on the `trigger` property configured in the ScaledObject.
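Continuing the hypothetical `myQueueScaler`, a constructor could look roughly like the sketch below. The parameter list and the trigger metadata key are assumptions and should be copied from an existing scaler in `pkg/scalers`, since that is what `getScaler` passes in; `fmt` would also need to be added to the imports of the earlier sketch.

```go
// NewMyQueueScaler is a hypothetical constructor that getScaler would invoke
// when a ScaledObject declares a "my-queue" trigger. It validates the trigger
// metadata and returns the scaler; the concrete type must of course implement
// the full Scaler interface (GetMetricSpecForScaling is not shown in these sketches).
func NewMyQueueScaler(resolvedEnv, metadata map[string]string) (Scaler, error) {
	if metadata["queueEndpoint"] == "" {
		return nil, fmt.Errorf("no queueEndpoint given in the trigger metadata")
	}
	return &myQueueScaler{
		metricName: "my-queue-length",
		client:     &myQueueClient{}, // a real client would be configured from the metadata
	}, nil
}
```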
@@ -1,11 +1,3 @@
#!/bin/sh -e

# This is documented here:
# https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#openshift-specific-guidelines

if ! whoami &>/dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-keda}:x:$(id -u):$(id -g):${USER_NAME:-keda} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi
exec ${OPERATOR} $@