diff --git a/docs/user/tutorials/01-10-create-inline-function.md b/docs/user/tutorials/01-10-create-inline-function.md deleted file mode 100644 index bb7bacb6..00000000 --- a/docs/user/tutorials/01-10-create-inline-function.md +++ /dev/null @@ -1,147 +0,0 @@ -# Create and Modify an Inline Function - -This tutorial shows how you can create a simple "Hello World" Function in Node.js. The Function's code and dependencies are defined as an inline code in the Function's **spec**. - -Serverless also allows you to store the Function's code and dependencies as sources in a Git repository. To learn more, read how to [Create a Git Function](01-11-create-git-function.md). -To learn more about Function's signature, `event` and `context` objects, and custom HTTP responses the Function returns, read [Function’s specification](../technical-reference/07-70-function-specification.md). - -> [!NOTE] -> Read about [Istio sidecars in Kyma and why you want them](https://kyma-project.io/docs/kyma/latest/01-overview/service-mesh/smsh-03-istio-sidecars-in-kyma/). Then, check how to [enable automatic Istio sidecar proxy injection](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/smsh-01-istio-enable-sidecar-injection/). For more details, see [Default Istio setup in Kyma](https://kyma-project.io/docs/kyma/latest/01-overview/service-mesh/smsh-02-default-istio-setup-in-kyma/). - -## Steps - -You can create a Function with Kyma dashboard, Kyma CLI, or kubectl: - - - -#### **Kyma Dashboard** - -> [!NOTE] -> Kyma dashboard uses Busola, which is not installed by default. Follow the [installation instructions](https://github.com/kyma-project/busola/blob/main/docs/install-kyma-dashboard-manually.md). - -1. Create a namespace or select one from the drop-down list in the top navigation panel. - -2. Go to **Workloads** > **Functions** and select **Create Function**. - -3. In the dialog box, provide the Function's name or click on **Generate**. 
- -> [!NOTE] -> The **Node.js Function** preset is selected by default. It means that the selected runtime is `Node.js`, and the **Source** code is autogenerated. You can choose the Python runtime by clicking on the **Choose preset** button. - - ```js - module.exports = { - main: async function (event, context) { - const message = - `Hello World` + - ` from the Kyma Function ${context['function-name']}` + - ` running on ${context.runtime}!`; - console.log(message); - return message; - }, - }; - ``` - -The dialog box closes. Wait for the **Status** field to change into `RUNNING`, confirming that the Function was created successfully. - -1. If you decide to modify it, click **Edit** and confirm changes afterward by selecting the **Update** button. You will see the message at the bottom of the screen confirming the Function was updated. - -#### **Kyma CLI** - -1. Export these variables: - - ```bash - export NAME={FUNCTION_NAME} - export NAMESPACE={FUNCTION_NAMESPACE} - ``` - -2. Create your local development workspace. - - a. Create a new folder to keep the Function's code and configuration in one place: - - ```bash - mkdir {FOLDER_NAME} - cd {FOLDER_NAME} - ``` - - b. Create initial scaffolding for the Function: - - ```bash - kyma init function --name $NAME --namespace $NAMESPACE - ``` - -3. Code and configure. - - Open the workspace in your favorite IDE. If you have Visual Studio Code installed, run the following command from the terminal in your workspace folder: - - ```bash - code . - ``` - - It's time to inspect the code and the `config.yaml` file. Feel free to adjust the "Hello World" sample code. - -4. Deploy and verify. - - a. Call the `apply` command from the workspace folder. It will build the container and run it on the Kyma runtime pointed by your current KUBECONFIG file: - - ```bash - kyma apply function - ``` - - b. 
Check if your Function was created successfully: - - ```bash - kubectl get functions $NAME -n $NAMESPACE - ``` - - You should get a result similar to this example: - - ```bash - NAME CONFIGURED BUILT RUNNING RUNTIME VERSION AGE - test-function True True True nodejs20 1 96s - ``` - -#### **kubectl** - -1. Export these variables: - - ```bash - export NAME={FUNCTION_NAME} - export NAMESPACE={FUNCTION_NAMESPACE} - ``` - -2. Create a Function CR that specifies the Function's logic: - - ```bash - cat < diff --git a/docs/user/tutorials/01-100-customize-function-traces.md b/docs/user/tutorials/01-100-customize-function-traces.md deleted file mode 100644 index b8ede922..00000000 --- a/docs/user/tutorials/01-100-customize-function-traces.md +++ /dev/null @@ -1,100 +0,0 @@ -# Customize Function Traces - -This tutorial shows how to use the built-in OpenTelemetry tracer object to send custom trace data to the trace backend. - -Kyma Functions are instrumented to handle trace headers. This means that every time you call your Function, the executed logic is traceable using a dedicated span visible in the trace backend (that is, start time and duration). -Additionally, you can extend the default trace context and create your own custom spans as you wish (that is, when calling a remote service in your distributed application) or add additional information to the tracing context by introducing events and tags. The following tutorial shows you how to do it using tracer client that is available as part of the [event](../technical-reference/07-70-function-specification.md#event-object) object. 
- -## Prerequisites - -Before you start, make sure you have these tools installed: - -- [Telemetry component installed](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/02-install-kyma/#install-specific-components) -- [Trace pipeline configured](https://github.com/kyma-project/telemetry-manager/blob/main/docs/user/03-traces.md#setting-up-a-tracepipeline) - -## Steps - -The following code samples illustrate how to enrich the default trace with custom spans, events, and tags: - -1. [Create an inline Function](01-10-create-inline-function.md) with the following body: - - - -#### **Node.js** - - ```javascript - - const { SpanStatusCode } = require("@opentelemetry/api/build/src/trace/status"); - const axios = require("axios") - module.exports = { - main: async function (event, context) { - - const data = { - name: "John", - surname: "Doe", - type: "Employee", - id: "1234-5678" - } - - const span = event.tracer.startSpan('call-to-acme-service'); - return await callAcme(data) - .then(resp => { - if(resp.status!==200){ - throw new Error("Unexpected response from acme service"); - } - span.addEvent("Data sent"); - span.setAttribute("data-type", data.type); - span.setAttribute("data-id", data.id); - span.setStatus({code: SpanStatusCode.OK}); - return "Data sent"; - }).catch(err=> { - console.error(err) - span.setStatus({ - code: SpanStatusCode.ERROR, - message: err.message, - }); - return err.message; - }).finally(()=>{ - span.end(); - }); - } - } - - let callAcme = (data)=>{ - return axios.post('https://acme.com/api/people', data) - } - ``` - -#### **Python** - - [OpenTelemetry SDK](https://opentelemetry.io/docs/instrumentation/python/manual/#traces) allows you to customize trace spans and events. 
- - ```python - import requests - import time - - def main(event, context): - # Create a new span to track some work - with event.tracer.start_as_current_span("parent"): - time.sleep(1) - - # Create a nested span to track nested work - with event.tracer.start_as_current_span("child"): - time.sleep(2) - # the nested span is closed when it's out of scope - - # Now the parent span is the current span again - time.sleep(1) - - # This span is also closed when it goes out of scope - - # This request will be auto-instrumented - r = requests.get('https://swapi.dev/api/people/2') - return r.json() - ``` - - - -2. [Expose your Function](01-20-expose-function.md). - -3. Find the traces for the Function in the trace backend. diff --git a/docs/user/tutorials/01-11-create-git-function.md b/docs/user/tutorials/01-11-create-git-function.md deleted file mode 100644 index 4df1abfe..00000000 --- a/docs/user/tutorials/01-11-create-git-function.md +++ /dev/null @@ -1,158 +0,0 @@ -# Create a Git Function - -This tutorial shows how you can build a Function from code and dependencies stored in a Git repository, which is an alternative to keeping the code in the Function CR. The tutorial is based on the Function from the [`orders service` example](https://github.com/kyma-project/examples/tree/main/orders-service). It describes the steps required to fetch the Function's source code and dependencies from a public Git repository that does not require any authentication method. However, it also provides additional guidance on how to secure it if you are using a private repository. - -To learn more about Git repository sources for Functions and different ways of securing your repository, read about the [Git source type](../technical-reference/07-40-git-source-type.md). - -> [!NOTE] -> Read about [Istio sidecars in Kyma and why you want them](https://kyma-project.io/docs/kyma/latest/01-overview/service-mesh/smsh-03-istio-sidecars-in-kyma/). 
Then, check how to [enable automatic Istio sidecar proxy injection](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/smsh-01-istio-enable-sidecar-injection/). For more details, see [Default Istio setup in Kyma](https://kyma-project.io/docs/kyma/latest/01-overview/service-mesh/smsh-02-default-istio-setup-in-kyma/). - -## Steps - -You can create a Function either with kubectl or Kyma dashboard: - - - -#### **Kyma Dashboard** - -> [!NOTE] -> Kyma dashboard uses Busola, which is not installed by default. Follow the [installation instructions](https://github.com/kyma-project/busola/blob/main/docs/install-kyma-dashboard-manually.md). - -1. Create a namespace or select one from the drop-down list in the top navigation panel. - -2. Create a Secret (optional). - - If you use a secured repository, you must first create a Secret with either basic (username and password or token) or SSH key authentication to this repository in the same namespace as the Function. To do that, follow these sub-steps: - - - Open your namespace view. In the left navigation panel, go to **Configuration** > **Secrets** and select the **Create Secret** button. - - - Open the **Advanced** view and enter the Secret name and type. - - - Under **Data**, enter these key-value pairs with credentials: - - - Basic authentication: `username: {USERNAME}` and `password: {PASSWORD_OR_TOKEN}` - - - SSH key: `key: {SSH_KEY}` - - > [!NOTE] - > Read more about the [supported authentication methods](../technical-reference/07-40-git-source-type.md). - - - Confirm by selecting **Create**. - -3. To connect the repository, go to **Workloads** > **Functions** > **Create Function**. - -4. Provide or generate the Function's name. - -5. Go to **Advanced**, change **Source Type** from **Inline** to **Git Repository**. - -6. Choose `JavaScript` from the **Language** dropdown and select the proper runtime. - -7. 
Click on the **Git Repository** section and enter the following values: - Repository **URL**: `https://github.com/kyma-project/examples.git` - **Base Dir**: `orders-service/function` - **Reference**: `main` - - > [!NOTE] - > If you want to connect a secured repository instead of a public one, toggle the **Auth** switch. In the **Auth** section, choose **Secret** from the list and select the preferred type. - -8. Click **Create**. - - After a while, a message confirms that the Function has been created. - Make sure that the new Function has the `RUNNING` status. - -#### **kubectl** - -1. Export these variables: - - ```bash - export GIT_FUNCTION={GIT_FUNCTION_NAME} - export NAMESPACE={FUNCTION_NAMESPACE} - ``` - -2. Create a Secret (optional). - - If you use a secured repository, follow the sub-steps for the basic or SSH key authentication: - - - Basic authentication (username and password or token) to this repository in the same namespace as the Function: - - 1. Generate a [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic) and copy it. - 2. Create a Secret containing your username and the generated token. - - ```bash - kubectl -n $NAMESPACE create secret generic git-creds-basic --from-literal=username={GITHUB_USERNAME} --from-literal=password={GENERATED_PERSONAL_TOKEN} - ``` - - - SSH key: - - 1. Generate a new SSH key pair (private and public). Follow [this tutorial](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) to learn how to do it. Alternatively, you can use an existing pair. - 2. Install the generated private key in Kyma as a Kubernetes Secret that lives in the same namespace as your Function. - - ```bash - kubectl -n $NAMESPACE create secret generic git-creds-ssh --from-file=key={PATH_TO_THE_FILE_WITH_PRIVATE_KEY} - ``` - - 3. 
Configure the public key in GitHub. Follow the steps described in [this tutorial](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account). - - > [!NOTE] - > Read more about the [supported authentication methods](../technical-reference/07-40-git-source-type.md). - -3. Create a Function CR that specifies the Function's logic and points to the directory with code and dependencies in the given repository. It also specifies the Git repository metadata: - - ```bash - cat < [!NOTE] - > If you use a secured repository, add the **auth** object with the adequate **type** and **secretName** fields to the spec under **gitRepository**: - - ```yaml - gitRepository: - ... - auth: - type: # "basic" or "key" - secretName: # "git-creds-basic" or "git-creds-ssh" - ``` - If you use the `key` type authentication, the SSH URL format must be used to configure the Function URL: - - ```yaml - gitRepository: - ... - url: git@github.com//.git - auth: - type: key - secretName: "git-creds-ssh" - ``` - - > [!NOTE] - > To avoid performance degradation caused by large Git repositories and large monorepos, [Function Controller](../resources/06-10-function-cr.md#related-resources-and-components) implements a configurable backoff period for the source checkout based on `APP_FUNCTION_REQUEUE_DURATION`. If you want to allow the controller to perform the source checkout with every reconciliation loop, disable the backoff period by marking the Function CR with the annotation `serverless.kyma-project.io/continuousGitCheckout: true` - - > [!NOTE] - > See this [Function's code and dependencies](https://github.com/kyma-project/examples/tree/main/orders-service). - -4. 
Check if your Function was created and all conditions are set to `True`: - - ```bash - kubectl get functions $GIT_FUNCTION -n $NAMESPACE - ``` - - You should get a result similar to this example: - - ```bash - NAME CONFIGURED BUILT RUNNING RUNTIME VERSION AGE - test-function True True True nodejs20 1 96s - ``` - - diff --git a/docs/user/tutorials/01-110-override-runtime-image.md b/docs/user/tutorials/01-110-override-runtime-image.md deleted file mode 100644 index a469e419..00000000 --- a/docs/user/tutorials/01-110-override-runtime-image.md +++ /dev/null @@ -1,88 +0,0 @@ -# Override Runtime Image - -This tutorial shows how to build a custom runtime image and override the Function's base image with it. - -## Prerequisites - -Before you start, make sure you have these tools installed: - -- [Serverless module installed](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/08-install-uninstall-upgrade-kyma-module/) in a cluster - -## Steps - -Follow these steps: - -1. Follow [this example](https://github.com/kyma-project/serverless/tree/main/examples/custom-serverless-runtime-image) to build the Python's custom runtime image. - - - -#### **Kyma CLI** - -2. Export these variables: - - ```bash - export NAME={FUNCTION_NAME} - export NAMESPACE={FUNCTION_NAMESPACE} - export RUNTIME_IMAGE={RUNTIME_IMAGE_WITH_TAG} - ``` - -3. Create your local development workspace using the built image: - - ```bash - mkdir {FOLDER_NAME} - cd {FOLDER_NAME} - kyma init function --name $NAME --namespace $NAMESPACE --runtime-image-override $RUNTIME_IMAGE --runtime python312 - ``` - -4. Deploy your Function: - - ```bash - kyma apply function - ``` - -5. Verify whether your Function is running: - - ```bash - kubectl get functions $NAME -n $NAMESPACE - ``` - -#### **kubectl** - -2. Export these variables: - - ```bash - export NAME={FUNCTION_NAME} - export NAMESPACE={FUNCTION_NAMESPACE} - export RUNTIME_IMAGE={RUNTIME_IMAGE_WITH_TAG} - ``` - -3. 
Create a Function CR that specifies the Function's logic: - - ```bash - cat < diff --git a/docs/user/tutorials/01-120-inject-envs.md b/docs/user/tutorials/01-120-inject-envs.md deleted file mode 100644 index abe22e6c..00000000 --- a/docs/user/tutorials/01-120-inject-envs.md +++ /dev/null @@ -1,147 +0,0 @@ -# Inject Environment Variables - -This tutorial shows how to inject environment variables into Function. - -You can specify environment variables in the Function definition, or define references to the Kubernetes Secrets or ConfigMaps. - -## Prerequisites - -Before you start, make sure you have these tools installed: - -- [Serverless module installed](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/08-install-uninstall-upgrade-kyma-module/) in a cluster - -## Steps - -Follow these steps: - -1. Create your ConfigMap - -```bash -kubectl create configmap my-config --from-literal config-env="I come from config map" -``` - -2. Create your Secret - -```bash -kubectl create secret generic my-secret --from-literal secret-env="I come from secret" -``` - - - -#### **Kyma CLI** - -3. Generate the Function's configuration and sources: - - ```bash - kyma init function --name my-function - ``` - -4. Define environment variables as part of the Function configuration file. Modify `config.yaml` with the following: - - ```yaml - name: my-function - namespace: default - runtime: nodejs20 - source: - sourceType: inline - env: - - name: env1 - value: "I come from function definition" - - name: env2 - valueFrom: - configMapKeyRef: - name: my-config - key: config-env - - name: env3 - valueFrom: - secretKeyRef: - name: my-secret - key: secret-env - ``` - -5. Use injected environment variables in the handler file. 
Modify `handler.js` with the following: - - ```js - module.exports = { - main: function (event, context) { - envs = ["env1", "env2", "env3"] - envs.forEach(function(key){ - console.log(`${key}:${readEnv(key)}`) - }); - return 'Hello Serverless' - } - } - - readEnv=(envKey) => { - if(envKey){ - return process.env[envKey]; - } - return - } - ``` - -6. Deploy your Function: - - ```bash - kyma apply function - ``` - -7. Verify whether your Function is running: - - ```bash - kubectl get functions my-function - ``` - -#### **kubectl** - -3. Create a Function CR that specifies the Function's logic: - - ```bash - cat < { - if(envKey){ - return process.env[envKey]; - } - return - } - EOF - ``` - -4. Verify whether your Function is running: - - ```bash - kubectl get functions my-function - ``` - - diff --git a/docs/user/tutorials/01-130-use-external-scalers.md b/docs/user/tutorials/01-130-use-external-scalers.md deleted file mode 100644 index 19893376..00000000 --- a/docs/user/tutorials/01-130-use-external-scalers.md +++ /dev/null @@ -1,195 +0,0 @@ -# Use External Scalers - -This tutorial shows how to use an external resource scaler, for example, HorizontalPodAutoscaler (HPA) or Keda's ScaledObject, with the Serverless Function. - -Keep in mind that the Serverless Functions implement the [scale subresource](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource), which means that you can use any Kubernetes-based scaler. - -## Prerequisites - -Before you start, make sure you have these tools installed: - -- [Keda module enabled](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/08-install-uninstall-upgrade-kyma-module/) - -## Steps - -Follow these steps: - - - -#### **HPA** - -1. Create your Function with the `replicas` value set to 1, to prevent the internal Serverless HPA creation: - - ```bash - cat < [!NOTE] - > This tutorial uses the `cpu` trigger because of its simple configuration. 
If you want to use another trigger, check the official [list of supported triggers](https://keda.sh/docs/scalers/). - -3. After a few seconds, ScaledObject should be up to date and contain information about the actual replicas: - - ```bash - kubectl get scaledobject scaled-function - ``` - - You should get a result similar to this example: - - ```bash - NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE - scaled-function serverless.kyma-project.io/v1alpha2.Function scaled-function 5 10 cpu True True Unknown 4m15s - ``` - -#### **Keda Prometheus** - -1. Create your Function with the **replicas** value set to `1` to prevent the internal Serverless HPA creation: - - ```bash - cat < [!NOTE] - > This tutorial uses the `prometheus` trigger because of its simple configuration. If you want to use another trigger, check the official [list of supported triggers](https://keda.sh/docs/scalers/). - -3. After a few seconds, ScaledObject should be up to date and contain information about the actual replicas: - - ```bash - kubectl get scaledobject scaled-function - ``` - - You should get a result similar to this example: - - ```bash - NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE - scaled-function serverless.kyma-project.io/v1alpha2.Function scaled-function 1 5 prometheus True True Unknown 4m15s - ``` - -Check out this [example](https://github.com/kyma-project/keda-manager/tree/main/examples/scale-to-zero-with-keda) to see how to use Kyma Serverless and Eventing in combination with Keda to accomplish scaling to zero. - - diff --git a/docs/user/tutorials/01-140-use-secret-mounts.md b/docs/user/tutorials/01-140-use-secret-mounts.md deleted file mode 100644 index c0a5de59..00000000 --- a/docs/user/tutorials/01-140-use-secret-mounts.md +++ /dev/null @@ -1,123 +0,0 @@ -# Access to Secrets Mounted as Volume - -This tutorial shows how to use Secrets mounted as volume with the Serverless Function. 
-It's based on a simple Function in Python 3.9. The Function reads data from Secret and returns it. - -## Prerequisites - -Before you start, make sure you have these tools installed: - -- [Serverless module installed](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/08-install-uninstall-upgrade-kyma-module/) in a cluster - -## Steps - -Follow these steps: - -1. Export these variables: - - ```bash - export FUNCTION_NAME={FUNCTION_NAME} - export NAMESPACE={FUNCTION_NAMESPACE} - export DOMAIN={DOMAIN_NAME} - - export SECRET_NAME={SECRET_NAME} - export SECRET_DATA_KEY={SECRET_DATA_KEY} - export SECRET_MOUNT_PATH={SECRET_MOUNT_PATH} - ``` - -2. Create a Secret: - - ```bash - kubectl -n $NAMESPACE create secret generic $SECRET_NAME \ - --from-literal=$SECRET_DATA_KEY={SECRET_DATA_VALUE} - ``` - -3. Create your Function with `secretMounts`: - - ```bash - cat < [!NOTE] - > Read more about [creating Functions](01-10-create-inline-function.md). - -4. Create an APIRule: - - The following steps allow you to test the Function in action. - - ```bash - cat < [!NOTE] - > Read more about [exposing Functions](01-20-expose-function.md). - -5. Call Function: - - ```bash - curl https://$FUNCTION_NAME.$DOMAIN - ``` - - You should get `{SECRET_DATA_VALUE}` as a result. - -6. Next steps: - - Now you can edit the Secret and see if the Function returns the new value from the Secret. - - To edit your Secret, use: - - ```bash - kubectl -n $NAMESPACE edit secret $SECRET_NAME - ``` - - To encode values used in `data` from the Secret, use `base64`, for example: - - ```bash - echo -n '{NEW_SECRET_DATA_VALUE}' | base64 - ``` - - Calling the Function again (using `curl`) must return `{NEW_SECRET_DATA_VALUE}`. - Note that the Secret propagation may take some time, and the call may initially return the old value. 
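The encode step above can be checked locally before you edit the Secret. A minimal sketch, assuming a hypothetical plain-text value; `printf '%s'` has the same effect as the `echo -n` used above, that is, no trailing newline ends up in the encoded data:

```bash
# Round-trip check for the value you plan to put under `data` in the Secret.
# NEW_SECRET_DATA_VALUE is a placeholder, as in the steps above.
NEW_SECRET_DATA_VALUE='new-secret-value'

# Encode without a trailing newline (same effect as `echo -n`).
ENCODED=$(printf '%s' "$NEW_SECRET_DATA_VALUE" | base64)
echo "$ENCODED"

# Decode again to verify the round trip (GNU coreutils `base64 -d` syntax).
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
test "$DECODED" = "$NEW_SECRET_DATA_VALUE" && echo "round-trip OK"
```

After pasting the encoded value into the Secret with `kubectl edit`, the Function should eventually return the decoded plain text.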
diff --git a/docs/user/tutorials/01-20-expose-function.md b/docs/user/tutorials/01-20-expose-function.md deleted file mode 100644 index 053fff64..00000000 --- a/docs/user/tutorials/01-20-expose-function.md +++ /dev/null @@ -1,163 +0,0 @@ -# Expose a Function with an API Rule - -This tutorial shows how you can expose your Function to access it outside the cluster, through an HTTP proxy. To expose it, use an [APIRule custom resource (CR)](https://kyma-project.io/docs/kyma/latest/05-technical-reference/00-custom-resources/apix-01-apirule/). API Gateway Controller reacts to an instance of the APIRule CR and, based on its details, it creates an Istio VirtualService and Oathkeeper Access Rules that specify your permissions for the exposed Function. - -When you complete this tutorial, you get a Function that: - -- Is available on an unsecured endpoint (**handler** set to `noop` in the APIRule CR). -- Accepts the `GET`, `POST`, `PUT`, and `DELETE` methods. - -To learn more about securing your Function, see the [Expose and secure a workload with OAuth2](https://kyma-project.io/docs/kyma/latest/03-tutorials/00-api-exposure/apix-05-expose-and-secure-a-workload/apix-05-01-expose-and-secure-workload-oauth2/) or [Expose and secure a workload with JWT](https://kyma-project.io/docs/kyma/latest/03-tutorials/00-api-exposure/apix-05-expose-and-secure-a-workload/apix-05-03-expose-and-secure-workload-jwt/) tutorials. - -Read also about [Function’s specification](../technical-reference/07-70-function-specification.md) if you are interested in its signature, `event` and `context` objects, and custom HTTP responses the Function returns. 
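For orientation, an APIRule CR matching this description could look like the following sketch. It is an assumption-laden example rather than part of the tutorial: the names, namespace, host, and `gateway` value are placeholders to adapt, and the kubectl steps in this tutorial remain the authoritative source.

```yaml
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: my-function          # assumed: matches the Function's name
  namespace: default
spec:
  gateway: kyma-system/kyma-gateway   # assumed default Kyma gateway
  host: my-function.example.com       # replace with your cluster domain
  service:
    name: my-function
    port: 80
  rules:
    - path: /.*
      methods: ["GET", "POST", "PUT", "DELETE"]
      accessStrategies:
        - handler: noop               # unsecured endpoint, as described above
```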
- -## Prerequisites - -- [Existing Function](01-10-create-inline-function.md) -- [API Gateway component installed](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/02-install-kyma/#install-specific-components) - -## Steps - -You can expose a Function with Kyma dashboard, Kyma CLI, or kubectl: - - - -#### **Kyma Dashboard** - -> [!NOTE] -> Kyma dashboard uses Busola, which is not installed by default. Follow the [installation instructions](https://github.com/kyma-project/busola/blob/main/docs/install-kyma-dashboard-manually.md). - -1. Select a namespace from the drop-down list in the top navigation panel. Make sure the namespace includes the Function that you want to expose through an APIRule. - -2. Go to **Discovery and Network** > **API Rules**, and click on **Create API Rule**. - -3. Enter the following information: - - - The APIRule's **Name** matching the Function's name. - - > [!NOTE] - > The APIRule CR can have a name different from that of the Function, but it is recommended that all related resources share a common name. - - - **Service Name** matching the Function's name. - - - **Host** to determine the host on which you want to expose your Function. You must change the `*` symbol at the beginning to the subdomain name you want. - -4. In the **Rules > Access Strategies > Config** section, change the handler from `allow` to `noop` and select all the methods below. - -5. Select **Create** to confirm your changes. - -6. Check if you can access the Function by selecting the HTTPS link under the **Host** column for the newly created APIRule. - -#### **Kyma CLI** - -1. Export these variables: - - ```bash - export DOMAIN={DOMAIN_NAME} - export NAME={FUNCTION_NAME} - export NAMESPACE={NAMESPACE_NAME} - ``` - - > [!NOTE] - > The Function takes the name from the Function CR name. The APIRule CR can have a different name but for the purpose of this tutorial, all related resources share a common name defined under the **NAME** variable. -2. 
Download the latest configuration of the Function from the cluster. This way, you update the local `config.yaml` file with the Function's code. - - ```bash - kyma sync function $NAME -n $NAMESPACE - ``` - -3. Edit the local `config.yaml` file and add the **apiRules** schema for the Function at the end of the file: - - ```yaml - apiRules: - - name: {FUNCTION_NAME} - service: - host: {FUNCTION_NAME}.{DOMAIN_NAME} - rules: - - methods: - - GET - - POST - - PUT - - DELETE - accessStrategies: - - handler: noop - ``` - -4. Apply the new configuration to the cluster: - - ```bash - kyma apply function - ``` - -5. Check if the Function's code was pushed to the cluster and reflects the local configuration: - - ```bash - kubectl get apirules $NAME -n $NAMESPACE - ``` - -6. Check that the APIRule was created successfully and has the status `OK`: - - ```bash - kubectl get apirules $NAME -n $NAMESPACE -o=jsonpath='{.status.APIRuleStatus.code}' - ``` - -7. Call the Function's external address: - - ```bash - curl https://$NAME.$DOMAIN - ``` - -#### **kubectl** - -1. Export these variables: - - ```bash - export DOMAIN={DOMAIN_NAME} - export NAME={FUNCTION_NAME} - export NAMESPACE={FUNCTION_NAMESPACE} - ``` - - > [!NOTE] - > The Function takes the name from the Function CR name. The APIRule CR can have a different name but for the purpose of this tutorial, all related resources share a common name defined under the **NAME** variable. - -2. Create an APIRule CR for your Function. It is exposed on port `80`, which is the default port of the [Service Placeholder](../technical-reference/04-10-architecture.md). 
- - ```bash - cat < \ No newline at end of file diff --git a/docs/user/tutorials/01-30-manage-functions-with-kyma-cli.md b/docs/user/tutorials/01-30-manage-functions-with-kyma-cli.md deleted file mode 100644 index d6ef5249..00000000 --- a/docs/user/tutorials/01-30-manage-functions-with-kyma-cli.md +++ /dev/null @@ -1,111 +0,0 @@ -# Manage Functions with Kyma CLI - -This tutorial shows how to use the available CLI commands to manage Functions in Kyma. You will see how to: - -1. Create local files that contain the basic configuration for a sample "Hello World" Python Function (`kyma init function`). -2. Generate a Function custom resource (CR) from these files and apply it on your cluster (`kyma apply function`). -3. Fetch the current state of your Function's cluster configuration after it was modified (`kyma sync function`). - -> [!NOTE] -> Read about [Istio sidecars in Kyma and why you want them](https://kyma-project.io/docs/kyma/latest/01-overview/service-mesh/smsh-03-istio-sidecars-in-kyma/). Then, check how to [enable automatic Istio sidecar proxy injection](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/smsh-01-istio-enable-sidecar-injection/). For more details, see [Default Istio setup in Kyma](https://kyma-project.io/docs/kyma/latest/01-overview/service-mesh/smsh-02-default-istio-setup-in-kyma/). - -This tutorial is based on a sample Python Function run in a lightweight [k3d](https://k3d.io/) cluster. - -## Prerequisites - -Before you start, make sure you have these tools installed: - -- [Docker](https://www.docker.com/) -- [Kyma CLI](https://github.com/kyma-project/cli) -- [Serverless module installed](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/08-install-uninstall-upgrade-kyma-module/) locally or in a cluster - -## Steps - -Follow these steps: - -1. 
To create local files with the default configuration for a Python Function, go to the folder in which you want to initiate the workspace content and run the `init` Kyma CLI command: - - ```bash - kyma init function --runtime python312 --name {FUNCTION_NAME} - ``` - - You can also use the `--dir {FULL_FOLDER_PATH}` flag to point to the directory where you want to create the Function's source files. - - > [!NOTE] - > Python 3.12 is only one of the available runtimes. Read about all [supported runtimes and sample Functions to run on them](../technical-reference/07-10-sample-functions.md). - - The `init` command creates these files in your workspace folder: - - - `config.yaml` with the Function's configuration - - > [!NOTE] - > See the detailed description of all fields available in the [`config.yaml` file](../technical-reference/07-60-function-configuration-file.md). - - - `handler.py` with the Function's code and the simple "Hello World" logic - - `requirements.txt` with an empty file for your Function's custom dependencies - - The `kyma init` command also sets **sourcePath** in the `config.yaml` file to the full path of the workspace folder: - - ```yaml - name: my-function - namespace: default - runtime: python312 - source: - sourceType: inline - sourcePath: {FULL_PATH_TO_WORKSPACE_FOLDER} - ``` - -2. Run the `apply` Kyma CLI command to create a Function CR in the YAML format on your cluster: - - ```bash - kyma apply function - ``` - - > [!TIP] - > To apply a Function from a different location, use the `--filename` flag followed by the full path to the `config.yaml` file. - - Alternatively, use the `--dry-run` flag to list the file that will be created before you apply it. You can also preview the file's content in the format of your choice by adding the `--output {FILE_FORMAT}` flag, such as `--output yaml`. - -3. Once applied, view the Function's details in the cluster: - - ```bash - kubectl describe function {FUNCTION_NAME} - ``` - -4. 
Change the Function's source code in the cluster to return "Hello Serverless!": - - a) Edit the Function: - - ```bash - kubectl edit function {FUNCTION_NAME} - ``` - - b) Modify **source** as follows: - - ```yaml - ... - spec: - runtime: python312 - source: |- - def main(event, context): - return "Hello Serverless!" - ``` - -5. Fetch the content of the resource to synchronize your local workspace sources with the cluster changes: - - ```bash - kyma sync function {FUNCTION_NAME} - ``` - -6. Check the local `handler.py` file with the Function's code to make sure that the cluster changes were fetched: - - ```bash - cat handler.py - ``` - - The output confirms that the local sources were synchronized with the cluster changes: - - ```python - def main(event, context): - return "Hello Serverless!" - ``` diff --git a/docs/user/tutorials/01-40-debug-function.md b/docs/user/tutorials/01-40-debug-function.md deleted file mode 100644 index 99067a9a..00000000 --- a/docs/user/tutorials/01-40-debug-function.md +++ /dev/null @@ -1,83 +0,0 @@ -# Debug a Function - -This tutorial shows how to use an external IDE to debug a Function run with Kyma CLI. - -## Steps - -Learn how to debug a Function with Visual Studio Code for Node.js or Python, or with GoLand: - - - -#### **Visual Studio Code** - -1. In VS Code, navigate to the location of the file with the Function definition. -2. Create the `.vscode` directory. -3. 
In the `.vscode` directory, create the `launch.json` file with the following content: - - For Node.js: - - ```json - { - "version": "0.2.0", - "configurations": [ - { - "name": "attach", - "type": "node", - "request": "attach", - "port": 9229, - "address": "localhost", - "localRoot": "${workspaceFolder}/kubeless", - "remoteRoot": "/kubeless", - "restart": true, - "protocol": "inspector", - "timeout": 1000 - } - ] - } - ``` - - For Python: - - ```json - { - "version": "0.2.0", - "configurations": [ - { - "name": "Python: Kyma function", - "type": "python", - "request": "attach", - "pathMappings": [ - { - "localRoot": "${workspaceFolder}", - "remoteRoot": "/kubeless" - } - ], - "connect": { - "host": "localhost", - "port": 5678 - } - } - ] - } - ``` - -4. Run the Function with the `--debug` flag. - - ```bash - kyma run function --debug - ``` - -#### **GoLand** - -1. In GoLand, navigate to the location of the file with the Function definition. -2. Choose the **Add Configuration...** option. -3. Add a new **Attach to Node.js/Chrome** configuration with these options: - - Host: `localhost` - - Port: `9229` -4. Run the Function with the `--debug` flag. - - ```bash - kyma run function --debug - ``` - - \ No newline at end of file diff --git a/docs/user/tutorials/01-50-sync-function-with-gitops.md b/docs/user/tutorials/01-50-sync-function-with-gitops.md deleted file mode 100644 index 1a21e66b..00000000 --- a/docs/user/tutorials/01-50-sync-function-with-gitops.md +++ /dev/null @@ -1,222 +0,0 @@ -# Synchronize Git Resources with the Cluster Using a GitOps Operator - -This tutorial shows how you can automate the deployment of local Kyma resources in a cluster using GitOps logic. You will use [Kyma CLI](https://github.com/kyma-project/cli) to create an inline Python Function. You will later push the resource to a GitHub repository of your choice and set up a GitOps operator to monitor the given repository folder and synchronize any changes in it with your cluster. 
For the purpose of this tutorial, you will install and use the [Flux](https://fluxcd.io/flux/get-started/) GitOps operator and a lightweight [k3d](https://k3d.io/) cluster. - -> [!TIP] -> Although this tutorial uses Flux to synchronize Git resources with the cluster, you can use an alternative GitOps operator for this purpose, such as [Argo](https://argoproj.github.io/argo-cd/). - -## Prerequisites - -Before you start, make sure you have the following: - -- [Docker](https://www.docker.com/) -- Git repository -- [Homebrew](https://docs.brew.sh/Installation) -- Kyma CLI -- Kubeconfig file for your Kyma cluster - -## Steps - -These sections will lead you through the whole installation, configuration, and synchronization process. You will first install k3d and create a cluster for your custom resources (CRs). Then, you will need to apply the necessary CustomResourceDefinition (CRD) from Kyma to be able to create Functions. Finally, you will install Flux and authorize it with `write` access to your GitHub repository in which you store the resource files. Flux will automatically synchronize any new changes pushed to your repository with your k3d cluster. - -### Install and Configure a k3d Cluster - -1. Install k3d using Homebrew on macOS: - - ```bash - brew install k3d - ``` - -2. Create a default k3d cluster with a single server node: - - ```bash - k3d cluster create {CLUSTER_NAME} - ``` - - This command also sets your context to the newly created cluster. Run this command to display the cluster information: - - ```bash - kubectl cluster-info - ``` - -3. Apply the `functions.serverless.kyma-project.io` CRD from sources in the [`serverless`](https://github.com/kyma-project/serverless/tree/main/components/serverless/config/crd) repository. You will need it to create the Function CR in the cluster. 
- - ```bash - kubectl apply -f https://raw.githubusercontent.com/kyma-project/serverless/main/components/serverless/config/crd/bases/serverless.kyma-project.io_functions.yaml - ``` - -4. Run this command to make sure the CRD is applied: - - ```bash - kubectl get customresourcedefinitions - ``` - -### Prepare Your Local Workspace - -1. Create a workspace folder in which you will create source files for your Function: - - ```bash - mkdir {WORKSPACE_FOLDER} - ``` - -2. Use the `init` Kyma CLI command to create a local workspace with default configuration for a Python Function: - - ```bash - kyma init function --runtime python312 --dir $PWD/{WORKSPACE_FOLDER} - ``` - - > [!TIP] - > Python 3.12 is only one of the available runtimes. Read about all [supported runtimes and sample Functions to run on them](../technical-reference/07-10-sample-functions.md). - - This command creates the following files in your workspace folder: - - - `config.yaml` with the Function's configuration - - `handler.py` with the Function's code and the simple "Hello World" logic - - `requirements.txt` with an empty file for your Function's custom dependencies - -### Install and Configure Flux - -You can now install the Flux operator, connect it with a specific Git repository folder, and authorize Flux to automatically pull changes from this repository folder and apply them on your cluster. - -1. Install Flux: - - ```bash - brew install fluxctl - ``` - -2. Create a `flux` namespace for the Flux operator's resources: - - ```bash - kubectl create namespace flux - kubectl label namespace flux istio-injection=enabled --overwrite - ``` - -3. Export details of your GitHub repository - its name, the account name, and related email address. You must also specify the name of the folder in your GitHub repository to which you will push the Function CR built from local sources. If you don't have this folder in your repository yet, you will create it in further steps. 
Flux will synchronize the cluster with the content of this folder on the `main` branch. - - ```bash - export GH_USER="{USERNAME}" - export GH_REPO="{REPOSITORY_NAME}" - export GH_EMAIL="{EMAIL_OF_YOUR_GH_ACCOUNT}" - export GH_FOLDER="{GIT_REPO_FOLDER_FOR_FUNCTION_RESOURCES}" - ``` - -4. Run this command to apply the Flux operator's resources to the `flux` namespace on your cluster: - - ```bash - fluxctl install \ - --git-user=${GH_USER} \ - --git-email=${GH_EMAIL} \ - --git-url=git@github.com:${GH_USER}/${GH_REPO}.git \ - --git-path=${GH_FOLDER} \ - --namespace=flux | kubectl apply -f - - ``` - - You will see that Flux created these resources: - - ```bash - serviceaccount/flux created - clusterrole.rbac.authorization.k8s.io/flux created - clusterrolebinding.rbac.authorization.k8s.io/flux created - deployment.apps/flux created - secret/flux-git-deploy created - deployment.apps/memcached created - service/memcached created - ``` - -5. List all Pods in the `flux` namespace to make sure that the one for Flux is in the `Running` state: - - ```bash - kubectl get pods --namespace flux - ``` - - Expect a response similar to this one: - - ```bash - NAME READY STATUS RESTARTS AGE - flux-75758595b9-m4885 1/1 Running 0 32m - ``` - -6. Obtain the SSH public key that Flux generated: - - ```bash - fluxctl identity --k8s-fwd-ns flux - ``` - -7. Run this command to copy the SSH key to the clipboard: - - ```bash - fluxctl identity --k8s-fwd-ns flux | pbcopy - ``` - -8. Go to **Settings** in your GitHub account: - - ![GitHub account settings](../../assets/svls-settings.png) - -9. Go to the **SSH and GPG keys** section and select the **New SSH key** button: - - ![Create a new SSH key](../../assets/svls-create-ssh-key.png) - -10. 
Provide the new key name, paste the previously copied SSH key, and confirm changes by selecting the **Add SSH Key** button: - - ![Add a new SSH key](../../assets/svls-add-ssh-key.png) - -### Create a Function - -Now that Flux is authenticated to pull changes from your Git repository, you can start creating CRs from your local workspace files. - -In this section, you will create a sample inline Function. - -1. Back in the terminal, clone your GitHub repository to your current workspace location: - - ```bash - git clone https://github.com/${GH_USER}/${GH_REPO}.git - ``` - - > [!NOTE] - > You can also clone the repository using SSH. To do that, you need to [generate a new SSH key and add it to the ssh-agent](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent). - -2. Go to the repository folder: - - ```bash - cd ${GH_REPO} - ``` - -3. If the folder you specified during the Flux configuration does not exist yet in the Git repository, create it: - - ```bash - mkdir ${GH_FOLDER} - ``` - -4. Run the `apply` Kyma CLI command to create a Function CR in the YAML format in your remote GitHub repository. This command generates the output in the `my-function.yaml` file. - - ```bash - kyma apply function --filename {FULL_PATH_TO_LOCAL_WORKSPACE_FOLDER}/config.yaml --output yaml --dry-run > ./${GH_FOLDER}/my-function.yaml - ``` - -5. Push the local changes to the remote repository: - - ```bash - git add . # Stage changes for the commit - git commit -m 'Add my-function' # Add a commit message - git push origin main # Push changes to the "main" branch of your Git repository - ``` - -6. Go to the GitHub repository to check that the changes were pushed. - -7. By default, Flux pulls CRs from the Git repository and pushes them to the cluster in 5-minute intervals. 
To enforce immediate synchronization, run this command from the terminal: - - ```bash - fluxctl sync --k8s-fwd-ns flux - ``` - -8. Make sure that the Function CR was applied by Flux to the cluster: - - ```bash - kubectl get functions - ``` - -You can see that Flux synchronized the resource and the new Function CR was added to your cluster. - -## Reverting Changes - -Once you set it up, Flux will keep monitoring the given Git repository folder for any changes. If you modify the existing resources directly in the cluster, Flux will automatically revert these changes and update the given resource back to its version on the `main` branch of the Git repository. diff --git a/docs/user/tutorials/01-60-set-external-registry.md b/docs/user/tutorials/01-60-set-external-registry.md deleted file mode 100644 index e257e686..00000000 --- a/docs/user/tutorials/01-60-set-external-registry.md +++ /dev/null @@ -1,244 +0,0 @@ -# Set an External Docker Registry - -By default, Kyma is installed with Serverless using the internal Docker registry that runs in the cluster. This tutorial shows how to override this default setup with an external Docker registry from one of these cloud providers: - -- [Docker Hub](https://hub.docker.com/) -- [Google Artifact Registry (GAR)](https://cloud.google.com/artifact-registry) -- [Azure Container Registry (ACR)](https://azure.microsoft.com/en-us/services/container-registry/) - -> [!WARNING] -> Function images are not cached in Docker Hub because this registry is not compatible with the caching logic defined in [Kaniko](https://cloud.google.com/cloud-build/docs/kaniko-cache) that Serverless uses for building images. 
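
Regardless of the provider, the override you prepare in the steps below boils down to disabling the internal registry and passing your registry credentials as deployment values. The following is a rough, illustrative sketch only — the `serverless.dockerRegistry` key names are assumptions, not taken from this tutorial, and the provider-specific sections that follow define the actual values to set:

```yaml
# Hypothetical shape of docker-registry-overrides.yaml; key names are assumed,
# verify them against your Serverless chart before use.
serverless:
  dockerRegistry:
    enableInternal: false                 # stop using the in-cluster registry
    username: "{USER_NAME}"               # registry account name
    password: "{PASSWORD}"                # registry account password or token
    serverAddress: "{SERVER_ADDRESS}"     # login server, e.g. https://index.docker.io/v1/ for Docker Hub
    registryAddress: "{REGISTRY_ADDRESS}" # address under which Function images are pushed
```

You then pass such a file to `kyma deploy --values-file docker-registry-overrides.yaml`, as shown in the Apply Configuration section.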
- -## Prerequisites - - - -#### **Docker Hub** - -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) - -#### **GAR** - -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) -- [gcloud](https://cloud.google.com/sdk/gcloud/) -- [Google Cloud Platform (GCP)](https://cloud.google.com) project - -#### **ACR** - -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) -- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure) -- [Microsoft Azure](http://azure.com) subscription - - - -## Steps - -### Create Required Cloud Resources - - - -#### **Docker Hub** - -1. Run the `export {VARIABLE}={value}` command to set up these environment variables, where: - - - **USER_NAME** is the name of the account in the Docker Hub. - - **PASSWORD** is the password for the account in the Docker Hub. - - **SERVER_ADDRESS** is the server address of the Docker Hub. At the moment, Kyma only supports the `https://index.docker.io/v1/` server address. - - **REGISTRY_ADDRESS** is the registry address in the Docker Hub. - - > [!TIP] - > Usually, the Docker registry address is the same as the account name. - - Example: - - ```bash - export USER_NAME=kyma-rocks - export PASSWORD=admin123 - export SERVER_ADDRESS=https://index.docker.io/v1/ - export REGISTRY_ADDRESS=kyma-rocks - ``` - -#### **GAR** - -To use GAR, create a Google service account that has a private key and the **Storage Admin** role permissions. Follow these steps: - -1. Run the `export {VARIABLE}={value}` command to set up these environment variables, where: - - - **SA_NAME** is the name of the service account. - - **SA_DISPLAY_NAME** is the display name of the service account. - - **PROJECT** is the GCP project ID. - - **SECRET_FILE** is the path to the private key. - - **ROLE** is the **Storage Admin** role bound to the service account. - - **SERVER_ADDRESS** is the server address of the Docker registry. 
- - Example: - - ```bash - export SA_NAME=my-service-account - export SA_DISPLAY_NAME=service-account - export PROJECT=test-project-012345 - export SECRET_FILE=my-private-key-path - export ROLE=roles/storage.admin - export SERVER_ADDRESS=gar.io - ``` - -2. When you communicate with Google Cloud for the first time, set the context for your Google Cloud project. Run this command: - - ```bash - gcloud config set project ${PROJECT} - ``` - -3. Create a service account. Run: - - ```bash - gcloud iam service-accounts create ${SA_NAME} --display-name ${SA_DISPLAY_NAME} - ``` - -4. Add a policy binding for the **Storage Admin** role to the service account. Run: - - ```bash - gcloud projects add-iam-policy-binding ${PROJECT} --member=serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com --role=${ROLE} - ``` - -5. Create a private key for the service account: - - ```bash - gcloud iam service-accounts keys create ${SECRET_FILE} --iam-account=${SA_NAME}@${PROJECT}.iam.gserviceaccount.com - ``` - -6. Export the private key as an environment variable: - - ```bash - export GCS_KEY_JSON=$(< "$SECRET_FILE" base64 | tr -d '\n') - ``` - -#### **ACR** - -Create an ACR and a service principal. Follow these steps: - -1. Run the `export {VARIABLE}={value}` command to set up these environment variables, where: - - - **AZ_REGISTRY_NAME** is the name of the ACR. - - **AZ_RESOURCE_GROUP** is the name of the resource group. - - **AZ_RESOURCE_GROUP_LOCATION** is the location of the resource group. - - **AZ_SUBSCRIPTION_ID** is the ID of the Azure subscription. - - **AZ_SERVICE_PRINCIPAL_NAME** is the name of the Azure service principal. - - **ROLE** is the **acrpush** role bound to the service principal. - - **SERVER_ADDRESS** is the server address of the Docker registry. 
- - Example: - - ```bash - export AZ_REGISTRY_NAME=registry - export AZ_RESOURCE_GROUP=my-resource-group - export AZ_RESOURCE_GROUP_LOCATION=westeurope - export AZ_SUBSCRIPTION_ID=123456-123456-123456-1234567 - export AZ_SERVICE_PRINCIPAL_NAME=acr-service-principal - export ROLE=acrpush - export SERVER_ADDRESS=azurecr.io - ``` - -2. When you communicate with Microsoft Azure for the first time, log into your Azure account. Run this command: - - ```bash - az login - ``` - -3. Create a resource group. Run: - - ```bash - az group create --name ${AZ_RESOURCE_GROUP} --location ${AZ_RESOURCE_GROUP_LOCATION} --subscription ${AZ_SUBSCRIPTION_ID} - ``` - -4. Create an ACR. Run: - - ```bash - az acr create --name ${AZ_REGISTRY_NAME} --resource-group ${AZ_RESOURCE_GROUP} --subscription ${AZ_SUBSCRIPTION_ID} --sku {Basic, Classic, Premium, Standard} - ``` - -5. Obtain the full ACR ID. Run: - - ```bash - export AZ_REGISTRY_ID=$(az acr show --name ${AZ_REGISTRY_NAME} --query id --output tsv) - ``` - -6. Create a service principal with rights scoped to the ACR. Run: - - ```bash - export SP_PASSWORD=$(az ad sp create-for-rbac --name http://${AZ_SERVICE_PRINCIPAL_NAME} --scopes ${AZ_REGISTRY_ID} --role ${ROLE} --query password --output tsv) - export SP_APP_ID=$(az ad sp show --id http://${AZ_SERVICE_PRINCIPAL_NAME} --query appId --output tsv) - ``` - - Alternatively, assign the desired role to the existing service principal. 
Run: - - ```bash - export SP_APP_ID=$(az ad sp show --id http://${AZ_SERVICE_PRINCIPAL_NAME} --query appId --output tsv) - export SP_PASSWORD=$(az ad sp show --id http://${AZ_SERVICE_PRINCIPAL_NAME} --query password --output tsv) - az role assignment create --assignee ${SP_APP_ID} --scope ${AZ_REGISTRY_ID} --role ${ROLE} - ``` - - - -### Override Serverless Configuration - -Prepare a YAML file with overrides that match your Docker registry provider: - - - -#### **Docker Hub** - -```bash -cat > docker-registry-overrides.yaml < docker-registry-overrides.yaml < docker-registry-overrides.yaml < - -> [!WARNING] -> If you want to set an external Docker registry before you install Kyma, manually apply the Secret to the cluster before you run the installation script. - -### Apply Configuration - -Deploy Kyma with the new Docker registry configuration. Run: - -```bash -kyma deploy --values-file docker-registry-overrides.yaml -``` - -> [!NOTE] -> To learn more, read about [changing Kyma configuration](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/03-change-kyma-config-values). diff --git a/docs/user/tutorials/01-80-log-into-private-packages-registry.md b/docs/user/tutorials/01-80-log-into-private-packages-registry.md deleted file mode 100644 index 6e3b7c84..00000000 --- a/docs/user/tutorials/01-80-log-into-private-packages-registry.md +++ /dev/null @@ -1,118 +0,0 @@ -# Log Into a Private Package Registry Using Credentials from a Secret - -Serverless allows you to consume private packages in your Functions. This tutorial shows how you can log into a private package registry by defining credentials in a Secret custom resource (CR). - -## Steps - -### Create a Secret - -Create a Secret CR for your Node.js or Python Functions. You can also create one combined Secret CR for both runtimes. - - - -#### **Node.js** - -1. 
Export these variables: - - ```bash - export REGISTRY={ADDRESS_TO_REGISTRY} - export TOKEN={TOKEN_TO_REGISTRY} - export NAMESPACE={FUNCTION_NAMESPACE} - ``` - -2. Create a Secret: - - ```bash - cat < - -### Test the Package Registry Switch - -[Create a Function](01-10-create-inline-function.md) with dependencies from the external registry. Check if your Function was created and all conditions are set to `True`: - -```bash -kubectl get functions -n $NAMESPACE -``` - -You should get a result similar to this example: - -```bash -NAME CONFIGURED BUILT RUNNING RUNTIME VERSION AGE -test-function True True True nodejs20 1 96s -``` - -> [!WARNING] -> If you want to create a cluster-wide Secret, you must create it in the `kyma-system` namespace and add the `serverless.kyma-project.io/config: credentials` label. diff --git a/docs/user/tutorials/01-90-set-asynchronous-connection.md b/docs/user/tutorials/01-90-set-asynchronous-connection.md deleted file mode 100644 index 840d2712..00000000 --- a/docs/user/tutorials/01-90-set-asynchronous-connection.md +++ /dev/null @@ -1,146 +0,0 @@ -# Set Asynchronous Communication Between Functions - -This tutorial demonstrates how to connect two Functions asynchronously. It is based on the [in-cluster Eventing example](https://github.com/kyma-project/serverless/tree/main/examples/incluster_eventing). - -The example provides a very simple scenario of asynchronous communication between two Functions. The first Function accepts the incoming traffic via HTTP, sanitizes the payload, and publishes the content as an in-cluster event using [Kyma Eventing](https://kyma-project.io/docs/kyma/latest/01-overview/eventing/). -The second Function is a message receiver. It subscribes to the given event type and stores the payload. - -This tutorial shows only one possible use case. 
There are many more ways to orchestrate your application logic into specialized Functions and benefit from decoupled, reusable components and an event-driven architecture. - -## Prerequisites - -- [Kyma CLI](https://github.com/kyma-project/cli) -- [Eventing and Istio components installed](https://kyma-project.io/docs/kyma/latest/04-operation-guides/operations/02-install-kyma/#install-specific-components) - -## Steps - -1. Export the `KUBECONFIG` variable: - - ```bash - export KUBECONFIG={KUBECONFIG_PATH} - ``` - -2. Create the `emitter` and `receiver` folders in your project. - -### Create the Emitter Function - -1. Go to the `emitter` folder and run the Kyma CLI `init` command to initialize the scaffold for your first Function: - - ```bash - kyma init function - ``` - - The `init` command creates these files in your workspace folder: - - - `config.yaml` with the Function's configuration - - > [!NOTE] - > See the detailed description of all fields available in the [`config.yaml` file](../technical-reference/07-60-function-configuration-file.md). - - - `handler.js` with the Function's code and the simple "Hello Serverless" logic - - - `package.json` with the Function's dependencies - -2. In the `config.yaml` file, configure an APIRule to expose your Function to the incoming traffic over HTTP. Provide the subdomain name in the `host` property: - - ```yaml - apiRules: - - name: incoming-http-trigger - service: - host: incoming - rules: - - methods: - - GET - accessStrategies: - - handler: allow - ``` - -3. Provide your Function logic in the `handler.js` file: - - > [!NOTE] - > In this example, there's no sanitization logic. The `sanitize` Function is just a placeholder. 
- - ```js - const axios = require('axios'); // not used directly; needed for tracing auto-instrumentation of outgoing requests - - module.exports = { - main: async function (event, context) { - let sanitisedData = sanitise(event.data) - - const eventType = "sap.kyma.custom.acme.payload.sanitised.v1"; - const eventSource = "kyma"; - - return await event.emitCloudEvent(eventType, eventSource, sanitisedData) - .then(resp => { - return "Event sent"; - }).catch(err => { - console.error(err) - return err; - }); - } - } - let sanitise = (data) => { - console.log(`sanitising data...`) - console.log(data) - return data - } - ``` - - The `sap.kyma.custom.acme.payload.sanitised.v1` is a sample event type that the emitter Function declares when publishing events. You can choose a different one that better suits your use case. Keep in mind the constraints described on the [Event names](https://kyma-project.io/docs/kyma/latest/05-technical-reference/evnt-01-event-names/) page. The receiver subscribes to the event type to consume the events. - - The event object provides convenience functions to build and publish events. To send the event, build the Cloud Event. To learn more, read [Function's specification](../technical-reference/07-70-function-specification.md#event-object-sdk). In addition, your **eventOut.source** key must point to `kyma` to use Kyma in-cluster Eventing. - There is a `require('axios')` line even though the Function code is not using it directly. This is needed for the auto-instrumentation to properly handle the outgoing requests sent using the `publishCloudEvent` method (which uses the `axios` library under the hood). Without the `axios` import the Function still works, but the published events are not reflected in the trace backend. - -4. Apply your emitter Function: - - ```bash - kyma apply function - ``` - - Your Function is now built and deployed in Kyma runtime. Kyma exposes it through the APIRule. The incoming payloads are processed by your emitter Function. It then sends the sanitized content to the workload that subscribes to the selected event type. 
In our case, it's the receiver Function. - -5. Test the first Function. Send the payload and see if your HTTP traffic is accepted: - - ```bash - export KYMA_DOMAIN={KYMA_DOMAIN_VARIABLE} - - curl -X POST https://incoming.${KYMA_DOMAIN} -H 'Content-Type: application/json' -d '{"foo":"bar"}' - ``` - -### Create the Receiver Function - -1. Go to your `receiver` folder and run the Kyma CLI `init` command to initialize the scaffold for your second Function: - - ```bash - kyma init function - ``` - - The `init` command creates the same files as in the `emitter` folder. - -2. In the `config.yaml` file, configure the event types your Function subscribes to: - - ```yaml - name: event-receiver - namespace: default - runtime: nodejs20 - source: - sourceType: inline - subscriptions: - - name: event-receiver - typeMatching: exact - source: "" - types: - - sap.kyma.custom.acme.payload.sanitised.v1 - schemaVersion: v1 - ``` - -3. Apply your receiver Function: - - ```bash - kyma apply function - ``` - - The Function is configured, built, and deployed in Kyma runtime. The Subscription becomes active and all events with the selected type are processed by the Function. - -### Test the Whole Setup - -Send a payload to the first Function. For example, use the POST request mentioned above. As the Functions are connected by in-cluster Eventing, the payload is processed in sequence by both of your Functions. -In the Functions' logs, you can see that both the sanitization logic (in the first Function) and the storing logic (in the second Function) are executed. diff --git a/docs/user/tutorials/README.md b/docs/user/tutorials/README.md deleted file mode 100644 index 9edc9149..00000000 --- a/docs/user/tutorials/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Tutorials - -This section helps you understand how Serverless Functions work and how to use them in different scenarios. You can also learn how to set and switch a Docker registry.