Trade Zoo


A simple trading application that runs in the public cloud but keeps its data in a private Kafka cluster

This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.


Overview

This example is a simple Kafka application that shows how you can use Skupper to access a Kafka cluster at a remote site without exposing it to the public internet.

It contains four services:

  • A Kafka cluster running in a private data center. The cluster has two topics, "orders" and "updates".

  • An order processor running in the public cloud. It consumes from "orders", matching buy and sell offers to make trades. It publishes new and updated orders and trades to "updates".

  • A market data service running in the public cloud. It looks at the completed trades and computes the latest and average prices, which it then publishes to "updates".

  • A web frontend service running in the public cloud. It submits buy and sell orders to "orders" and consumes from "updates" in order to show what's happening.

To set up the Kafka cluster, this example uses the Kubernetes operator from the Strimzi project. The other services are small Python programs.

The example uses two Kubernetes namespaces, "private" and "public", to represent the private data center and public cloud.

Prerequisites

To run this example, you need:

  • The kubectl command-line tool

  • Access to at least one Kubernetes cluster, from any provider you choose

Step 1: Install the Skupper command-line tool

This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment.

On Linux or Mac, you can use the install script to download and extract the command (you may wish to inspect the script before running it):

curl https://skupper.io/install.sh | sh

The script installs the command under your home directory. It prompts you to add the command to your path if necessary.

For Windows and other installation options, see Installing Skupper.
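To confirm that the command is installed and on your path, you can check its version (the exact output varies by release):

skupper version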

Step 2: Set up your namespaces

Skupper is designed for use with multiple Kubernetes namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate.

Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it.

A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs.

For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context.

Note: The login procedure varies by provider. See your provider's documentation for the details.

Public:

export KUBECONFIG=~/.kube/config-public
# Enter your provider-specific login command
kubectl create namespace public
kubectl config set-context --current --namespace public

Private:

export KUBECONFIG=~/.kube/config-private
# Enter your provider-specific login command
kubectl create namespace private
kubectl config set-context --current --namespace private
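To confirm that each terminal is pointing at the intended namespace, one way is to check the namespace set on the current context (run this in each terminal):

kubectl config view --minify | grep namespace: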

Step 3: Deploy the Kafka cluster

In Private, use the kubectl create and kubectl apply commands with the listed YAML files to install the operator and deploy the cluster and topic.

Private:

kubectl create -f kafka-cluster/strimzi.yaml
kubectl apply -f kafka-cluster/cluster1.yaml
kubectl wait --for condition=ready --timeout 900s kafka/cluster1
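While you wait for the cluster to become ready, you can watch the Strimzi operator and Kafka pods come up in Private (the exact pod names depend on the Strimzi release):

kubectl get pods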

Note:

By default, the Kafka bootstrap server returns broker addresses that include the Kubernetes namespace in their domain name. When the Kafka client runs in a namespace with a different name from that of the Kafka cluster, as it does in this example, the client cannot resolve the broker addresses.

To make the Kafka brokers reachable, set the advertisedHost property of each broker to a domain name that the Kafka client can resolve at the remote site. In this example, this is achieved with the following listener configuration:

spec:
  kafka:
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
        configuration:
          brokers:
            - broker: 0
              advertisedHost: cluster1-kafka-0.cluster1-kafka-brokers

See Advertised addresses for brokers for more information.
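To confirm that the listener configuration was applied, you can inspect the Kafka resource once it is ready. Strimzi reports the listener addresses in the resource status, though the exact fields depend on the Strimzi version:

kubectl get kafka/cluster1 -o yaml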

Step 4: Deploy the application services

In Public, use the kubectl apply command with the listed YAML files to install the application services.

Public:

kubectl apply -f order-processor/kubernetes.yaml
kubectl apply -f market-data/kubernetes.yaml
kubectl apply -f frontend/kubernetes.yaml
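You can check that the services are running with kubectl get pods. Keep in mind that the order processor and market data service cannot reach the Kafka cluster until it is exposed on the Skupper network in Step 7, so they may not connect successfully yet:

kubectl get pods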

Step 5: Create your sites

A Skupper site is a location where components of your application are running. Sites are linked together to form a network for your application. In Kubernetes, a site is associated with a namespace.

For each namespace, use skupper init to create a site. This deploys the Skupper router and controller. Then use skupper status to see the outcome.

Note: If you are using Minikube, you need to start minikube tunnel before you run skupper init.

Public:

skupper init
skupper status

Sample output:

$ skupper init
Waiting for LoadBalancer IP or hostname...
Waiting for status...
Skupper is now installed in namespace 'public'.  Use 'skupper status' to get more information.

$ skupper status
Skupper is enabled for namespace "public". It is not connected to any other sites. It has no exposed services.

Private:

skupper init
skupper status

Sample output:

$ skupper init
Waiting for LoadBalancer IP or hostname...
Waiting for status...
Skupper is now installed in namespace 'private'.  Use 'skupper status' to get more information.

$ skupper status
Skupper is enabled for namespace "private". It is not connected to any other sites. It has no exposed services.

As you move through the steps below, you can use skupper status at any time to check your progress.

Step 6: Link your sites

A Skupper link is a channel for communication between two sites. Links serve as a transport for application connections and requests.

Creating a link requires two skupper commands used in conjunction: skupper token create and skupper link create.

The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote site, the skupper link create command uses the token to create a link to the site that generated it.

Note: The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it.

First, use skupper token create in site Public to generate the token. Then, use skupper link create in site Private to link the sites.

Public:

skupper token create ~/secret.token

Sample output:

$ skupper token create ~/secret.token
Token written to ~/secret.token

Private:

skupper link create ~/secret.token

Sample output:

$ skupper link create ~/secret.token
Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1)
Check the status of the link using 'skupper link status'.

If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation.
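As the sample output suggests, you can check that the link becomes active by running skupper link status in Private:

skupper link status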

Step 7: Expose the Kafka cluster

In Private, use skupper expose with the --headless option to expose the Kafka cluster as a headless service on the Skupper network.

Then, in Public, use kubectl get service to check that the cluster1-kafka-brokers service appears after a moment.

Private:

skupper expose statefulset/cluster1-kafka --headless --port 9092

Public:

kubectl get service/cluster1-kafka-brokers
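Once the service appears in Public, the application services can reach the brokers. If you want to verify this, you can check the logs of one of the services. The deployment name below is an assumption based on the YAML file paths in this example and may differ:

# "order-processor" is assumed from order-processor/kubernetes.yaml
kubectl logs deployment/order-processor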

Step 8: Access the frontend

In order to use and test the application, we need external access to the frontend.

Use kubectl expose with --type LoadBalancer to open network access to the frontend service.

Once the frontend is exposed, use kubectl get service/frontend to look up the external IP of the frontend service. If the external IP is <pending>, try again after a moment.

Once you have the external IP, use curl or a similar tool to request the /api/health endpoint at that address.

Note: The <external-ip> field in the following commands is a placeholder. The actual value is an IP address.

Public:

kubectl expose deployment/frontend --port 8080 --type LoadBalancer
kubectl get service/frontend
curl http://<external-ip>:8080/api/health

Sample output:

$ kubectl expose deployment/frontend --port 8080 --type LoadBalancer
service/frontend exposed

$ kubectl get service/frontend
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
frontend   LoadBalancer   10.103.232.28   <external-ip>   8080:30407/TCP   15s

$ curl http://<external-ip>:8080/api/health
OK

If everything is in order, you can now access the web interface by navigating to http://<external-ip>:8080/ in your browser.

Cleaning up

To remove Skupper and the other resources from this exercise, use the following commands.

Private:

skupper delete
kubectl delete -f kafka-cluster/cluster1.yaml
kubectl delete -f kafka-cluster/strimzi.yaml

Public:

skupper delete
kubectl delete -f frontend/kubernetes.yaml
kubectl delete -f market-data/kubernetes.yaml
kubectl delete -f order-processor/kubernetes.yaml

Next steps

Check out the other examples on the Skupper website.

About this example

This example was produced using Skewer, a library for documenting and testing Skupper examples.

Skewer provides utility functions for generating the README and running the example steps. Use the ./plano command in the project root to see what is available.

To quickly stand up the example using Minikube, try the ./plano demo command.