Merge pull request #298 from splunk/conf24
Conf24 merge
hagen-p authored Jun 4, 2024
2 parents d52fce7 + 1429ef3 commit 3992ed6
Showing 14 changed files with 297 additions and 25 deletions.
3 changes: 2 additions & 1 deletion content/en/conf24/1-zero-config-k8s/2-preparation/1-otel.md
@@ -1,5 +1,6 @@
---
-title: Deploy Splunk OpenTelemetry Collector
+title: Deploy the Splunk OpenTelemetry Collector
linkTitle: 1. Deploy OpenTelemetry Collector
weight: 2
---
@@ -7,7 +7,7 @@ time: 10 minutes

Once the installation has been completed, you can log in to **Splunk Observability Cloud** and verify that the metrics are flowing in from your Kubernetes cluster.

-From the left-hand menu click on **Infrastructure** ![infra](../images/infra-icon.png?classes=inline&height=25px) and select **Kubernetes**, then select the **K8s nodes** pane. Once you are in the **K8s nodes** view, change the **Time** filter from **-4h** to the last 15 minutes **(-15m)** to focus on the latest data.
+From the left-hand menu click on **Infrastructure** ![infra](../images/infra-icon.png?classes=inline&height=25px) and select **Kubernetes**, then select the **Kubernetes nodes** pane. Once you are in the **Kubernetes nodes** view, change the **Time** filter from **-4h** to the last 15 minutes **(-15m)** to focus on the latest data.

Next, click **Add filters** (next to the **Time filter**) and add the filter `k8s.cluster.name` **(1)**. Type or select the cluster name of your workshop instance (you can get the unique part of your cluster name from the `INSTANCE` value in the output of the shell script you ran earlier). You can also select your cluster by clicking on its image in the cluster pane. Only your cluster will now be visible **(2)**.
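If your cluster does not show up, it can help to confirm from the shell that the collector pods are actually running before troubleshooting further (a minimal sanity check; it assumes the collector was installed in the current namespace, as in this workshop):

```bash
# All splunk-otel-collector pods should report a Running status
kubectl get pods | grep splunk-otel-collector
```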

@@ -76,4 +76,6 @@ Navigate back to the Kubernetes Navigator in **Splunk Observability Cloud**. Aft

![restart](../../images/k8s-navigator-restarted-pods.png)

-Wait for the Pods to turn green in the Kubernetes Navigator, then go to **APM** ![APM](../../images/apm-icon.png?classes=inline&height=25px) to see the data generated by the traces from the newly instrumented services.
+Wait for the Pods to turn green in the Kubernetes Navigator, then go to the next section.
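If you prefer to watch the Pods from the shell instead of the Navigator, `kubectl` can do the same job (press Ctrl+C to stop the watch):

```bash
# Watch the workshop Pods until the restarted services reach Running and READY 1/1
kubectl get pods -w
```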


4 changes: 2 additions & 2 deletions content/en/conf24/1-zero-config-k8s/4-apm/2-apm-data.md
@@ -4,10 +4,10 @@ linkTitle: 2. Viewing APM Data
weight: 2
---

-Change the **Environment** filter **(1)** to the name of your workshop instance in the dropdown box (this will be **`<INSTANCE>-workshop`** where **`INSTANCE`** is the value from the shell script you ran earlier) and make sure it is the only one selected.
+Log in to Splunk Observability Cloud and, from the left-hand menu, click on **APM** ![APM](../../images/apm-icon.png?classes=inline&height=25px) to see the data generated by the traces from the newly instrumented services. Change the **Environment** filter **(1)** to the name of your workshop instance in the dropdown box (this will be **`<INSTANCE>-workshop`** where **`INSTANCE`** is the value from the shell script you ran earlier) and make sure it is the only one selected.

![apm](../../images/zero-config-first-services-overview.png)

You will see the name **(2)** of the **api-gateway** service and metrics in the Latency and Request & Errors charts (you can ignore the Critical Alert, as it is caused by the sudden request increase generated by the load generator). You will also see the rest of the services appear.

-We will visit the **Service Map** **(3)** in the next section.
+Once you see the Customers, Vets, and Visits services as shown in the screenshot above, click on the **Service Map** **(3)** pane to get ready for the next section.
4 changes: 2 additions & 2 deletions content/en/conf24/1-zero-config-k8s/5-traces/1-service-map.md
@@ -6,13 +6,13 @@ weight: 1

![apm map](../../images/zero-config-first-services-map.png)

-The above shows all the interactions between all of the services. The map may still be in an interim state as it will take the Petclinic Microservice application a few minutes to start up and fully synchronize. Reducing the time filter to a custom time of **2 minutes** will help. The initial startup-related errors (red dots) will eventually disappear.
+The above map shows all the interactions between all of the services. The map may still be in an interim state, as it will take the Petclinic microservices application a few minutes to start up and fully synchronize. Reducing the time filter to a custom time of **2 minutes** will help. The initial startup-related errors (red dots) will eventually disappear.

Next, let's examine the metrics that are available for each instrumented service and visit the request, error, and duration (RED) metrics dashboard.

For this exercise, we are going to follow a common scenario you would use if a service operation were showing high latency or errors, for example.

-Select the **Customer Service** in the Dependency map **(1)**, then make sure the `customers-service` is selected in the **Services** dropdown box **(2)**. Next, select `GET /Owners` from the Operations dropdown **(3)**.
+Select (click on) the **Customer Service** in the Dependency map **(1)**, then make sure the `customers-service` is selected in the **Services** dropdown box **(2)**. Next, select `GET /Owners` from the Operations dropdown **(3)**.

This should give you the workflow with a filter on `GET /owners` **(1)** as shown below.

4 changes: 2 additions & 2 deletions content/en/conf24/1-zero-config-k8s/5-traces/4-red-metrics.md
@@ -8,6 +8,6 @@ Splunk APM provides **Service Centric Views** that provide engineers a deep under

To see this dashboard for the `api-gateway`, make sure you have the `api-gateway` service selected in the Service Map, then click on the **View Service** button at the top of the right-hand pane. This will bring you to the Service Centric View dashboard:

-This view, which is available for each of your instrumented services, offers an overview of **Service metrics**, **Runtime metrics** and **Infrastruture metrics**.
+This view, which is available for each of your instrumented services, offers an overview of **Service metrics**, **Runtime metrics** and **Infrastructure metrics**.

![metrics dashboard](../../images/service-centric-view.png)
You can use the **Back** function of your browser to go back to the previous view.
@@ -22,8 +22,9 @@ This will bring you to the Always-on Profiling main screen, with the Memory view
* Java Function calls identified **(3)**, allowing you to drill down into the Methods called from that function.
* The Flame Graph **(4)**, with the visualization of the hierarchy based on the stack traces of the profiled service.

-Once you have identified the relevant Function or Method you are interested in, `com.mysql.cj.protocol.a.NativePacketPayload.readBytes` in our example but yours may differ, so pick the top one **(1)** and find it at the e bottom of the Flame Graph **(2)**. Click on it in the Flame Graph, it will show a pane as shown in the image below, where you can see the Thread information **(3)** by clicking on the blue *Show Thread Info* link. If you click on the *Copy Stack Trace* **(4)** button, you grab the actual stack trace that you can use in your coding platform to go to the actual lines of code used at this point (depending of course on your preferred Coding platform)
+For further investigation, the UI lets you grab the actual stack trace, so you can use it in your coding platform to go to the actual lines of code used at this point (depending, of course, on your preferred coding platform).
<!-- Once you have identified the relevant Function or Method you are interested in, `com.mysql.cj.protocol.a.NativePacketPayload.readBytes` in our example but yours may differ, so pick the top one **(1)** and find it at the bottom of the Flame Graph **(2)**. Click on it in the Flame Graph; it will show a pane as shown in the image below, where you can see the Thread information **(3)** by clicking on the blue *Show Thread Info* link. If you click on the *Copy Stack Trace* **(4)** button, you grab the actual stack trace that you can use in your coding platform to go to the actual lines of code used at this point (depending, of course, on your preferred coding platform).
![stack trace](../../images/grab-stack-trace.png)
For more details on Profiling, check the **Debug Problems workshop**, or check the documents [here](https://docs.splunk.com/observability/en/apm/profiling/intro-profiling.html#introduction-to-alwayson-profiling-for-splunk-apm) -->
@@ -67,6 +67,37 @@ Script execution completed.

We can verify that the replacement was successful by examining the `logback-spring.xml` file from one of the services:

{{< tabs >}}
{{% tab title="cat logback-spring.xml" %}}

``` bash
cat /home/splunk/spring-petclinic-microservices/spring-petclinic-customers-service/src/main/resources/logback-spring.xml
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>
                logback: %d{HH:mm:ss.SSS} [%thread] severity=%-5level %logger{36} - trace_id=%X{trace_id} span_id=%X{span_id} service.name=%property{otel.resource.service.name} trace_flags=%X{trace_flags} - %msg %kvp{DOUBLE}%n
            </pattern>
        </encoder>
    </appender>
    <appender name="OpenTelemetry"
              class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender">
        <captureExperimentalAttributes>true</captureExperimentalAttributes>
        <captureKeyValuePairAttributes>true</captureKeyValuePairAttributes>
    </appender>
    <root level="INFO">
        <appender-ref ref="console"/>
        <appender-ref ref="OpenTelemetry"/>
    </root>
</configuration>
```

{{% /tab %}}
{{< /tabs >}}
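
To check all seven services in one go rather than examining each file by hand, a short loop works as well (a sketch assuming the directory layout and service names used above):

```bash
# Verify every service's logback-spring.xml now carries the trace_id pattern
for svc in admin-server api-gateway config-server discovery-server \
           customers-service vets-service visits-service; do
  f=~/spring-petclinic-microservices/spring-petclinic-$svc/src/main/resources/logback-spring.xml
  grep -q 'trace_id' "$f" && echo "OK: $svc" || echo "MISSING: $svc"
done
```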
@@ -4,12 +4,31 @@ linkTitle: 2. Rebuild PetClinic
weight: 2
---

-Before we can build the new services with the updated log format we need to add the Opentelemetry dependency that handles field injection to the `pom.xml` of our services:
+Before we can build the new services with the updated log format, we need to add the OpenTelemetry dependency that handles field injection to the `pom.xml` of our services:
{{< tabs >}}
{{% tab title="Adding OTel dependencies" %}}

```bash
. ~/workshop/petclinic/scripts/add_otel.sh
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
Dependencies added successfully in spring-petclinic-admin-server
Dependencies added successfully in spring-petclinic-api-gateway
Dependencies added successfully in spring-petclinic-config-server
Dependencies added successfully in spring-petclinic-discovery-server
Dependencies added successfully in spring-petclinic-customers-service
Dependencies added successfully in spring-petclinic-vets-service
Dependencies added successfully in spring-petclinic-visits-service
Dependency addition complete!
```

{{% /tab %}}
{{< /tabs >}}
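
If you are curious about what the script changed, a quick grep of one of the `pom.xml` files will show the newly added entries (a sketch; any of the seven services listed above will do):

```bash
# The OpenTelemetry dependency entries should now be present in the POM
grep -i 'opentelemetry' \
  ~/spring-petclinic-microservices/spring-petclinic-customers-service/pom.xml
```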

The services are now ready to be built, so run the script that will use the `maven` command to compile, build, and package the PetClinic microservices:
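
Under the hood the build boils down to the Maven wrapper invocation below, run for each service (a sketch; the flags mirror the single-service build used later in the RUM section):

```bash
# Compile and package all services, skipping the test phase to save time,
# and build local Docker images via the buildDocker profile
cd ~/spring-petclinic-microservices
./mvnw clean install -D skipTests -P buildDocker
```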

{{% notice note %}}
115 changes: 107 additions & 8 deletions content/en/conf24/1-zero-config-k8s/7-log-observer-connect/3-deploy.md
@@ -6,22 +6,99 @@ weight: 3

To see the changes in effect, we need to redeploy the services. First, let's change the location of the images from the external repo to the local one by running the following script:

{{< tabs >}}
{{% tab title="Change deployment to local containers" %}}

```bash
. ~/workshop/petclinic/scripts/set_local.sh
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
Script execution completed. Modified content saved to /home/splunk/workshop/petclinic/petclinic-local.yaml
```

{{% /tab %}}
{{< /tabs >}}
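
The script's effect can be approximated with a single `sed` substitution over the deployment YAML (a hypothetical sketch; the original image prefix is an assumption, while the `localhost:9999` target registry and the output file path match the output above and later sections):

```bash
# Point every spring-petclinic image reference at the local registry
# (illustrative only; the external registry prefix being replaced may differ)
sed 's|image: .*/spring-petclinic-|image: localhost:9999/spring-petclinic-|' \
  ~/workshop/petclinic/petclinic-deploy.yaml \
  > ~/workshop/petclinic/petclinic-local.yaml
```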

The result is a new file on disk called `petclinic-local.yaml`. Switch to the local versions by using this new deployment YAML. First, delete the old containers from the original deployment:

{{< tabs >}}
{{% tab title="Deleting remote Petclinic services" %}}

```bash
kubectl delete -f ~/workshop/petclinic/petclinic-deploy.yaml
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
deployment.apps "config-server" deleted
service "config-server" deleted
deployment.apps "discovery-server" deleted
service "discovery-server" deleted
deployment.apps "api-gateway" deleted
service "api-gateway" deleted
service "api-gateway-external" deleted
deployment.apps "customers-service" deleted
service "customers-service" deleted
deployment.apps "vets-service" deleted
service "vets-service" deleted
deployment.apps "visits-service" deleted
service "visits-service" deleted
deployment.apps "admin-server" deleted
service "admin-server" deleted
service "petclinic-db" deleted
deployment.apps "petclinic-db" deleted
configmap "petclinic-db-initdb-config" deleted
deployment.apps "petclinic-loadgen-deployment" deleted
configmap "scriptfile" deleted
```

{{% /tab %}}
{{< /tabs >}}

followed by:

{{< tabs >}}
{{% tab title="Starting local Petclinic services" %}}

```bash
kubectl apply -f ~/workshop/petclinic/petclinic-local.yaml
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
deployment.apps/config-server created
service/config-server created
deployment.apps/discovery-server created
service/discovery-server created
deployment.apps/api-gateway created
service/api-gateway created
service/api-gateway-external created
deployment.apps/customers-service created
service/customers-service created
deployment.apps/vets-service created
service/vets-service created
deployment.apps/visits-service created
service/visits-service created
deployment.apps/admin-server created
service/admin-server created
service/petclinic-db created
deployment.apps/petclinic-db created
configmap/petclinic-db-initdb-config created
deployment.apps/petclinic-loadgen-deployment created
configmap/scriptfile created
```

{{% /tab %}}
{{< /tabs >}}

This will cause the containers to be replaced with the local versions. You can verify this by checking the containers:
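
You can check which image each pod is running with the same `grep Image:` pattern used further down (local images carry the `localhost:9999/...` prefix after the redeploy):

```bash
# List the images every pod is running
kubectl describe pods | grep Image:
```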

@@ -67,25 +144,47 @@ deployment.apps/api-gateway patched

Check the `api-gateway` container (again, if you see two `api-gateway` containers, it's the old container being terminated, so give it a few seconds):

{{< tabs >}}
{{% tab title="Check Container" %}}

```bash
kubectl describe pods api-gateway | grep Image:
```

{{% /tab %}}
{{% tab title="Output" %}}
The resulting output will show the local `api-gateway` image from `localhost:9999` along with the auto-instrumentation container:

```text
Image: ghcr.io/signalfx/splunk-otel-java/splunk-otel-java:v1.32.1
Image: localhost:9999/spring-petclinic-api-gateway:local
```

{{% /tab %}}
{{< /tabs >}}

Now that the Pods have been patched, validate that they are all running by executing the following command:

{{< tabs >}}
{{% tab title="Checking if all Pods are running" %}}

```bash
kubectl get pods
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
NAME                                                           READY   STATUS    RESTARTS   AGE
splunk-otel-collector-certmanager-cainjector-cd8459647-d42ls   1/1     Running   0          22h
splunk-otel-collector-certmanager-85cbb786b6-xgjgb             1/1     Running   0          22h
splunk-otel-collector-certmanager-webhook-75d888f9f7-477x4     1/1     Running   0          22h
splunk-otel-collector-agent-nmmkm                              1/1     Running   0          22h
splunk-otel-collector-k8s-cluster-receiver-7f96c94fd9-fv4p8    1/1     Running   0          22h
splunk-otel-collector-operator-6b56bc9d79-r8p7w                2/2     Running   0          22h
petclinic-loadgen-deployment-765b96d4b9-gm8fp                  1/1     Running   0          21h
petclinic-db-774dbbf969-2q6md                                  1/1     Running   0          21h
config-server-5784c9fbb4-9pdc8                                 1/1     Running   0          21h
admin-server-849d877b6-pncr2                                   1/1     Running   0          21h
discovery-server-6d856d978b-7x69f                              1/1     Running   0          21h
visits-service-c7cd56876-grfn7                                 1/1     Running   0          21h
customers-service-6c57cb68fd-hx68n                             1/1     Running   0          21h
vets-service-688fd4cb47-z42t5                                  1/1     Running   0          21h
api-gateway-59f4c7fbd6-prx5f                                   1/1     Running   0          20h

{{% /tab %}}
{{< /tabs >}}
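
Instead of re-running `kubectl get pods` until everything settles, you can also block until all Pods report Ready (a sketch using the standard `kubectl wait` subcommand; adjust the timeout to taste):

```bash
# Wait up to five minutes for every pod in the namespace to become Ready
kubectl wait --for=condition=Ready pods --all --timeout=300s
```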
63 changes: 61 additions & 2 deletions content/en/conf24/1-zero-config-k8s/8-rum/1-rebuild-app.md
@@ -44,32 +44,91 @@ cat ~/spring-petclinic-microservices/spring-petclinic-api-gateway/src/main/resou
RUM_AUTH: '[redacted]',
RUM_APP_NAME: 'k8s-petclinic-workshop-store',
RUM_ENVIRONMENT: 'k8s-petclinic-workshop-workshop'
}
```

{{% /tab %}}
{{< /tabs >}}

Change into the `api-gateway` directory and force a new build for just the `api-gateway` service:

{{< tabs >}}
{{% tab title="Building api-gateway" %}}

``` bash
cd ~/spring-petclinic-microservices/spring-petclinic-api-gateway
../mvnw clean install -D skipTests -P buildDocker
```

{{% /tab %}}
{{% tab title=" Output" %}}

```text
Successfully built 2d409c1eeccc
Successfully tagged localhost:9999/spring-petclinic-api-gateway:local
[INFO] Built localhost:9999/spring-petclinic-api-gateway:local
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 26.250 s
[INFO] Finished at: 2024-05-31T15:51:20Z
[INFO] ------------------------------------------------------------------------
```

{{% /tab %}}
{{< /tabs >}}

Now push the new container to the local registry; the unchanged services are skipped:

{{< tabs >}}
{{% tab title="Updating docker repo with RUM api-gateway" %}}

``` bash
. ~/workshop/petclinic/scripts/push_docker.sh
```

{{% /tab %}}
{{% tab title="Output" %}}

```text
The push refers to repository [localhost:9999/spring-petclinic-api-gateway]
9a7b16677cf9: Pushed
f2e09ed98998: Layer already exists
291752eeb66b: Layer already exists
ac28fe526c24: Layer already exists
0a37fe4a02de: Layer already exists
4b1e7b998de9: Layer already exists
a2a8ef39e636: Layer already exists
86cb6a9eb3cd: Layer already exists
985fdc63de98: Layer already exists
4ab2850febd7: Layer already exists
2db7720a8970: Layer already exists
629ca62fb7c7: Layer already exists
```

{{% /tab %}}
{{< /tabs >}}
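
For reference, the core of the push for this service amounts to a single `docker push` of the locally tagged image (a sketch; only the `api-gateway` tag is confirmed by the output above):

```bash
# Push the rebuilt api-gateway image into the workshop's local registry
docker push localhost:9999/spring-petclinic-api-gateway:local
```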

As soon as the container is pushed into the repository, just restart the `api-gateway` to apply the changes:

{{< tabs >}}
{{% tab title="Rollout restart api-gateway" %}}

``` bash
kubectl rollout restart deployment api-gateway
```

{{% /tab %}}
{{% tab title=" Output" %}}

```text
deployment.apps/api-gateway restarted
```

{{% /tab %}}
{{< /tabs >}}
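
To be sure the restart has completed before testing, you can follow the rollout until the new Pod is live (a standard `kubectl` pattern):

```bash
# Block until the restarted api-gateway deployment is fully rolled out
kubectl rollout status deployment api-gateway
```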

Validate that the application is running by visiting **http://<IP_ADDRESS>:81** (replace **<IP_ADDRESS>** with the IP address you obtained above). Make sure the application is working correctly by visiting **All Owners** **(1)**, selecting an owner, and then adding a **visit** **(2)**. We will use this action when checking RUM.

![pet](../../images/petclinic-pet.png)
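
A quick reachability check from the shell confirms the gateway is serving traffic before you start clicking around (a sketch; replace **<IP_ADDRESS>** as above):

```bash
# Expect an HTTP 200 status code once the api-gateway is serving the app
curl -s -o /dev/null -w "%{http_code}\n" http://<IP_ADDRESS>:81
```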