From 0d34a852fbeba99fc3389f0e82a65e701bb06aed Mon Sep 17 00:00:00 2001
From: Shlomo Swidler
Date: Thu, 14 Dec 2023 04:57:17 +0900
Subject: [PATCH] Update index.md - minor typos

---
 content/en/docs/demo/scenarios/recommendation-cache/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/en/docs/demo/scenarios/recommendation-cache/index.md b/content/en/docs/demo/scenarios/recommendation-cache/index.md
index 05a7f5108783..881a19916d38 100644
--- a/content/en/docs/demo/scenarios/recommendation-cache/index.md
+++ b/content/en/docs/demo/scenarios/recommendation-cache/index.md
@@ -45,7 +45,7 @@ our p95, 99, and 99.9 histograms.
 We can also see that there are intermittent spikes in the memory utilization of
 this service. We know that we're emitting trace data from our application as
 well, so let's
-think about another way that we'd be able to determine that a problem exist.
+think about another way that we'd be able to determine that a problem exists.
 
 ![Jaeger](jaeger.png)
 
@@ -53,7 +53,7 @@ Jaeger allows us to search for traces and display the end-to-end latency of an
 entire request with visibility into each individual part of the overall request.
 Perhaps we noticed an increase in tail latency on our frontend requests. Jaeger
 allows us to then search and filter our traces to include only those that
-include requests to recommendation service.
+include requests to the recommendation service.
 
 By sorting by latency, we're able to quickly find specific traces that took a
 long time. Clicking on a trace in the right panel, we're able to view the