diff --git a/_benchmark/user-guide/concepts.md b/_benchmark/user-guide/concepts.md index b353538a4a..2ef8a67606 100644 --- a/_benchmark/user-guide/concepts.md +++ b/_benchmark/user-guide/concepts.md @@ -2,7 +2,9 @@ layout: default title: Concepts nav_order: 3 -parent: User guide +parent: User Guide +redirect_from: + - /benchmark/user-guide/concepts/concepts/ --- # Concepts @@ -11,7 +13,9 @@ Before using OpenSearch Benchmark, familiarize yourself with the following conce ## Core concepts and definitions -- **Workload**: The description of one or more benchmarking scenarios that use a specific document corpus to perform a benchmark against your cluster. The document corpus contains any indexes, data files, and operations invoked when the workflow runs. You can list the available workloads by using `opensearch-benchmark list workloads` or view any included workloads in the [OpenSearch Benchmark Workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads/). For more information about the elements of a workload, see [Anatomy of a workload]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/anatomy-of-a-workload/). For information about building a custom workload, see [Creating custom workloads]({{site.url}}{{site.baseurl}}/benchmark/creating-custom-workloads/). +- **Workload**: A collection of one or more benchmarking scenarios that use a specific document corpus to perform a benchmark against your cluster. The document corpus contains any indexes, data files, and operations invoked when the workload runs. You can list the available workloads by using `opensearch-benchmark list workloads` or view any included workloads in the [OpenSearch Benchmark Workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads/). For more information about the elements of a workload, see [Anatomy of a workload]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/anatomy-of-a-workload/). For information about building a custom workload, see [Creating custom workloads]({{site.url}}{{site.baseurl}}/benchmark/creating-custom-workloads/). A workload typically includes the following: + - One or more data streams that are ingested into indexes. + - A set of queries and operations that are invoked as part of the benchmark. - **Pipeline**: A series of steps occurring before and after a workload is run that determines benchmark results. OpenSearch Benchmark supports three pipelines: - `from-sources`: Builds and provisions OpenSearch, runs a benchmark, and then publishes the results. @@ -20,19 +24,21 @@ Before using OpenSearch Benchmark, familiarize yourself with the following conce - **Test**: A single invocation of the OpenSearch Benchmark binary. -A workload is a specification of one or more benchmarking scenarios. A workload typically includes the following: +## Test concepts -- One or more data streams that are ingested into indexes. -- A set of queries and operations that are invoked as part of the benchmark. 
+At the end of each test, OpenSearch Benchmark produces a table that summarizes the following: -## Throughput and latency + - [Processing time](#processing-time) + - [Took time](#took-time) + - [Service time](#service-time) + - [Latency](#latency) + - [Throughput](#throughput) -At the end of each test, OpenSearch Benchmark produces a table that summarizes the following: +The following diagram illustrates how each component of the table is measured during the lifecycle of a request involving the OpenSearch cluster, the OpenSearch client, and OpenSearch Benchmark. + + -- [Service time](#service-time) -- Throughput -- [Latency](#latency) -- The error rate for each completed task or OpenSearch operation. +### Differences between OpenSearch Benchmark and a traditional client-server system While the definition for _throughput_ remains consistent with other client-server systems, the definitions for `service time` and `latency` differ from most client-server systems in the context of OpenSearch Benchmark. The following table compares the OpenSearch Benchmark definition of service time and latency versus the common definitions for a client-server system. @@ -40,75 +46,36 @@ While the definition for _throughput_ remains consistent with other client-serve | :--- | :--- |:--- | | **Throughput** | The number of operations completed in a given period of time. | The number of operations completed in a given period of time. | | **Service time** | The amount of time that the server takes to process a request, from the point it receives the request to the point the response is returned.

It includes the time spent waiting in server-side queues but _excludes_ network latency, load balancer overhead, and deserialization/serialization. | The amount of time that it takes for `opensearch-py` to send a request and receive a response from the OpenSearch cluster.

It includes the amount of time that it takes for the server to process a request and also _includes_ network latency, load balancer overhead, and deserialization/serialization. | -| **Latency** | The total amount of time, including the service time and the amount of time that the request waited before responding. | Based on the `target-throughput` set by the user, the total amount of time that the request waited before receiving the response, in addition to any other delays that occured before the request is sent. | +| **Latency** | The total amount of time, including the service time and the amount of time that the request waits before responding. | Based on the `target-throughput` set by the user, the total amount of time that the request waits before receiving the response, in addition to any other delays that occur before the request is sent. | For more information about service time and latency in OpenSearch Benchmark, see the [Service time](#service-time) and [Latency](#latency) sections. -### Service time - -OpenSearch Benchmark does not have insight into how long OpenSearch takes to process a request, apart from extracting the `took` time for the request. In OpenSearch, **service time** tracks the amount of time between when OpenSearch issues a request and receives a response. - -OpenSearch Benchmark makes function calls to `opensearch-py` to communicate with an OpenSearch cluster. OpenSearch Benchmark tracks the amount of time between when the `opensearch-py` client sends a request and receives a response from the OpenSearch cluster and considers this to be the service time. Unlike the traditional definition of service time, the OpenSearch Benchmark definition of service time includes overhead, such as network latency, load balancer overhead, or deserialization/serialization. The following image highlights the differences between the traditional definition of service time and the OpenSearch Benchmark definition of service time. - - - -### Latency - -Target throughput is key to understanding the OpenSearch Benchmark definition of **latency**. Target throughput is the rate at which OpenSearch Benchmark issues requests, assuming that responses will be returned instantaneously. `target-throughput` is one of the common workload parameters that can be set for each test and is measured in operations per second. - -OpenSearch Benchmark always issues one request at a time for a single client thread, specified as `search-clients` in the workload parameters. If `target-throughput` is set to `0`, OpenSearch Benchmark issues a request immediately after it receives the response from the previous request. If the `target-throughput` is not set to `0`, OpenSearch Benchmark issues the next request to match the `target-throughput`, assuming that responses are returned instantaneously. +### Processing time -#### Example A +*Processing time* accounts for any extra overhead tasks that OpenSearch Benchmark performs during the lifecycle of a request, such as setting up a request context manager or calling a method to pass the request to the OpenSearch client. This is in contrast to *service time*, which only accounts for the difference between when a request is sent and when the OpenSearch client receives the response. -The following diagrams illustrate how latency is calculated with an expected request response time of 200ms and the following settings: +### Took time -- `search-clients` is set to `1`. -- `target-throughput` is set to `1` operation per second. 
+*Took time* measures the amount of time that the cluster spends processing a request on the server side. It does not include the time taken for the request to transit from the client to the cluster or for the response to transit from the cluster to the client. - - -When a request takes longer than 200ms, such as when a request takes 1110ms instead of 400ms, OpenSearch Benchmark sends the next request that was supposed to occur at 4.00s based on the `target-throughput` at 4.10s. All subsequent requests after the 4.10s request attempt to resynchronize with the `target-throughput` setting. - - - -When measuring the overall latency, OpenSearch Benchmark includes all performed requests. All requests have a latency of 200ms, except for the following two requests: - -- The request that lasted 1100ms. -- The subsquent request that was supposed to start at 4:00s. This request was delayed by 100ms, denoted by the orange area in the following diagram, and had a response time of 200ms. When calculating the latency for this request, OpenSearch Benchmark will account for the delayed start time and combine it with the response time. Thus, the latency for this request is **300ms**. - - - -#### Example B - -In this example, OpenSearch Benchmark assumes a latency of 200ms and uses the following latency settings: - -- `search_clients` is set to `1`. -- `target-throughput` is set to `10` operations per second. +### Service time -The following diagram shows the schedule built by OpenSearch Benchmark with the expected response times. - +OpenSearch Benchmark does not have insight into how long OpenSearch takes to process a request, apart from extracting the [took time](#took-time) for the request. It makes function calls to `opensearch-py` to communicate with an OpenSearch cluster. -However, if the assumption is that all responses will take 200ms, 10 operations per second won't be possible. Therefore, the highest throughput OpenSearch Benchmark can reach is 5 operations per second, as shown in the following diagram. +OpenSearch Benchmark measures *service time*, which is the amount of time between when the `opensearch-py` client sends a request to and receives a response from the OpenSearch cluster. Unlike the traditional definition of service time, the OpenSearch Benchmark definition includes overhead, such as network latency, load balancer overhead, or deserialization/serialization. The following image shows the differences between the traditional definition and the OpenSearch Benchmark definition. - + -OpenSearch Benchmark does not account for this and continues to try to achieve the `target-throughput` of 10 operations per second. Because of this, delays for each request begin to cascade, as illustrated in the following diagram. +### Latency - +*Latency* measures the total time that the request waits before receiving the response as well as any delays that occur prior to sending the request. In most circumstances, latency is measured in the same way as service time, unless you are testing in [throughput-throttled mode]({{site.url}}{{site.baseurl}}/benchmark/user-guide/target-throughput/). In this case, latency is measured as service time plus the time that the request spends waiting in the queue. 
-Combining the service time with the delay for each operation provides the following latency measurements for each operation:
-- 200 ms for operation 1
-- 300 ms for operation 2
-- 400 ms for operation 3
-- 500 ms for operation 4
-- 600 ms for operation 5
+### Throughput

-This latency cascade continues, increasing latency by 100ms for each subsequent request.
+*Throughput* measures the number of operations completed in a given period of time, for example, the number of requests completed per second. The rate at which OpenSearch Benchmark issues requests is controlled by the [`target-throughput`]({{site.url}}{{site.baseurl}}/benchmark/user-guide/target-throughput/) workload parameter.

-### Recommendation
-As shown by the preceding examples, you should be aware of the average service time of each task and provide a `target-throughput` that accounts for the service time. The OpenSearch Benchmark latency is calculated based on the `target-throughput` set by the user, that is, the latency could be redefined as "throughput-based latency."
diff --git a/_benchmark/user-guide/index.md b/_benchmark/user-guide/index.md
index 358285521d..a9d9ce963f 100644
--- a/_benchmark/user-guide/index.md
+++ b/_benchmark/user-guide/index.md
@@ -7,4 +7,4 @@ has_children: true
 
 # OpenSearch Benchmark User Guide
 
-The OpenSearch Benchmark User Guide includes core [concepts]({{site.url}}{{site.baseurl}}/benchmark/user-guide/concepts/), [installation]({{site.url}}{{site.baseurl}}/benchmark/installing-benchmark/) instructions, and [configuration options]({{site.url}}{{site.baseurl}}/benchmark/configuring-benchmark/) to help you get the most out of OpenSearch Benchmark.
\ No newline at end of file
+The OpenSearch Benchmark User Guide includes core [concepts]({{site.url}}{{site.baseurl}}/benchmark/user-guide/concepts/concepts/), [installation]({{site.url}}{{site.baseurl}}/benchmark/installing-benchmark/) instructions, and [configuration options]({{site.url}}{{site.baseurl}}/benchmark/configuring-benchmark/) to help you get the most out of OpenSearch Benchmark.
\ No newline at end of file
diff --git a/_benchmark/user-guide/target-throughput.md b/_benchmark/user-guide/target-throughput.md
new file mode 100644
index 0000000000..3554ef91a9
--- /dev/null
+++ b/_benchmark/user-guide/target-throughput.md
@@ -0,0 +1,81 @@
+---
+layout: default
+title: Target throughput
+nav_order: 150
+---
+
+# Target throughput
+
+Target throughput is key to understanding the OpenSearch Benchmark definition of *latency*. Target throughput is the rate at which OpenSearch Benchmark issues requests, assuming that responses will be returned instantaneously. `target-throughput` is a common workload parameter that can be set for each test and is measured in operations per second.
+
+OpenSearch Benchmark has two testing modes, both of which are related to throughput, latency, and service time:
+
+- [Benchmarking mode](#benchmarking-mode): Latency is measured in the same way as service time.
+- [Throughput-throttled mode](#throughput-throttled-mode): Latency is measured as service time plus the time that a request spends waiting in the queue.
+
+## Benchmarking mode
+
+When you do not specify a `target-throughput`, OpenSearch Benchmark runs latency tests in *benchmarking mode*. In this mode, the OpenSearch client sends requests to the OpenSearch cluster as fast as possible: as soon as the client receives the response to the previous request, OpenSearch Benchmark immediately sends the next request. In this testing mode, latency is identical to service time.
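+
+For example, a `schedule` entry similar to the following minimal sketch runs in benchmarking mode because no `target-throughput` is specified. The operation name and iteration counts are illustrative only; see [Anatomy of a workload]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/anatomy-of-a-workload/) for the full schedule structure:
+
+```json
+{
+  "operation": "match-all-query",
+  "clients": 1,
+  "warmup-iterations": 100,
+  "iterations": 1000
+}
+```
+
+Adding a field such as `"target-throughput": 10` to the same entry switches the test to throughput-throttled mode, which is described in the following section.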
+
+## Throughput-throttled mode
+
+When you set a `target-throughput`, OpenSearch Benchmark runs in *throughput-throttled mode*: instead of sending requests as fast as possible, it issues them at the specified rate, assuming that responses are returned instantaneously.
+
+OpenSearch Benchmark issues one request at a time per client thread; the number of client threads is specified by the `search-clients` workload parameter. If `target-throughput` is set to `0`, then OpenSearch Benchmark issues a request immediately after it receives the response to the previous request. If `target-throughput` is not set to `0`, then OpenSearch Benchmark issues the next request in accordance with the `target-throughput`, assuming that responses are returned instantaneously.
+
+To simulate the type of traffic that you might encounter when deploying a production cluster, set the `target-throughput` in your benchmark test to the number of requests per second that you estimate the production cluster will receive. The following examples show how the `target-throughput` setting affects the latency measurement.
+
+### Example A
+
+The following diagrams illustrate how latency is calculated with an expected request response time of 200 ms and the following settings:
+
+- `search-clients` is set to `1`.
+- `target-throughput` is set to `1` operation per second.
+
+
+
+When a request takes longer than 200 ms, such as when a request takes 1100 ms instead of the expected 200 ms, OpenSearch Benchmark sends the next request, which was supposed to occur at 4.00 s based on the `target-throughput`, at 4.10 s instead. All requests subsequent to the 4.10 s request attempt to resynchronize with the `target-throughput` setting, as shown in the following image:
+
+
+
+When measuring the overall latency, OpenSearch Benchmark includes all performed requests. All requests have a latency of 200 ms, except for the following two requests:
+
+- The request that lasted 1100 ms.
+- The subsequent request, which should have started at 4.00 s. This request was delayed by 100 ms, denoted by the orange-colored area in the following diagram, and had a response time of 200 ms. When calculating the latency for this request, OpenSearch Benchmark accounts for the delayed start time and combines it with the response time. The latency for this request is **300 ms**.
+
+
+
+### Example B
+
+In this example, OpenSearch Benchmark assumes an expected request response time of 200 ms and uses the following settings:
+
+- `search-clients` is set to `1`.
+- `target-throughput` is set to `10` operations per second.
+
+The following diagram shows the schedule built by OpenSearch Benchmark with the expected response times.
+
+
+
+However, if every response takes 200 ms, then a single client cannot complete 10 operations per second. Therefore, the highest throughput that OpenSearch Benchmark can reach is 5 operations per second, as shown in the following diagram.
+
+
+
+OpenSearch Benchmark does not account for this limitation and continues to try to achieve the `target-throughput` of 10 operations per second. Because of this, delays for each request begin to cascade, as illustrated in the following diagram.
+
+
+
+Combining the service time with the delay for each operation provides the following latency measurements:
+
+- 200 ms for operation 1
+- 300 ms for operation 2
+- 400 ms for operation 3
+- 500 ms for operation 4
+- 600 ms for operation 5
+
+This latency cascade continues, increasing latency by 100 ms for each subsequent request.
+
+### Recommendation
+
+As shown in the preceding examples, you should be aware of each task's average service time and should provide a `target-throughput` that accounts for the service time. OpenSearch Benchmark latency is calculated based on the `target-throughput` set by the user; therefore, *latency* could be redefined as *throughput-based latency*.
+
+
diff --git a/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md b/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md
index b54932470d..3bf339e4d5 100644
--- a/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md
+++ b/_benchmark/user-guide/understanding-workloads/anatomy-of-a-workload.md
@@ -160,7 +160,7 @@ According to this `schedule`, the actions will run in the following order:
 3. The `clients` field defines the number of clients, in this example, eight, that will run the bulk indexing operation concurrently.
 4. The `search` operation runs a `match_all` query to match all documents after they have been indexed by the `bulk` API using the specified clients.
    - The `iterations` field defines the number of times each client runs the `search` operation. The benchmark report automatically adjusts the percentile numbers based on this number. To generate a precise percentile, the benchmark needs to run at least 1,000 iterations.
-   - The `target-throughput` field defines the number of requests per second that each client performs. When set, the setting can help reduce benchmark latency. For example, a `target-throughput` of 100 requests divided by 8 clients means that each client will issue 12 requests per second. For more information about how target throughput is defined in OpenSearch Benchmark, see [Throughput and latency](https://opensearch.org/docs/latest/benchmark/user-guide/concepts/#throughput-and-latency).
+   - The `target-throughput` field defines the number of requests per second performed by each client. This setting can help reduce benchmark latency. For example, a `target-throughput` of 100 requests per second divided among 8 clients means that each client issues 12.5 requests per second. For more information about how target throughput is defined in OpenSearch Benchmark, see [Target throughput]({{site.url}}{{site.baseurl}}/benchmark/user-guide/target-throughput/).
 
 ## index.json
diff --git a/images/benchmark/concepts-diagram.png b/images/benchmark/concepts-diagram.png
new file mode 100644
index 0000000000..195cdd88d0
Binary files a/images/benchmark/concepts-diagram.png and b/images/benchmark/concepts-diagram.png differ