
Add Benchmark concepts of service time and latency #5916

Merged · 11 commits · Dec 22, 2023

Conversation

Naarcha-AWS (Collaborator)

Checklist

  • By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license and subject to the Developers Certificate of Origin.
    For more information on following the Developer Certificate of Origin and signing off your commits, please check here.

Signed-off-by: Naarcha-AWS <[email protected]>
@IanHoang (Contributor) left a comment:

left some comments

@Naarcha-AWS (Collaborator, Author):

@IanHoang: This is ready for your review again.

@Naarcha-AWS Naarcha-AWS added the 4 - Doc review PR: Doc review in progress label Dec 21, 2023
@hdhalter (Contributor) left a comment:

Looks good, just a few suggestions.

_benchmark/user-guide/concepts.md — 6 review threads (outdated, resolved)
Co-authored-by: Heather Halter <[email protected]>
Signed-off-by: Naarcha-AWS <[email protected]>
@Naarcha-AWS Naarcha-AWS added 5 - Editorial review PR: Editorial review in progress and removed 3 - Tech review PR: Tech review in progress 4 - Doc review PR: Doc review in progress labels Dec 21, 2023


Hello @Naarcha-AWS, I believe the service time measured in OSB is just the time taken from "request reached server" to "response provided by server". Please correct me if I am wrong. Thanks :)

Naarcha-AWS (Collaborator, Author) replied:

@IanHoang: What do you think? I can adjust the wording if needed.

@natebower (Collaborator) left a comment:

@Naarcha-AWS Please see my comments and changes and let me know if you have any questions. Thanks!

_benchmark/user-guide/concepts.md — 5 review threads (outdated, resolved)
- `search_clients` set to 1
- `target-throughput` set to 10 operations per second

The following diagram shows the schedule built by OSB with the expected response time.
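The schedule described in the excerpt above can be sketched in a few lines of Python. This is a toy model, not OSB source code: `build_schedule` and `latencies` are hypothetical names, and the model assumes a single client that waits for each response before sending the next scheduled request. It illustrates why latency and service time coincide when responses are fast, but diverge once a slow response causes queueing.

```python
def build_schedule(target_throughput, num_requests):
    """Expected send times (seconds) for one client at a fixed throughput."""
    interval = 1.0 / target_throughput
    return [i * interval for i in range(num_requests)]

def latencies(schedule, service_times):
    """Per-request latency: time waiting on the schedule plus service time."""
    busy_until = 0.0
    result = []
    for scheduled, service in zip(schedule, service_times):
        start = max(busy_until, scheduled)     # wait if the client is still busy
        busy_until = start + service
        result.append(busy_until - scheduled)  # latency includes the wait
    return result

schedule = build_schedule(10, 5)  # one request every 0.1 s
# Fast responses: latency and service time coincide.
fast = latencies(schedule, [0.05] * 5)
# One slow response: later requests queue, so latency exceeds service time.
slow = latencies(schedule, [0.35, 0.05, 0.05, 0.05, 0.05])
print([round(x, 3) for x in fast])   # [0.05, 0.05, 0.05, 0.05, 0.05]
print([round(x, 3) for x in slow])   # [0.35, 0.3, 0.25, 0.2, 0.15]
```

Under this toy model, the second run shows the key property of schedule-based latency: a single 0.35 s response delays every subsequent request, even though their own service times are unchanged.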
A collaborator left an inline comment:
Should "time" be "times"?

_benchmark/user-guide/concepts.md — 4 review threads (outdated, resolved)
Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: Naarcha-AWS <[email protected]>
@Naarcha-AWS Naarcha-AWS merged commit c56b2f6 into main Dec 22, 2023
4 checks passed
@Naarcha-AWS Naarcha-AWS added the backport 2.11 PR: Backport label for 2.11 label Dec 22, 2023
opensearch-trigger-bot bot pushed a commit that referenced this pull request Dec 22, 2023
* Add Benchmark concepts of service time and latency
* Fix typo
* Add table, fix typos
* A few more small tweaks
* Apply suggestions from code review
* Update concepts.md
* Update concepts.md
* Update concepts.md
* Apply suggestions from code review (Co-authored-by: Heather Halter <[email protected]>)
* Apply suggestions from code review (Co-authored-by: Nathan Bower <[email protected]>)

---------

Signed-off-by: Naarcha-AWS <[email protected]>
Co-authored-by: Heather Halter <[email protected]>
Co-authored-by: Nathan Bower <[email protected]>
(cherry picked from commit c56b2f6)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Naarcha-AWS pushed a commit that referenced this pull request Dec 22, 2023
| Metric | Common definition | **OpenSearch Benchmark definition** |
| :--- | :--- | :--- |
| **Throughput** | The number of operations completed in a given period of time. | The number of operations completed in a given period of time. |
| **Service time** | The amount of time that the server takes to process a request, from the point it receives the request to the point the response is returned. <br><br> It includes the time spent waiting in server-side queues but _excludes_ network latency, load balancer overhead, and deserialization/serialization. | The amount of time that it takes for `opensearch-py` to send a request and receive a response from the OpenSearch cluster. <br><br> It includes the amount of time that it takes for the server to process a request and also _includes_ network latency, load balancer overhead, and deserialization/serialization. |
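A toy decomposition may help illustrate the difference between the two columns of the table above. This is not OSB code, and every number is an assumed per-request timing chosen only for illustration: the common definition counts only the server-side interval, while the OSB-measured value spans the full client round trip.

```python
# Toy model with assumed timings (seconds); none of these numbers come
# from OSB -- they only illustrate the two definitions in the table.
network_out = 0.002      # client -> server transit
server_queue = 0.001     # time waiting in server-side queues
server_process = 0.010   # time processing the request
network_back = 0.002     # server -> client transit

# Common definition: request received by server -> response returned.
common_service_time = server_queue + server_process

# OSB definition: opensearch-py sends the request -> response received,
# so network transit (and any load balancer hops) is included.
osb_service_time = network_out + common_service_time + network_back

print(round(common_service_time, 3))  # 0.011
print(round(osb_service_time, 3))     # 0.015
```

Under these assumptions, the OSB-reported service time is always at least as large as the common-definition service time, with the gap equal to the network and client-side overhead.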


Correct me if I am wrong @dblock, @IanHoang.

In OpenSearch Benchmark, service time is calculated in osbenchmark/client.py using the following trace configuration:

```python
trace_config = aiohttp.TraceConfig()
trace_config.on_request_start.append(on_request_start)
trace_config.on_request_end.append(on_request_end)
```

I believe the first definition (the common definition) in the documentation table above aligns more accurately with our interpretation of service time.

Service Time: Represents the interval from the server receiving the request to the server sending the response.

Additional information:
I attempted to measure the timings by overriding perform_request in the AIOHttpConnection class in osbenchmark/async_connection.py. The results indicate that the calculated service time doesn't include client processing time.

```python
async def perform_request(self, method, url, params=None, body=None,
                          timeout=None, ignore=(), headers=None):
    # Log timestamps around the full client-side call, which wraps the
    # service-time window measured by the aiohttp trace hooks.
    print("AIOHttpConnection perform_request start time", time.perf_counter())
    status, headers, raw_data = await super().perform_request(
        method=method, url=url, params=params, body=body,
        timeout=timeout, ignore=ignore, headers=self.headers)
    print("AIOHttpConnection perform_request end time", time.perf_counter())
    return status, headers, raw_data
```

Sample output:

```
AIOHttpConnection perform_request start time 19.0016295
AIOHttpConnection perform_request end time 19.007145792
service time start 19.001905542
service time end 19.007082167
service time 0.005176625000000712
AIOHttpConnection perform_request start time 19.008452584
AIOHttpConnection perform_request end time 19.012769334
service time start 19.008625875
service time end 19.012713375
service time 0.004087500000000688
```
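For readers following along, the timing relationships in the sample output can be checked arithmetically: the reported service time is simply the difference of the two perf_counter readings, and the service-time window nests strictly inside the wider perform_request window, which is consistent with client-side processing time being excluded from service time. The snippet below just replays that arithmetic on the copied timestamps.

```python
# Timestamps copied from the first request in the sample output above.
req_start, req_end = 19.0016295, 19.007145792    # perform_request span
svc_start, svc_end = 19.001905542, 19.007082167  # service time span

service_time = svc_end - svc_start
print(round(service_time, 9))  # 0.005176625, matching the reported value

# The service-time window sits strictly inside the client call window,
# so the measured service time excludes client-side overhead.
assert req_start < svc_start < svc_end < req_end
client_overhead = (req_end - req_start) - service_time
print(round(client_overhead, 9))  # the small slice spent in the client
```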

Labels: 5 - Editorial review (PR: Editorial review in progress), backport 2.11 (PR: Backport label for 2.11), benchmark
5 participants