feat(netbench) enable jumbo frame probing by default #1648

Merged · 10 commits · Mar 3, 2023
60 changes: 56 additions & 4 deletions netbench/README.md
@@ -20,8 +20,26 @@ Here are a few examples of questions that netbench aims to answer:
* How does certificate chain length affect handshake throughput?
* Is implementation "X" interoperable with implementation "Y" of "Z" protocol?

## Quickstart
A basic use of netbench is demonstrated in the `netbench-run.sh` script, which will:
- compile all necessary netbench utilities
- generate scenario files
- execute the `request-response.json` scenario using `s2n-quic` and `s2n-tls` drivers
- execute the `connect.json` scenario using `s2n-quic` and `s2n-tls` drivers
- collect statistics from the drivers using `netbench-collector`
- generate a report in the `./target/netbench/report` directory

From the main `netbench` folder, run the following commands:
```
./scripts/netbench-run.sh
cd target/netbench/report
python3 -m http.server 9000
```
Then navigate to `localhost:9000` in a browser to view the netbench results.

## How it works

### netbench-scenarios
`netbench` provides tools to write [scenarios](./netbench-scenarios/) that describe application workloads. An example of a scenario is a simple request/response pattern between a client and server:

```rust
pub fn scenario(config: Config) -> Scenario {
    // ...
}
```

This scenario generates a JSON file of instructions. These instructions are protocol- and language-independent, which means they can be executed by a ["netbench driver"](./netbench-driver/) written in any language or runtime.
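To make "protocol- and language-independent" concrete, here is a minimal, hypothetical sketch of a driver's inner loop. The `Instruction` enum and its two variants are invented for illustration; the real netbench instruction schema is richer than this.

```rust
// Hypothetical, simplified instruction set: a driver is essentially an
// interpreter for a list of protocol-agnostic steps like these.
#[derive(Debug)]
enum Instruction {
    Send { bytes: u64 },
    Receive { bytes: u64 },
}

// Tally how much data the scenario would move in each direction.
fn execute(instructions: &[Instruction]) -> (u64, u64) {
    let (mut sent, mut received) = (0, 0);
    for op in instructions {
        match op {
            Instruction::Send { bytes } => sent += bytes,
            Instruction::Receive { bytes } => received += bytes,
        }
    }
    (sent, received)
}

fn main() {
    // the client half of a 1KB-request / 10MB-response exchange
    let client = [
        Instruction::Send { bytes: 1_000 },
        Instruction::Receive { bytes: 10_000_000 },
    ];
    println!("{:?}", execute(&client)); // prints (1000, 10000000)
}
```

Because the instruction list is plain data, a TCP driver and a QUIC driver can both consume the same `scenario.json` and remain directly comparable.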

### netbench-driver
Netbench drivers are responsible for executing netbench scenarios. Each transport protocol has a `client` and a `server` implementation, and each implementation is a self-contained binary that consumes a `scenario.json` file. Implemented drivers include:

* `TCP`
* [`native-tls`](https://crates.io/crates/native-tls)
* OpenSSL on Linux
* Secure Transport on macOS
* SChannel on Windows
* `s2n-quic`
* `s2n-tls`

### netbench-collector
Driver metrics are collected with the [`netbench-collector`](./netbench-collector/) utility. Two implementations are available: a generic utility and a `bpftrace` utility. The generic utility reads `procfs` to gather information about the driver process, while the `bpftrace` implementation can collect a wider variety of statistics through eBPF probes.

The collector binary takes a `netbench-driver` as an argument. The driver binary is spawned as a child process. The collector will continuously gather metrics from the driver and emit those metrics to `stdout`.
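The spawn-and-stream pattern described above can be sketched with nothing but the standard library. This is not the collector's actual code; `echo` stands in for a driver binary, and real metric gathering (e.g. sampling `/proc/<pid>`) is elided.

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

// Spawn a child process with a piped stdout and collect each line it
// emits, mirroring how the collector wraps a driver binary.
fn collect_stdout(program: &str, args: &[&str]) -> std::io::Result<Vec<String>> {
    let mut child = Command::new(program)
        .args(args)
        .stdout(Stdio::piped())
        .spawn()?;
    let stdout = child.stdout.take().expect("stdout was piped above");
    // a real collector would also sample process stats on a timer here
    let lines = BufReader::new(stdout).lines().collect::<Result<Vec<_>, _>>()?;
    child.wait()?;
    Ok(lines)
}

fn main() -> std::io::Result<()> {
    // "echo" plays the role of a netbench driver emitting a metric line
    for line in collect_stdout("echo", &[r#"{"metric":"example"}"#])? {
        println!("collected: {line}");
    }
    Ok(())
}
```

Because the child's stdout is piped, the parent observes the driver's output as it is produced rather than after the process exits.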

### netbench-cli
`netbench-cli` is used to visualize the results of the scenarios. These reports use [vega](https://vega.github.io/), which is "a declarative format for creating, saving, and sharing visualization designs".

`report` is used to generate individual `.json` reports. These can be visualized by pasting them into the [vega editor](https://vega.github.io/editor/).

`report-tree` is used to generate a human-readable `.html` report. Given a directory structure like the following
```
request-response/ # scenario
├─ tls/ # driver
│  ├─ client.json
│  ├─ server.json
├─ quic/
│  ├─ client.json
│  ├─ server.json
```
`report-tree` will generate the individual reports and package them into a human-readable `index.html` file that can be used to view graphs of the results.
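The tree walk implied by that layout can be sketched as follows. This is an illustrative stand-in for how such a layout might be enumerated, not `netbench-cli`'s actual implementation; the temp-directory setup in `main` exists only to make the sketch self-contained.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Enumerate every results/<scenario>/<driver>/*.json file -- the shape
// of tree that `report-tree` consumes.
fn list_report_inputs(root: &Path) -> io::Result<Vec<String>> {
    let mut found = Vec::new();
    for scenario in fs::read_dir(root)? {
        for driver in fs::read_dir(scenario?.path())? {
            for file in fs::read_dir(driver?.path())? {
                let file = file?.path();
                if file.extension().map_or(false, |ext| ext == "json") {
                    found.push(file.display().to_string());
                }
            }
        }
    }
    found.sort();
    Ok(found)
}

fn main() -> io::Result<()> {
    // build the sample layout from the README in a temp directory
    let root = std::env::temp_dir().join("netbench-report-demo");
    for driver in ["tls", "quic"] {
        let dir = root.join("request-response").join(driver);
        fs::create_dir_all(&dir)?;
        fs::write(dir.join("client.json"), "{}")?;
        fs::write(dir.join("server.json"), "{}")?;
    }
    for input in list_report_inputs(&root)? {
        println!("{input}");
    }
    Ok(())
}
```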

A [sample report can be found here](https://dnglbrstg7yg.cloudfront.net/8e1890f04727ef7d3acdcb521c5b3cda257778f0/netbench/index.html#request_response/clients.json).

Note that you will not be able to open the report file directly, since it loads assets from the jsDelivr CDN. That request fails when the page is served from a local `file://` URL, with a [CORS request not HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSRequestNotHttp) error.

To get around this, use a local server.
```
# assuming the report is in ./report
cd report
# start a local server on port 9000
python3 -m http.server 9000
```
In a browser, navigate to `localhost:9000` to view the netbench report.
@@ -20,6 +20,9 @@ pub struct Client {
#[structopt(flatten)]
opts: netbench_driver::Client,

#[structopt(long, default_value = "9001", env = "MAX_MTU")]
max_mtu: u16,

#[structopt(long, env = "DISABLE_GSO")]
disable_gso: bool,
}
@@ -56,8 +59,9 @@ impl Client {

let tls = tls.build()?;

let mut io_builder = io::Default::builder()
.with_max_mtu(self.max_mtu)?
.with_receive_address((self.opts.local_ip, 0u16).into())?;

if self.disable_gso {
io_builder = io_builder.with_gso_disabled()?;
@@ -21,6 +21,9 @@ pub struct Server {
#[structopt(flatten)]
opts: netbench_driver::Server,

#[structopt(long, default_value = "9001", env = "MAX_MTU")]
max_mtu: u16,

#[structopt(long, env = "DISABLE_GSO")]
disable_gso: bool,
}
@@ -76,8 +79,9 @@ impl Server {
.with_key_logging()?
.build()?;

let mut io_builder = io::Default::builder()
.with_max_mtu(self.max_mtu)?
.with_receive_address((self.opts.ip, self.opts.port).into())?;

if self.disable_gso {
io_builder = io_builder.with_gso_disabled()?;
30 changes: 27 additions & 3 deletions netbench/netbench-scenarios/README.md
@@ -1,8 +1,12 @@
# netbench-scenarios

### Executable
The executable includes three default scenarios:
- [`request response`](https://github.com/aws/s2n-quic/blob/main/netbench/netbench-scenarios/src/request_response.rs) sends `N` bytes to the server, which responds with `M` bytes.
- [`ping`](https://github.com/aws/s2n-quic/blob/main/netbench/netbench-scenarios/src/ping.rs) "ping-pongs" a data payload from the client to the server and back.
- [`connect`](https://github.com/aws/s2n-quic/blob/main/netbench/netbench-scenarios/src/connect.rs) opens a number of connections and then exchanges a single byte on each. This is useful for evaluating connection setup times.


Several options are available for configuration:

```shell
$ cargo run --bin netbench-scenarios -- --help
@@ -22,14 +26,31 @@ FLAGS:
-V, --version
Prints version information


OPTIONS:
--connect.connections <COUNT>
The number of separate connections to create [default: 1000]

--ping.connections <COUNT>
The number of concurrent connections to create [default: 1]

--ping.size <BYTES>
The amount of data to send in each ping [default: 1KB,10KB,100KB,1MB]

--ping.streams <COUNT>
The number of concurrent streams to ping on [default: 1]

--ping.time <TIME>
The amount of time to spend pinging for each size [default: 15s]

--request_response.client_receive_rate <RATE>
The rate at which the client receives data [default: NONE]

--request_response.client_send_rate <RATE>
The rate at which the client sends data [default: NONE]

--request_response.connections <COUNT>
The number of separate connections to create [default: 1]

--request_response.count <COUNT>
The number of requests to make [default: 1]

@@ -42,6 +63,9 @@ OPTIONS:
--request_response.response_size <BYTES>
The size of the server's response to the client [default: 10MB]

--request_response.response_unblock <BYTES>
The number of bytes that must be received before the next request [default: 0B]

--request_response.server_receive_rate <RATE>
The rate at which the server receives data [default: NONE]

64 changes: 64 additions & 0 deletions netbench/scripts/netbench-run.sh
@@ -0,0 +1,64 @@
#!/usr/bin/env bash
#
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
#

ARTIFACT_FOLDER="target/release"
NETBENCH_ARTIFACT_FOLDER="target/netbench"

# run_trial runs the scenario passed as the first argument using the
# driver passed as the second argument
run_trial() {
# e.g. request-response
SCENARIO=$1
# e.g. s2n-quic
DRIVER=$2
echo "running the $SCENARIO scenario with $DRIVER"

# make a directory to hold the collected statistics
mkdir -p $NETBENCH_ARTIFACT_FOLDER/results/$SCENARIO/$DRIVER

# run the server while collecting metrics.
echo " running the server"
./$ARTIFACT_FOLDER/netbench-collector \
./$ARTIFACT_FOLDER/netbench-driver-$DRIVER-server \
--scenario ./$NETBENCH_ARTIFACT_FOLDER/$SCENARIO.json \
> $NETBENCH_ARTIFACT_FOLDER/results/$SCENARIO/$DRIVER/server.json &
# store the server process PID. $! is the most recently spawned child pid
SERVER_PID=$!

# sleep for a small amount of time to allow the server to startup before the
# client
sleep 1

# run the client. Port 4433 is the default for the server.
echo " running the client"
SERVER_0=localhost:4433 ./$ARTIFACT_FOLDER/netbench-collector \
./$ARTIFACT_FOLDER/netbench-driver-$DRIVER-client \
--scenario ./$NETBENCH_ARTIFACT_FOLDER/$SCENARIO.json \
> $NETBENCH_ARTIFACT_FOLDER/results/$SCENARIO/$DRIVER/client.json

# cleanup server processes. The collector PID (which is the parent) is stored in
# SERVER_PID. The collector forks the driver process. The following incantation
# kills the child processes as well.
echo " killing the server"
kill $(ps -o pid= --ppid $SERVER_PID)
}

# build all tools in the netbench workspace
cargo build --release

# generate the scenario files. This will generate .json files that can be found
# in the netbench/target/netbench directory. Config for all scenarios is done
# through this binary.
cargo run --manifest-path netbench-scenarios/Cargo.toml -- --request_response.response_size=8GiB --connect.connections 42

run_trial request_response s2n-quic
run_trial request_response s2n-tls

run_trial connect s2n-quic
run_trial connect s2n-tls

echo "generating the report"
./$ARTIFACT_FOLDER/netbench-cli report-tree $NETBENCH_ARTIFACT_FOLDER/results $NETBENCH_ARTIFACT_FOLDER/report