Local Stress Testing

Pravega comes equipped with a stress test tool that can be used either locally or against a cluster. This is different from Pravega Benchmark, whose sole purpose is performance testing.

This tool is called Self Tester and is part of the core Pravega repository.

Intended Uses

The Self Tester can be used for the following purposes:

  • Benchmarking the performance of the Segment Store Segment API in the following scenarios:
    • With or without BookKeeper as Tier 1 (if not, then it uses a zero-latency in-memory implementation).
    • Directly or via the Pravega Client.
    • In the same process as the Self Tester (useful for debugging in the IDE).
    • Creating an entire Pravega Cluster locally and testing against that.
    • Against an existing (external) Pravega cluster.
  • Testing the correctness of the Segment Store (ordering, transactions, etc.).
  • Benchmarking the performance of the Segment Store Table API.

All operations invoked are categorized by type (e.g., Append, Seal, Read) and their latencies are output at the end of the test. Throughput is output as the test progresses. See Interpreting Output for details.

How to Run

The Self Tester can be invoked in two ways:

  • From the command line:
    • ./gradlew selftest <args-in-javaopts-format>
  • From the IDE:
    • Class io.pravega.test.integration.selftest.SelfTestRunner. The args are passed as Java Opts (which can easily be configured from your IDE).
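
For example, a minimal command-line run against the in-process Segment Store looks like this (the operation count shown is illustrative):

./gradlew selftest -Dtarget=SegmentStore -Do=100000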

Arguments

There are three ways of passing arguments to the Self Tester. In order of priority (highest to lowest):

  1. Java Opts.
  2. Config file (selftest.configFile).
  3. Hardcoded defaults.

Some arguments have shortcuts (aliases) associated with them. To see a list of all available shortcuts, execute the Self Tester without any arguments (i.e., ./gradlew selftest).
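
As a sketch, arguments can be supplied directly as Java Opts, or collected in a file referenced via selftest.configFile (the path below is hypothetical, and the file is assumed to use standard Java properties format):

./gradlew selftest -Dtarget=SegmentStore -Do=100000
./gradlew selftest -Dselftest.configFile=/path/to/selftest.properties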

Test Types

Argument selftest.testType or shortcut target.

  • SegmentStore: Executes a Streaming Append test by instantiating the Segment Store in the same process as the Self Tester.
  • InProcessStore: Same as SegmentStore, but uses the Pravega Client to communicate with the Segment Store.
  • InProcessMock: Executes a Streaming Append test using the Client but with a no-op Segment Store (zero-latency store).
  • OutOfProcess: Creates a new Pravega Cluster (ZK, BK, Segment Store, Controller) on the local machine and executes the test against it.
  • External: Executes a test against an existing Pravega Cluster.
  • SegmentStoreTable: Executes a Table Segment API test by instantiating the Segment Store in the same process as the Self Tester.
  • BookKeeper: Executes a test against BookKeeper. No Pravega code is involved.
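
For example, to spin up a complete local cluster and run against it (component counts are illustrative; the shortcuts are described in the argument reference below):

./gradlew selftest -Dtarget=OutOfProcess -Dbkc=1 -Dcc=1 -Dssc=1 -Do=100000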

SelfTest Arguments

These are defined in io.pravega.test.integration.selftest.TestConfig and have reasonable hardcoded defaults; there is no need to specify all of them when running the test. Values in parentheses are shortcut names (aliases).

Core Arguments

  • selftest.testType (target): Type of test to run. See above.
  • selftest.operationCount (o): Number of operations to execute.
  • selftest.containerCount (c): Number of Segment Containers in the cluster.
  • selftest.streamCount (s): Number of Streams to create.
    • For tests using the Pravega Client, this is the number of Streams to create.
    • For SegmentStore tests, this is the number of Segments to create.
    • For BookKeeper tests, this is the number of Ledgers to create.
  • selftest.segmentsPerStream (sc): Number of segments per stream. Defaults to 1.
    • Only used for tests where the Pravega Client is involved.
  • selftest.producerCount (p): Number of producers.
  • selftest.producerParallelism (pp): Producer batch size.
    • Each producer issues requests using this batch size and waits for all operations in the batch to complete before moving on to the next batch.
  • selftest.minAppendSize (ws): Min append size.
  • selftest.maxAppendSize (ws): Max append size (if the ws shortcut is used, min == max).
  • selftest.transactionFrequency (txnf): How often to begin a new transaction (how many operations apart). Defaults to 0, which disables transactions (see the example after this list).
    • This only applies to tests where transactions are supported.
  • selftest.maxTransactionSize (tnnc): How many events per transaction.
    • This only applies if txnf != 0.
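
As an illustration, the following runs a Client-based append test that begins a transaction every 100 operations, with up to 10 events each (all values are illustrative):

./gradlew selftest -Dtarget=InProcessStore -Dp=5 -Dpp=10 -Dws=100 -Do=100000 -Dtxnf=100 -Dtnnc=10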

Table Tests Arguments

  • selftest.tableConditionalUpdates (tcu): Whether to execute a test with conditional updates (true) or unconditional updates (false). Defaults to false.
  • selftest.tableRemovePercentage: What percentage of all operations should be Key removals.
  • selftest.tableNewKeyPercentage: What percentage of all operations should be new Key inserts.
  • selftest.consumersPerTable (tct): Number of consumers (readers) per Table Segment. Defaults to 1.
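
For example, the following runs a Table Segment test with conditional updates and a mix of Key removals and new-Key inserts (all percentages and counts are illustrative):

./gradlew selftest -Dtarget=SegmentStoreTable -Dtcu=true -Dselftest.tableRemovePercentage=10 -Dselftest.tableNewKeyPercentage=30 -Dtct=2 -Do=100000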

Cluster Config

  • selftest.bookieCount (bkc): Number of Bookies to use. 0 means in-memory Tier 1 (no BookKeeper). Defaults to 0.
  • selftest.controllerCount (cc): Number of Controllers to use. Only applies for OutOfProcess tests. Defaults to 0.
  • selftest.segmentStoreCount (ssc): Number of Segment Stores to use. Only applies for OutOfProcess tests. Defaults to 0.
  • selftest.controllerHost (controller): Controller Host. Only applies for External tests.
  • selftest.controllerPort (controllerport): Controller Port. Only applies for OutOfProcess and External tests.
  • selftest.zkPort: ZooKeeper port. Only applies for OutOfProcess and External tests.
  • selftest.bkBasePort: Base port for assigning Bookie ports. Each Bookie gets a port number at or above this value (e.g., with a base port of 9000 and 3 Bookies, ports 9000, 9001 and 9002 are used).
  • selftest.segmentStorePort: Segment Store Port. Only applies for OutOfProcess and External tests.
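
For example, to run against an existing cluster (the host and port values below are illustrative, not defaults):

./gradlew selftest -Dtarget=External -Dcontroller=10.20.30.40 -Dcontrollerport=9090 -Dselftest.zkPort=2181 -Dselftest.segmentStorePort=12345 -Do=100000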

Admin Arguments

  • selftest.threadPoolSize: Number of threads to use for the Self Tester. Defaults to 30.
  • selftest.warmupPercentage: Percentage of operations to use for warm-up (warm-up results are discarded). Defaults to 10.
  • selftest.pauseBeforeExit (pause): Whether to wait for user input before exiting the process. Useful for attaching a debugger or testing the Pravega Admin CLI against this cluster. Defaults to false.
  • selftest.reads (reads): Whether read testing is enabled. Defaults to true, but whether reads actually run depends on the type of test.
    • If true, this will attempt to do Tail Reading, Catchup Reads and Tier 2 reads (where supported).
  • selftest.metrics (metrics): Whether metrics are enabled. Defaults to false.
  • selftest.enableSecurity: Whether security is enabled. Defaults to false.
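
For example, the following enables metrics, uses a longer warm-up, and pauses before exit so a debugger or the Admin CLI can be attached (values are illustrative):

./gradlew selftest -Dtarget=SegmentStore -Do=100000 -Dmetrics=true -Dselftest.warmupPercentage=20 -Dpause=true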

Sample invocations

  • Segment Store, 1 Bookie, 1 Container, 1 Segment, 10 producers, 10 batch size, 100 byte appends, 1M operations, Reads enabled:
    • -Dtarget=SegmentStore -Dbkc=1 -Dc=1 -Ds=1 -Dp=10 -Dpp=10 -Dws=100 -Do=1000000
  • Segment Store via Client, 1 Bookie, 1 Container, 1 Stream x 1 Segment, 10 producers, 10 batch size, 100 byte appends, 100K operations, Reads disabled:
    • -Dtarget=InProcessStore -Dbkc=1 -Dc=1 -Ds=1 -Dsc=1 -Dp=10 -Dpp=10 -Dws=100 -Do=100000 -Dreads=false

Interpreting output

Note that performance benchmark results cannot be meaningfully compared across different hardware. All results are specific to the local configuration and are best used to compare the effects of changes in the code itself or to test specific scenarios.

Progressive output (printed every second)

0:00:08.734 [Reporter]: 883031/1000000; Ops = 220974/238198; Data (P/T/C): 84.2/84.1/81.8 MB; TPut: 21.1/22.7 MB/s; TPools (Q/T/S): S = 4/1/80, T = 0/2/80, FJ = 0/0/0.

Interpretation:

  • After running for 8.7 seconds (including bootstrapping, etc.), we have processed 883031 operations out of a target of 1000000.
  • We are currently executing 220974 operations per second (with an average of 238198 for the whole test so far).
  • We have sent 84.2 MB of data, tail-read 84.1 MB and catchup-read 81.8 MB.
  • We have an instantaneous throughput of 21.1 MB/s, with a cumulative average of 22.7 MB/s.
  • The Segment Store core thread pool (S) has 4 tasks queued, 1 active thread and a maximum of 80 threads. The same stats are shown for the Self Tester thread pool (T) and the ForkJoinPool (FJ).

Final output

Operation Summary
    Operation Type |   Count |  LAvg |   L50 |   L75 |   L90 |   L99 |  L999
            Append | 1000000 |    38 |    32 |    41 |    64 |    98 |   118
              Seal |       1 |     4 |     4 |     4 |     4 |     4 |     4
        End to End | 1000000 |    41 |    35 |    46 |    80 |   111 |   142
      Catchup Read | 1000000 |     0 |     0 |     0 |     0 |     0 |     0

Interpretation:

  • We had 4 types of operations: 1000000 Appends, 1 Seal, 1000000 End-to-End (Tail) reads and 1000000 Catchup (historical) reads.
  • The average latency (LAvg) and the 50th/75th/90th/99th/99.9th percentile latencies (L50 through L999) are output for each operation type.