# Local Stress Testing
Pravega comes equipped with a stress test tool that can be used either locally or against a cluster. This is different from Pravega Benchmark, whose sole purpose is performance testing.
This tool is called Self Tester and is part of the core Pravega repository.
The Self Tester can be used for the following purposes:
- Benchmarking the performance of the Segment Store Segment API in the following scenarios:
  - With or without BookKeeper as Tier 1 (if not, then it uses a zero-latency in-memory implementation).
  - Directly or via the Pravega Client.
  - In the same process as the Self Tester (useful for debugging in the IDE).
  - Creating an entire Pravega Cluster locally and testing against that.
  - Against an existing (external) Pravega cluster.
- Testing the correctness of the Segment Store (ordering, transactions, etc.).
- Benchmarking the performance of the Segment Store Table API.
All operations invoked are categorized by type (e.g., Append, Seal, Read), and their latencies are output at the end of the test. Throughput is output as the test progresses. See Interpreting Output for details.
The Self Tester can be invoked in two ways:

- From the command line:

  ```
  ./gradlew selftest <args-in-javaopts-format>
  ```

- From the IDE, by running the class `io.pravega.test.integration.selftest.SelfTestRunner`. The args are passed as Java Opts (which can easily be configured from your IDE).
There are three ways of passing arguments to the Self Tester. In order of priority (highest to lowest):

1. Java Opts.
2. Config file (`selftest.configFile`).
3. Hardcoded defaults.

Some arguments have shortcuts (aliases) associated with them. To see a list of all available shortcuts, execute the Self Tester without any arguments (i.e., `./gradlew selftest`).
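Entries higher in this priority list override those below them, so a config file (referenced via `selftest.configFile`) can hold settings you reuse across runs, with individual values overridden per run through Java Opts. Assuming the standard Java `.properties` format, such a file might look like this (keys are real, values illustrative):

```properties
# Hypothetical Self Tester config file (illustrative values only).
selftest.testType=SegmentStore
selftest.operationCount=1000000
selftest.producerCount=10
selftest.bookieCount=1
```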
The argument `selftest.testType` (shortcut `target`) selects the type of test:

- `SegmentStore`: Executes a Streaming Append test by instantiating the Segment Store in the same process as the Self Tester.
- `InProcessStore`: Same as `SegmentStore`, but uses the Pravega Client to communicate with the Segment Store.
- `InProcessMock`: Executes a Streaming Append test using the Client but with a no-op Segment Store (zero-latency store).
- `OutOfProcess`: Creates a new Pravega Cluster (ZK, BK, Segment Store, Controller) on the local machine and executes the test against it.
- `External`: Executes a test against an existing Pravega Cluster.
- `SegmentStoreTable`: Executes a Table Segment API test by instantiating the Segment Store in the same process as the Self Tester.
- `BookKeeper`: Executes a test against BookKeeper. No Pravega code is involved.
All arguments are defined in `io.pravega.test.integration.selftest.TestConfig` and have reasonable hardcoded defaults; there is no need to specify all of them when running a test. Values in parentheses are shortcut names (aliases).
- `selftest.testType` (`target`): Type of test to run. See above.
- `selftest.operationCount` (`o`): Number of operations to execute.
- `selftest.containerCount` (`c`): Number of Segment Containers in the cluster.
- `selftest.streamCount` (`s`): Number of Streams to create.
  - For tests using the Pravega Client, this represents the number of Streams to create.
  - For the `SegmentStore` test, this is the number of Segments to create.
  - For the `BookKeeper` test, this is the number of Ledgers to create.
- `selftest.segmentsPerStream` (`sc`): Number of Segments per Stream. Defaults to 1.
  - Only used for tests where the Pravega Client is involved.
- `selftest.producerCount` (`p`): Number of producers.
- `selftest.producerParallelism` (`pp`): Producer batch size.
  - Each producer issues requests using this batch size and waits for all operations in the batch to complete before moving on to the next batch.
- `selftest.minAppendSize` (`ws`): Min append size.
- `selftest.maxAppendSize` (`ws`): Max append size (if you use the shortcut, min == max).
- `selftest.transactionFrequency` (`txnf`): How often to begin a new transaction (how many operations apart). Defaults to 0.
  - This only applies to tests where transactions are supported.
- `selftest.maxTransactionSize` (`tnnc`): How many events per transaction.
  - This only applies if `txnf` != 0.
- `selftest.tableConditionalUpdates` (`tcu`): Whether to execute a test with conditional updates (`true`) or unconditional updates (`false`). Defaults to `false`.
- `selftest.tableRemovePercentage`: What percentage of all operations should be Key removals.
- `selftest.tableNewKeyPercentage`: What percentage of all operations should be new Key inserts.
- `selftest.consumersPerTable` (`tct`): Number of consumers (readers) per Table Segment. Defaults to 1.
- `selftest.bookieCount` (`bkc`): Number of Bookies to use. `0` means in-memory Tier 1 (no BookKeeper). Defaults to 0.
- `selftest.controllerCount` (`cc`): Number of Controllers to use. Only applies to `OutOfProcess` tests. Defaults to 0.
- `selftest.segmentStoreCount` (`ssc`): Number of Segment Stores to use. Only applies to `OutOfProcess` tests. Defaults to 0.
- `selftest.controllerHost` (`controller`): Controller Host. Only applies to `External` tests.
- `selftest.controllerPort` (`controllerport`): Controller Port. Only applies to `OutOfProcess` and `External` tests.
- `selftest.zkPort`: ZooKeeper port. Only applies to `OutOfProcess` and `External` tests.
- `selftest.bkBasePort`: Port number where to begin assigning ports for Bookies. Each Bookie gets a port number starting at this value (e.g., with `9000` and 3 Bookies, they will use `9000`, `9001`, `9002`).
- `selftest.segmentStorePort`: Segment Store Port. Only applies to `OutOfProcess` and `External` tests.
- `selftest.treadPoolSize`: Number of threads to use for the Self Tester. Defaults to 30.
- `selftest.warmupPercentage`: Percentage of operations to use for warm-up (all results discarded). Defaults to 10.
- `selftest.pauseBeforeExit` (`pause`): Whether to prompt for user input before exiting the process. Useful if you want to attach a debugger or test the Pravega Admin CLI against this cluster. Defaults to `false`.
- `selftest.reads` (`reads`): Whether read testing is enabled. Defaults to `true` but is subject to the type of test.
  - If true, this will attempt to do Tail Reads, Catchup Reads and Tier 2 reads (where supported).
- `selftest.metrics` (`metrics`): Whether metrics are enabled. Defaults to `false`.
- `selftest.enableSecurity`: Whether security is enabled. Defaults to `false`.
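The `producerParallelism` (`pp`) behavior described above — issue a batch of operations, wait for the entire batch to complete, then issue the next — can be sketched as follows. This is an illustration of the semantics only, not the Self Tester's actual producer code; `send_append` is a hypothetical stand-in for one async operation:

```python
import asyncio
import random

async def send_append(i):
    # Hypothetical stand-in for one async append request; the real
    # Self Tester issues Segment Store / Client operations here.
    await asyncio.sleep(random.uniform(0.001, 0.005))
    return i

async def producer(operation_count, batch_size):
    completed = 0
    for start in range(0, operation_count, batch_size):
        end = min(start + batch_size, operation_count)
        batch = [send_append(i) for i in range(start, end)]
        # Wait for the whole batch before issuing the next one,
        # mirroring the producerParallelism (pp) setting.
        results = await asyncio.gather(*batch)
        completed += len(results)
    return completed

print(asyncio.run(producer(operation_count=25, batch_size=10)))
```

With `producerCount` (`p`) producers, the tester runs several such loops concurrently, so the effective in-flight operation count is roughly `p * pp`.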
Examples:

- Segment Store, 1 Bookie, 1 Container, 1 Segment, 10 producers, batch size 10, 100-byte appends, 1M operations, reads enabled:

  ```
  ./gradlew selftest -Dtarget=SegmentStore -Dbkc=1 -Dc=1 -Ds=1 -Dp=10 -Dpp=10 -Dws=100 -Do=1000000
  ```

- Segment Store via Client, 1 Bookie, 1 Container, 1 Stream x 1 Segment, 10 producers, batch size 10, 100-byte appends, 100K operations, reads disabled:

  ```
  ./gradlew selftest -Dtarget=InProcessStore -Dbkc=1 -Dc=1 -Ds=1 -Dsc=1 -Dp=10 -Dpp=10 -Dws=100 -Do=100000 -Dreads=false
  ```
It is important to know that performance benchmark results cannot be compared across different hardware. All results are specific to the local configuration and are best used to compare the effects of changes in the code itself or to test specific scenarios.
## Interpreting Output

While the test is running, throughput is reported periodically with lines such as:

```
0:00:08.734 [Reporter]: 883031/1000000; Ops = 220974/238198; Data (P/T/C): 84.2/84.1/81.8 MB; TPut: 21.1/22.7 MB/s; TPools (Q/T/S): S = 4/1/80, T = 0/2/80, FJ = 0/0/0.
```
Interpretation:

- After running for 8.7 seconds (including bootstrapping, etc.), we have processed 883031 operations out of a target of 1000000.
- We are currently executing 220974 operations per second (with an average of 238198 for the whole test so far).
- We have sent 84.2 MB of data, tail-read 84.1 MB and catchup-read 81.8 MB.
- We have an instant throughput of 21.1 MB/s, with a cumulative average of 22.7 MB/s.
- The Segment Store Core Threadpool (`S`) has 4 tasks queued, 1 active thread and a maximum of 80 threads. Threadpool stats are also shown for the Self Tester (`T`) and the ForkJoinPool (`FJ`).
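When comparing runs, it can be handy to pull the headline numbers out of the `[Reporter]` lines programmatically (e.g., for plotting). The sketch below is an assumption-based parser whose field layout is inferred from the sample line above, not an official format specification:

```python
import re

def parse_reporter_line(line):
    # Extract counts and throughput from a Self Tester [Reporter] line.
    # The field layout is assumed from the sample output above.
    m = re.search(
        r"(\d+)/(\d+); Ops = (\d+)/(\d+); "
        r"Data \(P/T/C\): ([\d.]+)/([\d.]+)/([\d.]+) MB; "
        r"TPut: ([\d.]+)/([\d.]+) MB/s",
        line)
    if m is None:
        return None
    done, target, ops_now, ops_avg = (int(m.group(i)) for i in range(1, 5))
    produced, tail_read, catchup_read, tput_now, tput_avg = (
        float(m.group(i)) for i in range(5, 10))
    return {"done": done, "target": target,
            "ops_now": ops_now, "ops_avg": ops_avg,
            "produced_mb": produced, "tail_read_mb": tail_read,
            "catchup_read_mb": catchup_read,
            "tput_now_mbps": tput_now, "tput_avg_mbps": tput_avg}

line = ("0:00:08.734 [Reporter]: 883031/1000000; Ops = 220974/238198; "
        "Data (P/T/C): 84.2/84.1/81.8 MB; TPut: 21.1/22.7 MB/s; "
        "TPools (Q/T/S): S = 4/1/80, T = 0/2/80, FJ = 0/0/0.")
stats = parse_reporter_line(line)
print(stats["done"], stats["tput_avg_mbps"])
```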
At the end of the test, an Operation Summary with latencies per operation type is printed:

```
Operation Summary
Operation Type | Count   | LAvg | L50 | L75 | L90 | L99 | L999
Append         | 1000000 | 38   | 32  | 41  | 64  | 98  | 118
Seal           | 1       | 4    | 4   | 4   | 4   | 4   | 4
End to End     | 1000000 | 41   | 35  | 46  | 80  | 111 | 142
Catchup Read   | 1000000 | 0    | 0   | 0   | 0   | 0   | 0
```
Interpretation:

- We had 4 types of operations: 1000000 Appends, 1 Seal, 1000000 End-to-End (Tail) Reads and 1000000 Catchup (historical) Reads.
- Average latency, as well as P50/P75/P90/P99/P999 latencies, is output for each operation type.
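For reference, the `Lxx` columns are percentile latencies: `L50` is the median, `L999` the 99.9th percentile. A minimal sketch of how such a summary can be derived from raw per-operation latencies, using the nearest-rank percentile method on hypothetical sample data (this is illustrative, not the Self Tester's actual reporting code):

```python
def percentile(sorted_values, p):
    # Nearest-rank percentile over an already-sorted list:
    # rank = ceil(n * p / 100), clamped to at least 1.
    if not sorted_values:
        raise ValueError("no samples")
    rank = max(1, -(-len(sorted_values) * p // 100))
    return sorted_values[rank - 1]

# Hypothetical per-operation latency samples (milliseconds).
latencies = sorted([32, 35, 38, 41, 44, 48, 52, 64, 98, 118])
summary = {
    "LAvg": sum(latencies) / len(latencies),
    "L50": percentile(latencies, 50),
    "L99": percentile(latencies, 99),
}
print(summary)
```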