This is a work in progress, so the code is :lava:
- Node.js LTS (I tested it on v8.11.3 and v10.14.1); I recommend `nvm`

That's a lot of dependencies, I know. Technically you don't need `docker-compose` if you're not going to run the test harness locally, so there is that :)
```
$ git clone https://github.com/livepeer/test-harness.git
$ cd test-harness
$ npm install
```
- Check `examples/local.js`; note that in the `config` object, `local` is `true`. This means the harness will use `docker-compose up` to run instead of Docker Swarm, which is easier to debug for smallish setups locally.
- **Important:** edit the `livepeerBinaryPath` value in `examples/local.js` to point at the livepeer binary you'd like to test. It has to be built for Linux.
- Run `node examples/local.js` to fire up the test-harness.
- That's it, now you've got a running setup. Note that the `dist` folder will contain a folder for this experiment, which holds the generated docker-compose file. It has the port forwarding for each node, so each node should be accessible at your dev machine's `localhost`.
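For orientation, here is a minimal sketch of the kind of `config` object described above. It is an assumption based only on the fields documented in this README, not a copy of the real `examples/local.js`, so check the actual file for the exact shape:

```javascript
// Hypothetical sketch of a local-run config; only fields documented in
// this README are shown, and the values here are illustrative.
const config = {
  local: true, // run via `docker-compose up` instead of Docker Swarm
  name: 'my-local-experiment', // must be unique per deployment
  livepeerBinaryPath: './livepeer_linux/livepeer', // must be a Linux build
}
```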
If the flag `publicImage` is set to `true` in the config, the `livepeer/go-livepeer:edge` image from Docker Hub will be used. This image is built on Docker Hub from the `master` branch of the `go-livepeer` repository. `publicImage` can also be set to the name of any other public image, which will then be used instead.

If the flag `localBuild` is set to `true` in the config, the livepeer binary will be taken from the local Docker image tagged `livepeerbinary:debian`. It should be built by running:

```
make localdocker
```
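In config terms, the image-selection options above might look like this (a sketch of the documented fields; typically only one approach is used at a time):

```javascript
// Sketch: choosing which livepeer image/binary the harness runs.
const imageOptions = {
  publicImage: true, // use livepeer/go-livepeer:edge from Docker Hub
  // publicImage: 'someuser/go-livepeer:custom', // or any public Docker Hub image
  // localBuild: true, // or use the locally built livepeerbinary:debian image
}
```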
- Set up `gcloud` and `docker-machine`. The Google driver uses Application Default Credentials to get authorization credentials for calling Google APIs. Follow https://cloud.google.com/sdk/docs/#deb, then run `gcloud init`.
- Run `gcloud auth login`.
- Now you should have `gcloud` set up and be ready to spin up instances. If you're having issues, let me know (open an issue or buzz me on Discord @Yahya#0606).
- There is a ready-made example in `/examples/index.js`. Change the test `name` and run `node examples/index.js`, which will spin up a docker cluster of 2 hosts, with livepeer containers and `geth with protocol` ready to go.
- `local`: must be `true` for local test-harness runs.
- `localBuild`: build the livepeer binary locally instead of using the binary in the GCP bucket.
- `publicImage`: if `true`, use the `livepeer/go-livepeer:edge` image from Docker Hub, which is built from the `master` branch of the `go-livepeer` repository. Can also be set to a string, in which case it should refer to any image publicly available on Docker Hub.
- `metrics`: if `true`, start Prometheus and Grafana.
- `standardSetup`: request token, register orchestrators, etc.
- `updateMachines`: if `true`, run `apt upgrade` on newly created VMs. Not really needed for benchmarking, so it is now `false` by default.
- `installNodeExporter`: if `true`, install Prometheus Node Exporter on newly created machines (allows scraping system metrics like CPU and memory load). `false` by default to save time.
- `installGoogleMonitoring`: if `true`, install Google's monitoring agent. `false` by default; not really needed for benchmarking.
- `constrainResources`: flag to activate resource constraints within Docker Swarm.
- `name`: name of the configuration or experiment; must be unique for each deployment.
- `livepeerBinaryPath`: relative path to the livepeer binary; set it to `null` to use the binary in the GCP bucket.
- `GCPLogging`: setting this to `true` will enable sending logs to Google Cloud. Enabling it makes it impossible to use the `docker logs` command (and `./test-harness logs`).
- `blockchain`:
  - `name`: network name; should be `'lpTestNet'` for test networks, or `'offchain'` for offchain mode.
  - `networkId`: network id, default `54321`.
  - `controllerAddress`: address of the livepeer controller contract.
- `machines`: an object used for remote deployment configuration, like number of host machines, zones, machine types, and so on.
  - `zone`: GCP zone, defaults to `'us-east1-b'`, OR `zones`: an array of GCP zones for multi-region support.
  - `orchestratorMachineType`: machine type for orchestrators, e.g. `'n1-highcpu-8'`.
  - `broadcasterMachineType`: machine type for broadcasters, e.g. `'n1-highcpu-8'`.
  - `transcoderMachineType`: machine type for transcoders, e.g. `'n1-highcpu-8'`.
  - `streamerMachineType`: machine type for streamers, e.g. `'n1-standard-1'`.
  - `managerMachineType`: machine type for the instance used as the swarm manager.
- `nodes`: the object that plans the O/T/B groups within a deployment.
  - `transcoders`: the transcoder group.
    - `instances`: how many containers to run as transcoders.
    - `flags`: the livepeer flags passed to the livepeer binary container. Add them as you wish; the test-harness overrides flags that have to do with directories or ip/port bindings, as these are automated.
  - `orchestrators`: the orchestrator group.
  - `broadcasters`: the broadcaster group.
    - `instances`: number of livepeer broadcaster containers.
    - `googleStorage`: optional object if you would like to use Google buckets as storage.
      - `bucket`: bucket name.
      - `key`: the path to the key used to access the bucket, usually a JSON key.
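Putting the options above together, a remote-deployment config might look roughly like this. This is a sketch assembled from the documented fields only; values (machine types, flags, the placeholder controller address) are illustrative, not defaults, so compare against `examples/index.js` for the real shape:

```javascript
// Illustrative config built from the options documented above;
// not a verbatim copy of examples/index.js.
const config = {
  name: 'my-experiment', // must be unique per deployment
  local: false, // remote (GCP) deployment
  localBuild: false,
  publicImage: true, // use livepeer/go-livepeer:edge from Docker Hub
  metrics: true, // start Prometheus and Grafana
  standardSetup: true, // request token, register orchestrators, etc.
  updateMachines: false,
  installNodeExporter: false,
  installGoogleMonitoring: false,
  constrainResources: false,
  livepeerBinaryPath: null, // null => use the binary in the GCP bucket
  GCPLogging: false, // true would break `docker logs`
  blockchain: {
    name: 'lpTestNet', // or 'offchain'
    networkId: 54321,
    controllerAddress: '0x0000000000000000000000000000000000000000', // placeholder
  },
  machines: {
    zone: 'us-east1-b',
    orchestratorMachineType: 'n1-highcpu-8',
    broadcasterMachineType: 'n1-highcpu-8',
    transcoderMachineType: 'n1-highcpu-8',
    streamerMachineType: 'n1-standard-1',
    managerMachineType: 'n1-standard-1',
  },
  nodes: {
    orchestrators: { instances: 1, flags: '' }, // livepeer binary flags go here
    transcoders: { instances: 2, flags: '' },
    broadcasters: { instances: 1, flags: '' },
  },
}
```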
This isn't complete yet, but it's functioning. Check out this example along with the comments in the code to get an idea of how to use it.
```
$ ./test-harness disrupt -h
Usage: disrupt [options] [name] [group]

uses pumba to kill containers in a specified livepeer group randomly

Options:
  -i --interval <interval>  recurrent interval for chaos command; use with optional unit suffix: 'ms/s/m/h'
  -h, --help                output usage information
```

Example:

```
./test-harness disrupt -i 30s my-deployment o_a
# Kill a random livepeer container in group o_a every 30 seconds
```

To stop an ongoing disruption:

```
./test-harness disrupt-stop my-deployment
```
```
$ ./test-harness delay -h
Usage: delay [options] [name] [group]

uses pumba to cause network delays for a livepeer group

Options:
  -i --interval <interval>  recurrent interval for chaos command; use with optional unit suffix: 'ms/s/m/h'
  -d --duration <duration>  network emulation duration; should be smaller than recurrent interval; use with optional unit suffix: 'ms/s/m/h'
  -h, --help                output usage information
```

To stop a network delay, run the following command:

```
./test-harness delay-stop my-deployment
```