diff --git a/API.md b/API.md index 6044177..a0d4dd1 100644 --- a/API.md +++ b/API.md @@ -35,9 +35,28 @@ Shout a phrase really loudly by adding an exclamation to the end, asynchronously **Examples** ```javascript +// Shout var HW = new HelloWorld(); HW.shout('rawr', {}, function(err, shout) { if (err) throw err; - console.log(shout); // => 'rawr!' + console.log(shout); // => 'rawr!...and just did a bunch of stuff' +}); +``` + +```javascript +// Shout louder +var HW = new HelloWorld(); +HW.shout('rawr', { louder: true }, function(err, shout) { + if (err) throw err; + console.log(shout); // => 'rawr!!!!!' +}); +``` + +```javascript +// Sleep in the threadpool for x seconds +var HW = new HelloWorld(); +HW.sleepyThreads('rawr', { sleep: 2 }, function(err, shout) { + if (err) throw err; + console.log(shout); // => 'rawr zzzZZZ' }); ``` diff --git a/Makefile b/Makefile index 43ba39f..850f16f 100644 --- a/Makefile +++ b/Makefile @@ -25,8 +25,8 @@ clean: rm -rf lib/binding rm -rf build # remove remains from running 'make coverage' - rm *.profraw - rm *.profdata + rm -f *.profraw + rm -f *.profdata @echo "run 'make distclean' to also clear node_modules, mason_packages, and .mason directories" distclean: clean diff --git a/README.md b/README.md index 5aa9e12..c1db0d3 100644 --- a/README.md +++ b/README.md @@ -87,4 +87,73 @@ The `.travis.yml` file uses the `matrix` to set up each individual job, which sp install: *setup script: *test after_script: *publish -``` \ No newline at end of file +``` + +# Benchmark Performance + +This project includes [bench tests](https://github.com/mapbox/node-cpp-skel/tree/master/test/bench) you can use to experiment with and measure performance. We've included a couple of different scenarios that demonstrate the effects of concurrency and threads within a process or processes. + +For example, you can run: + +``` +node test/bench/bench-batch.js --iterations 50 --concurrency 10 --mode shout +``` + +This will run a bunch of calls to HelloWorld's `shout()` function.
You can control three things: + +- iterations: number of times to call `shout()` +- concurrency: max number of threads the test can utilize, by setting `UV_THREADPOOL_SIZE`. When running the bench-batch test, you can see this number of threads reflected in your [Activity Monitor](https://github.com/springmeyer/profiling-guide#activity-monitorapp-on-os-x)/[htop window](https://hisham.hm/htop/). +- mode: you can specify which scenario you'd like to bench. Ex: shout (rename this to basic async function...or something), contentiousThreads, busyThreads... + +## This bench-batch test can demonstrate various performance scenarios: + +### Good scenarios + +**Ideally, you want your workers to run your code ~99% of the time.** + +These scenarios demonstrate ideal behavior for a healthy node c++ addon. They are what you'd expect to see when you've picked a good problem to solve with node. + +1. An async function that is CPU intensive and takes a while to finish (expensive creation and querying of a `std::map` and string comparisons). This scenario demonstrates when worker threads are busy doing a lot of work, and the main loop is relatively idle. Depending on how many threads (concurrency) you enable, you may see your CPU% sky-rocket and your cores max out. Yeaahhh!!! + +``` +node test/bench/bench-batch.js --iterations 100 --concurrency 10 --mode busyThreads +``` + +If you bump up `--iterations` to 500 and profile in Activity Monitor.app, you'll see the main loop is idle as expected since the threads are doing all the work. You'll also see the threads busy doing work in the AsyncBusyThreads function 99% of the time :tada: + +![screenshot 2016-11-07 11 50 59](https://user-images.githubusercontent.com/1209162/27053695-cce91e9c-4f83-11e7-904b-b717feb065cf.png) + +### Bad scenarios + +These scenarios demonstrate non-ideal behavior for a node c++ addon.
They represent situations you need to watch out for that may spell trouble in your code, or indicate that you are trying to solve a problem that is not well suited to node. + +#### Contentious Threads (using a mutex lock) + +1. An async function where the code running inside the threadpool locks a global mutex and continues to do expensive work. Only one thread at a time can have access to the global mutex, therefore only one thread can do work at one time. This causes all threads to contend with one another. In this situation, all threads are full of work, but they are really slow since they're each waiting for their turn for the mutex lock. This is called "lock contention". + +``` +node test/bench/bench-batch.js --iterations 50 --concurrency 10 --mode contentiousThreads +``` + +If you bump up `--iterations` to 500 and profile in Activity Monitor.app, you'll see the main loop is idle. This is expected because it is only dispatching work to the threads. The threads however are all "majority busy" in `psynch_mutexwait` (waiting for a locked mutex) as more time is spent waiting than doing the expensive work. This is because one thread will grab the lock and do work while all others wait; then another will grab the released lock and do work while all other threads wait again. This is all too common and the reason you don't want to use mutex locks. This is the profiling output of this non-ideal situation: + +![](https://cloud.githubusercontent.com/assets/20300/19990905/7e9a677a-a1ee-11e6-8ba2-c63ff63b1a1b.png) + +When locks are unavoidable in real-world applications, we would hope that the % of time spent in `psynch_mutexwait` would be very small rather than very big. The real-world optimization would be to either rewrite the code to avoid needing locks, or at least to rewrite the code to hold onto a lock for less time (scope the lock more). + +#### Sleepy Threads + +2. An async function that sleeps in the thread pool. This is a bizarre example since you'd never want to do this in practice.
This scenario demonstrates when all worker threads have work (threadpool is full) but the work they are doing is not CPU intensive. This is an antipattern: it does not make sense to push work to the threadpool unless it is CPU intensive. Typically in this situation, the callstack of your process will show your workers spending most of their time in some kind of 'cond_wait' state. To run this scenario, be sure to set the number of seconds you'd like your workers to `--sleep`: + +``` +node test/bench/bench-batch.js --iterations 50 --concurrency 10 --mode sleepyThreads --sleep 1 +``` + +#### Activity Monitor will display a few different kinds of threads: +- main thread (this is the event loop) +- [worker threads (libuv)](https://github.com/libuv/libuv/blob/1a96fe33343f82721ba8bc93adb5a67ddcf70ec4/src/threadpool.c#L64-L104) will include `worker (in node)` in the callstack. These are usually unnamed: `Thread_2206161` (some of these might not actually be running your code) +- V8 WorkerThread: we don't really need to care about these right now. They don't actually run your code. + +To learn more about what exactly is happening with threads behind the scenes in Node and how `UV_THREADPOOL_SIZE` is involved, check out [this great blogpost](https://www.future-processing.pl/blog/on-problems-with-threads-in-node-js/). + +Feel free to play around with these bench tests, and profile the code to get a better idea of how threading can affect the performance of your code. We are in the process of [adding more benchmarks](https://github.com/mapbox/node-cpp-skel/issues/30) that demonstrate a number of other scenarios.
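The "scope the lock more" advice from the contentious-threads section can be made concrete with a minimal standalone C++ sketch. This is not code from this repo: the shared `results` vector and both function names are made up for illustration.

```cpp
#include <mutex>
#include <string>
#include <vector>

static std::mutex mtx;                   // hypothetical global lock
static std::vector<std::string> results; // hypothetical shared state

// Contention-prone: the mutex is held for the whole call, so the expensive
// work serializes all threads even though it never touches shared state.
std::string work_coarse_lock(std::string const& phrase) {
    std::lock_guard<std::mutex> lock(mtx);
    std::string shouted = phrase + "!"; // stand-in for expensive work
    results.push_back(shouted);         // the only part that needs the lock
    return shouted;
}

// Better: do the expensive work unlocked, and hold the mutex only for the
// brief write to shared state ("scope the lock more").
std::string work_scoped_lock(std::string const& phrase) {
    std::string shouted = phrase + "!"; // expensive work, no lock held
    {
        std::lock_guard<std::mutex> lock(mtx);
        results.push_back(shouted);
    }
    return shouted;
}
```

Both versions return the same result; the difference only shows up under concurrency, where the second keeps `psynch_mutexwait` time small.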
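The sleepy-threads scenario above boils down to blocking a pool thread with `std::this_thread::sleep_for`. Here is a minimal standalone sketch of that kind of worker (an illustration only, not the repo's exact function):

```cpp
#include <chrono>
#include <cstdint>
#include <string>
#include <thread>

// A worker that occupies a threadpool slot without using the CPU.
// While it sleeps, the thread is blocked (it shows up as a wait state in a
// profiler) yet unavailable for real work -- which is why sleeping in the
// threadpool is an antipattern.
std::string sleepy_worker(std::string const& phrase, std::uint32_t sleep_seconds) {
    std::this_thread::sleep_for(std::chrono::seconds(sleep_seconds));
    return phrase + " zzzZZZ";
}
```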
\ No newline at end of file diff --git a/package.json b/package.json index e7bca60..ec5ab7c 100644 --- a/package.json +++ b/package.json @@ -23,7 +23,10 @@ "bundledDependencies":["node-pre-gyp"], "devDependencies": { "aws-sdk": "^2.4.7", - "tape": "^4.5.1" + "documentation": "^4.0.0-beta5", + "tape": "^4.5.1", + "d3-queue": "^3.0.1", + "minimist": "~1.2.0" }, "binary": { "module_name": "hello_world", diff --git a/src/hello_world.cpp b/src/hello_world.cpp index d41d0bc..e8e09cb 100644 --- a/src/hello_world.cpp +++ b/src/hello_world.cpp @@ -3,6 +3,15 @@ #include #include #include +#include <chrono> // time lib +#include <thread> // sleep_for is a function within the thread lib +#include <mutex> +#include <map> + +// Global vars for demonstrating contentiousThreads() +static std::uint32_t a = 0; +static std::uint32_t b = 0; +static std::mutex mutex; #include @@ -126,12 +135,14 @@ class AsyncBaton cb(), phrase(), louder(false), + sleep(), error_name(), result() {} uv_work_t request; // required Nan::Persistent<v8::Function> cb; // callback function type - std::string phrase; - bool louder; + std::string phrase; // required + bool louder; // optional + std::uint32_t sleep; // optional (# of seconds) std::string error_name; std::string result; }; @@ -141,15 +152,15 @@ NAN_METHOD(HelloWorld::shout) std::string phrase = ""; bool louder = false; - // check third argument, should be a 'callback' function. + // check last argument, should be a 'callback' function. // This allows us to set the callback so we can use it to return errors // instead of throwing as well.
- if (!info[2]->IsFunction()) + if (info.Length() < 1 || !info[info.Length()-1]->IsFunction()) { - Nan::ThrowTypeError("third arg 'callback' must be a function"); + Nan::ThrowTypeError("last arg 'callback' must be a function"); return; } - v8::Local<v8::Function> callback = info[2].As<v8::Function>(); + v8::Local<v8::Function> callback = info[info.Length()-1].As<v8::Function>(); // check first argument, should be a 'phrase' string if (!info[0]->IsString()) @@ -195,11 +206,205 @@ NAN_METHOD(HelloWorld::shout) 3) operations to be executed within the threadpool 4) operations to be executed after #3 is complete to pass into the callback */ - uv_queue_work(uv_default_loop(), &baton->request, AsyncShout, (uv_after_work_cb)AfterShout); + uv_queue_work(uv_default_loop(), &baton->request, AsyncShout, (uv_after_work_cb)AfterAsync); + return; +} + +NAN_METHOD(HelloWorld::busyThreads) +{ + std::string phrase = ""; + + // check last argument, should be a 'callback' function. + // This allows us to set the callback so we can use it to return errors + // instead of throwing as well. + if (info.Length() < 1 || !info[info.Length()-1]->IsFunction()) + { + Nan::ThrowTypeError("last arg 'callback' must be a function"); + return; + } + v8::Local<v8::Function> callback = info[info.Length()-1].As<v8::Function>(); + + // check first argument, should be a 'phrase' string + if (!info[0]->IsString()) + { + CallbackError("first arg 'phrase' must be a string", callback); + return; + } + phrase = *v8::String::Utf8Value((info[0])->ToString()); + + // set up the baton to pass into our threadpool + AsyncBaton *baton = new AsyncBaton(); + baton->request.data = baton; + baton->phrase = phrase; + baton->cb.Reset(callback); + + /* + `uv_queue_work` is the all-important way to pass info into the threadpool. + It cannot take v8 objects, so we need to do some manipulation above to convert into cpp objects + otherwise things get janky.
It takes four arguments: + + 1) which loop to use, node only has one so we pass in a pointer to the default + 2) the baton defined above, we use this to access information important for the method + 3) operations to be executed within the threadpool + 4) operations to be executed after #3 is complete to pass into the callback + */ + uv_queue_work(uv_default_loop(), &baton->request, AsyncBusyThreads, (uv_after_work_cb)AfterAsync); + return; +} + +NAN_METHOD(HelloWorld::sleepyThreads) +{ + std::string phrase = ""; + std::uint32_t sleep = 0; + + // check last argument, should be a 'callback' function. + // This allows us to set the callback so we can use it to return errors + // instead of throwing as well. + if (info.Length() < 1 || !info[info.Length()-1]->IsFunction()) + { + Nan::ThrowTypeError("last arg 'callback' must be a function"); + return; + } + v8::Local<v8::Function> callback = info[info.Length()-1].As<v8::Function>(); + + // check first argument, should be a 'phrase' string + if (!info[0]->IsString()) + { + CallbackError("first arg 'phrase' must be a string", callback); + return; + } + phrase = *v8::String::Utf8Value((info[0])->ToString()); + + // check second argument, should be an 'options' object + if (!info[1]->IsObject()) + { + CallbackError("second arg 'options' must be an object", callback); + return; + } + v8::Local<v8::Object> options = info[1].As<v8::Object>(); + + if (options->Has(Nan::New("sleep").ToLocalChecked())) + { + v8::Local<v8::Value> sleep_val = options->Get(Nan::New("sleep").ToLocalChecked()); + if (!sleep_val->IsUint32()) + { + CallbackError("option 'sleep' must be a positive integer", callback); + return; + } + sleep = sleep_val->Uint32Value(); + } + + // set up the baton to pass into our threadpool + AsyncBaton *baton = new AsyncBaton(); + baton->request.data = baton; + baton->phrase = phrase; + baton->sleep = sleep; + baton->cb.Reset(callback); + + /* + `uv_queue_work` is the all-important way to pass info into the threadpool.
+ It cannot take v8 objects, so we need to do some manipulation above to convert into cpp objects + otherwise things get janky. It takes four arguments: + + 1) which loop to use, node only has one so we pass in a pointer to the default + 2) the baton defined above, we use this to access information important for the method + 3) operations to be executed within the threadpool + 4) operations to be executed after #3 is complete to pass into the callback + */ + uv_queue_work(uv_default_loop(), &baton->request, AsyncSleepyThreads, (uv_after_work_cb)AfterAsync); + return; +} + +NAN_METHOD(HelloWorld::contentiousThreads) +{ + std::string phrase = ""; + + // check last argument, should be a 'callback' function. + // This allows us to set the callback so we can use it to return errors + // instead of throwing as well. + if (info.Length() < 1 || !info[info.Length()-1]->IsFunction()) + { + Nan::ThrowTypeError("last arg 'callback' must be a function"); + return; + } + v8::Local<v8::Function> callback = info[info.Length()-1].As<v8::Function>(); + + // check first argument, should be a 'phrase' string + if (!info[0]->IsString()) + { + CallbackError("first arg 'phrase' must be a string", callback); + return; + } + phrase = *v8::String::Utf8Value((info[0])->ToString()); + + // set up the baton to pass into our threadpool + AsyncBaton *baton = new AsyncBaton(); + baton->request.data = baton; + baton->phrase = phrase; + baton->cb.Reset(callback); + + /* + `uv_queue_work` is the all-important way to pass info into the threadpool. + It cannot take v8 objects, so we need to do some manipulation above to convert into cpp objects + otherwise things get janky.
It takes four arguments: + + 1) which loop to use, node only has one so we pass in a pointer to the default + 2) the baton defined above, we use this to access information important for the method + 3) operations to be executed within the threadpool + 4) operations to be executed after #3 is complete to pass into the callback + */ + uv_queue_work(uv_default_loop(), &baton->request, AsyncContentiousThreads, (uv_after_work_cb)AfterAsync); return; } -std::string do_expensive_work(std::string const& phrase, bool louder) { +// expensive allocation of std::map, querying, and string comparison +std::string do_expensive_work(std::string const& phrase, std::size_t work_to_do=100000) { + + if (phrase != "rawr") { + throw std::runtime_error("we really would prefer rawr all the time"); + } + + std::map<std::size_t, std::string> container; + + for (std::size_t i=0;i<work_to_do;++i) { + container.emplace(i, std::to_string(i)); + } + + if (container.find(work_to_do/2) == container.end()) { + throw std::runtime_error("expected key is missing"); + } + + std::string result = phrase + "...threads are busy bees"; + return result; +} + +// only one thread at a time can hold the global mutex, so threads contend for it +std::string do_contentious_work(std::string const& phrase) { + std::unique_lock<std::mutex> lock(mutex); + if (a != 0 || b != 0) { + abort(); // should never happen since the lock should synchronize access to these variables + } + a = 1; + b = 1; + do_expensive_work(phrase); + a = 0; + b = 0; + std::string result = phrase + "...threads are locked and contending with each other"; + + return result; +} + +std::string do_sleepy_work(std::string const& phrase, uint32_t sleep) { std::string result; // This is purely for testing, to be able to simulate an unexpected throw @@ -208,12 +413,16 @@ std::string do_expensive_work(std::string const& phrase, bool louder) { throw std::runtime_error("we really would prefer rawr all the time"); } - result = phrase + "!"; - - if (louder) - { - result += "!!!!"; + // suspends execution of the calling thread for (at least) # of seconds + if (sleep) { + // http://en.cppreference.com/w/cpp/chrono/duration + // duration type object that sleep_for accepts + std::chrono::seconds sec(sleep); + + std::this_thread::sleep_for(sec); + result = phrase + " zzzZZZ"; } + return result; } @@ -226,7 +435,24 @@ void HelloWorld::AsyncShout(uv_work_t* req) // The try/catch is critical here: if code was added that could
throw an unhandled error INSIDE the threadpool, it would be disastrous try { - baton->result = do_expensive_work(baton->phrase,baton->louder); + std::string result; + std::string phrase = baton->phrase; + + // This is purely for testing, to be able to simulate an unexpected throw + // from a function you do not control and may throw an exception + if (phrase != "rawr") { + throw std::runtime_error("we really would prefer rawr all the time"); + } + + result = phrase + "!"; + + if (baton->louder) + { + result += "!!!!"; + } + + baton->result = result; + } catch (std::exception const& ex) { @@ -236,9 +462,9 @@ void HelloWorld::AsyncShout(uv_work_t* req) } -// handle results from AsyncShout - if there are errors return those +// handle results from the async functions - if there are errors return those // otherwise return the type & info to our callback -void HelloWorld::AfterShout(uv_work_t* req) +void HelloWorld::AfterAsync(uv_work_t* req) { Nan::HandleScope scope; @@ -259,6 +485,64 @@ delete baton; } + +// this is where we actually set the bees to work +void HelloWorld::AsyncBusyThreads(uv_work_t* req) +{ + AsyncBaton *baton = static_cast<AsyncBaton *>(req->data); + + /***************** custom code here ******************/ + // The try/catch is critical here: if code was added that could throw an unhandled error INSIDE the threadpool, it would be disastrous + try + { + baton->result = do_expensive_work(baton->phrase); + } + catch (std::exception const& ex) + { + baton->error_name = ex.what(); + } + /***************** end custom code *******************/ + +} + +// this is where we actually put the threads to sleep +void HelloWorld::AsyncSleepyThreads(uv_work_t* req) +{ + AsyncBaton *baton = static_cast<AsyncBaton *>(req->data); + + /***************** custom code here ******************/ + // The try/catch is critical here: if code was added that could throw an unhandled error INSIDE the threadpool, it would be disastrous + try + { + baton->result = 
do_sleepy_work(baton->phrase,baton->sleep); + } + catch (std::exception const& ex) + { + baton->error_name = ex.what(); + } + /***************** end custom code *******************/ + +} + +// this is where we make the threads contend for a global mutex +void HelloWorld::AsyncContentiousThreads(uv_work_t* req) +{ + AsyncBaton *baton = static_cast<AsyncBaton *>(req->data); + + /***************** custom code here ******************/ + // The try/catch is critical here: if code was added that could throw an unhandled error INSIDE the threadpool, it would be disastrous + try + { + baton->result = do_contentious_work(baton->phrase); + } + catch (std::exception const& ex) + { + baton->error_name = ex.what(); + } + /***************** end custom code *******************/ + +} + NAN_MODULE_INIT(HelloWorld::Init) { const auto whoami = Nan::New("HelloWorld").ToLocalChecked(); @@ -270,6 +554,9 @@ NAN_MODULE_INIT(HelloWorld::Init) // custom methods added here SetPrototypeMethod(fnTp, "wave", wave); SetPrototypeMethod(fnTp, "shout", shout); + SetPrototypeMethod(fnTp, "busyThreads", busyThreads); + SetPrototypeMethod(fnTp, "sleepyThreads", sleepyThreads); + SetPrototypeMethod(fnTp, "contentiousThreads", contentiousThreads); const auto fn = Nan::GetFunction(fnTp).ToLocalChecked(); constructor().Reset(fn); diff --git a/src/hello_world.hpp b/src/hello_world.hpp index 5c8cc04..e5e09b3 100644 --- a/src/hello_world.hpp +++ b/src/hello_world.hpp @@ -23,11 +23,26 @@ class HelloWorld: public Nan::ObjectWrap // wave, custom sync method static NAN_METHOD(wave); + + // Function called after async work is done + // Currently re-used by all async functions + static void AfterAsync(uv_work_t* req); // shout, custom async method static NAN_METHOD(shout); static void AsyncShout(uv_work_t* req); - static void AfterShout(uv_work_t* req); + + // busyThreads, custom async method + static NAN_METHOD(busyThreads); + static void AsyncBusyThreads(uv_work_t* req); + + // sleepyThreads, custom async method + static 
NAN_METHOD(sleepyThreads); + static void AsyncSleepyThreads(uv_work_t* req); + + // contentiousThreads, custom async method + static NAN_METHOD(contentiousThreads); + static void AsyncContentiousThreads(uv_work_t* req); // constructor // This includes a Default Argument diff --git a/test/bench/bench-batch.js b/test/bench/bench-batch.js new file mode 100644 index 0000000..915331e --- /dev/null +++ b/test/bench/bench-batch.js @@ -0,0 +1,71 @@ +"use strict"; + +var argv = require('minimist')(process.argv.slice(2)); +if (!argv.iterations || !argv.concurrency || !argv.mode) { + console.error('Please provide desired iterations, concurrency, and mode'); + console.error('Example: \nnode test/bench/bench-batch.js --iterations 50 --concurrency 10 --mode contentiousThreads'); + process.exit(1); +} + +// This env var sets the libuv threadpool size. +// This value is locked in once a function interacts with the threadpool. +// Therefore we need to set this value either in the shell or at the very +// top of a JS file (like we do here) +process.env.UV_THREADPOOL_SIZE = argv.concurrency; + +var HelloWorld = require('../../lib/index.js'); + +var HW = new HelloWorld(); + +if (!HW[argv.mode]) { + console.error("Invalid mode", argv.mode); + console.error("Must be equal to one of the async methods on the HelloWorld class: 'shout', 'busyThreads', 'sleepyThreads', or 'contentiousThreads'"); + process.exit(1); +} + +var fs = require('fs'); +var path = require('path'); +var assert = require('assert'); +var d3_queue = require('d3-queue'); + +var queue = d3_queue.queue(argv.concurrency); +var runs = 0; + +function run(cb) { + HW[argv.mode]('rawr', argv, function(err, result) { + if (err) { + return cb(err); + } + ++runs; + return cb(); + }); +} + +for (var i = 0; i < argv.iterations; i++) { + queue.defer(run); +} + +var time = +(new Date()); + +queue.awaitAll(function(error) { + if (error) throw error; + if (runs != argv.iterations) { + throw new Error("Error: did not run as expected"); + } + // check 
rate + time = +(new Date()) - time; + + if (time == 0) { + console.log("Warning: ms timer not high enough resolution to reliably track rate. Try more iterations"); + } else { + // number of runs per second + var rate = runs/(time/1000); + console.log('Benchmark speed: ' + rate.toFixed(0) + ' runs/s (runs:' + runs + ' ms:' + time + ' )'); + } + + console.log("Benchmark iterations:", argv.iterations, "concurrency:", argv.concurrency, "mode:", argv.mode); + + // There may be instances when you want to assert some performance metric + //assert.equal(rate > 1000, true, 'speed not at least 1000/second ( rate was ' + rate + ' runs/s )'); + +}); \ No newline at end of file diff --git a/test/bench/bench.js b/test/bench/bench.js new file mode 100644 index 0000000..4220c95 --- /dev/null +++ b/test/bench/bench.js @@ -0,0 +1,27 @@ +"use strict"; + +var fs = require('fs'); +var path = require('path'); +var HelloWorld = require('../../lib/index.js'); + +console.time('constructor'); +var HW = new HelloWorld(); +console.timeEnd('constructor'); + +console.time('shout'); +// memory usage for this single process +var mem_before = process.memoryUsage(); + +HW.shout('rawr', {}, function(err, result) { + if (err) throw err; + console.timeEnd('shout'); + + // heap: a memory segment dedicated to storing explicitly referenced types like objects and strings + var mem_after = process.memoryUsage(); + var mem_used = (mem_after.heapUsed - mem_before.heapUsed) / 1048576; // convert bytes to MB + console.log('total memory used: ' + mem_used + ' MB'); +}); + +// We won't worry about benchmarking sync functions for now, +// since the most common use case of C++ in a Node module is +// optimizing processes within the threadpool (async) diff --git a/test/hello_world.test.js b/test/hello_world.test.js index 92794f2..9b1d57a 100644 --- a/test/hello_world.test.js +++ b/test/hello_world.test.js @@ -1,8 +1,10 @@ +"use strict"; + var test = require('tape'); var HelloWorld = require('../lib/index.js'); var HW = new 
HelloWorld(); -test('HellowWorld error - throw exception during construction', function(t) { +test('HelloWorld error - throw exception during construction', function(t) { var never = ''; try { var HWuhoh = new HelloWorld('uhoh'); @@ -14,7 +16,7 @@ test('HellowWorld error - throw exception during construction', function(t) { t.end(); }); -test('HellowWorld error - throw type error during construction', function(t) { +test('HelloWorld error - throw type error during construction', function(t) { var never = ''; try { var HWuhoh = new HelloWorld(24); @@ -27,7 +29,7 @@ test('HellowWorld error - throw type error during construction', function(t) { t.end(); }); -test('HellowWorld error - invalid constructor', function(t) { +test('HelloWorld error - invalid constructor', function(t) { var never = ''; try { var HWuhoh = HelloWorld(); @@ -40,7 +42,7 @@ test('HellowWorld error - invalid constructor', function(t) { t.end(); }); -test('HellowWorld success - valid constructor', function(t) { +test('HelloWorld success - valid constructor', function(t) { var never = ''; try { var HWyay = new HelloWorld('hello'); @@ -52,7 +54,6 @@ test('HellowWorld success - valid constructor', function(t) { t.end(); }); - test('wave success', function(t) { var hello = HW.wave(); t.equal(hello, 'howdy world', 'output of HelloWorld.wave'); @@ -75,9 +76,8 @@ test('shout success - options.louder', function(t) { }); }); - test('shout error - not enough rawr', function(t) { - HW.shout('tiny moo', { louder: true }, function(err, shout) { + HW.shout('tiny moo', { louder: true }, function(err) { t.ok(err, 'expected error'); t.ok(err.message.indexOf('rawr all the time') > -1, 'rawrs all the time are way nicer'); t.end(); @@ -85,7 +85,7 @@ test('shout error - not enough rawr', function(t) { }); test('shout error - non string phrase', function(t) { - HW.shout(4, {}, function(err, shout) { + HW.shout(4, {}, function(err) { t.ok(err, 'expected error'); t.ok(err.message.indexOf('phrase') > -1, 'proper error 
message'); t.end(); @@ -93,7 +93,7 @@ test('shout error - non string phrase', function(t) { }); test('shout error - no options object', function(t) { - HW.shout('rawr', true, function(err, shout) { + HW.shout('rawr', true, function(err) { t.ok(err, 'expected error'); t.ok(err.message.indexOf('options') > -1, 'proper error message'); t.end(); @@ -101,7 +101,7 @@ test('shout error - no options object', function(t) { }); test('shout error - options.louder non boolean', function(t) { - HW.shout('rawr', { louder: 3 }, function(err, shout) { + HW.shout('rawr', { louder: 3 }, function(err) { t.ok(err, 'expected error'); t.ok(err.message.indexOf('louder') > -1, 'proper error message'); t.end(); @@ -117,4 +117,130 @@ test('shout error - no callback', function(t) { t.ok(err.message.indexOf('callback') > -1, 'proper error message'); t.end(); } +}); + +test('shout success - default ', function(t) { + HW.shout('rawr', {}, function(err, shout) { + if (err) throw err; + t.equal(shout, 'rawr!'); + t.end(); + }); +}); + +test('sleepyThreads success - options.sleep', function(t) { + HW.sleepyThreads('rawr', { sleep: 2 }, function(err, shout, stdout) { + if (err) throw err; + t.equal(shout, 'rawr zzzZZZ'); + t.end(); + }); +}); + +test('sleepyThreads error - phrase not a string', function(t) { + HW.sleepyThreads(24, {}, function(err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf("\'phrase\' must be a string") > -1, 'proper error message'); + t.end(); + }); +}); + +test('sleepyThreads error - options.sleep not integer', function(t) { + HW.sleepyThreads('rawr', { sleep: "hi" }, function(err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf("\'sleep\' must be a positive integer") > -1, 'proper error message'); + t.end(); + }); +}); + +test('sleepyThreads error - no callback', function(t) { + try { + HW.sleepyThreads('rawr', {}); + } catch (err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf('callback') > -1, 'proper error message'); + t.end(); + } 
+}); + +test('sleepyThreads error - no options object', function(t) { + HW.sleepyThreads('rawr', true, function(err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf('options') > -1, 'proper error message'); + t.end(); + }); +}); + +test('sleepyThreads error - not enough rawr', function(t) { + HW.sleepyThreads('tiny moo', {}, function(err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf('rawr all the time') > -1, 'rawrs all the time are way nicer'); + t.end(); + }); +}); + +test('busyThreads success - default ', function(t) { + HW.busyThreads('rawr', function(err, result) { + if (err) throw err; + t.equal(result, 'rawr...threads are busy bees'); + t.end(); + }); +}); + +test('busyThreads error - no callback', function(t) { + try { + HW.busyThreads('rawr', {}); + } catch (err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf('callback') > -1, 'proper error message'); + t.end(); + } +}); + +test('busyThreads error - phrase not a string', function(t) { + HW.busyThreads(24, function(err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf("\'phrase\' must be a string") > -1, 'proper error message'); + t.end(); + }); +}); + +test('busyThreads error - not enough rawr', function(t) { + HW.busyThreads('tiny moo', function(err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf('rawr all the time') > -1, 'rawrs all the time are way nicer'); + t.end(); + }); +}); + +test('contentiousThreads success - default ', function(t) { + HW.contentiousThreads('rawr', function(err, result) { + if (err) throw err; + t.equal(result, 'rawr...threads are locked and contending with each other'); + t.end(); + }); +}); + +test('contentiousThreads error - no callback', function(t) { + try { + HW.contentiousThreads('rawr', {}); + } catch (err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf('callback') > -1, 'proper error message'); + t.end(); + } +}); + +test('contentiousThreads error - phrase not a string', function(t) { + 
HW.contentiousThreads(24, function(err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf("\'phrase\' must be a string") > -1, 'proper error message'); + t.end(); + }); +}); + +test('contentiousThreads error - not enough rawr', function(t) { + HW.contentiousThreads('tiny moo', function(err) { + t.ok(err, 'expected error'); + t.ok(err.message.indexOf('rawr all the time') > -1, 'rawrs all the time are way nicer'); + t.end(); + }); }); \ No newline at end of file