Add bench scripts for async examples #61

Merged · 4 commits · Aug 15, 2017
1 change: 1 addition & 0 deletions README.md
@@ -13,6 +13,7 @@ This repository itself can be cloned and edited to your needs. The skeleton prep

* **Tests**: created with [Tape](https://github.com/substack/tape) in the `test/` directory. Travis CI file is prepared to build and test your project on every push.
* **Documentation**: use this README as a template and customize for your own project. Also, this skeleton uses [documentation.js](http://documentation.js.org/) to generate API documentation from JSDOC comments in the `.cpp` files. Docs are located in `API.md`.
* **[Benchmarking](./docs/benchmarking.md)**: Easily test the performance of your code using the built-in benchmark tests provided in this skeleton.
* **Build system**: [node-pre-gyp](https://github.com/mapbox/node-pre-gyp) generates binaries with the proper system architecture flags
* **[Publishing](./docs/publishing-binaries.md)**: Structured as a node module with a `package.json` that can be deployed to NPM's registry.
* **Learning resources**: Read the detailed inline comments within the example code to learn exactly what is happening behind the scenes. Also, check out the [extended tour](./docs/extended-tour.md) to learn more about Node/C++ Addon development, builds, Xcode, and more details about the configuration of this skeleton.
61 changes: 61 additions & 0 deletions bench/hello_async.bench.js
@@ -0,0 +1,61 @@
"use strict";

var argv = require('minimist')(process.argv.slice(2));
if (!argv.iterations || !argv.concurrency) {
  console.error('Please provide desired iterations, concurrency');
  console.error('Example: \nnode bench/hello_async.bench.js --iterations 50 --concurrency 10');
  process.exit(1);
}

// This env var sets the libuv threadpool size.
// The value is locked in the first time a function interacts with the threadpool,
// so we need to set it either in the shell or at the very top of a JS file
// (like we do here).
process.env.UV_THREADPOOL_SIZE = argv.concurrency;

var fs = require('fs');
var path = require('path');
var assert = require('assert');
var d3_queue = require('d3-queue');
var module = require('../lib/index.js');
var queue = d3_queue.queue(argv.concurrency);
var runs = 0;

function run(cb) {
  module.helloAsync({ louder: false }, function(err, result) {
    if (err) {
      return cb(err);
    }
    ++runs;
    return cb();
  });
}

for (var i = 0; i < argv.iterations; i++) {
  queue.defer(run);
}

var time = +(new Date());

queue.awaitAll(function(error) {
  if (error) throw error;
  if (runs !== argv.iterations) {
    throw new Error('Error: did not run as expected');
  }
  // check rate
  time = +(new Date()) - time;

  if (time === 0) {
    console.log('Warning: ms timer not high enough resolution to reliably track rate. Try more iterations');
  } else {
    // number of runs completed per second
    var rate = runs / (time / 1000);
    console.log('Benchmark speed: ' + rate.toFixed(0) + ' runs/s (runs:' + runs + ' ms:' + time + ' )');
  }

  console.log('Benchmark iterations:', argv.iterations, 'concurrency:', argv.concurrency);

  // There may be instances when you want to assert some performance metric
  // assert.equal(rate > 1000, true, 'speed not at least 1000/second ( rate was ' + rate + ' runs/s )');

});
63 changes: 63 additions & 0 deletions bench/hello_object_async.bench.js
@@ -0,0 +1,63 @@
"use strict";

var argv = require('minimist')(process.argv.slice(2));
if (!argv.iterations || !argv.concurrency) {
  console.error('Please provide desired iterations, concurrency');
  console.error('Example: \nnode bench/hello_object_async.bench.js --iterations 50 --concurrency 10');
  process.exit(1);
}

// This env var sets the libuv threadpool size.
// The value is locked in the first time a function interacts with the threadpool,
// so we need to set it either in the shell or at the very top of a JS file
// (like we do here).
process.env.UV_THREADPOOL_SIZE = argv.concurrency;

var fs = require('fs');
var path = require('path');
var assert = require('assert');
var d3_queue = require('d3-queue');
var module = require('../lib/index.js');

var H = new module.HelloObjectAsync('park bench');
var queue = d3_queue.queue(argv.concurrency);
var runs = 0;

function run(cb) {
  H.helloAsync({ louder: false }, function(err, result) {
    if (err) {
      return cb(err);
    }
    ++runs;
    return cb();
  });
}

for (var i = 0; i < argv.iterations; i++) {
  queue.defer(run);
}

var time = +(new Date());

queue.awaitAll(function(error) {
  if (error) throw error;
  if (runs !== argv.iterations) {
    throw new Error('Error: did not run as expected');
  }
  // check rate
  time = +(new Date()) - time;

  if (time === 0) {
    console.log('Warning: ms timer not high enough resolution to reliably track rate. Try more iterations');
  } else {
    // number of runs completed per second
    var rate = runs / (time / 1000);
    console.log('Benchmark speed: ' + rate.toFixed(0) + ' runs/s (runs:' + runs + ' ms:' + time + ' )');
  }

  console.log('Benchmark iterations:', argv.iterations, 'concurrency:', argv.concurrency);

  // There may be instances when you want to assert some performance metric
  // assert.equal(rate > 1000, true, 'speed not at least 1000/second ( rate was ' + rate + ' runs/s )');

});
25 changes: 25 additions & 0 deletions docs/benchmarking.md
@@ -0,0 +1,25 @@
# Benchmarking

This project includes [bench tests](https://github.com/mapbox/node-cpp-skel/tree/master/bench) you can use to experiment with and measure performance. These bench tests hit the functions that [simulate expensive work being done in the threadpool](https://github.com/mapbox/node-cpp-skel/blob/master/src/object_async/hello_async.cpp#L121-L122). This is intended to model real-world use cases where you, as the developer, have expensive computation you'd like to dispatch to worker threads. Adapt these tests to your custom code to monitor its performance.
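
Adapting a bench script to your own addon mostly means swapping out the deferred function. Here is a minimal sketch, where `yourModule` and `myExpensiveAsync` are hypothetical placeholders for whatever your own addon actually exports:

```
// Sketch: replace the skeleton's helloAsync call with your own async export.
// `yourModule` and `myExpensiveAsync` are hypothetical names used for illustration.
var yourModule = require('../lib/index.js'); // point this at your own addon
var runs = 0;

function run(cb) {
  yourModule.myExpensiveAsync({ /* your options */ }, function(err, result) {
    if (err) {
      return cb(err);
    }
    ++runs; // count completed calls so the rate calculation still works
    return cb();
  });
}
```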

We've included two bench tests for the async examples, demonstrating the effects of concurrency and threads within a process. For example, you can run:

```
node bench/hello_async.bench.js --iterations 50 --concurrency 10
```

This will run a bunch of calls to the module's `helloAsync()` function. You can control two things:

- iterations: number of times to call `helloAsync()`
- concurrency: max number of threads the test can utilize, set via `UV_THREADPOOL_SIZE`. When running the bench script, you can see this number of threads reflected in your [Activity Monitor](https://github.com/springmeyer/profiling-guide#activity-monitorapp-on-os-x)/[htop window](https://hisham.hm/htop/). See the sketch after this list for why the variable has to be set before any threadpool work happens.
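
The bench scripts set `UV_THREADPOOL_SIZE` at the very top of the file because libuv reads the value the first time the threadpool is used and ignores later changes. A minimal sketch of that behaviour (the sizes below are arbitrary example values):

```
// Sketch: UV_THREADPOOL_SIZE is only read the first time libuv needs a worker thread.
process.env.UV_THREADPOOL_SIZE = 10; // effective: nothing has touched the threadpool yet

var fs = require('fs');
fs.readFile(__filename, function() {
  // this readFile call is the first threadpool interaction,
  // so the pool is created with 10 threads
});

process.env.UV_THREADPOOL_SIZE = 4; // too late: the pool size is already locked in at 10
```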

### Ideal Benchmarks

**Ideally, you want your workers to run your code ~99% of the time and never idle.** This reflects a healthy Node.js C++ addon and is what you would expect to see when you've picked a good problem to solve with Node.

The bench tests and async functions that come with `node-cpp-skel` out of the box demonstrate this behaviour:
- An async function that is CPU intensive and takes a while to finish (expensive creation and querying of a `std::map` and string comparisons).
- Worker threads are busy doing a lot of work, and the main loop is relatively idle. Depending on how many threads (concurrency) you enable, you may see your CPU% sky-rocket and your cores max out. Yeaahhh!!!
- If you bump up `--iterations` to 500 and profile in Activity Monitor.app, you'll see the main loop is idle as expected since the threads are doing all the work. You'll also see the threads busy doing work in AsyncHelloWorker roughly 99% of the time :tada:

![](https://user-images.githubusercontent.com/1209162/29333300-e7c483e2-81c8-11e7-8253-1beb12173841.png)
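
If you want a bench run to double as a regression guard, the commented-out assertion at the bottom of each bench script can be adapted into a hard check. A sketch (the 1000 runs/s threshold is an arbitrary example value you would tune for your own hardware):

```
// Sketch: fail the bench run when throughput drops below a chosen threshold.
var assert = require('assert');

function checkRate(runs, timeMs, minRate) {
  if (timeMs === 0) return; // timer resolution too coarse to judge the rate
  var rate = runs / (timeMs / 1000);
  assert.ok(rate >= minRate,
    'speed not at least ' + minRate + '/second (rate was ' + rate.toFixed(0) + ' runs/s)');
}

// e.g. inside queue.awaitAll(): checkRate(runs, time, 1000);
```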
8 changes: 6 additions & 2 deletions package.json
@@ -20,10 +20,14 @@
     "nan": "~2.5.1",
     "node-pre-gyp": "~0.6.32"
   },
-  "bundledDependencies":["node-pre-gyp"],
+  "bundledDependencies": [
+    "node-pre-gyp"
+  ],
   "devDependencies": {
     "aws-sdk": "^2.4.7",
-    "tape": "^4.5.1"
+    "tape": "^4.5.1",
+    "d3-queue": "^3.0.1",
+    "minimist": "~1.2.0"
   },
   "binary": {
     "module_name": "module",