Our development culture is committed to testing. CesiumJS is used in diverse use cases on a wide array of platforms so it is important for it to be well tested.
As of CesiumJS 1.93, CesiumJS has 13,500 tests with 90% code coverage, and more test code (263K lines) than engine code (218K lines). We are unaware of any other project of this size, lifetime, and number of contributors with similar stats.
All new code should have 100% code coverage and should pass all tests. Always run the tests before opening a pull request. It's also important that the tests run quickly so we run them often.
The CesiumJS tests are written in JavaScript and use Jasmine, a behavior-driven testing framework. Jasmine calls an individual test, e.g., a function with one or more assertions, a spec (though the Cesium team usually still says "test"), and a group of related tests, e.g., all the tests for `Cartesian3`, a suite. Jasmine also calls an assertion an expectation.
Karma is a tool that spawns a browser window, runs the tests in that browser, and displays the results on the command line.
To run all tests with Karma, run `npm run test`.
When all tests pass, output looks like this:
When one or more tests fail, output looks like this:
The failed tests will be listed by name, and details on each failure are listed below, including the expected and actual value of the failed expectation and the call stack.
It is also possible for Karma to run all tests against each browser installed on the current system. To do so, run `npm run test-all`. Launchers are currently included for Chrome, Firefox, Edge, IE, and Safari.
When running CesiumJS locally, first build the code, then choose any of the following test options. To run the tests in one or more specific browsers, pass the `--browsers` flag:

```bash
npm run test -- --browsers Firefox,Chrome,Edge
```
This option enables extra low-level WebGL validation, such as calling `gl.getError()` after each WebGL call:

```bash
npm run test-webgl-validation
```
This option makes all WebGL calls a no-op and ignores test expectations that rely on reading back from WebGL. It allows running the tests on CI, where a reasonable WebGL implementation is not available, while still getting full code coverage, albeit not full verification:

```bash
npm run test-webgl-stub
```
Suites can have a category associated with them. This option runs all tests in the `WebGL` category, which includes all tests that use WebGL (essentially anything that requires creating a `Viewer`, `CesiumWidget`, `Scene`, or `Context`):

```bash
npm run test-webgl
```
Likewise, this option runs all tests not in the `WebGL` category. Perhaps surprisingly, this is the bulk of the CesiumJS tests, including math and geometry tests, imagery provider tests, data source tests, etc. These tests run quickly (for example, 15 seconds compared to 60) and are very reliable across systems since they do not rely on the underlying WebGL implementation, which can vary based on the browser, OS, driver, and GPU:

```bash
npm run test-non-webgl
```
Most test options load CesiumJS with the unminified build plus a source map, which is great for debugging. However, many users build apps using the built Cesium.js in `Build/Cesium` (created, for example, by running `npm run release`). This option runs the tests against that build instead of the unminified one. The release version has `DeveloperError` exceptions optimized out, so this test option makes `toThrowDeveloperError` always pass. See the Build Guide for all the CesiumJS build options:

```bash
npm run test-release
```
Often when developing, it is useful to run only one suite to save time, instead of all the tests, and then run all the tests before opening a pull request. To do this, simply change the `it` function call for the desired test to `fit`; the `f` stands for focused in Jasmine speak. Likewise, to run an entire suite, use `fdescribe` instead of `describe`.
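For example, here is a minimal sketch of a focused test (the test body is illustrative):

```javascript
// Only this focused test runs; all other it() tests are skipped.
fit("adds two Cartesian3 objects", function () {
  const result = Cartesian3.add(
    new Cartesian3(1.0, 2.0, 3.0),
    new Cartesian3(4.0, 5.0, 6.0),
    new Cartesian3(),
  );
  expect(result).toEqual(new Cartesian3(5.0, 7.0, 9.0));
});
```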
Alternatively, test suites can be run from the command line with the `includeName` flag. The argument value should be a substring of one or more test suite names, which are found in the `describe` function of the spec file. For example, the `Cartesian2` tests can be run with:

```bash
npm run test -- --includeName Cartesian2
```
If it is helpful to step through a unit test in a browser debugger, run the tests with the `debug` flag:

```bash
npm run test -- --debug
```

The `--debug` flag will prevent the Karma browser from closing after running the tests, and clicking the "Debug" button will open a new tab that can be used for placing breakpoints and stepping through the code.
When running CesiumJS locally, start the local server and browse to http://localhost:8080/. There are several test options:
When all the tests pass, the page looks like this:
When one or more tests fail, the page looks like this:
In this case, the number of failing tests is listed at the top, and details on each failure are listed below, including the expected and actual value of the failed expectation and the call stack. The top several functions of the call stack are inside Jasmine and can be ignored. Above, the file and line of interest for the first failing test starts with an `@`:

```
@at UserContext.<anonymous> (http://localhost:8080/Specs/Core/Cartesian3Spec.js:12:25)
```
Click on the failed test to rerun just that test. This is useful for saving time when fixing an issue as it avoids rerunning all the tests. Always rerun all the tests before opening a pull request.
The Run with WebGL validation link passes a query parameter to the tests to enable extra low-level WebGL validation, such as calling `gl.getError()` after each WebGL call.

The Run with WebGL stub link passes a query parameter to the tests to use CesiumJS's WebGL stub. This makes all WebGL calls a no-op and ignores test expectations that rely on reading back from WebGL. This allows running the tests on CI, where a reasonable WebGL implementation is not available, while still getting full code coverage, albeit not full verification.
This option loads the test page without running any tests. We can then use the browser's built-in search to find a test or suite and run only that. For example, below, just the tests for `Cartesian3` were run. This uses a query parameter to select the test/suite to run, so refreshing the page will run just that test/suite again.
Often when developing, it is useful to run only one suite to save time, instead of all the tests, and then run all the tests before opening a pull request.
Suites can have a category associated with them. This option runs all tests in the `WebGL` category, which includes all tests that use WebGL (essentially anything that requires creating a `Viewer`, `CesiumWidget`, `Scene`, or `Context`).
Likewise, this option runs all tests not in the `WebGL` category. Perhaps surprisingly, this is the bulk of the CesiumJS tests, including math and geometry tests, imagery provider tests, data source tests, etc. These tests run quickly (for example, 15 seconds compared to 60) and are very reliable across systems since they do not rely on the underlying WebGL implementation, which can vary based on the browser, OS, driver, and GPU.
Most test options load CesiumJS with the unminified build plus a source map, which is great for debugging. However, many users build apps using the built Cesium.js in `Build/Cesium` (created, for example, by running `npm run release`). This option runs the tests using that build instead of the unminified one.
The Run All Tests against Combined File with Debug Code Removed option is the same, except it uses the release version of the built Cesium.js (created, for example, by running `npm run release`). The release version has `DeveloperError` exceptions optimized out, so this test option makes `toThrowDeveloperError` always pass.
See the Build Guide for all the CesiumJS build options.
We use istanbul via karma-coverage to generate code coverage reports. It is especially important to have outstanding code coverage since JavaScript doesn't have a compiler and linker to catch early errors.
To generate a coverage report, run:

```bash
npm run coverage
```

This places a report inside the `Build/Coverage/<browser>` folder and opens your default browser with the result.
You'll see a source tree that matches Cesium's own code layout. Each directory shows aggregated results for all files it contains.
Click on a directory to see results for each file in that directory. Click on a specific file to see line-by-line coverage for just that file. For example, here is `Core/AssociativeArray`:
In the left margin, the green counts show how many times each line was executed. Many lines, such as comments and semicolons, are not counted since they are not executable.
For the `contains` function above:

- `AssociativeArray.prototype.contains = function(key) {` is executed once when CesiumJS is loaded, to assign the `contains` function to the `AssociativeArray` prototype.
- The `if` statement and return statement are executed 8,022 times.
- The `throw` statement is not executed, which indicates that test coverage should be improved here. We strive to test all error conditions.
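A test along these lines would cover the `throw` statement (a minimal sketch; `contains` throws a `DeveloperError` when the key is not a string or number, and the exact message is deliberately left unchecked here):

```javascript
it("contains throws with an invalid key", function () {
  const associativeArray = new AssociativeArray();
  expect(function () {
    associativeArray.contains({}); // keys must be strings or numbers
  }).toThrowDeveloperError();
});
```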
When writing tests, do not confuse 100% code coverage with 100% tested. For example, it is possible to have 100% code coverage without having any expectations. Also consider the following code:
```javascript
if (a) {
  // Code block a.1
} else {
  // Code block a.2
}

if (b) {
  // Code block b.1
} else {
  // Code block b.2
}
```
It is possible to have 100% code coverage with two tests: one test where `a` and `b` are both `true`, and another where both are `false`. However, this only takes into account the cases when `// Code block a.1` and `// Code block b.1` run together or when `// Code block a.2` and `// Code block b.2` run together. There could be an issue when, for example, `// Code block a.1` and `// Code block b.2` run together.
The number of linearly independent paths (four in this case) is called the cyclomatic complexity. Be mindful of this when writing tests. On one extreme, 100% code coverage is the least amount of testing; on the other extreme is covering the cyclomatic complexity, which quickly becomes unreasonable. Use your knowledge of the implementation to devise the best strategy.
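To make this concrete, here is a minimal sketch with a hypothetical `classify` function built from the two if/else blocks above; the first two tests alone reach 100% code coverage, but two more are needed to cover all four paths:

```javascript
// A hypothetical function containing the two if/else blocks above.
function classify(a, b) {
  let first;
  if (a) {
    first = "a.1";
  } else {
    first = "a.2";
  }

  let second;
  if (b) {
    second = "b.1";
  } else {
    second = "b.2";
  }

  return `${first} ${second}`;
}

// These two tests reach 100% code coverage...
it("classifies when a and b are both true", function () {
  expect(classify(true, true)).toEqual("a.1 b.1");
});

it("classifies when a and b are both false", function () {
  expect(classify(false, false)).toEqual("a.2 b.2");
});

// ...but these two additional tests are needed to cover the
// remaining linearly independent paths.
it("classifies when only a is true", function () {
  expect(classify(true, false)).toEqual("a.1 b.2");
});

it("classifies when only b is true", function () {
  expect(classify(false, true)).toEqual("a.2 b.1");
});
```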
End to end (E2E) testing is a type of testing that exercises the entire stack from the user's perspective. This is different from unit testing, which validates a small, isolated piece of functionality at the class or function level. In CesiumJS, this testing consists mainly of screenshot comparisons.

Playwright is used to conduct end to end testing. The Playwright tests run in Node and drive instances of the browsers. It is recommended that you read through the Playwright documentation to get up to speed before writing or reviewing tests.
Since CesiumJS often takes the current time into consideration for things like lighting, animation, and the position of the skybox corresponding to the earth's rotation, we use Sinon to mock system time, ensuring consistency for all end to end tests.
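As an illustration, Sinon's fake timers can pin the system clock to a fixed instant. This is a sketch of the general idea only; the date and options here are assumptions, not CesiumJS's actual configuration:

```javascript
import sinon from "sinon";

// Freeze Date.now so time-dependent rendering (lighting, animation,
// skybox rotation) is identical on every run.
const clock = sinon.useFakeTimers({
  now: new Date("2023-01-01T00:00:00Z"),
  shouldAdvanceTime: true, // keep timers firing so render loops continue
});

// ... drive the scenario and capture screenshots ...

clock.restore(); // put real timers back when finished
```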
To generate initial screenshots, check out the `main` branch (or a previous release tag) and run `npm run test-e2e-update`. Subsequently, you can test against the generated screenshots with `npm run test-e2e`.
Common end to end workflows have been captured in the following tasks:

- `npm run test-e2e` - Tests only in Chromium against the development build of CesiumJS.
- `npm run test-e2e-all` - Tests in Chromium, Firefox, and WebKit against the development build of CesiumJS.
- `npm run test-e2e-release` - Tests only in Chromium against the release build of CesiumJS.
- `npm run test-e2e-release-all` - Tests in Chromium, Firefox, and WebKit against the release build of CesiumJS.
- `npm run test-e2e-report` - Launches a server to view the HTML results of the last test run.
- `npm run test-e2e-update` - Tests in Chromium, Firefox, and WebKit against the development build of CesiumJS, updating the screenshots used for comparison. Use this if a feature has deliberately changed rendering.
For further info and options, see the Playwright documentation on running tests.
`test-e2e`, `test-e2e-all`, `test-e2e-release`, and `test-e2e-release-all` can all have command line options for `playwright test` appended. The most useful are:

- `--debug` - Launch a headed browser with developer tools for stepping through the tests.
- `--project="webkit"` - Test only WebKit. Can also use `chromium` or `firefox` to test against those browsers.
- `-g <grep>` or `--grep <grep>` - Run only tests that match a regular expression.
- `--grep-invert <grep>` - Run only tests that don't match a regular expression.
For example:

```bash
npm run test-e2e -- -g "3D Tiles Clipping Planes"
```
Tests can also be isolated by appending `.only`:

```javascript
test.only("focus this test", async ({ page }) => {
  // Run only focused tests in the entire project.
});
```
When a new Sandcastle example is added, or behavior is intentionally changed, the screenshots will need to be updated. Use `test-e2e-update` to run the relevant E2E tests and generate any new screenshots:

```bash
npm run test-e2e-update -- -g "3D Tiles Clipping Planes"
```
The vast majority of our end to end tests should run in about 2-5 seconds on any machine (a few may still take more than 15 seconds, but most should not). If they take longer than that, look for ways to speed them up. We've previously noticed certain browsers not using the GPU under Playwright, resulting in slower tests.

The first step in checking for WebGL-related issues should be to add an extra test or two that load the WebGL report under the Playwright environment to see if anything (like the wrong GPU) stands out. Here is an example of these tests:
```javascript
// tests/example1.test.js
import { test } from "./test.js";

function waitFor(delay) {
  return new Promise((resolve) => setTimeout(resolve, delay));
}

const screenshotPath = "Specs/e2e/webgl-check";
const chromeGpu = "chrome://gpu/"; // only works for Chrome, not Firefox
const webGlReport1 = "https://webglreport.com/?v=1";
const webGlReport2 = "https://webglreport.com/?v=2";

/**
 * This is used to check how WebGL is running in the testing environment to spot
 * things, like not using the correct GPU, that may affect performance and run
 * time of the tests themselves.
 * Based on https://www.createit.com/blog/headless-chrome-testing-webgl-using-playwright/
 */
test.describe("WebGL verification", () => {
  // Check if hardware acceleration is enabled. Without it, our tests will be much slower.
  test("GPU hardware acceleration", async ({ page }) => {
    await page.goto(chromeGpu);
    await waitFor(2000);
    await page.screenshot({
      path: `${screenshotPath}/screenshot_hardware.png`,
      fullPage: true,
    });
  });

  test("webgl report v1", async ({ page }) => {
    await page.goto(webGlReport1);
    await waitFor(2000);
    await page.screenshot({
      path: `${screenshotPath}/screenshot_webgl1.png`,
      fullPage: true,
    });
  });

  test("webgl report v2", async ({ page }) => {
    await page.goto(webGlReport2);
    await waitFor(2000);
    await page.screenshot({
      path: `${screenshotPath}/screenshot_webgl2.png`,
      fullPage: true,
    });
  });
});
```
Despite our best efforts, sometimes tests fail. This is often due to a new browser, OS, or driver bug that breaks a test that previously passed. If this indicates a bug in CesiumJS, we strive to quickly fix it. Likewise, if it indicates that CesiumJS needs to work around the issue (for example, as we did for Safari 9), we also strive to quickly fix it.
If a test failure is likely due to a browser, OS, or driver bug, or a poorly written test, and the failure does not impact actual CesiumJS apps, we sometimes submit an issue with the testfailure label to fix it at a later time. A great way to contribute to CesiumJS is to help fix these issues.
We love to write tests. We often write them as we write engine code (meaning CesiumJS itself). Or if the engine code is experimental, we make a second pass and write tests before opening a pull request. Sometimes we do both: we write tests right away for the new code we expect to be stable, and we wait to write tests for the code in flux.
Tests are located in the Specs directory (recall, Jasmine calls a test a "spec"), which has a directory structure that mirrors the Source directory. For example, all the tests for files in `Source/Core` are in `Specs/Core`. Likewise, all the tests for `Source/Core/Cartesian3.js` are in `Specs/Core/Cartesian3Spec.js`. The filenames are the same except for the `Spec` suffix. Each spec file corresponds to at least one suite (sometimes suites are nested inside).
The CesiumJS tests are largely unit tests because they test individual units, e.g., functions or classes. The simplest units are tested individually, and then units built upon other units are also tested. This allows us to build CesiumJS on well-tested foundations and to quickly narrow down issues.
For example, a `BoundingSphere` is composed of a `Cartesian3` that defines its center and a number that defines its radius. Even though tests for `BoundingSphere` implicitly test parts of `Cartesian3`, there are separate tests that explicitly test `Cartesian3` as a unit so anything that relies on `Cartesian3` knows it is already tested.
Often, we also test private units individually for the same reason. For example, `ShaderCache` is a private class in CesiumJS used by primitives, but it is still individually tested in ShaderCacheSpec.js.
Sometimes classes or functions are even designed with a separation specifically to enable more precise testing. For example, see `getStringFromTypedArray` and getStringFromTypedArraySpec.js.
Tests are written in JavaScript using Jasmine. It is important to realize that the tests themselves are code, just like CesiumJS. As such, the test code is held to the same standards as the engine code: it should be well organized, cohesive, loosely coupled, fast, and go through peer review.
Cartesian3Spec.js contains the tests for `Cartesian3`, which is a class representing a 3D point or vector with `x`, `y`, and `z` properties and typical functions like adding two `Cartesian3` objects.
Here is a stripped down version of the tests:
```javascript
import { Cartesian3 } from "../../Source/Cesium.js";

describe("Cartesian3", function () {
  it("construct with default values", function () {
    const cartesian = new Cartesian3();
    expect(cartesian.x).toEqual(0.0);
    expect(cartesian.y).toEqual(0.0);
    expect(cartesian.z).toEqual(0.0);
  });
});
```
`describe` identifies this file as a test suite, and modules are imported with standard `import` syntax, the same way as in the engine code.
Using Jasmine, each test is defined by calling `it` and passing a string that describes the test and a function that is the test.
This test constructs a default `Cartesian3` object and then expects that the `x`, `y`, and `z` properties are zero (their default) using Jasmine's `expect` and `toEqual` functions.
Tests should have at least one `expect` call, but they may also have several as long as the test is cohesive. A test should test one behavior; if a test grows too complicated, it is hard to debug when it fails. Testing one function may require only one test with one `expect`, or it may require multiple tests, each with multiple `expect` statements. It depends on context. Experience, peer review, and the existing tests will help guide you.
The above test does not require creating a `Viewer` widget or even a WebGL context; the only part of CesiumJS it uses is `Cartesian3` and anything it depends on.
To learn the ins and outs of Jasmine, take 15 minutes to go through their examples. We will not cover all the details in this guide.
We often can't rely on an exact floating-point comparison. In these cases, use `toEqualEpsilon` instead of `toEqual` to compare within a tolerance:
it("angleBetween works for acute angles", function () {
const x = new Cartesian3(0.0, 1.0, 0.0);
const y = new Cartesian3(1.0, 1.0, 0.0);
expect(Cartesian3.angleBetween(x, y)).toEqualEpsilon(
CesiumMath.PI_OVER_FOUR,
CesiumMath.EPSILON14,
);
expect(Cartesian3.angleBetween(y, x)).toEqualEpsilon(
CesiumMath.PI_OVER_FOUR,
CesiumMath.EPSILON14,
);
});
`toEqualEpsilon` is a custom Jasmine matcher that the CesiumJS tests add. See Specs/addDefaultMatchers.js for all the custom matchers. In general, all test utility functions are in files in the `Specs` root directory.
For more on comparing floating-point numbers, see Comparing Floating Point Numbers, 2012 Edition.
In addition to testing success cases, we also test all failure cases. The custom matchers `toThrowDeveloperError` and `toThrowRuntimeError` can be used to expect an exception to be thrown:
it("fromDegrees throws with no latitude", function () {
expect(function () {
Cartesian3.fromDegrees(0.0, undefined);
}).toThrowDeveloperError(
"Expected latitude to be typeof number, actual typeof was undefined",
);
});
Above, `Cartesian3.fromDegrees` is expected to throw a `DeveloperError` because it expects longitude and latitude arguments, and only longitude is provided.
- When testing for exceptions, it is recommended to test for the expected error message to verify that the test is triggering the correct error. This can be achieved either with the full error message, as above, or with a regular expression that matches the error message, like this:
it("fromDegrees throws with no latitude", function () {
expect(function () {
Cartesian3.fromDegrees(0.0, undefined);
}).toThrowDeveloperError(/Expected latitude to be/);
});
- When testing for exceptions, put only code that is expected to trigger the exception inside the function passed to `expect()`, in case setup code unintentionally throws an exception.
- To verify the right exception is thrown, it is often useful to comment out the `expect` call when first running the test, for example:
it("fromDegrees throws with no latitude", function () {
// expect(function() {
Cartesian3.fromDegrees(0.0, undefined);
// }).toThrowDeveloperError();
});
The Jasmine functions `beforeAll` and `afterAll` are used to run a function before and after, respectively, all the tests in a suite. Likewise, `beforeEach` and `afterEach` run a function before and after each test is run. For example, here is a common pattern from DebugModelMatrixPrimitiveSpec.js:
```javascript
let scene;

beforeAll(function () {
  scene = createScene();
});

afterAll(function () {
  scene.destroyForSpecs();
});

afterEach(function () {
  scene.primitives.removeAll();
});
```
Above, `scene` is scoped at the suite level, so all tests in the file have access to it. Before the suite is run, `beforeAll` is used to assign to `scene` (see below), and after the suite is run, `afterAll` is used to destroy the scene. Using `afterEach`, after each test is run, all the primitives are removed from the scene. `scene` is typically used in a test like this:
it("renders", function () {
const p = scene.primitives.add(new DebugModelMatrixPrimitive());
expect(scene).notToRender([0, 0, 0, 255]);
});
The test knows `scene` will be defined and does not need to worry about cleaning up the scene because `afterEach` and `afterAll` take care of it.
We strive to write isolated tests so that a test can be run individually and produce the same results as when running the suite containing the test or all CesiumJS tests. Therefore, a test should not depend, for example, on a previous test setting global state.
The tests in the `WebGL` category do not strictly follow this pattern. Creating a WebGL context (which happens implicitly, for example, in `createScene`) is slow, and creating a lot of contexts, e.g., one per test, is not well supported in browsers. So these tests use the pattern in the code example above, where a `scene` (or `viewer` or `context`) has the lifetime of the suite using `beforeAll` and `afterAll`.
Due to side effects, a WebGL context should never be created in the global scope, that is, outside of an `it`, `beforeAll`, `afterAll`, `beforeEach`, or `afterEach` block. Since they create a context, this applies to the helper functions `createContext`, `createScene`, and `createViewer`.
Unlike the `Cartesian3` tests we first saw, many tests need to construct the main CesiumJS `Viewer` widget or one of its major components. Low-level renderer tests construct just a `Context` (which, itself, has a canvas and WebGL context), and primitive tests construct a `Scene` (which contains a `Context`).
As shown above, these tests use the CesiumJS test utility functions `createViewer`, `createScene`, or `createContext`. These functions honor query parameters passed to the tests (e.g., enabling WebGL validation or the WebGL stub) and add a few utility functions to the returned object. For example, `createScene` creates a 1x1 pixel canvas with a CesiumJS Scene and adds `renderForSpecs` (to initialize and render a frame) and `destroyForSpecs` to the returned `Scene` object.
Most CesiumJS apps do not render the scene directly; instead, the `Viewer` object's default render loop renders the scene implicitly for the user. The tests are an exception; most tests explicitly render the scene.
CesiumJS adds several custom Jasmine matchers to make the rendering tests more concise and to support running tests with the WebGL stub. When using the WebGL stub, the WebGL implementation is a noop, and test expectations that rely on reading back from WebGL are ignored. The rendering custom matchers are:

- `toRender`
- `notToRender`
- `toRenderAndCall`
- `toPickPrimitive`
- `notToPick`
- `toPickAndCall`
- `toPickVoxelAndCall`
- `toDrillPickAndCall`
- `toReadPixels`
- `notToReadPixels`
- `contextToRender`
- `notContextToRender`
`toRender` and `notToRender` clear a 1x1 viewport to black, render the scene into it, and verify the RGBA value of the pixel, e.g.:
it("renders", function () {
const p = scene.primitives.add(new DebugModelMatrixPrimitive());
expect(scene).notToRender([0, 0, 0, 255]);
});
it("does not render when show is false", function () {
scene.primitives.add(
new DebugModelMatrixPrimitive({
show: false,
}),
);
expect(scene).toRender([0, 0, 0, 255]);
});
Like most rendering tests, the first example uses a coarse-grained expectation to check that the pixel is not the default value of black. Although an expectation this coarse-grained may not catch all subtle errors, it is reliable across platforms, and we rarely have bugs a more fine-grained test would have caught, especially with some manual testing (see below).
The second test verifies that the pixel value is the same as the default background color since the primitive's `show` property is `false`.
`toRender` and `notToRender` can also render the scene at a given CesiumJS simulation time, e.g.:
```javascript
expect({
  scene: scene,
  time: t,
}).toRender([0, 0, 0, 255]);
```
For more complicated expectations, which will still be ignored with the WebGL stub, `toRenderAndCall` takes a callback function:
```javascript
expect(scene).toRenderAndCall(function (rgba) {
  expect(rgba[0]).not.toEqual(0);
  expect(rgba[1]).toBeGreaterThanOrEqualTo(0);
  expect(rgba[2]).toBeGreaterThanOrEqualTo(0);
  expect(rgba[3]).toEqual(255);
});
```
For reliability across WebGL implementations, use complex expectations in `toRenderAndCall` sparingly.
Similar custom matchers are used for picking tests:
```javascript
const b = billboards.add(/* ... */);
expect(scene).toPickPrimitive(b); // Can also use toPickAndCall() and toDrillPickAndCall()

b.show = false;
expect(scene).notToPick();
```
For tests that render the scene themselves, `toReadPixels` and `notToReadPixels` are used to verify the RGBA value. In the simplest case, pass an RGBA array, e.g.:
```javascript
expect(context).toReadPixels([0, 0, 0, 255]);
expect(context).notToReadPixels([0, 0, 0, 255]);
```
`toReadPixels` can also read from a given framebuffer and use an epsilon for the RGBA comparison, e.g.:
```javascript
expect({
  context: context,
  framebuffer: framebuffer,
  epsilon: 1,
}).toReadPixels([0, 0, 0, 255]);
```
Low-level CesiumJS renderer tests use just a `Context` without a CesiumJS `Scene`, and use the `contextToRender` and `notContextToRender` custom matchers to render a WebGL point primitive to the context's 1x1 viewport and verify the RGBA value, e.g.:
```javascript
expect({
  context: context,
  shaderProgram: sp,
}).contextToRender(expected);

expect(/* ... */).notContextToRender();
```
Uniforms, the model matrix, and various depth options can be provided. In addition, instead of providing a full shader program, just the fragment shader's source can be provided, e.g.:
it("can declare automatic uniforms", function () {
const fs =
"void main() { " +
" out_FragColor = vec4((czm_viewport.x == 0.0) && (czm_viewport.y == 0.0) && (czm_viewport.z == 1.0) && (czm_viewport.w == 1.0)); " +
"}";
expect({
context: context,
fragmentShader: fs,
}).contextToRender();
});
Sometimes it's helpful to run a rendering test in both WebGL 1 and WebGL 2 contexts to verify the code works in either case. `createWebglVersionHelper` is a helper function that duplicates a block of specs in each context and only runs WebGL 2 if it is supported by the environment.
For example, the following code executes the spec `"can create a vertex buffer from a size in bytes"` twice, once in a WebGL 1 context and once in a WebGL 2 context:
```javascript
createWebglVersionHelper(createBufferSpecs);

function createBufferSpecs(contextOptions) {
  let buffer;
  let buffer2;
  let context;

  beforeAll(function () {
    context = createContext(contextOptions);
  });

  afterAll(function () {
    context.destroyForSpecs();
  });

  it("can create a vertex buffer from a size in bytes", function () {
    buffer = Buffer.createVertexBuffer({
      context: context,
      sizeInBytes: 4,
      usage: BufferUsage.STATIC_DRAW,
    });
    expect(buffer.sizeInBytes).toEqual(4);
    expect(buffer.usage).toEqual(BufferUsage.STATIC_DRAW);
  });
}
```
Rendering tests typically render to a 1x1 pixel canvas so that each test runs as quickly as possible. However, when regressions happen, it is difficult to tell why a test is failing since the image is too small to see. To make debugging tests easier, the `debugCanvasWidth` and `debugCanvasHeight` arguments can be used to increase the canvas size as desired.
Example using the command line:

```bash
# Render tests will use a 400x300 canvas
npm run test -- --debugCanvasWidth 400 --debugCanvasHeight 300
```
Example using SpecRunner:

```
http://localhost:8080/Specs/SpecRunner.html?debugCanvasWidth=400&debugCanvasHeight=300
```
For ease of use, `debugCanvasHeight` can be omitted to produce a square canvas. For example:

```bash
# Render tests will use a 300x300 canvas
npm run test -- --debugCanvasWidth 300
```
An example debug workflow might look like this:

- Use `fit()` to focus on the test that is failing.
- Create a breakpoint where the first rendering code happens, such as a call to `scene.renderForSpecs()`.
- Run the tests using the debug options described above.
- Step through the test. After each render, check the browser window to see the frame that was just rendered.
```javascript
// Focus the test that is failing
fit("test that is failing", function () {
  // Start a breakpoint here
  scene.renderForSpecs();
  // After each render call, check the browser for the frame that was just rendered.
  // ...
  scene.renderForSpecs();
  // Check the browser again for the next frame
  // ... and so on
  scene.renderForSpecs();
});
```
GLSL is the shading language used by WebGL to run small graphics programs in parallel on the GPU. Under the hood, CesiumJS contains a library of GLSL identifiers and functions. These are unit tested by writing a simple fragment shader that outputs white if the test passes. For example, here is an excerpt from BuiltinFunctionsSpec.js:
```javascript
let context;

beforeAll(function () {
  context = createContext();
});

afterAll(function () {
  context.destroyForSpecs();
});

it("has czm_transpose (2x2)", function () {
  const fs =
    "void main() { " +
    "  mat2 m = mat2(1.0, 2.0, 3.0, 4.0); " +
    "  mat2 mt = mat2(1.0, 3.0, 2.0, 4.0); " +
    "  out_FragColor = vec4(czm_transpose(m) == mt); " +
    "}";
  context.verifyDrawForSpecs(fs);
});
```
`createContext` returns a `Context` object with a test function, `verifyDrawForSpecs`, that renders a point to the 1x1 canvas and verifies the pixel value is white, e.g.:

```javascript
expect(context.readPixels()).toEqual([255, 255, 255, 255]);
```
In the test above, the expectation is implicit in the GLSL string for the fragment shader, `fs`, which assigns white to `out_FragColor` if `czm_transpose` correctly transposes the matrix.
It can be useful to verify that a function was called and to inspect information about the call, such as the arguments passed to it. Jasmine spies are used for this.
Here is an excerpt from TweenCollectionSpec.js:
it("add() adds with a duration of zero", function () {
const complete = jasmine.createSpy("complete");
const tweens = new TweenCollection();
tweens.add({
startObject: {},
stopObject: {},
duration: 0.0,
complete: complete,
});
expect(tweens.length).toEqual(0);
expect(complete).toHaveBeenCalled();
});
Tweens are used for animation. This test creates a spy with `jasmine.createSpy` and uses `toHaveBeenCalled()` to verify that a tween calls the provided `complete` function when it finishes animating, which happens immediately in this case given that `duration` is `0.0`.
Spies can also provide more information about the function call (or calls). Here is an excerpt from GeocoderViewModelSpec.js:
it("Zooms to longitude, latitude, height", function () {
const viewModel = new GeocoderViewModel({
scene: scene,
});
spyOn(Camera.prototype, "flyTo");
viewModel.searchText = " 1.0, 2.0, 3.0 ";
viewModel.search();
expect(Camera.prototype.flyTo).toHaveBeenCalled();
expect(Camera.prototype.flyTo.calls.mostRecent().args[0].destination).toEqual(
Cartesian3.fromDegrees(1.0, 2.0, 3.0),
);
viewModel.searchText = "1.0 2.0 3.0";
viewModel.search();
expect(Camera.prototype.flyTo.calls.mostRecent().args[0].destination).toEqual(
Cartesian3.fromDegrees(1.0, 2.0, 3.0),
);
});
Here, `spyOn` is used to replace `Camera.flyTo` (a prototype function on instances) with a spy. When the Geocoder is used to search for a location, the test expects that `Camera.flyTo` was called with the right arguments.
Spies can also be used on non-prototype functions. For example:

```javascript
it("Applies the right render state", function () {
  spyOn(RenderState, "fromCache").and.callThrough();

  return loadAndZoomToModel({ gltf: gltfUrl }, scene).then(function (model) {
    const rs = {
      frontFace: WebGLConstants.CCW,
      cull: {
        enabled: true,
        face: WebGLConstants.BACK,
      },
      // ...
    };

    expect(RenderState.fromCache).toHaveBeenCalledWith(rs);
  });
});
```
This test verifies that a glTF model uses the expected render state. First, a spy is added to `RenderState.fromCache`. Since we want the spy to collect information but still call the original function, `and.callThrough()` is used. Once the model is loaded, `toHaveBeenCalledWith` is used to expect that `RenderState.fromCache` was called with the expected arguments.
For more examples of what you can do with spies, see the Jasmine examples.
Beware of too tightly coupling a test with an implementation; it makes the engine code hard to refactor and results in specific, narrow tests. Given that we are usually white box testing (where we know the implementation details, as opposed to black box testing), we need to resist the urge to let too many implementation details leak into a test. In particular, reach into private members (whose names start with `_`) sparingly.
Sometimes, a test requires sample data, like a CZML file or glTF model, or a service. When possible, we try to procedurally create data or mock a response in the test instead of reading a local file or making an external request. For example, loadArrayBufferSpec.js uses a spy to simulate an XHR response.
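For example, a response can be faked with a spy along these lines (a sketch; `Resource._Implementations.loadWithXhr` is the private hook commonly spied on in CesiumJS specs, and its exact signature should be treated as an assumption):

```javascript
import { Resource } from "../Source/Cesium.js";

// Intercept the low-level XHR helper and resolve with fabricated data
// instead of hitting the network.
spyOn(Resource._Implementations, "loadWithXhr").and.callFake(
  function (url, responseType, method, data, headers, deferred) {
    deferred.resolve(new ArrayBuffer(16)); // fake response payload
  },
);
```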
When external data can't be avoided, prefer storing a small file in a subdirectory of Specs/Data. Avoid bloating the repo with an unnecessarily large file. Update LICENSE.md if the data requires a license or attribution. Include a README file when useful, for example, see Specs/Data/Models/glTF-2.0/BoomBox.
Make external requests, which assume the tests are run with an Internet connection, very sparingly. We anticipate being able to run the tests offline.
(For an introduction to promises, see JavaScript Promises - There and back again).
Jasmine also has support for running specs that test asynchronous operations. The functions that you pass to `beforeAll`, `afterAll`, `beforeEach`, `afterEach`, and `it` can be declared `async`. These functions can also return promises. There are also cases where asynchronous functions that explicitly return promises should be tested. See the Asynchronous Work tutorial for more information.
Here is a simplified example of `beforeAll` from sampleTerrainSpec.js:

```javascript
let worldTerrain;

beforeAll(async function () {
  worldTerrain = await createWorldTerrainAsync();
});
```
Here is a simplified example of a test from ModelSpec.js:
```javascript
const modelUrl = "./Data/Models/glTF-2.0/Box/glTF/Box.gltf";

let scene;

beforeAll(function () {
  scene = createScene();
});

afterAll(function () {
  scene.destroyForSpecs();
});

it("renders glTF model", async function () {
  const model = await loadAndZoomToModelAsync({ gltf: modelUrl }, scene);

  expect(scene).toRenderAndCall(function (rgba) {
    expect(rgba[0]).toBeGreaterThan(0);
    expect(rgba[1]).toBeGreaterThan(0);
    expect(rgba[2]).toBeGreaterThan(0);
    expect(rgba[3]).toBe(255);
  });
});
```
Given a model's URL and other options, `loadAndZoomToModelAsync` loads a model, configures the camera, and returns a promise that resolves when the model is ready for rendering.
Since loading a model requires asynchronous requests and creating WebGL resources that may be spread over several frames, CesiumJS's `pollToPromise` is used to return a promise that resolves when the model is ready. It renders the scene in an implicit loop (hence the name "poll") until `model.ready` is `true` or the timeout is reached.
`pollToPromise` is also used in many places where a test needs to wait for an asynchronous event before testing its expectations. Here is an excerpt from BillboardCollectionSpec.js:
it("can create a billboard using a URL", async function () {
const b = billboards.add({
image: "./Data/Images/Green.png",
});
expect(b.ready).toEqual(false);
await pollToPromise(function () {
return b.ready;
});
expect(scene).toRender([0, 255, 0, 255]);
});
Here a billboard is loaded using a URL to an image. Internally, `Billboard` makes an asynchronous request for the image and then sets its `ready` property to `true`. The function passed to `pollToPromise` just returns the value of `ready`; it does not need to render the scene to progressively complete the request like `Model` does. Finally, the test verifies that the billboard is green.
To test that a promise rejects, we use `expectAsync` and provide the expected error type and message. Here is an excerpt from ArcGISTiledElevationTerrainProviderSpec.js:

```javascript
it("fromUrl throws if the SRS is not supported", async function () {
  const baseUrl = "made/up/url";
  metadata.spatialReference.latestWkid = 1234;

  await expectAsync(
    ArcGISTiledElevationTerrainProvider.fromUrl(baseUrl),
  ).toBeRejectedWithError(RuntimeError, "Invalid spatial reference");
});
```
Since developer errors are removed from release builds, CesiumJS's `toBeRejectedWithDeveloperError` matcher is used to verify asynchronous `DeveloperError`s. Here is an excerpt from Cesium3DTilesetSpec.js:

```javascript
it("fromUrl throws without url", async function () {
  await expectAsync(Cesium3DTileset.fromUrl()).toBeRejectedWithDeveloperError(
    "url is required, actual value was undefined",
  );
});
```
To isolate testing, mock objects can be used to simulate real objects. Here is an excerpt from SceneSpec.js:

```javascript
function MockPrimitive(command) {
  this.update = function (frameState) {
    frameState.commandList.push(command);
  };

  this.destroy = function () {};
}

it("debugCommandFilter filters commands", function () {
  const c = new DrawCommand({
    pass: Pass.OPAQUE,
  });
  c.execute = function () {};
  spyOn(c, "execute");

  scene.primitives.add(new MockPrimitive(c));

  scene.debugCommandFilter = function (command) {
    return command !== c; // Do not execute command
  };
  scene.renderForSpecs();
  expect(c.execute).not.toHaveBeenCalled();
});
```
This test verifies that `debugCommandFilter` can be used to filter the commands executed when the scene is rendered. Here, the function passed to `debugCommandFilter` explicitly filters out the command `c`. In order to ask the scene to execute the command in the first place, a mock object, `MockPrimitive`, is used to return the command when the scene is rendered.
This test is more cohesive and easier to debug than if it were written using a real primitive, which brings along all of its extra behavior and does not provide direct access to its commands.
As mentioned above, some tests are in the `WebGL` category. To assign a category to a suite, pass the category to `describe`:
```javascript
import { DebugModelMatrixPrimitive } from "../../Source/Cesium.js";
import createScene from "../createScene.js";

describe(
  "Scene/DebugModelMatrixPrimitive",
  function () {
    let scene;

    beforeAll(function () {
      scene = createScene();
    });

    afterAll(function () {
      scene.destroyForSpecs();
    });

    // ...
  },
  "WebGL",
);
```
CesiumJS uses a customized `describe` function that wraps Jasmine's describe calls and provides the category capability.
Sometimes running the unit tests is all that is needed to verify new code. However, we often also manually run CesiumJS to see the effects of new code. Sometimes it is as simple as running CesiumJS Viewer before opening a pull request, perhaps because we just added a new function to `Cartesian3`. Other times, it is as involved as going through each example in Sandcastle and testing the different options because, for example, we refactored the renderer for WebGL 2. Most often, there is a middle ground: for example, we added a new feature to `Model`, so we ran the Sandcastle examples that create 3D models.
Advice from @pjcozzi:
Since I wrote the very first CesiumJS test, I have not written a suite - or even an individual test - from scratch. I suspect no one does.
The first 73 CesiumJS tests from March 2011.
Instead, start with a similar suite or test, copy it, strip it down to the minimum you need, and then start adding your specific code. For example, if you are adding a new math type, start with Cartesian3Spec.js or Matrix4Spec.js. If you are adding a new primitive, start with DebugModelMatrixPrimitiveSpec.js.
Two decades ago, I didn't know what unit tests and code coverage were. I wrote a little bit of code, put breakpoints in all the new code paths, and then ran the app in the debugger. When a breakpoint hit, I stepped through the code, looked at the variable values, and verified that the code was doing what I intended. I did this until everything worked and all the breakpoints were hit. Then, I repeated: I added a bit more new code, new breakpoints, and incrementally tested.
Even today, with modern tools and experience, I never code for hours straight without testing, and I rarely write new code without seeing it execute in the debugger. Debuggers are not a reactive tool for when a bug is found; they are a proactive tool for gaining insight and avoiding surprises.
Try this approach for yourself!
See Section 4.4 of Getting Serious with JavaScript by CesiumJS contributors Matthew Amato and Kevin Ring in WebGL Insights for a deeper but less broad presentation of CesiumJS testing.