Intern: The user guide

Fundamentals

What is Intern?

Intern is a complete framework for testing Web sites and applications. It’s built around standards like WebDriver and was designed from the ground up to be the most powerful, flexible, and reliable JavaScript testing system in the world.

Intern can test all sorts of things:

  • Plain JavaScript code, in any module format (or no module format!)
  • Web pages generated by server-side languages (like Java, PHP, or Ruby)
  • Native or hybrid iOS, Android, and Firefox OS applications

Intern is minimally prescriptive and enforces only a basic set of best practices designed to ensure your tests stay maintainable over time. Its extensible architecture allows you to write custom test interfaces, executors, and reporters to customize how your tests run and to integrate easily with your existing coding environment.

Intern also comes with Grunt tasks so it can be quickly added to existing Grunt-based workflows, and is designed to work out-of-the-box with many popular continuous integration services like Jenkins and Travis CI.

Intern is used by companies large and small, including some you may have heard of, like Aerohive Networks, Esri, HSBC, IBM, ING, Intuit, Marriott, Mozilla, Stripe, and Twitter. We hope you’ll enjoy working with it too!

Who is Intern best for?

Intern is best for development teams that want a complete, flexible, standards-based, high-quality testing solution that Just Works. It’s best for testing JavaScript code, but it is also an excellent tool for testing server-generated Web pages or native mobile apps. Its built-in support for source maps makes it uniquely well-suited for developers that compile their code to JavaScript from another language, such as TypeScript, or that run tests against production-ready built/minified code.

Intern is excellent for teams that are just beginning to write tests; its built-in code coverage analysis makes thorough testing fast and precise, and its functional testing system enables testing of highly procedural code that’s otherwise impossible to test.

Intern is a great choice if you’re hoping to transition away from your current testing library but already have a lot of tests, since its extensible interfaces make it possible to transition without needing to rewrite tests or learn a totally new set of APIs.

Intern’s execution model is well-suited to those that follow a test-last development approach and want to prevent regressions using continuous integration. It has also been used very successfully by teams that follow a test-driven development approach.

Finally, because of its strong architectural patterns, conventions, and expansive feature set, Intern is well-suited for testing extremely large, “enterprise-level” applications that must be maintained by large teams of developers with varying skill levels.

System requirements

Intern can be used to run unit tests in any of the following environments:

Android Browser 4.1+
Chrome 31+
Firefox 17+
Internet Explorer 9+
Node.js 0.10+
Opera 26+
Safari (iOS) 6.1+
Safari (Mac OS) 6.0+

Intern can be used to run functional tests in any of the following environments:

Android (browser, hybrid, native) 4.1+
Chrome 31+
Firefox 17+
Internet Explorer 8+
iOS (browser, hybrid, native) 6.1+
Opera 26+
Safari (Mac OS) 6+

Other environments are not currently officially supported, but should work as long as they correctly implement ECMAScript 5 (for unit tests) and/or the WebDriver protocol (for functional tests).

In order to execute functional tests, or to execute tests against multiple browsers at the same time, Intern also requires Node.js 0.10+ plus a WebDriver-compatible server.

The following WebDriver servers are compatible with Intern:

Appium 1.3.0+
ChromeDriver 2.9+
FirefoxDriver (Selenium) 2.41.0+
InternetExplorerDriver (Selenium) 2.41.0+
ios-driver 0.6.6+
SafariDriver (Selenium) 2.41.0+
Selendroid 0.9.0+

Intern also has built-in support for cloud hosted services from BrowserStack, CrossBrowserTesting, Sauce Labs, and TestingBot.

Reading this guide

Throughout this guide, you will find certain pieces of information are called out specifically, as we have found them to be especially important or problematic for users.

Features marked with a version number (like 3.0) are only available starting in that version of Intern.

Later sections of this guide are designed to build upon knowledge presented in earlier sections. If you jump into the middle and feel confused, step back a section or two until you find the information you need to move forward. We also accept pull requests to the documentation source to improve its flow and clarity.

Getting started

Overview

Intern provides two strategies for automated testing: unit testing and functional testing.

Unit testing works by executing a piece of code directly and inspecting the result. For example, calling a function and then checking that it returns an expected value is a form of unit testing. This is the most common and most useful form of testing for day-to-day development, since it’s very fast and allows very small units of code to be tested in isolation. However, unit tests are limited to only testing certain testable code designs, and can also be limited by the constraints of the execution environment (like browser sandboxes).

Functional testing works by issuing commands to a device that mimic actual user interactions. Once an interaction has occurred, these tests verify that the expected information is displayed by the user interface. Because these interactions come from outside the application being tested, they are not restricted by the execution environment. They also allow application code to be treated as a black box, which means functional tests can be written to test pages and applications written in any language. Because functional tests don’t call any APIs directly, code that is unable to be unit tested can still be successfully exercised. Functional tests allow the automation of UI and integration testing that would otherwise need to be performed manually.

By understanding and combining both of these testing strategies when testing an application, it becomes possible to effectively automate nearly all of the QA process, enabling much faster development cycles and significantly reducing software defects.

Installation

Intern can be installed simply by running npm install intern.

Recommended directory structure

While Intern can be used to test code using nearly any directory structure, if you are starting a new project or have the ability to modify the directory structure of your existing project, a few small changes can help make Intern integration a lot easier.

The recommended directory structure for a front-end or front+back-end project using Intern looks like this:

project_root/
	dist/         – (optional) Built code; mirrors the `src` directory
	node_modules/ – Node.js dependencies, including Intern
		intern/
	src/          – Front-end source code (+ browser dependencies)
		app/        – Your application code
		index.html  – Your application entry point
	tests/        – Intern tests
		functional/ – Functional tests
		support/    – Test support files
		              (custom interfaces, reporters, mocks, etc.)
		unit/       – Unit tests
		intern.js   – Intern configuration

Using this directory structure provides a few benefits:

  • It lets you easily switch from testing source and built code simply by changing the location of your packages from src to dist in your Intern configuration
  • It lets you use the default loader baseUrl configuration setting without worrying about path differences between Node.js and browser
  • It adds another layer of assurance that your tests and other private server-side code won’t be accidentally deployed along with the rest of your application

Terminology

Intern uses certain standard terminology in order to make it easier to understand each part of the system.

  • An assertion is a function call that verifies that an expression (like a variable or function call) returns an expected, correct value (e.g. assert.isTrue(someVariable, 'someVariable should be true'))
  • A test interface is a programming interface for registering tests with Intern
  • A test case (or, just test) is a function that makes calls to application code and makes assertions about what it should have done
  • A test suite is a collection of tests (and, optionally, sub–test-suites) that are related to each other in some logical way
  • A test module is a JavaScript module, usually in AMD format, that contains test suites

These pieces can be visualized in a hierarchy, like this:

  • test module
    • test suite
      • test suite
        • test case
          • assertion
          • assertion
      • test case
        • assertion
        • assertion
    • test suite
  • test module
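As a sketch, a test module written with Intern’s Object interface (covered later in this guide) maps onto this hierarchy like so. The suite and test names here are hypothetical:

```javascript
// A hypothetical AMD test module illustrating the hierarchy:
// test module > test suite > sub-suite > test case > assertions
define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');

	registerSuite({
		name: 'outer suite',

		'inner suite': {
			'test case': function () {
				// two assertions inside one test case
				assert.strictEqual(1 + 1, 2, '1 + 1 should be 2');
				assert.isTrue(true, 'true should be true');
			}
		}
	});
});
```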

Conventions

Intern follows certain conventions in order to make testing easier and more reliable. Understanding these fundamental concepts will help you get the most out of testing with Intern.

Asynchronous operations

Intern always uses Promise objects whenever an asynchronous operation needs to occur. All suite, test, and reporter functions can return a Promise, which will pause the test system until the Promise resolves (or until a timeout occurs, whichever happens first).
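For example, a test function can simply return a Promise; Intern will wait for it to resolve (pass) or reject (fail) before moving on. A minimal standalone sketch, with the surrounding suite registration omitted:

```javascript
// A minimal sketch of a Promise-returning test function.
// Intern waits for the returned Promise to settle before continuing.
function asyncTest() {
	return new Promise(function (resolve, reject) {
		setTimeout(function () {
			var result = 2 + 2; // stand-in for a real asynchronous result
			if (result === 4) {
				resolve(result);
			}
			else {
				reject(new Error('unexpected result: ' + result));
			}
		}, 10);
	});
}
```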

Module loader & format

Intern is built on top of a standard AMD loader, which means that its modules are also normally written in the AMD module format. Using an AMD loader instead of something like the built-in Node.js loader + Browserify is critical to provide a highly stable and flexible testing system, because AMD is the only stable standard for module loading that has all of these traits:

  • Allows modules to be written for the browser without requiring an intermediate compilation step;
  • Allows modules and other assets to be asynchronously or conditionally resolved by writing simple loader plugins;
  • Allows “hard-coded” dependencies of modules under test to be mocked without messing with the internals of the module loader

For users that are only familiar with Node.js modules: AMD modules are exactly the same, except for one extra line of wrapper code that enables asynchronous loading:

define(function (require, exports, module) {
/* Node.js module code here! */
});

Testing your first app

In order to quickly get started with Intern, we’ve created a basic tutorial that walks through the steps required to install, configure, and run basic tests against a very simple demo application.

Once you’ve run through the tutorial, you may also want to look at some of the example integrations for popular libraries and frameworks if you are using AngularJS, Backbone.js, Dojo, Ember, or jQuery.

After that, continue reading the user guide to learn about all the advanced functionality available within Intern that you can use to test your own code better and faster!

Configuration

Common configuration

Intern’s configuration files are actually standard AMD modules that export a configuration object. This allows simple inheritance of parent configurations and enables test configurations to be generated programmatically at runtime.

The configuration file is specified using the config argument on the command-line (for Node.js) or the config argument in the URL query-string (for browsers).

The following configuration options are common to all execution modes in Intern:

Option (version added): Description. Default value.

bail (3.1): If this value is set to true, a failing test will cause all following tests in all suites to be skipped. Default: false
baseline (3.4): If true, and if benchmark is also true, benchmarking will run in "baseline" mode. Default: false
basePath (3.0): The common base path for all files that need to be loaded during testing. Default: process.cwd() (Node.js) or 'node_modules/intern/../../' (browser)
benchmark (3.4): If true, enable benchmarking mode. Default: false
benchmarkConfig (3.4): An object containing options for the benchmarking system. Default: undefined
benchmarkSuites (3.4): An array of benchmark test module IDs to load. These may include glob patterns. Default: []
coverageVariable (3.0): The name of the global variable used to store and retrieve code coverage data. Default: '__internCoverage'
defaultTimeout (3.0): The amount of time, in milliseconds, an asynchronous test can take before it is considered timed out. Default: 30000
excludeInstrumentation: A boolean (3.0) or regular expression matching paths to exclude from code coverage. Default: null
filterErrorStack (3.4): If this value is set to true, stack trace lines for non-application code will be pruned from error messages. Default: false
grep: A regular expression that filters which tests should run. Default: /.*/
loaderOptions (3.0) / loader (2.0): Configuration options for the AMD loader. Default: { … }
loaders (3.0) / useLoader (2.0): An alternative module loader to use in place of the built-in AMD loader. Default: {}
reporters: An array of reporters to use to report test results. Default: [ 'Runner' ] (runner) or [ 'Console' ] (client)
setup (3.0): A function that will be run before the testing process starts. Default: undefined
suites: An array of unit test module IDs to load. These may include glob patterns (3.1). Default: []
teardown (3.0): A function that will be run after the testing process ends. Default: undefined

bail (boolean) 3.1

The bail option controls Intern's “fail fast” behavior. When bail is set to true and a test fails, all remaining tests, both unit and functional, will be skipped. Other than the cleanup methods for the failing test and its containing suite, no other test or suite lifecycle methods (setup/before, beforeEach, afterEach, teardown/after) will be run.

baseline (boolean) 3.4

If this value is true, and if Intern is running in benchmarking mode, it will record baseline data rather than evaluating benchmarks against existing baseline data. Intern will automatically run in baseline mode if no benchmark data exists when a benchmarking run is started.

basePath (string) 3.0

The common base path for all files that need to be loaded during testing. If basePath is specified using a relative path, that path is resolved differently depending upon where Intern is executing:

  • In Node.js, basePath is resolved relative to process.cwd()
  • In a browser with an initialBaseUrl argument in the query-string, basePath is resolved relative to initialBaseUrl
  • In a browser with no initialBaseUrl argument, basePath is resolved relative to two directories above the Intern directory (i.e. node_modules/intern/../../)

If basePath is not explicitly provided, it is set to . and is resolved according to the rules above.

benchmark (boolean) 3.4

If this value is true, Intern will run in benchmarking mode. In this mode, only suites in the benchmarkSuites will be executed. By default, benchmarking mode will compare benchmark results against previously recorded results and flag deviations. Set the baseline option to true to record new baseline results.

benchmarkConfig (object) 3.4

This value contains options for the Benchmark reporter. The default values are:

benchmarkConfig: {
	filename: 'baseline.json',
	thresholds: {
		warn: { rme: 5, mean: 3 },
		fail: { rme: 6, mean: 10 }
	},
	verbosity: 0
}

benchmarkSuites (array) 3.4

An array of benchmark test module IDs to load. See suites for supported syntax.

coverageVariable (string) 3.0

The name of the global variable used to store and retrieve code coverage data. Change this only if you have code that is pre-instrumented by a compatible version of Istanbul with a different global variable name.

defaultTimeout (number) 3.0

The amount of time, in milliseconds, an asynchronous test can run before it is considered timed out. By default this value is 30 seconds.

Timeouts can be set for an individual test by setting the timeout property of the test, or for all tests within a test suite by setting the timeout property of the suite.
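For instance, with the Object interface a timeout can be set for a whole suite via a suite property, or adjusted inside a single test. This is a sketch; the suite and test names are illustrative:

```javascript
define(function (require) {
	var registerSuite = require('intern!object');

	registerSuite({
		name: 'timeouts',
		timeout: 5000, // applies to every test in this suite

		'slow test': function () {
			this.timeout = 60000; // override for this test only
			// … long-running asynchronous work returning a Promise …
		}
	});
});
```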

excludeInstrumentation (RegExp | boolean 3.0)

The excludeInstrumentation option can be either a regular expression or the boolean value true.

When set to true, it completely disables code instrumentation.

When set to a regular expression, any path matching the expression is excluded from code coverage. The expression is matched against the path part of URLs (starting from the end of proxyUrl, excluding any trailing slash) in browsers, and against file system paths (starting from the end of process.cwd()) in the Node.js client.

This option should be used when you want to exclude dependencies from being reported in your code coverage results. (Intern code—that is, anything that loads from {{proxyUrl}}/__intern/—is always excluded from code coverage results.) For example, to exclude tests and Node.js dependencies from being reported in your application’s code coverage analysis:

{
	excludeInstrumentation: /^(?:tests|node_modules)\//
}

filterErrorStack (boolean) 3.4

The filterErrorStack option tells Intern to clean up error stack traces by removing non-application code. For example, by default a stack trace for a WebDriver test error might look like this:

UnknownError: [GET http://localhost:4444/.../text] Element reference not seen before: %5Bobject%20Object%5D
  at runRequest  <node_modules/leadfoot/Session.js:88:40>
  at <node_modules/leadfoot/Session.js:109:39>
  at new Promise  <node_modules/dojo/Promise.ts:411:3>
  at ProxiedSession._get  <node_modules/leadfoot/Session.js:63:10>
  at Element._get  <node_modules/leadfoot/Element.js:23:31>
  at Element.getVisibleText  <node_modules/leadfoot/Element.js:199:21>
  at Command.<anonymous>  <node_modules/leadfoot/Command.js:680:19>
  at <node_modules/dojo/Promise.ts:393:15>
  at run  <node_modules/dojo/Promise.ts:237:7>
  at <node_modules/dojo/nextTick.ts:44:3>
  at Command.target.(anonymous function) [as getVisibleText]  <node_modules/leadfoot/Command.js:674:11>
  at Test.check contents [as test]  <tests/functional/hello.js:21:6>
  at <node_modules/intern/lib/Test.js:191:24>
  at <node_modules/intern/browser_modules/dojo/Promise.ts:393:15>
  at runCallbacks  <node_modules/intern/browser_modules/dojo/Promise.ts:11:11>
  at <node_modules/intern/browser_modules/dojo/Promise.ts:317:4>
  at run  <node_modules/intern/browser_modules/dojo/Promise.ts:237:7>
  at <node_modules/intern/browser_modules/dojo/nextTick.ts:44:3>
  at _combinedTickCallback  <internal/process/next_tick.js:67:7>

With filterErrorStack set to true, it would look like this:

UnknownError: [GET http://localhost:4444/.../text] Element reference not seen before: %5Bobject%20Object%5D
  at Test.check contents [as test]  <tests/functional/hello.js:21:6>

grep (RegExp)

A regular expression that filters which tests should run. grep should be used whenever you want to run only a subset of all available tests.

When using grep, its value is matched against the ID of each registered test, and tests that don’t match are skipped with a skip message of “grep”.

The ID of a test is the concatenation of the names of its parent suites and the test’s own name, separated by ' - '. In other words, a test registered like this:

tdd.suite('FooComponent', function () {
	tdd.test('startup', function () {
		// …
	});
});

…would have the ID 'FooComponent - startup'. In this case, all of the following grep values would match and cause this test to run:

  • /FooComponent/
  • /startup/
  • /FooComponent - startup/
  • /foocomponent/i
  • /start/

The following grep values would not match and cause this test to be skipped:

  • /BarComponent/ – “BarComponent” is not in the full name of the test
  • /foocomponent/ – this regular expression is case sensitive
  • /^startup/ – the full ID of the test is matched, not just the name part

loaderOptions (Object) 3.0 / loader (Object) 2.0

Configuration options for the module loader. Any configuration options that are supported by the active loader can be used here. By default, the Dojo 2 AMD loader is used; this can be changed to another loader that provides an AMD-compatible API using the loaders option.

AMD configuration options supported by the built-in loader are map, packages, and paths.

3.0 If baseUrl is not explicitly defined, it is automatically set to be equivalent to the basePath. Relative baseUrls are relative to basePath.

When following the recommended directory structure, no extra loader configuration is needed.

If you are testing an AMD application and need to use stub modules for testing, the map configuration option is the correct way to do this:

{
	loaderOptions: {
		map: {
			app: {
				// When any module inside 'app' tries to load 'app/foo',
				// it will receive 'tests/stubs/app/foo' instead
				'app/foo': 'tests/stubs/app/foo'
			}
		}
	}
}

loaders (Object) 3.0 / useLoader (Object) 2.0

An alternative module loader to use in place of the built-in AMD loader. When loaders is specified, Intern will swap out the built-in loader with the loader you’ve specified before loading reporters and test modules.

The alternative loader you use must implement the AMD API and must support the baseUrl, map, and packages configuration options.

There are two different keys that may be specified so that the correct path to the loader can be provided in each environment:

  • host-node specifies the loader to use in Node.js. This should be a Node.js module ID.
  • host-browser specifies the loader to use in browsers. This should be a path or URL to a script file.

In Intern 2, loader paths are relative to the directory where Intern is installed. In Intern 3, loader paths are relative to basePath.

For example, to use a copy of RequireJS installed to the same project as Intern:

loaders: {
	'host-node': 'requirejs',
	'host-browser': 'node_modules/requirejs/require.js'
}

reporters (Array<Object 3.0 | string>)

An array of reporters to use to report test results. Reporters in this list can either be built-in reporter names (like 'Console' or 'JUnit'), or absolute AMD module IDs (like 'tests/support/customReporter') when using custom reporters.

3.0 Reporters can also be configured by passing an object with extra configuration options valid for the given reporter. In this case, the ID of the reporter should be given by the id key of the reporter configuration:

{
	reporters: [
		{ id: 'JUnit', filename: 'report.xml' }
	]
}

If reporters are not specified in the configuration, Intern will pick defaults that are most suitable for the current execution mode.

setup (Function)

A function that will be run before the testing process starts. If this function returns a Promise, Intern will wait for the Promise to resolve before continuing. If the function throws an exception or rejects a returned Promise, the testing process will terminate with an error. This can be a good place to initialize testing resources needed by all tests, such as database connections.
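A sketch of a setup function that opens a hypothetical database connection before any tests run; the connectToTestDatabase helper is illustrative, not part of Intern:

```javascript
{
	setup: function () {
		// Returning a Promise makes Intern wait until the connection
		// is ready before starting any tests.
		return connectToTestDatabase().then(function (connection) {
			// stash the handle somewhere tests (and teardown) can reach it
			global.testDbConnection = connection;
		});
	}
}
```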

suites (Array<string>)

An array of unit test module IDs to load. For example:

{
	suites: [
		'tests/unit/foo',
		'tests/unit/bar'
	]
}

Suite specifiers may also include glob patterns using syntax supported by node-glob:

{
	suites: [
		'tests/unit/foo/*',
		'tests/unit/{bar,baz}/*'
	]
}

Like simple suite specifiers, specifiers with glob patterns refer to module IDs, not file paths. Glob patterns must resolve to individual test modules, not packages. For example, given the following project structure, 'tests/u*' would not be a valid glob, but 'tests/unit/f*' would be:

project_root/
	tests/
		unit/
			foo.js
			bar.js
		intern.js

suites can be set to null to skip loading the unit testing system when in runner mode. From the command line, this is done by passing the argument suites=.
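For example, assuming Intern is installed locally under node_modules, a runner invocation that skips unit suite loading entirely might look like this (the config path is illustrative):

```shell
# Run only functional tests by disabling unit suite loading
node_modules/.bin/intern-runner config=tests/intern suites=
```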

teardown (Function)

A function that will be run after the testing process completes. If this function returns a Promise, Intern will wait for the Promise to resolve before continuing. If the function throws an exception or rejects a returned Promise, the testing process will terminate with an error. This is generally where resources opened in config.setup should be cleaned up.

Client configuration

There are currently no options that only apply when running in client mode.

Test runner configuration

Certain configuration options only apply when in runner mode. These options are ignored when running in client mode.

Option (version added): Description. Default value.

capabilities: Default capabilities for all test environments. Default: { name: configModuleId, 'idle-timeout': 60 }
environments: An array of capabilities objects, one for each desired test environment. Default: []
environmentRetries (3.0): The number of times to retry creating a session for a remote environment. Default: 3
functionalSuites: An array of functional test module IDs to load. These may include glob patterns. Default: []
leaveRemoteOpen (3.0): Leaves the remote environment running at the end of the test run. Default: false
maxConcurrency: The maximum number of environments to test simultaneously. Default: 3
proxyOnly (3.0): Starts Intern’s instrumenting HTTP proxy but performs no other work. Default: false
proxyPort: The port where the Intern HTTP server will listen for requests. Default: 9000
proxyUrl: The external URL to the Intern HTTP server. Default: 'http://localhost:9000/'
runnerClientReporter (3.0): The reporter used to send data from the unit testing system back to the test runner. Default: { id: 'WebDriver' }
tunnel: The tunnel to use to establish a WebDriver server for testing. Default: 'NullTunnel'
tunnelOptions: Options to pass to the WebDriver server tunnel. Default: {}

capabilities (Object)

Default capabilities for all test environments. These baseline capabilities are extended for each environment by the environments array.

Different services like BrowserStack and Sauce Labs may have different sets of available capabilities. In order for Intern to work correctly, it’s important that you use the appropriate capabilities for the WebDriver server you are interacting with:

Extra options for ChromeDriver are specified on the chromeOptions capability.

Intern will automatically fill certain capabilities fields in order to provide better feedback within cloud service dashboards:

  • name will be set to the ID of the configuration file being used
  • build will be set to the commit ID from the TRAVIS_COMMIT and BUILD_TAG environment variables, if either exists

environments (Array<Object>)

An array of capabilities objects, one for each desired test environment. The same options from capabilities are used for each environment specified in the array. To delete an option from the default capabilities, explicitly set its value to undefined.

If arrays are provided for browserName, version, platform, or platformVersion, all possible option permutations will be generated. For example:

{
	environments: [
		{
			browserName: 'chrome',
			version: [ '23', '24' ],
			platform: [ 'Linux', 'Mac OS 10.8' ]
		}
	]
}

This configuration will generate 4 environments: Chrome 23 on Linux, Chrome 23 on Mac OS 10.8, Chrome 24 on Linux, and Chrome 24 on Mac OS 10.8.

All other capabilities are not permuted, but are simply passed as-is to the WebDriver server.

3.3 When using one of the supported cloud services (BrowserStack, CrossBrowserTesting, Sauce Labs, or TestingBot), the browser version may also use range expressions and the “latest” alias. A range expression consists of two versions separated by “..”. Intern will request a list of all supported platform + browser + version combinations from the service and will expand the range using the versions available from the service. The range is inclusive, so for the range expression “23..26”, Intern will include versions 23 and 26 in the expanded version list.

{
	environments: [
		{ browserName: 'chrome', version: '23..26', platform: 'Linux' }
	]
}

The “latest” alias represents the most recent version of a browser (the most recent version available on the relevant cloud service). An integer may be subtracted from the latest value, like “latest-1”; this represents the next-to-latest version. The “latest” alias, including the subtraction form, may be used in version ranges or by itself.

{
	environments: [
		{ browserName: 'firefox', version: 'latest', platform: 'Linux' },
		{ browserName: 'chrome', version: '24..latest-1', platform: 'Linux' }
	]
}

environmentRetries (number) 3.0

The number of times to retry creating a session for a remote environment. Occasionally, hosted VM services will experience temporary failures when creating a new session; specifying environmentRetries avoids false positives caused by transient session creation failures.

functionalSuites (Array<string>)

An array of functional test module IDs to load. Functional tests are different from unit tests because they are executed on the local (Node.js) side, not the remote (browser) side, so they are specified separately from the list of unit test modules.

{
	functionalSuites: [
		'tests/functional/foo',
		'tests/functional/bar'
	]
}

As with suites, functional suite specifiers may also include glob patterns.

leaveRemoteOpen (boolean | string) 3.0

Leaves the remote environment running at the end of the test run. This makes it easier to investigate a browser’s state when debugging failing tests. This can also be set to fail to only keep the remote environment running when a test failure has occurred.

In Intern 2, this option is available, but only as a command-line flag.

maxConcurrency (number)

The maximum number of environments to test simultaneously. Set this to Infinity to run tests against all environments at once. You may want to reduce this if you have a limited number of test machines available, or are using a shared hosted account.

proxyOnly (boolean) 3.0

Starts Intern’s instrumenting HTTP proxy but performs no other work. basePath will be served as the root of the server. This can be useful when you want to run the browser client manually and get access to code coverage information, which is not available when running the browser client directly from a normal HTTP server. The browser client is available from {proxyUrl}/__intern/client.html.

In Intern 2, this option is available, but only as a command-line flag.

proxyPort (number)

The port where the Intern HTTP server will listen for requests. Intern’s HTTP server performs two critical tasks:

  • Automatically adds instrumentation to your JavaScript code so that it can be analysed for completeness by the code coverage reporter
  • Provides a communication conduit for the unit testing system to provide live test results in runner mode

Any JavaScript code that you want to evaluate for code coverage must either pass through the code coverage proxy or be pre-instrumented for Intern. The HTTP server must also be accessible to the environment (browser) being tested in runner mode in order for unit testing results to be transmitted back to the test runner successfully.

proxyUrl (string)

The external URL to the Intern HTTP server. You will need to change this value only if you are running Intern’s HTTP server through a reverse proxy, or if Intern’s HTTP server needs to be reached through a public interface that your Selenium servers can access directly.

runnerClientReporter (string | Object) 3.0

The reporter used to send data from the unit testing system back to the test runner. The default reporter is the built-in WebDriver reporter, which can be configured by setting runnerClientReporter to an object with one or more properties:

waitForRunner: Whether or not events transmitted from the unit testing system to the test runner should cause the unit testing system to pause until a response is received from the test runner. This is necessary if you expect to be able to do things like take screenshots of the browser before/after each unit test executes from a custom reporter. This property can be set to true to always wait for the test runner after each event from the test system, or 'fail' to only wait if the event was a test failure or other error. Default: false

writeHtml: Whether or not test status should be written to the screen during the test run. This is useful for debugging test hangs when running on a cloud provider, but can also interfere with tests that rely on scrolling/positioning or code which indiscriminately destroys the content of the DOM. Default: true
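For example, to keep the default WebDriver reporter but pause the browser only on failures and stop writing test status to the DOM, a configuration might look like this:

```javascript
{
	runnerClientReporter: {
		id: 'WebDriver',
		waitForRunner: 'fail', // only pause the browser when a test fails
		writeHtml: false       // don't modify the DOM during the test run
	}
}
```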

tunnel (string)

The tunnel to use to establish a WebDriver server for testing. The tunnel can either be a built-in tunnel name (like 'NullTunnel' or 'BrowserStackTunnel'), or an absolute AMD module ID (like 'tests/support/CustomTunnel') when using a custom tunnel.

The built-in tunnels include 'NullTunnel', for use with an existing Selenium server or grid, and tunnels for cloud hosting providers, such as 'BrowserStackTunnel', 'CrossBrowserTestingTunnel', 'SauceLabsTunnel', and 'TestingBotTunnel'.

When you are using your own Selenium server or your own Selenium grid, you will typically use the 'NullTunnel' tunnel and specify the host, port, and/or path to the Selenium server in tunnelOptions.

tunnelOptions (Object)

Options to pass to the WebDriver server tunnel. Valid options for each of the built-in tunnels can be found in the Dig Dug documentation.
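For example, pointing Intern at an already-running Selenium server might look like this (the hostname, port, and path are illustrative values; check the Dig Dug documentation for the options your tunnel supports):

```javascript
{
	tunnel: 'NullTunnel',
	tunnelOptions: {
		// an existing Selenium server or grid hub (illustrative values)
		hostname: 'selenium.local',
		port: 4444,
		pathname: '/wd/hub/'
	}
}
```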

Test interfaces

Overview

Test interfaces are the way in which your tests make it into Intern. You can use one of the standard interfaces that come with Intern, or you can create your own custom interface if you don’t like the available defaults.

The Object interface

The Object interface is the most basic API for writing tests. It exposes a single function, usually referenced as registerSuite. This function is used to register a series of tests by passing in a plain JavaScript object containing test functions:

define(function (require) {
	var registerSuite = require('intern!object');

	registerSuite({
		name: 'Suite name',

		setup: function () {
			// executes before suite starts;
			// can also be called `before` instead of `setup`
		},

		teardown: function () {
			// executes after suite ends;
			// can also be called `after` instead of `teardown`
		},

		beforeEach: function (test) {
			// executes before each test
		},

		afterEach: function (test) {
			// executes after each test
		},

		'Test foo': function () {
			// a test case
		},

		'Test bar': function () {
			// another test case
		},

		/* … */
	});
});

If you need variables that are modified by your tests, it’s important to pass a function to registerSuite and create the variables inside that function, instead of declaring them directly in the module factory:

define(function (require) {
	var assert = require('intern/chai!assert');
	var registerSuite = require('intern!object');

	// Don't put these here! These variables are shared!
	var counter = 0;
	var app;

	registerSuite({
		name: 'Anti-pattern',

		setup: function () {
			app = {
				id: counter++
			};
		},

		'Test the id': function () {
			// May or may not be true! The value of `counter`
			// may have been modified by another suite execution!
			assert.strictEqual(app.id, counter - 1);
		}
	});
});

The correct pattern moves the variables inside a factory function passed to registerSuite:

define(function (require) {
	var assert = require('intern/chai!assert');
	var registerSuite = require('intern!object');

	registerSuite(function () {
		// Do put these here! These variables are unique for each environment!
		var counter = 0;
		var app;

		return {
			name: 'Correct pattern',

			setup: function () {
				app = {
					id: counter++
				};
			},

			'Test the id': function () {
				// The value of `counter` will always be what is expected
				assert.strictEqual(app.id, counter - 1);
			}
		};
	});
});

It is also possible to nest suites by using an object as a value instead of a function:

define(function (require) {
	var registerSuite = require('intern!object');

	registerSuite({
		name: 'Suite name',

		'Test foo': function () {
			// a test case
		},

		// this is a sub-suite, not a test
		'Sub-suite name': {
			// it can also have its own suite lifecycle methods
			setup: function () { /* … */ },
			teardown: function () { /* … */ },
			beforeEach: function () { /* … */ },
			afterEach: function () { /* … */ },

			'Sub-suite test': function () {
				// a test case inside the sub-suite
			},

			'Sub-sub-suite name': {
				// and so on…
			}
		},

		/* … */
	});
});

The TDD & BDD interfaces

The TDD & BDD interfaces are nearly identical to each other, differing only slightly in the names of the properties that they expose. Registering suites and tests using the TDD & BDD interfaces is more procedural than the Object interface:

define(function (require) {
	var tdd = require('intern!tdd');

	tdd.suite('Suite name', function () {
		tdd.before(function () {
			// executes before suite starts
		});

		tdd.after(function () {
			// executes after suite ends
		});

		tdd.beforeEach(function () {
			// executes before each test
		});

		tdd.afterEach(function () {
			// executes after each test
		});

		tdd.test('Test foo', function () {
			// a test case
		});

		tdd.test('Test bar', function () {
			// another test case
		});

		// …
	});
});

The BDD interface attempts to enforce a more literary, behaviour-describing convention for suites and tests by using different names for its registration functions:

define(function (require) {
	var bdd = require('intern!bdd');

	bdd.describe('the thing being tested', function () {
		bdd.before(function () {
			// executes before suite starts
		});

		bdd.after(function () {
			// executes after suite ends
		});

		bdd.beforeEach(function () {
			// executes before each test
		});

		bdd.afterEach(function () {
			// executes after each test
		});

		bdd.it('should do foo', function () {
			// a test case
		});

		bdd.it('should do bar', function () {
			// another test case
		});

		// …
	});
});

Just like the Object interface, the TDD & BDD interfaces allow suites to be nested by calling tdd.suite or bdd.describe from within a parent suite:

define(function (require) {
	var tdd = require('intern!tdd');

	tdd.suite('Suite name', function () {
		tdd.test('Test foo', function () {
			// a test case
		});

		tdd.suite('Sub-suite name', function () {
			// it can also have its own suite lifecycle methods
			tdd.before(function () { /* … */ });
			tdd.after(function () { /* … */ });
			tdd.beforeEach(function () { /* … */ });
			tdd.afterEach(function () { /* … */ });

			tdd.test('Sub-test name', function () {
				// a test case inside the sub-suite
			});

			tdd.suite('Sub-sub-suite', function () {
				// and so on…
			});
		});

		// …
	});
});

The QUnit interface

The QUnit interface provides a test interface that is compatible with the QUnit 1 API. This interface allows you to easily take existing QUnit tests and run them with Intern, or apply your existing QUnit knowledge to writing tests with Intern.

Converting existing QUnit tests to use Intern is as simple as wrapping your test files to expose Intern’s QUnit interface:

define(function (require) {
	var QUnit = require('intern!qunit');

	QUnit.module('Suite name');
	QUnit.test('Test foo', function (assert) {
		assert.expect(1);
		assert.ok(true, 'Everything is OK');
	});

	// … other tests …
});

The Benchmark interface

The Benchmark interface is a specialized version of the Object interface used to write benchmarking tests. Its usage is similar to that of the Object interface, with a few key differences: asynchronous tests must be declared using an async wrapper function, tests that will be explicitly skipped must use a skip wrapper function, and the this.async() and this.skip() methods are not supported for benchmark tests.

define([ 'intern!benchmark' ], function (registerSuite) {
	var async = registerSuite.async; 
	var skip = registerSuite.skip; 

	registerSuite({
		name: 'a suite',

		'basic test': function () {
			// benchmark
		},

		'skipped test': skip(function () {
			// benchmark
		}),

		'async test': async(function (dfd) {
			// benchmark that returns a Promise or resolves the passed in deferred object
		})
	});
});

Intern's benchmarking support is based on Benchmark.js, which accepts configuration options beyond the standard ones provided through Intern. To pass additional options directly to Benchmark.js, attach them as an "options" property to the test function.

registerSuite({
	'basic test': (function () {
		function test() {
			// benchmark
		}

		test.options = {
			// benchmark.js options
		};

		return test;
	})()
});

The benchmark interface also adds two new lifecycle functions: beforeEachLoop and afterEachLoop. These are similar to Benchmark.js's setup and teardown methods, in that they will be called for each execution of Benchmark.js's test loop. (The existing beforeEach and afterEach methods will be called before and after the benchmarking process for a particular test, which will involve calling the test function multiple times.) Like Intern's existing lifecycle functions, these methods support nested suites.

Unit testing

Writing a unit test

As described in the fundamentals overview, unit tests are the cornerstone of every test suite. Unit tests allow us to test applications by loading and interacting directly with application code.

In order to write a unit test, you first need to pick an interface to use. For the sake of clarity, this guide uses the Object interface, but all the test and suite lifecycle functions themselves are written the same way, and have the same functionality, no matter which interface you use.

A unit test function, at its most basic level, is simply a function that throws an error when a test failure occurs, or throws no errors when a test passes:

define(function (require) {
	var registerSuite = require('intern!object');

	registerSuite({
		'passing test': function () {},
		'failing test': function () {
			throw new Error('Oops');
		}
	});
});

The this keyword within a test function in Intern refers to the internal Test object. This object provides functionality for asynchronous testing and skipping tests.

In order to facilitate testing, the Chai Assertion Library is bundled with Intern. Chai allows us to easily verify that certain operations perform as expected by comparing expected and actual values and throwing useful errors when they don’t match:

define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');

	registerSuite({
		'passing test': function () {
			var result = 2 + 3;

			assert.equal(result, 5,
				'Addition operator should add numbers together');
		},
		'failing test': function () {
			var result = 2 * 3;

			assert.equal(result, 5,
				'Addition operator should add numbers together');
		}
	});
});

Chai provides several different interfaces for making assertions. They all provide the same functionality, so, just like Intern’s test interfaces, pick the one whose syntax you prefer:

  • The assert API, loaded from 'intern/chai!assert', looks like assert.isTrue(value)
  • The expect API, loaded from 'intern/chai!expect', looks like expect(value).to.be.true
  • The should API, loaded from 'intern/chai!should', looks like value.should.be.true

The test lifecycle

When tests are executed, the test system follows a specific lifecycle:

  • For each registered root suite:
    • The setup method of the suite is called, if it exists
    • For each test within the suite:
      • The beforeEach method of the suite is called, if it exists
      • The test function is called
      • The afterEach method of the suite is called, if it exists
    • The teardown method of the suite is called, if it exists

So, given this test module:

define(function (require) {
	var registerSuite = require('intern!object');

	registerSuite({
		setup: function () {
			console.log('outer setup');
		},
		beforeEach: function () {
			console.log('outer beforeEach');
		},
		afterEach: function () {
			console.log('outer afterEach');
		},
		teardown: function () {
			console.log('outer teardown');
		},

		'inner suite': {
			setup: function () {
				console.log('inner setup');
			},
			beforeEach: function () {
				console.log('inner beforeEach');
			},
			afterEach: function () {
				console.log('inner afterEach');
			},
			teardown: function () {
				console.log('inner teardown');
			},

			'test A': function () {
				console.log('inner test A');
			},
			'test B': function () {
				console.log('inner test B');
			}
		},

		'test C': function () {
			console.log('outer test C');
		}
	});
});

…the resulting console output would be in this order:

outer setup
inner setup
outer beforeEach
inner beforeEach
inner test A
inner afterEach
outer afterEach
outer beforeEach
inner beforeEach
inner test B
inner afterEach
outer afterEach
inner teardown
outer beforeEach
outer test C
outer afterEach
outer teardown

The this keyword inside of the suite lifecycle methods (setup, beforeEach, afterEach, teardown) refers to the internal Suite object.

Asynchronous tests

As mentioned in the earlier section on conventions, asynchronous testing in Intern is based on Promises. When writing a test, you may either return a Promise from your test function (convenient for interfaces that already use Promises), or call this.async from within a test function to create a promise for that test.

Returning a Promise

If your test returns a promise (any object with a then function), it is understood that your test is asynchronous. Resolving the promise indicates a passing test, and rejecting the promise indicates a failed test. The test will also fail if the promise is not fulfilled within the timeout of the test (the default is 30 seconds; set this.timeout to change the value).
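These pass/fail semantics can be sketched with a tiny harness. This is an illustration of the contract described above, not Intern’s actual implementation (and it omits the timeout handling):

```javascript
// Decide a test's outcome from its return value: a returned thenable makes
// the test asynchronous; resolution passes it and rejection fails it.
function runTest(testFn) {
	try {
		var result = testFn();
		if (result && typeof result.then === 'function') {
			return result.then(
				function () { return 'passed'; },
				function () { return 'failed'; }
			);
		}
		// synchronous test that returned without throwing
		return Promise.resolve('passed');
	} catch (error) {
		// synchronous test that threw
		return Promise.resolve('failed');
	}
}
```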

Calling this.async

All tests have a this.async method that can be used to retrieve a Deferred object. It has the following signature:

this.async(timeout?: number, numCallsUntilResolution?: number): Deferred;

After calling this method, Intern will assume your test is asynchronous, even if you do not return a Promise. (If you do return a Promise, the returned Promise takes precedence over the one generated by this.async.)

The Deferred object returned by this.async includes a reference to the generated Promise, along with methods for resolving and rejecting it:

  • callback(fn: Function): Function
    Returns a function that, when called, resolves the Promise if fn does not throw an error, or rejects the Promise if it does. This is the most common way to complete an asynchronous test.
  • promise: Promise
    The underlying Promise object for this Deferred.
  • reject(error: Error): void
    Rejects the Promise. The error passed to reject is used as the error for reporting the test failure.
  • rejectOnError(fn: Function): Function
    Returns a function that, when called, does nothing if fn does not throw an error, or rejects the Promise if it does. This is useful when working with nested callbacks where only the innermost callback should resolve the Promise but a failure in any of the outer callbacks should reject it.
  • resolve(value?: any): void
    Resolves the Promise. The resolved value is not used by Intern.

The this.async method accepts two optional arguments:

  • timeout: Sets the timeout of the test in milliseconds. Equivalent to setting this.timeout. If not provided, this defaults to 30 seconds.
  • numCallsUntilResolution: Specifies how many times the callback method should be called before actually resolving the Promise. This defaults to 1. numCallsUntilResolution is only useful in rare cases where you may have a callback that will be called several times and the test should be considered complete only on the last invocation.
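The behaviour of these methods can be sketched as a minimal Deferred. This is a simplified re-implementation for illustration only; Intern’s own Deferred follows the same contract but is more thorough:

```javascript
// A minimal Deferred matching the contract described above (illustrative).
function Deferred(numCallsUntilResolution) {
	var self = this;
	this._remainingCalls = numCallsUntilResolution || 1;
	this.promise = new Promise(function (resolve, reject) {
		self.resolve = resolve;
		self.reject = reject;
	});
}

// Resolves the promise once fn has run cleanly the configured number of
// times; rejects it if fn throws
Deferred.prototype.callback = function (fn) {
	var self = this;
	return this.rejectOnError(function () {
		var result = fn.apply(this, arguments);
		if (--self._remainingCalls === 0) {
			self.resolve(result);
		}
		return result;
	});
};

// Does nothing on success; rejects the promise if fn throws
Deferred.prototype.rejectOnError = function (fn) {
	var self = this;
	return function () {
		try {
			return fn.apply(this, arguments);
		} catch (error) {
			self.reject(error);
		}
	};
};
```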

A basic asynchronous test using this.async looks like this:

define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');

	var request = require('request');

	registerSuite({
		name: 'async demo',

		'async test': function () {
			var dfd = this.async(1000);

			request(
				'http://example.com/test.txt',
				dfd.callback(function (error, data) {
					if (error) {
						throw error;
					}

					assert.strictEqual(data, 'Hello world!');
				})
			);
		}
	});
});

In this example, an HTTP request is made using a hypothetical request library that uses legacy Node.js-style callbacks. When the call is completed successfully, the data is checked to make sure it is correct.

If the data is correct, the Promise associated with dfd will be resolved, and the test will pass; otherwise, it will be rejected (because an error is thrown), and the test will fail.

Skipping tests at runtime

All tests have a skip method that can be used to skip the test if it should not be executed for some reason:

define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');

	registerSuite({
		name: 'skip demo',

		'skip test': function () {
			if (typeof window === 'undefined') {
				this.skip('Browser-only test');
			}

			// ...
		}
	});
});

The grep configuration option can also be used to skip tests whose IDs don’t match a regular expression.
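For example, to run only tests whose IDs mention "login" (an illustrative pattern) and report everything else as skipped:

```javascript
{
	// Tests whose full IDs don't match this expression are skipped
	grep: /login/
}
```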

Suites also have a skip method. Calling this.skip() from a suite lifecycle method, or calling this.parent.skip() from a test, will cause all remaining tests in a suite to be skipped.

define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');

	registerSuite({
		name: 'skip demo',

		setup: function () {
			// Skip entire suite if not running in a browser
			if (typeof window === 'undefined') {
				this.skip('Browser-only suite');
			}
		},

		'test 1': function () {
			// test code
		},

		'test 2': function () {
			// Skip remainder of suite if `someVar` isn't defined
			if (window.someVar == null) {
				this.parent.skip('someVar not defined');
			}
			// test code
		},

		// tests ...
	});
});

Testing CommonJS modules

CommonJS modules, including Node.js built-ins, can be loaded as dependencies to a test module using the dojo/node loader plugin that comes with Intern:

define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');
	var path = require('intern/dojo/node!path');

	registerSuite({
		name: 'path',

		'basic tests': function () {
			var ab = path.join('a', 'b');

			// …
		}
	});
});

Testing non-modular code

Browser code that doesn’t support any module system and expects to be loaded along with other dependencies in a specific order can be loaded using the intern/order loader plugin:

define([
	'intern!object',
	'intern/chai!assert',
	'intern/order!../jquery.js',
	'intern/order!../plugin.jquery.js'
], function (registerSuite, assert) {
	registerSuite({
		name: 'plugin.jquery.js',

		'basic tests': function () {
			jQuery('<div>').plugin();
			// …
		}
	});
});

It is also possible to use the use-amd loader plugin to load non-modular code:

define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');
	var jQuery = require('use!plugin.jquery');

	registerSuite({
		name: 'plugin.jquery.js',

		'basic tests': function () {
			jQuery('<div>').plugin();
			// …
		}
	});
});

In this case, the dependency ordering is handled by use-amd instead.

Testing other transpiled code

Other transpiled code can be tested without requiring a build step by first writing a loader plugin that performs code compilation for you:

// in tests/support/customscript.js
define(function (require) {
	var compiler = require('customscript');
	var request = require('intern/dojo/request');

	return {
		load: function (resourceId, require, load) {
			// Get the raw source code…
			request(require.toUrl(resourceId)).then(function (sourceCode) {
				// …then compile it into JavaScript code…
				compiler.compile(sourceCode).then(function (javascriptCode) {
					// …then execute the compiled function. In this case,
					// the compiled code returns its value. An AMD module would
					// call a `define` function, and a CJS module would set its
					// values on `exports` or `module.exports`.
					load(new Function(javascriptCode)());
				});
			});
		}
	};
});

Once you have a suitable loader plugin, just load your code through the loader plugin like any other dependency:

// in tests/unit/foo.js
define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');
	var foo = require('../support/customscript!app/foo.cs');

	registerSuite({
		name: 'app/foo',

		'basic tests': function () {
			foo.doSomething();
			// …
		}
	});
});

Testing non-CORS APIs

When writing unit tests with Intern, occasionally you will need to interact with a Web service using XMLHttpRequest. However, because the test runner serves code at http://localhost:9000 by default, any cross-origin requests will fail.

In order to test Ajax requests without using CORS or JSONP, the solution is to set up a reverse proxy to Intern and tell the test runner to load from that URL instead by setting the proxyUrl configuration option.

You can either set up the Web server to only send requests to Intern for your JavaScript files, or you can set up the Web server to send all requests to Intern except for the Web services you’re trying to access.

Option 1: All traffic except Web services to Intern

  1. Modify proxyUrl in your Intern configuration to point to the URL where the Web server lives
  2. Set up the Web server to reverse proxy to http://localhost:9000/ by default
  3. Add location directives to pass the more specific Web service URLs to the Web service instead

An nginx configuration implementing this pattern might look like this:

server {
	server_name proxy.example;

	location /web-service/ {
		# This will proxy to http://www.web-service.example/web-service/<rest of url>;
		# use `proxy_pass http://www.web-service.example/` to proxy to
		# http://www.web-service.example/<rest of url> instead
		proxy_pass http://www.web-service.example;
	}

	location / {
		proxy_pass http://localhost:9000;
	}
}

Option 2: Only JavaScript traffic to Intern

  1. Modify proxyUrl in your Intern configuration to point to the URL where the Web server lives
  2. Set up the Web server to reverse proxy to http://localhost:9000/ for the special /__intern/ location, plus any directories that contain JavaScript

An nginx configuration implementing this pattern might look like this:

server {
	server_name proxy.example;
	root /var/www/;

	location /js/ {
		proxy_pass http://localhost:9000;
	}

	location /__intern/ {
		proxy_pass http://localhost:9000;
	}

	location / {
		try_files $uri $uri/ =404;
	}
}

Benchmark testing

Intern's benchmark testing mode is used to evaluate the performance of code. Test functions will be run many times in a loop, and Intern will record how long each test function takes to run on average. This information can be saved and used during later test runs to see if performance has deviated from acceptable values.

The benchmarking functionality is driven by Benchmark.js.

Writing a benchmark test

Tests are created using the benchmark interface, which is very much like the unit test object interface, and follow a similar lifecycle. The main differences are in how before/afterEach behave and the requirement to use async and skip wrappers rather than this.async() and this.skip().

The benchmark test lifecycle

The benchmark test lifecycle is very similar to the standard test lifecycle:

  • For each registered root suite:
    • The setup method of the suite is called, if it exists
    • For each test within the suite:
      • The beforeEach method of the suite is called, if it exists
      • The benchmark is started. This involves calling the test function itself many times in a "test loop". For each execution of the test loop, the following steps take place:
        • The beforeEachLoop method of the suite is called, if it exists
        • The test function is called at least once
        • The afterEachLoop method of the suite is called, if it exists
      • The afterEach method of the suite is called, if it exists
    • The teardown method of the suite is called, if it exists
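The loop portion of this ordering can be sketched as a simplified harness. This is illustrative only; in reality Benchmark.js manages the test loop and decides how many times the test function runs:

```javascript
// Run one benchmark test with the lifecycle ordering described above,
// recording the order in which the lifecycle methods fire.
function runBenchmarkTest(suite, testName, loopCount) {
	var log = [];
	function call(name) {
		if (suite[name]) {
			log.push(name);
			suite[name]();
		}
	}
	call('beforeEach');
	for (var i = 0; i < loopCount; i++) {
		call('beforeEachLoop');
		log.push(testName);
		suite[testName]();
		call('afterEachLoop');
	}
	call('afterEach');
	return log;
}
```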

Functional testing

Writing a functional test

As described in the fundamentals overview, functional testing enables application testing by automating user interactions like navigating to pages, scrolling, clicking, and reading content. It’s used as an automated alternative to manual user testing.

Functional tests are registered using the same interfaces as unit tests, and use the same internal Suite and Test objects, but are loaded using the functionalSuites configuration option instead of the suites option and run inside the test runner instead of inside the environment being tested.

When writing a functional test, instead of executing application code directly, use the Leadfoot Command object at this.remote to automate interactions that you’d normally perform manually to test an application:

define(function (require) {
	var registerSuite = require('intern!object');
	var assert = require('intern/chai!assert');

	registerSuite({
		name: 'index',

		'greeting form': function () {
			return this.remote
				.get(require.toUrl('index.html'))
				.setFindTimeout(5000)
				.findByCssSelector('body.loaded')
				.findById('nameField')
					.click()
					.type('Elaine')
					.end()
				.findByCssSelector('#loginForm input[type=submit]')
					.click()
					.end()
				.findById('greeting')
					.getVisibleText()
					.then(function (text) {
						assert.strictEqual(text, 'Hello, Elaine!',
							'Greeting should be displayed when the form is submitted');
					});
		}
	});
});

In this example, taken from the Intern tutorial, we’re automating interaction with a basic form that is supposed to accept a name from the user and then display it as a greeting in the user interface. As can be seen from the code above, the series of steps the test takes are as follows:

  • Load the page. require.toUrl is used here to convert a local file path (index.html) into a URL that can actually be loaded by the remote browser (http://localhost:9000/index.html).
  • Set a timeout of 5 seconds for each attempt to find an element on the page. This ensures that even if the browser takes a couple of seconds to create an element, the test won’t fail
  • Wait for the page to indicate it has loaded by putting a loaded class on the body element
  • Find the form field where the name should be typed
  • Click the field and type a name into it
  • Find the submit button for the form. Note that if end hadn’t been called on the previous line, Intern would try to find the #loginForm input[type=submit] element from inside the previously selected nameField element, instead of inside the body of the page
  • Click the submit button
  • Find the element where the greeting is supposed to show
  • Get the text from the greeting
  • Verify that the correct greeting is displayed

Page objects

A page object is like a widget for your test code. It abstracts away the details of your UI so you can avoid tightly coupling your test code to a specific view (DOM) tree design.

Using page objects means that if the view tree for part of your UI is modified, you only need to make a change in the page object to fix all your tests. Without page objects, every time the views in your application change, you’d need to touch every single test that interacts with that part of the UI.

Once you’ve written a page object, your tests will use the page object to interact with a page instead of the low-level methods of the this.remote object.

For example, a page object for the index page of a Web site could be written like this:

// in tests/support/pages/IndexPage.js
define(function (require) {
	// the page object is created as a constructor
	// so we can provide the remote Command object
	// at runtime
	function IndexPage(remote) {
		this.remote = remote;
	}

	IndexPage.prototype = {
		constructor: IndexPage,

		// the login function accepts username and password
		// and returns a promise that resolves to `true` on
		// success or rejects with an error on failure
		login: function (username, password) {
			return this.remote
				// first, we perform the login action, using the
				// specified username and password
				.findById('login')
				.click()
				.type(username)
				.end()
				.findById('password')
				.click()
				.type(password)
				.end()
				.findById('loginButton')
				.click()
				.end()
				// then, we verify the success of the action by
				// looking for a login success marker on the page
				.setFindTimeout(5000)
				.findById('loginSuccess')
				.then(function () {
					// if it succeeds, resolve to `true`; otherwise
					// allow the error from whichever previous
					// operation failed to reject the final promise
					return true;
				});
		},

		// …additional page interaction tasks…
	};

	return IndexPage;
});

Then, the page object would be used in tests instead of the this.remote object:

// in tests/functional/index.js
define([
	'intern!object',
	'intern/chai!assert',
	'../support/pages/IndexPage'
], function (registerSuite, assert, IndexPage) {
	registerSuite(function () {
		var indexPage;
		return {
			// on setup, we create an IndexPage instance
			// that we will use for all the tests
			setup: function () {
				indexPage = new IndexPage(this.remote);
			},

			'successful login': function () {
				// then from the tests themselves we simply call
				// methods on the page object and then verify
				// that the expected result is returned
				return indexPage
					.login('test', 'test')
					.then(function (loggedIn) {
						assert.isTrue(loggedIn,
							'Valid username and password should log in successfully');
					});
			},

			// …additional tests…
		};
	});
});

Testing native apps

Native mobile application UIs can be tested by Intern using an Appium, ios-driver, or Selendroid server. Each server has slightly different support for WebDriver, so make sure to read each project’s documentation to pick the right one for you.

Appium

To test a native app with Appium, one method is to pass the path to a valid IPA or APK using the app key in your environments configuration:

{
	environments: [
		{
			platformName: 'iOS',
			app: 'testapp.ipa',
			fixSessionCapabilities: false
		}
	]
}

You can also use appPackage and appActivity for Android, or bundleId and udid for iOS, to run an application that is already installed on a test device:

{
	environments: [
		{
			platformName: 'iOS',
			bundleId: 'com.example.TestApp',
			udid: 'da39a3ee5e…',
			fixSessionCapabilities: false
		},
		{
			platformName: 'Android',
			appActivity: 'MainActivity',
			appPackage: 'com.example.TestApp',
			fixSessionCapabilities: false
		}
	]
}

Once the application has started successfully, you can interact with it using any of the supported WebDriver APIs.

ios-driver

To test a native app with ios-driver, first run ios-driver, passing one or more app bundles for the applications you want to test:

java -jar ios-driver.jar -aut TestApp.app

Then, pass the bundle ID and version using the CFBundleName and CFBundleVersion keys in your environments configuration:

{
	environments: [
		{
			device: 'iphone',
			CFBundleName: 'TestApp',
			CFBundleVersion: '1.0.0',
			// required for ios-driver to use iOS Simulator
			simulator: true,
			fixSessionCapabilities: false
		}
	]
}

Once the application has started successfully, you can interact with it using any of the supported WebDriver APIs.

Selendroid

To test a native app with Selendroid, first run Selendroid, passing one or more APKs for the applications you want to test:

java -jar selendroid.jar -app testapp-1.0.0.apk

Then, pass the Android app ID of the application using the aut key in your environments configuration:

{
	environments: [
		{
			automationName: 'selendroid',
			aut: 'com.example.testapp:1.0.0',
			fixSessionCapabilities: false
		}
	]
}

Once the application has started successfully, you can interact with it using any of the supported WebDriver APIs.

Debugging

Keep in mind that JavaScript code is running in two separate environments: your test suites are run in Node.js, while the page loaded by functional tests runs in a web browser. Functional tests can be debugged with Node.js’s built-in debugger, or with the more user-friendly node-inspector. Note that these instructions are for debugging functional tests, which run in Node.js; debugging code on the test page itself should be done using the browser's debugging tools.

  1. npm install -g node-inspector
  2. Set a breakpoint in your test code by adding a debugger statement. Since test modules are loaded dynamically by Intern, they will likely not show up in the debugger’s file list, so you won’t be able to use the debugger to set an initial breakpoint.
  3. Launch Node.js with debugging enabled, set to pause on the first line of code:
    • node --debug-brk node_modules/intern/runner config=myPackage/test/intern
  4. Launch node-inspector by running node-inspector.
  5. Open Chrome (you must use Chrome, as node-inspector leverages Chrome’s developer tools) and navigate to:
    • http://127.0.0.1:8080/debug?port=5858
  6. Continue code execution (F8). The tests will run until your debugger statement.
  7. Debug!
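The debugger statement from step 2 might look like this inside a functional test module. This is only a sketch: the suite name, page URL, and element ID are hypothetical, but the structure follows Intern’s object test interface.

```javascript
// Hypothetical functional test module; the suite name, page, and
// element ID are illustrative only
define([
	'intern!object',
	'require'
], function (registerSuite, require) {
	registerSuite({
		name: 'login',

		'form submits': function () {
			// Execution pauses here when running under the Node.js debugger
			debugger;

			return this.remote
				.get(require.toUrl('./login.html'))
				.findById('submit')
					.click();
		}
	});
});
```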

Getting a WebDriver server

Cloud hosting

Using cloud hosting is the fastest way to get an operational Selenium server. Intern natively provides support for four major cloud hosting providers:

BrowserStack

  1. Sign up for BrowserStack Automate
  2. Get your Automate username and password from the Automate account settings page
  3. Set tunnel to 'BrowserStackTunnel'
  4. Set your username and access key in one of these ways:
    • Define BROWSERSTACK_USERNAME and BROWSERSTACK_ACCESS_KEY environment variables
    • Set browserstackUsername and browserstackAccessKey in your Gruntfile’s intern task options
    • Set username and accessKey on your tunnelOptions configuration option

CrossBrowserTesting

  1. Sign up for a trial account
  2. Get your authkey from your account settings page
  3. Set tunnel to 'CrossBrowserTestingTunnel'
  4. Set your username and access key in one of these ways:
    • Define CBT_USERNAME and CBT_APIKEY environment variables
    • Set cbtUsername and cbtApikey in your Gruntfile’s intern task options
    • Set username and apiKey on your tunnelOptions configuration option

Sauce Labs

  1. Sign up for a Sauce Labs account
  2. Get your master account access key from the sidebar of the Account settings page, or create a separate sub-account on the sub-accounts page and get a username and access key from there
  3. Set tunnel to 'SauceLabsTunnel'
  4. Set your username and access key in one of these ways:
    • Define SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables
    • Set sauceUsername and sauceAccessKey in your Gruntfile’s intern task options
    • Set username and accessKey on your tunnelOptions configuration option

TestingBot

  1. Sign up for a TestingBot account
  2. Get your API key and secret from the Account settings page
  3. Set tunnel to 'TestingBotTunnel'
  4. Set your API key and secret in one of these ways:
    • Define TESTINGBOT_KEY and TESTINGBOT_SECRET environment variables
    • Set testingbotKey and testingbotSecret in your Gruntfile’s intern task options
    • Set apiKey and apiSecret on your tunnelOptions configuration option
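As an example of the last option for each provider above, credentials can be embedded directly in your Intern configuration via tunnelOptions. This sketch uses Sauce Labs; the file name and credential values are placeholders, and the same pattern applies to the other providers (apiKey and apiSecret for TestingBot, and so on):

```javascript
// Hypothetical excerpt from tests/intern.js; credentials are placeholders
define({
	tunnel: 'SauceLabsTunnel',
	tunnelOptions: {
		username: 'my-sauce-username',
		accessKey: 'my-access-key'
	}

	// …additional configuration…
});
```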

Local Selenium

Depending upon which browsers you want to test locally, a few options are available.

Using a WebDriver directly

It’s possible to run a browser-specific WebDriver server in standalone mode and use Intern with it.

Using ChromeDriver (Chrome-only)

If you’re just looking to have a local environment for developing functional tests, a stand-alone ChromeDriver installation works great.

  1. Download the latest version of ChromeDriver
  2. Set tunnel to 'NullTunnel'
  3. Run chromedriver --port=4444 --url-base=wd/hub
  4. Set your environments capabilities to [ { browserName: 'chrome' } ]
  5. Run the test runner
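Steps 2 and 4 together might look like this in your configuration file (a sketch; the file name is an assumption):

```javascript
// Hypothetical excerpt from tests/intern.js for a stand-alone ChromeDriver
define({
	tunnel: 'NullTunnel',
	environments: [
		{ browserName: 'chrome' }
	]

	// …additional configuration…
});
```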

Using PhantomJS 2

If you want to use a fake browser to develop your tests, PhantomJS 2 is an option.

  1. Download the latest version of PhantomJS
  2. Set tunnel to 'NullTunnel'
  3. Run phantomjs --webdriver=4444
  4. Set your environments capabilities to [ { browserName: 'phantomjs' } ]
  5. Run the test runner

Using Selenium (all browsers)

If you want to test against more than just Chrome, or you want to use multiple browsers at once, you can run a local copy of Selenium. You can do this manually, or using SeleniumTunnel.

SeleniumTunnel (added in Intern 3.3)

If Java (JRE or JDK) is installed on the testing system, you can set the tunnel class to SeleniumTunnel to have Intern automatically download and run Selenium. Use the drivers tunnel option to tell the tunnel which WebDrivers to download:

tunnel: 'SeleniumTunnel',
tunnelOptions: {
	drivers: [ 'chrome', 'firefox' ]
}

Intern will download and start Selenium at the beginning of the functional tests, and will shut it down when the testing process has finished.

Manually running Selenium

Start by downloading the Selenium standalone server, along with the driver executables for each browser you want to test.

To start the server, run java -jar selenium-server-standalone-{version}.jar.

To use ChromeDriver and IEDriver with a Selenium server, the driver executables must either be placed somewhere in the environment PATH, or their locations must be given explicitly to the Selenium server using the -Dwebdriver.chrome.driver (ChromeDriver) and -Dwebdriver.ie.driver (IEDriver) flags upon starting the Selenium server:

java -jar selenium-server-standalone-{version}.jar \
-Dwebdriver.chrome.driver=/path/to/chromedriver \
-Dwebdriver.ie.driver=C:/path/to/IEDriverServer.exe

Once the server is running, simply configure Intern to point to the server by setting tunnel to 'NullTunnel', then run the test runner.

Each driver you use with Selenium has its own installation and configuration requirements, so be sure to read the installation instructions for each driver.

Selenium Grid

selenium-server-standalone-{version}.jar includes both stand-alone and grid server functionality. To start a Selenium Grid, first create a hub by running Selenium server in hub mode:

java -jar selenium-server-standalone-{version}.jar -role hub

The hub itself normally drives no browsers; it simply acts as a forwarding proxy to each of the nodes that have been registered with it.

Once you’ve installed and configured all the drivers for one of your grid nodes following the instructions for setting up local Selenium, start the node and register it with the hub:

java -jar selenium-server-standalone-{version}.jar -role node -hub http://hub-server:4444/grid/register

Once the server is running, simply configure Intern to point to the hub by setting tunnel to 'NullTunnel', then run the test runner.
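For example, the hub connection might be configured like this (a sketch; the hostname, port, and browser list are assumptions about your grid setup):

```javascript
// Hypothetical excerpt from tests/intern.js pointing at a grid hub
define({
	tunnel: 'NullTunnel',
	tunnelOptions: {
		hostname: 'hub-server',
		port: 4444
	},
	environments: [
		{ browserName: 'chrome' },
		{ browserName: 'firefox' }
	]

	// …additional configuration…
});
```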

Creating a grid that works with Selendroid and ios-driver requires that additional selendroid-grid-plugin and ios-grid-plugin plugins be downloaded and added to the Java classpath when starting the grid hub:

java -Dfile.encoding=UTF-8 -cp "selendroid-grid-plugin-{version}.jar:ios-grid-plugin-{version}.jar:selenium-server-standalone-{version}.jar" org.openqa.grid.selenium.GridLauncher -capabilityMatcher io.selendroid.grid.SelendroidCapabilityMatcher -role hub

Firefox, Safari, Chrome, Chrome for Android, and Internet Explorer will all be available using a standard Selenium server node. Selendroid and ios-driver, in contrast, use their own custom Selenium servers (selendroid-standalone-{version}-with-dependencies.jar and ios-server-standalone-{version}.jar), which must be run and registered separately with the hub. ios-driver uses the same hub registration method as the standard Selenium server (-hub http://hub-server…); Selendroid must be registered with the hub manually.

Running tests

The browser client

The browser client allows unit tests to be run directly in a browser without any server other than a regular HTTP server. This is useful when you are in the process of writing unit tests that require a browser, or when you need to run a debugger in the browser to inspect a test failure.

The browser client is loaded by navigating to intern/client.html. Assuming an Intern configuration file is located at my-project/tests/intern, a typical execution that runs all unit tests would look like this:

http://localhost/my-project/node_modules/intern/client.html?
	config=tests/intern

As can be seen from this example, because there is no concept of a “working directory” in URLs, the browser client chooses the directory two levels above client.html to be the root directory for the current test run. This can be overridden by specifying an initialBaseUrl argument:

http://localhost/my-project/node_modules/intern/client.html?
	initialBaseUrl=/&
	config=my-project/tests/intern

Additional arguments to the browser client can be put in the query string. A more complex execution with arguments overriding the suites and reporters properties from the configuration file might look like this:

http://localhost/my-project/node_modules/intern/client.html?
	config=tests/intern&
	suites=tests/unit/request&
	suites=tests/unit/animation&
	reporters=Console&
	reporters=Html

The browser client supports the following arguments:

Argument Description Default
config The module ID of the Intern configuration file that should be used. Relative to initialBaseUrl. This argument is required. none
initialBaseUrl The path to use when resolving the basePath in a browser. 'node_modules/intern/../../'

initialBaseUrl (string, added in 3.0)

The path to use when resolving the basePath in a browser. Since browsers do not have any concept of a current working directory, using this argument allows a pseudo-cwd to be specified for the browser client in order to match up file paths with what exists on the underlying filesystem. This argument should always be an absolute path (i.e. it should be the entire path that comes after the domain name).

You can also specify any valid configuration option in the query string.

The Node.js client

The Node.js client allows unit tests to be run directly within a local Node.js environment. This is useful when you are writing unit tests for code that runs in Node.js. It is invoked by running intern-client on the command-line.

A typical execution that runs all tests and outputs results to the console would look like this:

intern-client config=tests/intern

A more complex execution with arguments overriding the suites and reporters properties from the configuration file might look like this:

intern-client config=tests/intern suites=tests/unit/request \
	suites=tests/unit/animation \
	reporters=Console \
	reporters=LcovHtml

The Node.js client supports the following arguments:

Argument Description
config The module ID of the Intern configuration file that should be used. Relative to the current working directory. This argument is required.

You can also specify any valid configuration option as an argument on the command-line.

The test runner

The test runner allows functional tests to be executed against a Web browser or native mobile application. It also allows unit tests & functional tests to be executed on multiple environments at the same time. This is useful when you want to automate UI testing, or when you want to run your entire test suite against multiple environments at once (for example, in continuous integration). It is invoked by running intern-runner on the command-line.

In order to use the test runner, you will need a WebDriver server. The WebDriver server is responsible for providing a way to control all of the environments that you want to test. You can get a WebDriver server in a few different ways, described in the Getting a WebDriver server section.

A typical execution that runs all tests against all environments and outputs aggregate test & code coverage results to the console would look like this:

intern-runner config=tests/intern

A more complex execution that overrides the reporters and functionalSuites properties from the configuration file might look like this:

intern-runner config=tests/intern \
	reporters=Runner reporters=LcovHtml \
	functionalSuites=tests/functional/home \
	functionalSuites=tests/functional/cart

The test runner supports the following arguments:

Argument Description
config The module ID of the Intern configuration file that should be used. Relative to the current working directory. This argument is required.

You can also specify any valid configuration option as an argument on the command-line.

Using custom arguments

Intern allows arbitrary arguments to be passed on the command-line that can then be retrieved through the main Intern object. This is useful for cases where you want to be able to pass things like custom ports, servers, etc. dynamically:

define(function (require) {
	var intern = require('intern');

	// arguments object
	intern.args;
});

This makes it possible to, for example, define a dynamic proxy URL from the command-line or Grunt task:

define(function (require) {
	var intern = require('intern');

	var SERVERS = {
		id1: 'http://id1.example/',
		id2: 'http://id2.example/'
	};

	return {
		proxyUrl: SERVERS[intern.args.serverId],

		// …additional configuration…
	};
});

The serverId argument can then be supplied on the command-line:

intern-runner config=tests/intern serverId=id1

Using Grunt

Grunt support is built into Intern. Install Intern and load the Grunt task into your Gruntfile using grunt.loadNpmTasks('intern'):

module.exports = function (grunt) {
	// Load the Intern task
	grunt.loadNpmTasks('intern');

	// Configure tasks
	grunt.initConfig({
		intern: {
			someReleaseTarget: {
				options: {
					runType: 'runner', // defaults to 'client'
					config: 'tests/intern',
					reporters: [ 'Console', 'Lcov' ],
					suites: [ 'tests/unit/all' ]
				}
			},
			anotherReleaseTarget: { /* … */ }
		}
	});

	// Register a test task that uses Intern
	grunt.registerTask('test', [ 'intern' ]);

	// By default we just test
	grunt.registerTask('default', [ 'test' ]);
};

The following task options are available:

Name Description Default
browserstackAccessKey The access key for authentication with BrowserStack. none
browserstackUsername The username for authentication with BrowserStack. none
cbtApikey The API key for authentication with CrossBrowserTesting. none
cbtUsername The username for authentication with CrossBrowserTesting. none
runType The execution mode in which Intern should run. This may be 'runner' for the test runner, or 'client' for the Node.js client. 'client'
sauceAccessKey The access key for authentication with Sauce Labs. none
sauceUsername The username for authentication with Sauce Labs. none
testingbotKey The API key for authentication with TestingBot. none
testingbotSecret The API secret for authentication with TestingBot. none

The following events are available:

Event Description
intern.pass(message: string) This event is emitted when a test passes.
intern.fail(message: string) This event is emitted when a test fails.

Using Gulp

Intern doesn’t provide a gulp plugin, but running Intern with gulp is much like running it with Grunt. The key difference is that Intern is run explicitly in gulp rather than through a plugin. The following example shows one way to do this:

gulp.task('test', function (done) {
	// Define the Intern command line
	var command = [
		'./node_modules/intern/runner.js',
		'config=tests/intern'
	];

	// Add environment variables, such as service keys
	var env = Object.create(process.env);
	env.BROWSERSTACK_ACCESS_KEY = '123456';
	env.BROWSERSTACK_USERNAME = 'foo@nowhere.com';

	// Spawn the Intern process
	var child = require('child_process').spawn('node', command, {
		// Allow Intern to write directly to the gulp process's stdout and
		// stderr.
		stdio: 'inherit',
		env: env
	});

	// Let gulp know when the child process exits
	child.on('close', function (code) {
		if (code) {
			done(new Error('Intern exited with code ' + code));
		}
		else {
			done();
		}
	});
});

Additional configuration options, such as suites and reporters, can be specified just as they would for a typical Intern client or runner command line. Also, spawn isn’t a requirement. An Intern process has to be started, but some other library such as gulp-shell or shelljs could be used for this purpose. The only requirement is that the gulp task run Intern and be notified when it finishes.
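For instance, using the third-party gulp-shell plugin (assumed to be installed alongside gulp), the same task could be reduced to a sketch like this:

```javascript
// A sketch using gulp-shell instead of child_process.spawn; assumes
// gulp and gulp-shell are installed. gulp-shell fails the task for us
// when Intern exits with a non-zero code.
var gulp = require('gulp');
var shell = require('gulp-shell');

gulp.task('test', shell.task(
	'node ./node_modules/intern/runner.js config=tests/intern'
));
```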

Getting test results

Overview

Information about the state of a test run needs to be published in many different formats in order to properly integrate with different systems. To facilitate this, Intern allows reporters to be registered. A reporter is a simple object that receives messages from the rest of the test system and forwards that information, in the correct format, to a destination, like a file, console, or HTTP server.

There are two primary kinds of reporters: reporters for test results, and reporters for code coverage results. Intern comes with a set of standard reporters, and also makes it easy to write your own custom reporters.
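As a preview of how simple a reporter can be, here is a minimal sketch of a custom test results reporter. The method names (testPass, testFail) follow Intern 3’s convention of naming reporter methods after the events they observe; the counting logic is purely illustrative:

```javascript
// Minimal custom reporter sketch; method names follow Intern 3's
// reporter API, the counters are illustrative
function SummaryReporter(config) {
	config = config || {};
	this.passes = 0;
	this.failures = 0;
}

// Called by the test system when a test passes
SummaryReporter.prototype.testPass = function (test) {
	this.passes++;
	console.log('PASS: ' + test.id);
};

// Called by the test system when a test fails
SummaryReporter.prototype.testFail = function (test) {
	this.failures++;
	console.log('FAIL: ' + test.id + ' (' + test.error.message + ')');
};
```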

Reporters for a test run are defined using the reporters configuration option.

Test results reporters

Test results reporters provide information about the tests themselves—whether or not they passed, how long they took to run, and so on.

Intern comes with several different test results reporters:

Reporter Description Options
Console This reporter outputs test pass/fail messages to the console or stdout, grouped by suite. It’s recommended that this reporter only be used by the browser and Node.js clients, and not the test runner, since it does not understand how to group results received from multiple different clients simultaneously. watermarks
Html This reporter generates a basic HTML report of unit test results. It is designed to be used in the browser client, but can also generate reports in Node.js if a DOM-compatible document object is passed in. document
JUnit This reporter generates a JUnit “compatible” XML file of unit test results. filename
Pretty This reporter displays test progress across one or more environments with progress bars while testing is in progress. After all tests are finished, a sorted list of all tests is output along with an overall code coverage summary. watermarks
Runner This reporter outputs information to the console about the current state of the runner, code coverage and test results for each environment tested, and a total test result. It can only be used in the test runner. filename, watermarks
TeamCity This reporter outputs test result information in a TeamCity-compatible format. filename

Code coverage reporters

Code coverage reporters provide information about the state of code coverage—how many lines of code, functions, code branches, and statements were executed by the test system.

Intern comes with several different code coverage reporters:

Reporter Description Options
Cobertura This reporter generates a Cobertura-compatible XML report from collated coverage data. filename, watermarks
Combined This reporter stores coverage data generated by the Node.js client in an intermediate file, and then merges in data generated by the WebDriver runner to generate a combined coverage report. directory, watermarks
Lcov This reporter generates an lcov.info file from collated coverage data that can be fed to another program that understands the standard lcov data format. filename, watermarks
LcovHtml This reporter generates a set of illustrated HTML reports from collated coverage data. directory, watermarks

Reporter options

As noted in the tables above, each reporter supports one or more different configuration options.

Option Description Default
directory The directory where output files should be written. This option is only used by reporters that need to write multiple files. varies by reporter
document A DOM document. window.document
filename A filename where output should be written. If a filename is not provided, output will be sent to stdout. stdout
watermarks The low & high watermarks for code coverage results. Watermarks can be specified for statements, lines, functions, and branches. Normally, code coverage values below the low watermark appear in red, and code coverage values above the high watermark appear in green. {
  statements: [ 50, 80 ],
  lines: [ 50, 80 ],
  functions: [ 50, 80 ],
  branches: [ 50, 80 ]
}
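In Intern 3, a reporter with options is specified in the reporters array as an object whose id property names the reporter, with the options alongside it. A sketch (the option values are illustrative):

```javascript
// Hypothetical excerpt from tests/intern.js: reporters with options
define({
	reporters: [
		{ id: 'JUnit', filename: 'report.xml' },
		{ id: 'LcovHtml', directory: 'coverage-report' },
		{
			id: 'Runner',
			watermarks: {
				statements: [ 60, 90 ],
				lines: [ 60, 90 ],
				functions: [ 60, 90 ],
				branches: [ 60, 90 ]
			}
		}
	]

	// …additional configuration…
});
```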

Continuous integration

Jenkins

When integrating Intern with Jenkins, there are two primary ways to complete the integration: by creating a new project that executes as a post-build action for your primary project using a shared workspace, or by creating a multi-step free-style software project that executes Intern after the first (existing) build step.

For projects that are already using Maven, a third option is to execute Intern using exec-maven-plugin from an existing pom.xml.

When using Intern with Jenkins, use the JUnit reporter and enable the “Publish JUnit test result report” post-build action for the best test results display.

To add code coverage data to Jenkins, add the Cobertura reporter, install the Cobertura plugin for Jenkins, and enable the “Publish Cobertura Coverage Report” post-build action.

Intern as a post-build action to an existing project

This option enables you to use an existing build project by adding a new project that executes unit tests in a separate job from the main build. This option is ideal for situations where you want to be able to manage the build and testing processes separately, or have several projects that need to be built downstream from the main project that can occur in parallel with test execution.

In order to accomplish this efficiently without the need to copy artifacts, use of the Shared Workspace plugin is recommended. To install and configure the Shared Workspace plugin, follow these steps:

  1. Install the Shared Workspace plugin from the Jenkins → Manage Jenkins → Manage Plugins page.
  2. Go to the Jenkins → Manage Jenkins → Configure System page.
  3. Under Workspace Sharing, add a new shared workspace. For the purposes of these instructions, this shared workspace will be called “myApp”.
  4. Save changes.

Once the Shared Workspace plugin is installed, all projects that need to share the same workspace must be updated. The shared workspace for each project can be selected from the first section of the project’s configuration page.

Once the main project is set to use the shared workspace, the new unit test project should be created:

  1. Create a new free-style software project, e.g. “myApp-tests”.
  2. At the top of the configuration, change the shared workspace to “myApp”.
  3. Under “Source Code Management”, leave the “None” option checked. Because of the shared workspace, source code checkout will be handled by the upstream project.
  4. Under “Build triggers”, check the “Build after other projects are built” checkbox. Enter the name of the existing Maven project in the text box that appears. (This will create a corresponding post-build action to build “myApp-tests” in the existing project’s configuration.)
  5. Under “Build”, click the “Add build step” button and choose “Execute shell” from the drop-down.
  6. Under “Execute shell”, enter the command you want to use to run Intern. See the Running tests section for possible commands.
  7. Save changes.

Once this project has been configured, test everything by running a build on the main project. Once the main project build finishes successfully, the new “myApp-tests” project will begin executing automatically.

Intern as part of a free-style software project

When working with an existing free-style software project it is possible to simply add the unit testing as an extra build step, following steps similar to the above:

  1. Open the configuration page for the existing free-style software project.
  2. Under “Build”, click the “Add build step” button and choose “Execute shell” from the drop-down.
  3. Under “Execute shell”, enter the command you want to use to run Intern. See the Running tests section for possible commands.
  4. Save changes.

Intern as an execution step in a Maven pom.xml

Intern can be executed by Maven from a pom.xml during the test or integration-test phases of the build by using the exec-maven-plugin to spawn a new Intern process:

<plugin>
	<artifactId>exec-maven-plugin</artifactId>
	<groupId>org.codehaus.mojo</groupId>
	<version>1.2.1</version>
	<executions>
			<execution>
			<id>run-tests</id>
			<phase>test</phase>
			<goals>
				<goal>exec</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<executable>node_modules/.bin/intern-runner</executable>
		<arguments>
			<argument>config=tests/intern</argument>
		</arguments>
	</configuration>
</plugin>

The executable and arguments elements should be modified to run Intern using your desired executor and configuration.

Travis CI

In order to enable Travis CI builds for your project, you must first create a .travis.yml in your repository root that will load and execute Intern:

language: node_js
node_js:
	- '0.10'
script: node_modules/.bin/intern-runner config=tests/intern

Once you have a Travis configuration, you just need to enable Travis CI for your repository:

  1. Go to https://travis-ci.org/
  2. Click “Sign in with GitHub” at the top-right
  3. Allow Travis CI to access your GitHub account
  4. Go to https://travis-ci.org/profile
  5. Click “Sync now”, if necessary, to list all your GitHub projects
  6. Click the on/off switch next to the repository you want to test

The next time you push commits to the repository, you will be able to watch Intern happily execute all your tests directly from the Travis CI Web site. Any time you make a new commit, or a new pull request is issued, Travis will automatically re-run your test suite and send notification emails on failure.

TeamCity

There are two primary ways that Intern can be integrated with a TeamCity project: either by adding a new build configuration that is chained using a post-build trigger, or by adding additional build steps to an existing build configuration.

When using Intern with TeamCity, use Intern’s TeamCity reporter for the best integration.

Intern as an additional build step

  1. Go to the project that you want to add Intern to and click “Edit Project Settings” at the top-right.
  2. In the left-hand menu, click “General Settings”.
  3. Under “Build Configurations”, click “Edit” on the existing build configuration you want to add Intern to.
  4. In the left-hand menu, click “Build Steps”.
  5. Click “Add build step”.
  6. Select “Command Line” from the “Runner type” drop-down.
  7. Enter a name like “Run Intern” as the step name.
  8. Select “Custom Script” from the “Run” drop-down.
  9. Under “Custom script”, enter the command you want to use to run Intern. See the Running tests section for possible commands.
  10. Click “Save”.

Intern as a separate build configuration

  1. Go to the project that you want to add Intern to and click “Edit Project Settings” at the top-right.
  2. In the left-hand menu, click “General Settings”.
  3. Under “Build Configurations”, click “Create build configuration”.
  4. Enter a name like “Intern” as the build configuration name.
  5. Click “Save”.
  6. In the left-hand menu, click “Build Steps”.
  7. Click “Add build step”.
  8. Select “Command Line” from the “Runner type” drop-down.
  9. Enter a name like “Run Intern” as the step name.
  10. Select “Custom Script” from the “Run” drop-down.
  11. Under “Custom script”, enter the command you want to use to run Intern. See the Running tests section for possible commands.
  12. Click “Save”.
  13. Go back to the settings page for the project.
  14. In the left-hand menu, click “General Settings”.
  15. Click “Edit” on the build configuration you want to trigger Intern from.
  16. In the left-hand menu, click “Triggers”.
  17. Click “Add new trigger”.
  18. Choose “Finish Build Trigger” from the drop-down.
  19. Under “Build configuration”, choose the Intern build configuration that was just created.
  20. Check “Trigger after successful build only”.
  21. Click “Save”.

Codeship

To use Intern with Codeship, you’ll need to configure a test pipeline:

For a new project:

  1. Log in to Codeship.
  2. In the upper-left corner, click “Select Project...”.
  3. Click the “Create a new project” button.
  4. Connect your GitHub or Bitbucket account as required.
  5. Choose the repository you’d like to test.
  6. Select “I want to create my own custom commands” from the dropdown box labeled “Select your technology to prepopulate basic commands”.
  7. The following steps are the same as for an existing project.
  8. Once completed, click “Save and go to dashboard” and then push a commit to see your build tested.

For an existing project:

  1. Log in to Codeship.
  2. In the upper-left corner, click “Select Project...”.
  3. Select the gear icon to the right of your project’s name.
  4. From the “Test” Project Settings page, select “I want to create my own custom commands” from the dropdown box labeled “Select your technology to prepopulate basic commands”.
  5. The remaining steps are identical to creating any new project with Codeship.

Setup Commands

Setup commands prepare your build environment. For testing a project with Intern, you must install Node.js and your project’s dependencies:


# Install the version of node specified in your package.json
nvm install node

# Install project requirements
npm install

Configure Test Pipelines

The test pipeline is what actually runs your specified test commands. This is equivalent to running the tests on your local development environment. For example, to run the Intern self-tests with the intern-client, you would enter the following command:


# run the intern-client with the specified configuration
node_modules/.bin/intern-client config=tests/selftest.intern.js

If you want to run tests with Selenium, Codeship supports this as well! You just need to curl and run this script before calling the intern-runner with a NullTunnel.


curl -sSL https://raw.githubusercontent.com/codeship/scripts/master/packages/selenium_server.sh | bash -s
node_modules/.bin/intern-runner config=tests/selftest.intern.js tunnel=NullTunnel

Bamboo

Using Intern with Bamboo involves creating a build plan, described below. Note that the instructions below were tested using Bamboo Cloud edition, but configuring a build plan task should work similarly using a local agent.

Manage elastic instances

By default, if you run a build on Bamboo and an agent isn’t available, an Elastic Bamboo image is started as a Windows Server instance. Intern behaves more consistently in a POSIX-compliant environment, so follow the steps below to create a Linux instance instead:

  1. From the gear icon menu in the upper-right corner of any Bamboo administration page, select “Elastic instances”.
  2. Click the “Start new elastic instances” button in the upper-right corner of the page.
  3. Under the “Elastic image configuration name” dropdown, select “Ubuntu stock image”.
  4. Click the “Submit” button.

You will be taken to the “Manage elastic instances” page, where you will see your image and its current state. Once the image status is “Running”, the Elastic Agent starts. Once the agent has started and is either “Pending” or “Idle”, you may begin your build.

Create a build plan

  1. Click the “Create” dropdown button at the top-middle of any Bamboo administration page.
  2. Select “Create a new plan” from the menu.
  3. Select or create a Project to house the build plan.
  4. Give the build plan a name and key, for your reference, and provide an optional description.
  5. Link the build plan to a previously linked or new repository.
  6. Click “Configure plan”.

Configure your build plan

By default, the plan starts with an initial task of “Source Code Checkout”, which you can leave configured as is, because you linked the repository in a previous step.

  1. Add a task of “npm”.
  2. Use Node.js executable “Node.js 0.12” (or newer).
  3. Provide it with a Command of “install” and save the task.
  4. Add a task of “Script”.
  5. In the script body, write the following (use the version of node chosen in step 2):
    
    /opt/node-0.12/bin/node ${bamboo.build.working.directory}/node_modules/.bin/intern-client \
    	config=tests/selftest.intern \
    	reporters=JUnit \
    	> results.xml
  6. Save the Script task.
  7. Add a task of “JUnit Parser”.
  8. Enter “*” in the “Specify custom results directories” field and save the task.
  9. Below the task configuration interface, make sure “Yes please!” is checked under the “Enable this plan?” heading.
  10. Click the “Create” button to create your build plan.
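The config= argument in the Script task above points at a standard Intern configuration module. As a hedged sketch (the suite list and instrumentation filter here are placeholders, not the contents of Intern’s own selftest config), such a file might look like:

```javascript
// a minimal sketch only — the suite module ID and the
// instrumentation filter are hypothetical placeholders
define({
	suites: [ 'tests/unit/all' ],
	excludeInstrumentation: /^(?:tests|node_modules)\//
});
```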

Running your build plan and verifying its output

  1. Click the “Build” dropdown menu item from the Bamboo administration page top menu and select “All build plans”.
  2. Select your plan from those shown.
  3. Click “Run” and then “Run plan” in the upper-right corner of the page.
  4. Once your plan has finished running, you will see a “Tests” tab on the page, which you can click through and see details of every test.

Customisation

Custom interfaces

Custom interfaces allow Intern to understand test files written for other testing systems—and even other languages! If you want to use Intern but don’t want to spend time and energy converting tests you’ve already written for another test system, writing a custom interface may be the quickest solution.

Any interface in Intern, including a custom interface, is responsible for doing three things:

  1. Creating new instances of intern/lib/Suite for each test suite defined in the test file
  2. Creating new instances of intern/lib/Test for each test function defined in the test file, and associating each with a suite
  3. Calling intern.executor.register to register all of the suites generated by the test interface with the test executor

intern.executor.register takes a callback that will be called by the executor. The callback will be passed a Suite object that it should register tests on. For unit tests, the callback will normally be called only once. For functional tests, the callback will be called once for each remote environment.

There are two ways to write test interfaces: as standard modules, or as loader plug-ins.

As a standard module (for tests written in JavaScript)

A standard module is a normal AMD module that returns a test interface.

A very basic custom interface that lets users register test functions by calling addTest looks like this:

// in tests/support/customInterface.js
define(function (require) {
	var intern = require('intern');
	var Test = require('intern/lib/Test');

	return {
		// Whenever `addTest` is called on this interface…
		addTest: function (name, testFn) {
			// …the test function is registered on each of the root
			// suites defined by the test executor
			intern.executor.register(function (suite) {
				// The interface is responsible for creating a Test object
				// representing the test function and associating it with
				// the correct parent suite
				var test = new Test({ name: name, test: testFn, parent: suite });
				suite.tests.push(test);
			});
		}
	};

	// That’s it!
});

This custom interface can then be used by any test module simply by requiring and using it the same way you’d use one of the built-in test interfaces:

// in tests/unit/test.js
define(function (require) {
	var interface = require('../support/customInterface');
	var assert = require('intern/chai!assert');

	interface.addTest('my test', function () {
		assert.ok(true);
	});
});

Interfaces can also create nested suites by creating Suite objects:

// in tests/support/suiteInterface.js
define(function (require) {
	var intern = require('intern');
	var Suite = require('intern/lib/Suite');
	var Test = require('intern/lib/Test');

	return {
		// Whenever `createSuite` is called on this interface…
		createSuite: function (name) {
			var suites = [];

			// …one or more new suites are created and registered
			// with each of the root suites from the executor…
			intern.executor.register(function (rootSuite) {
				var suite = new Suite({ name: name, parent: rootSuite });

				// (Sub-suites are pushed to the `tests` array of their
				// parent suite, same as tests)
				rootSuite.tests.push(suite);

				suites.push(suite);
			});

			// …and a new object is returned that allows test functions
			// to be added to the newly created suite(s)
			return {
				addTest: function (name, testFn) {
					suites.forEach(function (suite) {
						var test = new Test({ name: name, test: testFn, parent: suite });
						suite.tests.push(test);
					});
				}
			};
		}
	};
});

This custom interface would then be used like this:

// in tests/unit/test2.js
define(function (require) {
	var interface = require('../support/suiteInterface');
	var assert = require('intern/chai!assert');

	var suite = interface.createSuite('my suite');
	suite.addTest('my test', function () {
		assert.ok(true);
	});
});

As a more practical (but incomplete) example, to convert a Jasmine test suite to an Intern test suite using a custom Jasmine interface, you’d simply run a script to wrap all of your existing Jasmine spec files like this:

// in tests/unit/jasmineTest.js
define(function (require) {
	var jasmine = require('../support/jasmineInterface');

	// you could also just use `with (jasmine) {}` if you want
	var describe = jasmine.describe,
		it = jasmine.it,
		expect = jasmine.expect,
		beforeEach = jasmine.beforeEach,
		afterEach = jasmine.afterEach,
		xdescribe = jasmine.xdescribe,
		xit = jasmine.xit,
		fdescribe = jasmine.fdescribe,
		fit = jasmine.fit;

	// existing test code goes here
});

Then, you’d only need to write a custom Jasmine test interface that creates Intern Suite and Test objects and registers them with the current executor. In this case, since the Jasmine API is so similar to Intern’s TDD API, it’s possible to leverage one of the built-in interfaces instead of having to do it all ourselves:

// in tests/support/jasmineInterface.js
define(function (require) {
	var tdd = require('intern!tdd');

	// This function creates an object that looks like a Jasmine Suite and
	// translates back to the native Intern Suite object type
	function createJasmineCompatibleSuite(suite) {
		return {
			disable: function () {
				suite.tests.forEach(function (test) {
					test.skip('Disabled');
				});
			},
			getFullName: function () {
				return suite.id;
			},
			// …
		};
	}

	// This function creates an object that looks like a Jasmine spec and
	// translates back to the native Intern Test object type
	function createJasmineCompatibleSpec(test) {
		return {
			disable: function () {
				test.skip('Disabled');
			},
			status: function () {
				if (test.skipped != null) {
					return 'disabled';
				}

				if (test.timeElapsed == null) {
					return 'pending';
				}

				if (test.error) {
					return 'failed';
				}

				return 'passed';
			},
			// …
		};
	}

	// This function wraps a Jasmine suite factory so that when it is invoked
	// it gets a `this` context that looks like a Jasmine suite
	function wrapJasmineSuiteFactory(factory) {
		return function () {
			var jasmineSuite = createJasmineCompatibleSuite(this);
			factory.call(jasmineSuite);
		};
	}

	// This function wraps a Jasmine spec so when it is invoked it gets a
	// `this` context that looks like a Jasmine spec and supports Jasmine’s
	// async API
	function wrapJasmineTest(test) {
		return function () {
			var jasmineTest = createJasmineCompatibleSpec(test);
			if (test.length === 1) {
				return new Promise(function (resolve) {
					test.call(jasmineTest, resolve);
				});
			}
			else {
				test.call(jasmineTest);
			}
		};
	}

	return {
		// When `describe` is called on the Jasmine interface…
		describe: function (name, factory) {
			// …route it through to Intern, wrapping the factory as needed
			// so it will work correctly in Intern
			tdd.suite(name, wrapJasmineSuiteFactory(factory));
		},

		// When `it` is called on the Jasmine interface…
		it: function (name, spec) {
			// …route it through to Intern, wrapping the test function
			// so it will work correctly in Intern
			tdd.test(name, wrapJasmineTest(spec));
		},

		// continue to translate the Jasmine API until tests run…

		// …
	};
});

As a loader plugin (for tests written in other languages)

For tests written in other languages, an AMD loader plugin can be used instead of a normal module to asynchronously parse and compile the foreign source code into a JavaScript function that can be called by an Intern Test object:

// in tests/support/javaInterface.js
define(function (require) {
	var text = require('intern/dojo/text');
	var parser = require('tests/support/javaTestParser');
	var intern = require('intern');
	var Test = require('intern/lib/Test');

	// Return an AMD loader plugin…
	return {
		// When the plugin is requested as a dependency…
		load: function (resourceId, require, load) {
			// …load the associated resource ID…
			text.load(resourceId, function (code) {
				// …then use a parser to convert the raw test code into
				// a list of test names and functions…
				var tests = parser.parse(code);

				// …then register each test function with Intern…
				tests.forEach(function (testNode) {
					intern.executor.register(function (suite) {
						var test = new Test({
							name: testNode.name,
							test: testNode.fn,
							parent: suite
						});
						suite.tests.push(test);
					});
				});

				// …then tell the module loader that everything is done loading
				load();
			});
		}
	};
});

To use a plugin-based test interface like this, use the AMD loader plugin syntax in your Intern configuration:

// in intern.js
define({
	suites: [
		// load `tests/unit/test.java` using `tests/support/javaInterface`
		'tests/support/javaInterface!tests/unit/test.java'
	],
	functionalSuites: [
		// load `tests/functional/test.java` using `tests/support/javaInterface`
		'tests/support/javaInterface!tests/functional/test.java'
	]
});

Custom executors 3.0

TODO

Custom reporters

If none of the built-in reporters provide the information you need, you can write a custom reporter and reference it using an absolute module ID (e.g. 'tests/support/CustomReporter').
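For example, a custom reporter can be referenced from the reporters array of the Intern configuration. The filename option below is a hypothetical example of extra configuration data that would be passed through to the reporter:

```javascript
// in intern.js — `filename` is a hypothetical option consumed by
// the custom reporter, not a built-in Intern setting
define({
	reporters: [
		{ id: 'tests/support/CustomReporter', filename: 'custom-report.txt' }
	]
});
```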

3.0 Reporters in Intern are JavaScript constructors. When instantiated, a reporter receives the configuration data provided by the user in their reporters configuration, along with the following special properties:

Property Description
console An object that provides the basic Console API for reporters that want to provide enhanced console-based output.
output A Writable stream where output data should be sent by calling output.write(data). This stream will automatically be closed by Intern at the end of the test run. Most reporters should use this mechanism for outputting data.

Reporters should implement one or more of the following methods, which will be called by Intern when an event occurs:

Method Description
coverage(
  sessionId: string,
  data: Object
)
This method is called when code coverage data has been retrieved from an environment. This will occur once per remote environment when all unit tests have completed, and again any time a new page is loaded. Each unique sessionId corresponds to a single remote environment. sessionId will be null for a local environment (for example, in the Node.js client).
deprecated(
  name: string,
  replacement?: string,
  extra?: string
)
This method is called when a deprecated function is called.
fatalError(error: Error) This method is called when an error occurs within the test system that is non-recoverable (for example, a bug within Intern).
newSuite(suite: Suite) This method is called when a new test suite is created.
newTest(test: Test) This method is called when a new test is created.
proxyEnd(config: Proxy) This method is called once the built-in HTTP server has finished shutting down.
proxyStart(config: Proxy) This method is called once the built-in HTTP server has finished starting up.
reporterError(
  reporter: Reporter,
  error: Error
)
This method is called when a reporter throws an error during execution of a command. If a reporter throws an error in response to a reporterError call, it will not be called again to avoid infinite recursion.
runEnd(executor: Executor) This method is called after all test suites have finished running and the test system is preparing to shut down.
runStart(executor: Executor) This method is called after all tests have been registered and the test system is about to begin running tests.
suiteEnd(suite: Suite) This method is called when a test suite has finished running.
suiteError(
  suite: Suite,
  error: Error
)
This method is called when an error occurs within one of the suite’s lifecycle methods (setup, beforeEach, afterEach, or teardown), or when an error occurs when a suite attempts to run a child test.
suiteStart(suite: Suite) This method is called when a test suite starts running.
testEnd(test: Test) This method is called when a test has finished running.
testFail(test: Test) This method is called when a test has failed.
testPass(test: Test) This method is called when a test has passed.
testSkip(test: Test) This method is called when a test has been skipped.
testStart(test: Test) This method is called when a test starts running.
tunnelDownloadProgress(
  tunnel: Tunnel,
  progress: Object
)
This method is called every time a tunnel download has progressed. The progress object contains loaded (bytes received) and total (bytes to download) properties.
tunnelEnd(tunnel: Tunnel) This method is called after the WebDriver server tunnel has shut down.
tunnelStart(tunnel: Tunnel) This method is called immediately before the WebDriver server tunnel is started.
tunnelStatus(
  tunnel: Tunnel,
  status: string
)
This method is called whenever the WebDriver server tunnel reports a status change.

A reporter can return a Promise from any of these methods, which will cause the test system to pause at that point until the Promise has been resolved. The behaviour of a rejected reporter Promise is currently undefined.
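To illustrate, here is a minimal sketch of a reporter constructor, shown without the AMD define wrapper for brevity. It assumes only the config.output stream and the event methods described above; the exact output format is arbitrary:

```javascript
// A minimal example reporter; `config.output` is the Writable stream
// provided by Intern, as described above
function SummaryReporter(config) {
	config = config || {};
	this.output = config.output;
	this.numFailures = 0;
}

// Announce each suite as it starts running
SummaryReporter.prototype.suiteStart = function (suite) {
	this.output.write('Running "' + suite.name + '"\n');
};

// Record and report each failed test
SummaryReporter.prototype.testFail = function (test) {
	this.numFailures++;
	this.output.write('FAIL: ' + test.id + ' (' + test.error.message + ')\n');
};

// Print a summary once the test system is shutting down
SummaryReporter.prototype.runEnd = function () {
	this.output.write(this.numFailures + ' test(s) failed\n');
};
```

In a real project this constructor would be returned from an AMD module (as in the legacy examples below) so it can be referenced by module ID from the reporters configuration.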

For backwards-compatibility, you can also create reporters using the deprecated Intern 2 format, where the reporter itself is a single JavaScript object that uses topic names as keys and functions as values:

define(function (require) {
	return {
		'/test/start': function (test) {
			console.log(test.id + ' started');
		},
		'/test/end': function (test) {
			console.log(test.id + ' ended');
		}
	};
});

Legacy reporters can also include optional start and stop methods for performing any additional arbitrary work when a reporter is started or stopped:

define(function (require) {
	var aspect = require('dojo/aspect');
	var Suite = require('intern/lib/Suite');

	var handles = [];
	return {
		start: function () {
			function augmentJsonValue() {
				/* … */
			}

			handles.push(aspect.after(Suite.prototype, 'toJSON', augmentJsonValue));
		},

		stop: function () {
			var handle;
			while ((handle = handles.pop())) {
				handle.remove();
			}
		}
	}
});

Events from the list above will be converted to topics for legacy reporters as follows:

Event Topic
coverage /coverage
deprecated /deprecated
fatalError /error
newSuite /suite/new
newTest /test/new
runEnd /client/end, /runner/end, stop (method)
runStart /runner/start, start (method)
suiteEnd /suite/end
suiteError /suite/error
suiteStart /suite/start
testEnd /test/end
testFail /test/fail
testPass /test/pass
testSkip /test/skip
testStart /test/start
tunnelDownloadProgress /tunnel/download/progress
tunnelEnd /tunnel/stop
tunnelStart /tunnel/start
tunnelStatus /tunnel/status

Internals

The Suite object

The Suite object represents a logical grouping of tests within a test run. When inside a setup (a.k.a. “before”), beforeEach, afterEach, or teardown (a.k.a. “after”) method, the this object will be the Suite object that represents that suite.

The following properties and methods are available on all Suite objects:

Property Description
error An Error object containing any error thrown from one of the suite lifecycle methods.
grep A RegExp that will be used to skip tests. This value will be inherited from the parent suite. See the grep configuration option for more information.
id The unique identifier string for this suite. This property is read-only.
name A string representing the human-readable name of the suite.
numTests The total number of tests registered in this suite and its sub-suites. (To get just the number of tests for this suite, use tests.length.)
numFailedTests The number of failed tests in this suite and its sub-suites.
numSkippedTests The number of skipped tests in this suite and its sub-suites.
parent The parent suite, if this suite is a sub-suite. This property will be null for unit tests that have been sent from a client back to the test runner.
publishAfterSetup When set to false (the default), the suiteStart event is sent before the setup method runs and the suiteEnd event is sent after the teardown method has finished running. Setting this value to true flips when the suiteStart and suiteEnd events are sent to reporters so suiteStart is sent after setup is finished and suiteEnd is sent before teardown is finished.
remote A Leadfoot Command object that can be used to drive a remote environment. This value will be inherited from the parent suite. Only available to suites that are loaded from functionalSuites.
reporterManager 3.0 The event hub that can be used to send result data and other information to reporters. This value will be inherited from the parent suite.
sessionId A unique identifier for a remote environment session. This value will be inherited from the parent suite. Only available to suites that are loaded from functionalSuites.
timeElapsed Time, in milliseconds, that the suite took to execute. Only available once all tests in the suite have finished running.
tests An array of Test or Suite objects. Push more tests/suites onto this array before a test run begins to populate the test system with tests. The behaviour of adding tests after a test run has begun is undefined.
Method Description
afterEach(test: Test 3.0):
  Promise<void>
A function which will be executed after each test in the suite, including nested, skipped, and failed tests. If a Promise is returned, the suite will wait until the Promise is resolved before continuing. If the Promise rejects, the test will be considered failed and the error from the Promise will be used as the error for the Test.
afterEachLoop(test: Test):
  Promise<void> 3.4
A function added to suites of BenchmarkTests that will be executed after each execution of the test function during a benchmarking run.
beforeEach(test: Test 3.0):
  Promise<void>
A function which will be executed before each test in the suite, including nested tests. If a Promise is returned, the suite will wait until the Promise is resolved before continuing. If the Promise rejects, the test will be considered failed and the error from the Promise will be used as the error for the Test.
beforeEachLoop(test: Test):
  Promise<void> 3.4
A function added to suites of BenchmarkTests that will be executed before each execution of the test function during a benchmarking run.
run(): Promise<number> Runs the test suite. Returns a Promise that resolves to the number of failed tests after all tests in the suite have finished running.
setup(): Promise<void> A function which will be executed once when the suite starts running. If a Promise is returned, the suite will wait until the Promise is resolved before continuing. If the Promise rejects, the suite will be considered failed and the error from the Promise will be used as the error for the Suite.
teardown(): Promise<void> A function which will be executed once after all tests in the suite have finished running. If a Promise is returned, the suite will wait until the Promise is resolved before continuing. If the Promise rejects, the suite will be considered failed and the error from the Promise will be used as the error for the Suite.
toJSON(): Object Returns an object that can be safely serialised to JSON. This method normally does not need to be called directly; JSON.stringify will use the toJSON method automatically if you try to serialise the Suite object.

3.1 Within the lifecycle methods (setup/before, beforeEach, afterEach, teardown/after), an async method is available that can be used in lieu of returning a Promise. For example:

setup: function () {
	var dfd = this.async(1000);
	fs.readFile(filename, function (error, data) {
		if (error) {
			dfd.reject(error);
			return;
		}
		testData = data;
		dfd.resolve();
	});
}

async returns an augmented Deferred object (see asynchronous tests for more information). The lifecycle method in which async was called will wait for the Deferred to resolve, just as if a Promise were returned. The main difference between using the async method and returning a Promise is that async allows the method’s timeout to be configured with an optional number of milliseconds.

The Test object

The Test object represents a single test within a test run. When inside a test function, the this object will be the Test object that represents that test.

The following properties and methods are available on all Test objects:

Property Description
error If a test fails, the error that caused the failure will be available here.
id The unique identifier string for this test. This property is read-only.
isAsync A flag representing whether or not this test is asynchronous. This flag will not be set until the test function actually runs.
name A string representing the human-readable name of the test.
parent The parent suite for the test. This property must be set by the test interface that instantiates the Test object. This property will be null for unit tests that have been sent from a client back to the test runner.
remote A Leadfoot Command object that can be used to drive a remote environment. This value will be inherited from the parent suite. Only available to suites that are loaded from functionalSuites.
reporterManager 3.0 The event hub that can be used to send result data and other information to reporters. This value will be inherited from the parent suite.
sessionId A unique identifier for a remote environment session. This value will be inherited from the parent suite. Only available to suites that are loaded from functionalSuites.
skipped If a test is skipped, the reason for the skip will be available here.
timeElapsed Time, in milliseconds, that the test took to execute. Only available once the test has finished running.
timeout The maximum time, in milliseconds, that an asynchronous test can take to finish before it is considered timed out. Once the test function has finished executing, changing this value has no effect.
Method Description
async(
  timeout?: number,
  numCallsUntilResolution?: number
): Deferred
Makes the test asynchronous. This method is idempotent and will always return the same Deferred object. See asynchronous tests for more information.
skip(reason?: string): void Causes the test to be skipped.
test(): Promise<void> The test function for this test. If a Promise is returned, the test will wait until the Promise is resolved before passing. If the Promise rejects, the test will be considered failed and the error from the Promise will be used as the error for the Test.
toJSON(): Object Returns an object that can be safely serialised to JSON. This method normally does not need to be called directly; JSON.stringify will use the toJSON method automatically if you try to serialise the Test object.
run(): Promise<void> Runs the test. Returns a Promise that resolves to undefined after the test finishes successfully, or rejects with an error if the test failed.
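The skip and timeout members above can be exercised from inside a test function via this. The sketch below pulls the test body out as a plain function so it can be shown in isolation; testFn stands in for a function you would pass to one of the test interfaces:

```javascript
// `this` inside a test function is the Test object; this hypothetical
// body skips in non-browser environments and raises its own timeout
var testFn = function () {
	if (typeof window === 'undefined') {
		this.skip('browser only');
		return;
	}
	this.timeout = 10000;
	// … browser-dependent assertions would go here …
};
```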

The BenchmarkTest object

The BenchmarkTest object is an extension of the Test object that represents a single benchmark test within a test run. It manages the execution of its benchmark test using the Benchmark.js library. It supports the Test API, barring the async and skip methods.

Community

Getting help

The Intern team wants to help people like you write tests more quickly and easily than ever before. As such, we offer two different ways you can get help with Intern:

Community support

The Intern community is available to assist you with basic questions, advice, and general guidance. There are two primary ways to get community support: posting a question to Stack Overflow using the intern tag, or chatting with the Intern team and other users in the project’s Gitter room.

Commercial support

Some problems are too complicated, specific, time-sensitive, or confidential to be solved through free community support. In these cases, the creators of Intern, SitePen, offer commercial support services for you or your company. Commercial support has several advantages over community support:

  • Guaranteed response
  • 24 hours maximum response time
  • Priority bug fix and enhancement requests
  • Total confidentiality for your next big idea
  • Provides direct financial support for ongoing development of Intern

If you aren’t sure if commercial support is right for you, we’re happy to take a few minutes to talk through your needs in greater detail. Get in touch to schedule a time!

Contributing

We’re always excited to receive contributions from the community. If you think you’ve discovered a bug, want to submit a patch, or would like to request a new feature, take a look at our contribution guidelines on GitHub to learn how you can contribute.

FAQ

How do I use modifier keys?

Import the leadfoot/keys module and use its constants with the pressKeys method. For example, to send Shift + Click to the browser:

require([
	'intern!object',
	'intern/chai!assert',
	'intern/dojo/node!leadfoot/keys'
], function (registerSuite, assert, keys) {
	registerSuite({
		name: 'test',
		'test1': function () {
			return this.remote
				.get('testpage.html')
				.findById('testLink')
				.pressKeys(keys.SHIFT)
				.click()
				// Release all currently pressed modifier keys
				.pressKeys(keys.NULL)
				.end();
		}
	});
});

How can I keep the test page open?

Use Intern’s leaveRemoteOpen command line option to keep the browser open after testing is complete:

intern-runner config=myPackage/test/intern leaveRemoteOpen

How do I test locally with multiple browsers?

  1. Set up a local WebDriver server.
  2. Configure the environments section of your Intern config to use multiple target browsers:
    environments: [
    	{ browserName: 'chrome' },
    	{ browserName: 'internet explorer' }
    ]
  3. Start your WebDriver server: java -jar selenium-server-standalone-2.53.0.jar
  4. Run Intern: intern-runner config=myPackage/test/intern
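Putting the steps above together, a sketch of a config for a local Selenium server might look like the following. NullTunnel tells Intern not to start a cloud tunnel and to connect directly to the server described by tunnelOptions; the host and port shown are Selenium’s defaults and should be adjusted to match your server:

```javascript
// in myPackage/test/intern.js — a sketch only; the host and port are
// assumptions based on a default local Selenium server
define({
	environments: [
		{ browserName: 'chrome' },
		{ browserName: 'internet explorer' }
	],
	tunnel: 'NullTunnel',
	tunnelOptions: {
		hostname: 'localhost',
		port: 4444
	}
});
```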