2 Test Structure and Test Specifications

2.1  Test structure

A test consists of a set of test cases. Each test case is implemented as an Erlang function. An Erlang module implementing one or more test cases is called a test suite.

2.2  Test specifications

A test specification is a specification of which test suites and test cases to run and which to skip. A test specification can also group several test cases into conf cases with init and cleanup functions (see section about configuration cases below). In a test there can be test specifications on three different levels:

The top level is a test specification file which roughly specifies what to test for a whole application. The test specification in such a file is encapsulated in a topcase command.

Then there is a test specification for each test suite, specifying which test cases to run within the suite. The test specification for a test suite is returned from the all(suite) function in the test suite module.

And finally there can be a test specification per test case, specifying sub test cases to run. The test specification for a test case is returned from the specification clause of the test case.

When a test starts, the total test specification is built in a tree fashion, starting from the top level test specification.

The following are the valid elements of a test specification. The specification can be one of these elements or a list with any combination of the elements:

{Mod, Case}
This specifies the test case Mod:Case/1
{dir, Dir}
This specifies all modules *_SUITE in the directory Dir
{dir, Dir, Pattern}
This specifies all modules Pattern* in the directory Dir
{conf, Init, TestSpec, Fin}
This is a configuration case. In a test specification file, Init and Fin must be {Mod,Func}. Inside a module they can also be just Func. See the section named Configuration Cases below for more information about this.
{conf, Properties, Init, TestSpec, Fin}
This is a configuration case as explained above, but which also takes a list of execution properties for its group of test cases and nested sub-groups.
{make, Init, TestSpec, Fin}
This is a special version of a conf case which is only used by the test server framework ts. Init and Fin are make and unmake functions for a data directory. TestSpec is the test specification for the test suite owning the data directory in question. If the make function fails, all tests in the test suite are skipped. The difference between this "make case" and a normal conf case is that for the make case, Init and Fin are given with arguments ({Mod,Func,Args}).
Case
This can only be used inside a module, i.e. not in a test specification file. It specifies the test case CurrentModule:Case.
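As an illustration, an all(suite) function might combine several of these elements. The module, case, and function names below are made up for the example:

```erlang
%% Hypothetical test specification returned from all(suite)
all(suite) ->
    [my_first_case,                          %% a case in this module
     {other_SUITE, some_case},               %% a case in another module
     {conf, init_db, [insert_case, delete_case], fin_db}].  %% a conf case
```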

2.3  Test Specification Files

A test specification file is a text file containing the top level test specification (a topcase command), and possibly one or more additional commands. A "command" in a test specification file means a key-value tuple ended by a dot-newline sequence.

The following commands are valid:

{topcase, TestSpec}
This command is mandatory in all test specification files. TestSpec is the top level test specification of a test.
{skip, {Mod, Comment}}
This specifies that all cases in the module Mod shall be skipped. Comment is a string.
{skip, {Mod, Case, Comment}}
This specifies that the case Mod:Case shall be skipped.
{skip, {Mod, CaseList, Comment}}
This specifies that all cases Mod:Case, where Case is in CaseList, shall be skipped.
{nodes, Nodes}
Nodes is a list of node names (atoms) available to the test suite. It will be added to the Config argument to all test cases.
{require_nodenames, Num}
Specifies how many nodenames the test suite will need. These will be automatically generated and inserted into the Config argument to all test cases. Num is an integer.
{hosts, Hosts}
This is a list of available hosts on which to start slave nodes. It is used when the {remote, true} option is given to the test_server:start_node/3 function. Also, if {require_nodenames, Num} is contained in a test specification file, the generated nodenames will be spread over all hosts given in this Hosts list. The hostnames are atoms or strings.
{diskless, true}
Adds {diskless, true} to the Config argument to all test cases. This is kept for backwards compatibility and should not be used. Use a configuration case instead.
{ipv6_hosts, Hosts}
Adds {ipv6_hosts, Hosts} to the Config argument to all test cases.
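Putting a few of these commands together, a minimal test specification file might look like the following sketch. The application, suite, and case names are invented for the example:

```erlang
%% my_app.spec - hypothetical test specification file
{topcase, {dir, "../my_app_test"}}.
{skip, {my_db_SUITE, replication_case, "Known bug, not yet fixed"}}.
{require_nodenames, 2}.
```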

All test specification files shall have the extension ".spec". If special test specification files are needed for Windows or VxWorks platforms, additional files with the extension ".spec.win" and ".spec.vxworks" shall be used. This is useful e.g. if some test cases shall be skipped on these platforms.

Some examples for test specification files can be found in the Examples section of this user's guide.

2.4  Configuration cases

If a group of test cases needs the same initialization, a so-called configuration (or conf) case can be used. A conf case consists of an initialization function, the group of test cases needing this initialization, and a cleanup or finalization function.

If the init function in a conf case fails or returns {skip,Comment}, the rest of the test cases in the conf case (including the cleanup function) are skipped. If the init function succeeds, the cleanup function will always be called, even if some of the test cases in between failed.

Both the init function and the cleanup function in a conf case get the Config parameter as their only argument. This parameter can be modified or returned as is. Whatever is returned by the init function is given as the Config parameter to the rest of the test cases in the conf case, including the cleanup function.

If the Config parameter is changed by the init function, it must be restored by the cleanup function. Whatever is returned by the cleanup function will be given to the next test case called.
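As a sketch, an init function might add a value to Config and the cleanup function remove it again before passing Config on. The key name and the open_database/close_database helpers are assumptions made up for this example:

```erlang
%% Hypothetical init/cleanup pair for a conf case
init_db(Config) ->
    Handle = open_database(),                  %% assumed helper
    [{db_handle, Handle} | Config].            %% returned Config is passed on

fin_db(Config) ->
    Handle = ?config(db_handle, Config),
    close_database(Handle),                    %% assumed helper
    lists:keydelete(db_handle, 1, Config).     %% restore original Config
```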

The optional Properties list can be used to specify execution properties for the test cases and possibly nested sub-groups of the configuration case. The available properties are:

      Properties = [parallel | sequence | Shuffle | {RepeatType,N}]
      Shuffle = shuffle | {shuffle,Seed}
      Seed = {integer(),integer(),integer()}
      RepeatType = repeat | repeat_until_all_ok | repeat_until_all_fail |
                   repeat_until_any_ok | repeat_until_any_fail
      N = integer() | forever

If the parallel property is specified, Test Server will execute all test cases in the group in parallel. If sequence is specified, the cases will be executed in a sequence, meaning if one case fails, all following cases will be skipped. If shuffle is specified, the cases in the group will be executed in random order. The repeat property orders Test Server to repeat execution of the cases in the group a given number of times, or until any, or all, cases fail or succeed.

Properties may be combined, so that if e.g. shuffle, repeat_until_any_fail, and sequence are all specified, the test cases in the group will be executed repeatedly and in random order until a test case fails; at that point execution stops immediately and the remaining cases are skipped.
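In a test specification, such a combination could be written as follows (the function and case names are hypothetical):

```erlang
{conf, [shuffle, {repeat_until_any_fail, 10}, sequence],
 init_group, [case1, case2, case3], end_group}
```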

The properties for a conf case are always printed at the top of the HTML log for the group's init function. Also, the total execution time for a conf case can be found at the bottom of the log for the group's end function.

Configuration cases may be nested so that sets of grouped cases can be configured with the same init- and end functions.

2.5  The parallel property and nested configuration cases

If a conf case has a parallel property, its test cases will be spawned simultaneously and get executed in parallel. A test case is not allowed to execute in parallel with the end function however, which means that the time it takes to execute a set of parallel cases is equal to the execution time of the slowest test case in the group. A negative side effect of running test cases in parallel is that the HTML summary pages are not updated with links to the individual test case logs until the end function for the conf case has finished.

A conf case nested under a parallel conf case will start executing in parallel with previous (parallel) test cases (no matter what properties the nested conf case has). Since, however, test cases are never executed in parallel with the init- or the end function of the same conf case, it's only after a nested group of cases has finished that any remaining parallel cases in the previous conf case get spawned.
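A nested structure of this kind might be specified as in the following sketch (all names hypothetical): case_a and case_b run in parallel, while case_c and case_d form a sequence nested inside the parallel group.

```erlang
{conf, [parallel], init_outer,
 [case_a, case_b,
  {conf, [sequence], init_inner, [case_c, case_d], end_inner}],
 end_outer}
```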

2.6  Repeated execution of test cases

A conf case may be repeated a certain number of times (specified by an integer) or indefinitely (specified by forever). The repetition may also be stopped prematurely if any or all cases fail or succeed, i.e. if the property repeat_until_any_fail, repeat_until_any_ok, repeat_until_all_fail, or repeat_until_all_ok is used. If the basic repeat property is used, status of test cases is irrelevant for the repeat operation.

It is possible to return the status of a conf case (ok or failed) to affect the execution of the conf case on the level above. This is accomplished by, in the end function, looking up the value of tc_group_result in the Config list and checking the result of the finished test cases. If status failed should be returned from the conf case as a result, the end function should return the value {return_group_result,failed}. The status of a nested conf case is taken into account by Test Server when deciding if execution should be repeated or not (unless the basic repeat property is used).

The tc_group_result value is a list of status tuples with the keys ok, skipped, and failed. The value of each status tuple is a list containing the names of test cases that were executed with the corresponding result.

Here's an example of how to return the status from a conf case:

      conf_end_function(Config) ->
          Status = ?config(tc_group_result, Config),
          case proplists:get_value(failed, Status) of
              [] ->                                   % no failed cases
                  {return_group_result,ok};
              _Failed ->                              % one or more failed
                  {return_group_result,failed}
          end.

It is also possible in the end function to check the status of a nested conf case (maybe to determine what status the current conf case should return). This is done in the same way as in the example above, except that the name of the end function of the nested conf case is stored in a tuple {group_result,EndFunc}, which can be searched for in the status lists. Example:

      conf_end_function_X(Config) ->
          Status = ?config(tc_group_result, Config),
          Failed = proplists:get_value(failed, Status),
          case lists:member({group_result,conf_end_function_Y}, Failed) of
              true ->
                  {return_group_result,failed};
              false ->
                  {return_group_result,ok}
          end.

When a conf case is repeated, the init- and end functions are also always called with each repetition.

2.7  Shuffled test case order

Under normal circumstances, the test cases in a conf case are executed in the order defined in the test specification. With the shuffle property set, however, Test Server will instead execute the test cases in random order.

The user may provide a seed value (a tuple of three integers) with the shuffle property: {shuffle,Seed}. This way, the same shuffling order can be created every time the conf case is executed. If no seed value is given, Test Server creates a "random" seed for the shuffling operation (using the return value of erlang:now()). The seed value is always printed to the log file of the init function so that it can be used to recreate the same execution order in subsequent test runs.
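For example, to reproduce a previously logged shuffling order, the seed printed in the log could be passed like this (the function and case names are hypothetical):

```erlang
{conf, [{shuffle, {1,2,3}}], init_group, [case1, case2, case3], end_group}
```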


If execution of a conf case with shuffled test cases is repeated, the seed will not be reset in between turns.

If a nested conf case is specified in a conf case with a shuffle property, the execution order of the nested cases in relation to the test cases (and other conf cases) is also random. The order of the test cases in the nested conf case is however not random (unless, of course, this one also has a shuffle property).

2.8  Skipping test cases

It is possible to skip certain test cases, for example if you know beforehand that a specific test case fails. This might be functionality which isn't yet implemented, a bug that is known but not yet fixed or some functionality which doesn't work or isn't applicable on a specific platform.

There are several different ways to state that a test case should be skipped:

  • Using the {skip,What} command in a test specification file
  • Returning {skip,Reason} from the init_per_testcase/2 function
  • Returning {skip,Reason} from the specification clause of the test case
  • Returning {skip,Reason} from the execution clause of the test case

The latter of course means that the execution clause is actually called, so the author must make sure that the test case is not run. For more information about the different clauses in a test case, see the chapter about writing test cases.
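As a sketch, skipping from the specification clause and from the execution clause might look like this. The case names, the reason strings, and the do_the_test helper are assumptions made up for the example:

```erlang
%% Skipping from the specification clause:
broken_case(suite) ->
    {skip, "Not applicable on this platform"}.

%% Skipping from the execution clause:
other_case(suite) -> [];
other_case(Config) when is_list(Config) ->
    case os:type() of
        {win32, _} -> {skip, "Not supported on Windows"};
        _          -> do_the_test(Config)      %% assumed helper
    end.
```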

When a test case is skipped, it will be noted as SKIPPED in the HTML log.