Common Test
User's Guide
Version 1.5.4



3 Writing Test Suites

3.1  Support for test suite authors

The ct module provides the main interface for writing test cases. This includes, for example:

  • Functions for printing and logging
  • Functions for reading configuration data
  • Functions for terminating a test case with a given error reason
  • Functions for adding comments to the HTML overview page

Please see the reference manual for the ct module for details about these functions.
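
As a rough sketch (the test case name and the my_host configuration variable are made up for illustration), these functions are typically used like this inside a test case:

      %% Illustrative sketch only: typical calls into the ct interface module.
      some_test_case(_Config) ->
          ct:log("Plain printout to the test case log: ~p", [self()]),
          ct:pal("Printed both to the log and to the console", []),
          case ct:get_config(my_host) of
              undefined ->
                  ct:fail(no_host_configured);    % terminate with an error reason
              _Host ->
                  ct:comment("host configured"),  % comment in the HTML overview
                  ok
          end.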

The CT application also includes other modules named ct_<something> that provide various support, mainly simplified use of communication protocols such as rpc, snmp, ftp, telnet, etc.

3.2  Test suites

A test suite is an ordinary Erlang module that contains test cases. It is recommended that the module have a name of the form *_SUITE.erl. Otherwise, the directory and auto compilation function in CT will not be able to locate it (at least not by default).

The ct.hrl header file must be included in all test suite files.

Each test suite module must export the function all/0 which returns the list of all test case groups and test cases in that module.
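
A minimal suite module following these conventions could look like this sketch (the module name and the test case are made up for illustration):

      %% A minimal, hypothetical test suite module (example_SUITE.erl).
      -module(example_SUITE).

      %% The ct.hrl header file must be included in every test suite.
      -include_lib("common_test/include/ct.hrl").

      -export([all/0, my_test_case/1]).

      %% all/0 returns the list of test case groups and test cases in the module.
      all() ->
          [my_test_case].

      %% A trivial test case; returning any value means success.
      my_test_case(_Config) ->
          ok.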

3.3  Init and end per suite

Each test suite module may contain the optional configuration functions init_per_suite/1 and end_per_suite/1. If the init function is defined, so must the end function be.

If it exists, init_per_suite is called initially, before the test cases are executed. It typically contains initializations that are common for all test cases in the suite and that are only to be performed once. It is recommended that it be used for setting up and verifying state and environment on the SUT (System Under Test) and/or the CT host node, so that the test cases in the suite execute correctly. Examples of initial configuration operations: opening a connection to the SUT, initializing a database, running an installation script, etc.

end_per_suite is called as the final stage of the test suite execution (after the last test case has finished). The function is meant to be used for cleaning up after init_per_suite.

init_per_suite and end_per_suite will execute on dedicated Erlang processes, just like the test cases do. The result of these functions is however not included in the test run statistics of successful, failed and skipped cases.

The argument to init_per_suite is Config, the same key-value list of runtime configuration data that each test case takes as input argument. init_per_suite can modify this parameter with information that the test cases need. The possibly modified Config list is the return value of the function.

If init_per_suite fails, all test cases in the test suite will be skipped automatically (so called auto skipped), including end_per_suite.
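
As a sketch, a suite that needs a connection to the SUT for all its test cases might contain something like the following (open_connection_to_sut/0 and close_connection_to_sut/1 are hypothetical helper functions, not part of Common Test):

      %% Illustrative only: set up a shared resource once for the whole suite
      %% and make it available to every test case via the Config list.
      init_per_suite(Config) ->
          {ok, Conn} = open_connection_to_sut(),          % hypothetical helper
          [{sut_connection, Conn} | Config].

      %% Clean up whatever init_per_suite/1 set up.
      end_per_suite(Config) ->
          Conn = ?config(sut_connection, Config),
          ok = close_connection_to_sut(Conn),             % hypothetical helper
          ok.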

3.4  Init and end per test case

Each test suite module can contain the optional configuration functions init_per_testcase/2 and end_per_testcase/2. If the init function is defined, so must the end function be.

If it exists, init_per_testcase is called before each test case in the suite. It typically contains initialization that must be done for each test case (analogous to init_per_suite for the suite).

end_per_testcase/2 is called after each test case has finished, giving the opportunity to perform clean-up after init_per_testcase.

The first argument to these functions is the name of the test case. This value can be used with pattern matching in function clauses or conditional expressions to choose different initialization and cleanup routines for different test cases, or perform the same routine for a number of, or all, test cases.

The second argument is the Config key-value list of runtime configuration data, which has the same value as the list returned by init_per_suite. init_per_testcase/2 may modify this parameter or return it as is. The return value of init_per_testcase/2 is passed as the Config parameter to the test case itself.
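
A sketch of pattern matching on the test case name (db_insert_test and the db_start/0 helper are hypothetical):

      %% Hypothetical example: one test case needs a database handle,
      %% all other test cases get the Config list unchanged.
      init_per_testcase(db_insert_test, Config) ->
          {ok, Db} = db_start(),                          % hypothetical helper
          [{db_handle, Db} | Config];
      init_per_testcase(_TestCase, Config) ->
          Config.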

The return value of end_per_testcase/2 is ignored by the test server, with the exception of the save_config and fail tuples.

It is possible in end_per_testcase to check if the test case was successful or not (which consequently may determine how cleanup should be performed). This is done by reading the value tagged with tc_status from Config. The value is either ok, {failed,Reason} (where Reason is timetrap_timeout, info from exit/1, or details of a run-time error), or {skipped,Reason} (where Reason is a user-specific term).
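
For instance, cleanup in end_per_testcase could be made conditional on the test case result by reading tc_status (a sketch; the printouts are only illustrative):

      %% Sketch: inspect the result of the test case in end_per_testcase/2.
      end_per_testcase(_TestCase, Config) ->
          case ?config(tc_status, Config) of
              ok ->
                  ok;                                        % normal cleanup only
              {failed, Reason} ->
                  ct:pal("Test case failed: ~p", [Reason]);
              {skipped, Reason} ->
                  ct:pal("Test case skipped: ~p", [Reason])
          end.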

The end_per_testcase/2 function is called even after a test case terminates due to a call to ct:abort_current_testcase/1, or after a timetrap timeout. However, end_per_testcase will then execute on a different process than the test case function, and in this situation, end_per_testcase will not be able to change the reason for test case termination by returning {fail,Reason}, nor will it be able to save data with {save_config,Data}.

If init_per_testcase crashes, the test case itself is skipped automatically (so called auto skipped). If init_per_testcase returns a tuple {skip,Reason}, the test case is also skipped (so called user skipped). It is also possible, by returning a tuple {fail,Reason} from init_per_testcase, to mark the test case as failed without actually executing it.

Note

If init_per_testcase crashes, or returns {skip,Reason} or {fail,Reason}, the end_per_testcase function is not called.

If it is determined during execution of end_per_testcase that the status of a successful test case should be changed to failed, end_per_testcase may return the tuple: {fail,Reason} (where Reason describes why the test case fails).

init_per_testcase and end_per_testcase execute on the same Erlang process as the test case and printouts from these configuration functions can be found in the test case log file.

3.5  Test cases

The smallest unit that the test server is concerned with is a test case. Each test case can actually test many things, for example make several calls to the same interface function with different parameters.

It is possible to choose to put many or few tests into each test case. What exactly each test case does is of course up to the author, but here are some things to keep in mind:

Having many small test cases tends to result in extra, and possibly duplicated, code, as well as slow test execution because of the large overhead of initializations and cleanups. Duplicated code should be avoided, e.g. by means of common help functions, or the resulting suite will be difficult to read and understand, and expensive to maintain.

Larger test cases make it harder to tell what went wrong if one of them fails, and large portions of test code will potentially be skipped when errors occur. Furthermore, readability and maintainability suffer when test cases become too large and extensive. Also, the resulting log files may not reflect very well the number of tests that have actually been performed.

The test case function takes one argument, Config, which contains configuration information such as data_dir and priv_dir. (See Data and Private Directories for more information about these). The value of Config at the time of the call is the same as the return value from init_per_testcase, see above.

Note

The test case function argument Config should not be confused with the information that can be retrieved from configuration files (using ct:get_config/[1,2]). The Config argument should be used for runtime configuration of the test suite and the test cases, while configuration files should typically contain data related to the SUT. These two types of configuration data are handled differently!

Since the Config parameter is a list of key-value tuples, i.e. a data type generally called a property list, it can be handled by means of the proplists module in the OTP stdlib. A value can for example be searched for and returned with the proplists:get_value/2 function. Also, or alternatively, you might want to look in the general lists module, also in stdlib, for useful functions. Normally, the only operations you ever perform on Config are insert (adding a tuple to the head of the list) and lookup. Common Test provides a simple macro named ?config, which returns the value of an item in Config given the key (exactly like proplists:get_value). Example: PrivDir = ?config(priv_dir, Config).

If the test case function crashes or exits purposely, it is considered failed. If it returns a value (no matter what actual value) it is considered successful. An exception to this rule is the return value {skip,Reason}. If this tuple is returned, the test case is considered skipped and gets logged as such.

If the test case returns the tuple {comment,Comment}, the case is considered successful and Comment is printed out in the overview log file. This is, by the way, equivalent to calling ct:comment(Comment).
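
As a small sketch, a complete test case might look as follows (the file name is made up; any crash, e.g. a failed match, would make the case fail):

      %% Sketch: look up priv_dir with the ?config macro, write a scratch
      %% file there, and add a comment to the HTML overview log.
      write_scratch_file(Config) ->
          PrivDir = ?config(priv_dir, Config),
          ScratchFile = filename:join(PrivDir, "scratch.txt"),
          ok = file:write_file(ScratchFile, <<"temporary test data">>),
          {comment, "scratch file written to priv_dir"}.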

3.6  Test case info function

For each test case function there can be an additional function with the same name but with no arguments. This is the test case info function. The test case info function is expected to return a list of tagged tuples that specifies various properties regarding the test case.

The following tags have special meaning:

timetrap

Set the maximum time the test case is allowed to execute. If the timetrap time is exceeded, the test case fails with reason timetrap_timeout. Note that init_per_testcase and end_per_testcase are included in the timetrap time.

userdata

Use this to specify arbitrary data related to the testcase. This data can be retrieved at any time using the ct:userdata/3 utility function.

silent_connections

Please see the Silent Connections chapter for details.

require

Use this to specify configuration variables that are required by the test case. If the required configuration variables are not found in any of the test system configuration files, the test case is skipped.

It is also possible to give a required variable a default value that will be used if the variable is not found in any configuration file. To specify a default value, add a tuple of the form {default_config,ConfigVariableName,Value} to the test case info list (the position in the list is irrelevant). Examples:

      testcase1() ->
          [{require, ftp},
           {default_config, ftp, [{ftp, "my_ftp_host"},
                                  {username, "aladdin"},
                                  {password, "sesame"}]}].

      testcase2() ->
          [{require, unix_telnet, {unix, [telnet, username, password]}},
           {default_config, unix, [{telnet, "my_telnet_host"},
                                   {username, "aladdin"},
                                   {password, "sesame"}]}].

See the Config files chapter and the ct:require/[1,2] function in the ct reference manual for more information about require.

Note

Specifying a default value for a required variable can result in a test case always getting executed. This might not be the desired behaviour!

If timetrap and/or require is not set specifically for a particular test case, default values specified by the suite/0 function are used.

Other tags than the ones mentioned above will simply be ignored by the test server.

Example of a test case info function:

      reboot_node() ->
          [
           {timetrap,{seconds,60}},
           {require,interfaces},
           {userdata,
               [{description,"System Upgrade: RpuAddition Normal RebootNode"},
                {fts,"http://someserver.ericsson.se/test_doc4711.pdf"}]}
          ].

3.7  Test suite info function

The suite/0 function can be used in a test suite module to set the default values for the timetrap and require tags. If a test case info function also specifies any of these tags, the default value is overruled. See above for more information.

Other options that may be specified with the suite info list include stylesheet, userdata, and silent_connections, as illustrated below.

Example of the suite info function:

      suite() ->
          [
           {timetrap,{minutes,10}},
           {require,global_names},
           {userdata,[{info,"This suite tests database transactions."}]},
           {silent_connections,[telnet]},
           {stylesheet,"db_testing.css"}
          ].

3.8  Test case groups

A test case group is a set of test cases that share configuration functions and execution properties. Test case groups are defined by means of the groups/0 function according to the following syntax:

      groups() -> GroupDefs

      Types:

      GroupDefs = [GroupDef]
      GroupDef = {GroupName,Properties,GroupsAndTestCases}
      GroupName = atom()
      GroupsAndTestCases = [GroupDef | {group,GroupName} | TestCase]
      TestCase = atom()

GroupName is the name of the group and should be unique within the test suite module. Groups may be nested, and this is accomplished simply by including a group definition within the GroupsAndTestCases list of another group. Properties is the list of execution properties for the group. The possible values are:

      Properties = [parallel | sequence | Shuffle | {RepeatType,N}]
      Shuffle = shuffle | {shuffle,Seed}
      Seed = {integer(),integer(),integer()}
      RepeatType = repeat | repeat_until_all_ok | repeat_until_all_fail |
                   repeat_until_any_ok | repeat_until_any_fail
      N = integer() | forever

If the parallel property is specified, Common Test will execute all test cases in the group in parallel. If sequence is specified, the cases will be executed in a sequence, as described in the chapter Dependencies between test cases and suites. If shuffle is specified, the cases in the group will be executed in random order. The repeat property orders Common Test to repeat execution of the cases in the group a given number of times, or until any, or all, cases fail or succeed.

Example:

      groups() -> [{group1, [parallel], [test1a,test1b]},
                   {group2, [shuffle,sequence], [test2a,test2b,test2c]}].

To specify in which order groups should be executed (also with respect to test cases that are not part of any group), tuples of the form {group,GroupName} should be added to the all/0 list. Example:

      all() -> [testcase1, {group,group1}, testcase2, {group,group2}].

Properties may be combined. If, for example, shuffle, repeat_until_any_fail, and sequence are all specified, the test cases in the group will be executed repeatedly and in random order until a test case fails; at that point execution stops immediately and the remaining cases are skipped.

Before execution of a group begins, the configuration function init_per_group(GroupName, Config) is called (the function is mandatory if one or more test case groups are defined). The list of tuples returned from this function is passed to the test cases in the usual manner by means of the Config argument. init_per_group/2 is meant to be used for initializations common for the test cases in the group. After execution of the group is finished, the end_per_group(GroupName, Config) function is called. This function is meant to be used for cleaning up after init_per_group/2.
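
A sketch of group configuration functions, pattern matching on the group name (the group name server_group and the module my_server are made up for illustration):

      %% Hypothetical sketch: one group shares a server started once per group,
      %% other groups pass the Config list through unchanged.
      init_per_group(server_group, Config) ->
          {ok, Pid} = my_server:start_link(),               % hypothetical module
          [{server_pid, Pid} | Config];
      init_per_group(_GroupName, Config) ->
          Config.

      end_per_group(server_group, Config) ->
          ok = my_server:stop(?config(server_pid, Config)), % hypothetical module
          ok;
      end_per_group(_GroupName, _Config) ->
          ok.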

Note

init_per_testcase/2 and end_per_testcase/2 are always called for each individual test case, no matter if the case belongs to a group or not.

The properties for a group are always printed at the top of the HTML log for init_per_group/2. Also, the total execution time for a group can be found at the bottom of the log for end_per_group/2.

Test case groups may be nested so that sets of groups can be configured with the same init_per_group/2 and end_per_group/2 functions. Nested groups may be defined by including a group definition, or a group name reference, in the test case list of another group. Example:

      groups() -> [{group1, [shuffle], [test1a,
                                        {group2, [], [test2a,test2b]},
                                        test1b]},
                   {group3, [], [{group,group4},
                                 {group,group5}]},
                   {group4, [parallel], [test4a,test4b]},
                   {group5, [sequence], [test5a,test5b,test5c]}].

In the example above, if all/0 returns group name references in this order: [{group,group1},{group,group3}], the order of the configuration functions and test cases will be as follows (note that init_per_testcase/2 and end_per_testcase/2 are also always called, but are left out of this example for simplicity):

-      init_per_group(group1, Config) -> Config1                 (*)
--         test1a(Config1)
--         init_per_group(group2, Config1) -> Config2
---            test2a(Config2), test2b(Config2)
--         end_per_group(group2, Config2)
--         test1b(Config1)
-      end_per_group(group1, Config1)
-      init_per_group(group3, Config) -> Config3
--         init_per_group(group4, Config3) -> Config4
---            test4a(Config4), test4b(Config4)                  (**)
--         end_per_group(group4, Config4)
--         init_per_group(group5, Config3) -> Config5
---            test5a(Config5), test5b(Config5), test5c(Config5)
--         end_per_group(group5, Config5)
-      end_per_group(group3, Config3)

    (*)  The order of the test cases test1a, test1b and group2 is not
         actually defined, since group1 has a shuffle property.

    (**) These cases are not executed in order, but in parallel.

Properties are not inherited from top-level groups to nested sub-groups. In the example above, for instance, the test cases in group2 will not be executed in random order (which is the property of group1).

3.9  The parallel property and nested groups

If a group has a parallel property, its test cases will be spawned simultaneously and get executed in parallel. A test case is not allowed to execute in parallel with end_per_group/2 however, which means that the time it takes to execute a parallel group is equal to the execution time of the slowest test case in the group. A negative side effect of running test cases in parallel is that the HTML summary pages are not updated with links to the individual test case logs until the end_per_group/2 function for the group has finished.

A group nested under a parallel group will start executing in parallel with previous (parallel) test cases (no matter what properties the nested group has). Since, however, test cases are never executed in parallel with init_per_group/2 or end_per_group/2 of the same group, it's only after a nested group has finished that any remaining parallel cases in the previous group get spawned.

3.10  Repeated groups

A test case group may be repeated a certain number of times (specified by an integer) or indefinitely (specified by forever). The repetition may also be stopped prematurely if any or all cases fail or succeed, i.e. if the property repeat_until_any_fail, repeat_until_any_ok, repeat_until_all_fail, or repeat_until_all_ok is used. If the basic repeat property is used, status of test cases is irrelevant for the repeat operation.

It is possible to return the status of a sub-group (ok or failed) to affect the execution of the group on the level above. This is accomplished by, in end_per_group/2, looking up the value of tc_group_result in the Config list and checking the result of the test cases in the group. If the status failed should be returned from the group as a result, end_per_group/2 should return the value {return_group_result,failed}. The status of a sub-group is taken into account by Common Test when evaluating whether execution of a group should be repeated or not (unless the basic repeat property is used).

The tc_group_result value is a list of status tuples with the keys ok, skipped, and failed. The value of each status tuple is a list containing the names of test cases that have been executed with the corresponding result.

Here's an example of how to return the status from a group:

      end_per_group(_Group, Config) ->
          Status = ?config(tc_group_result, Config),
          case proplists:get_value(failed, Status) of
              [] ->                                   % no failed cases
                  {return_group_result,ok};
              _Failed ->                              % one or more failed
                  {return_group_result,failed}
          end.

It is also possible in end_per_group/2 to check the status of a sub-group (maybe to determine what status the current group should in turn return). This works just like in the example above, except that the name of the sub-group is stored in a tuple {group_result,GroupName}, which can be searched for in the status lists. Example:

      end_per_group(group1, Config) ->
          Status = ?config(tc_group_result, Config),
          Failed = proplists:get_value(failed, Status),
          case lists:member({group_result,group2}, Failed) of
              true ->
                  {return_group_result,failed};
              false ->
                  {return_group_result,ok}
          end;
      ...

Note

When a test case group is repeated, the configuration functions, init_per_group/2 and end_per_group/2, are also always called with each repetition.

3.11  Shuffled test case order

The order in which the test cases in a group are executed is, under normal circumstances, the same as the order specified in the test case list in the group definition. With the shuffle property set, however, Common Test will instead execute the test cases in random order.

The user may provide a seed value (a tuple of three integers) with the shuffle property: {shuffle,Seed}. This way, the same shuffling order can be created every time the group is executed. If no seed value is given, Common Test creates a "random" seed for the shuffling operation (using the return value of erlang:now()). The seed value is always printed to the init_per_group/2 log file so that it can be used to recreate the same execution order in a subsequent test run.
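
For example, a group could be defined with a fixed seed like this (the group and test case names are made up):

      %% Hypothetical group with a fixed shuffle seed: the same pseudo-random
      %% execution order is reproduced in every test run.
      groups() ->
          [{my_shuffled_group, [{shuffle,{1,2,3}}], [tc1, tc2, tc3]}].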

Note

If a shuffled test case group is repeated, the seed will not be reset in between turns.

If a sub-group is specified in a group with a shuffle property, the execution order of this sub-group in relation to the test cases (and other sub-groups) in the group, is also random. The order of the test cases in the sub-group is however not random (unless, of course, the sub-group also has a shuffle property).

3.12  Data and Private Directories

The data directory (data_dir) is the directory where the test module has its own files needed for the testing. The name of the data_dir is the name of the test suite followed by "_data". For example, "some_path/foo_SUITE.beam" has the data directory "some_path/foo_SUITE_data/". Use this directory for portability, i.e. to avoid hardcoding directory names in your suite. Since the data directory is stored in the same directory as your test suite, you should be able to rely on its existence at runtime, even if the path to your test suite directory has changed between test suite implementation and execution.

The priv_dir is the test suite's private directory. This directory should be used when a test case needs to write to files. The name of the private directory is generated by the test server, which also creates the directory.

Note

You should not depend on the current working directory for reading and writing data files, since this is not portable. All scratch files are to be written in the priv_dir and all data files should be located in data_dir. Note also that the Common Test server sets the current working directory to the test case log directory at the start of every case.
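
As a sketch, a test case could locate its input relative to data_dir instead of relying on the current working directory ("input.txt" is a made-up file name that would reside in the suite's data directory):

      %% Sketch: read a data file from data_dir rather than from the
      %% current working directory, which is not portable.
      read_input_file(Config) ->
          DataDir = ?config(data_dir, Config),
          {ok, Bin} = file:read_file(filename:join(DataDir, "input.txt")),
          true = byte_size(Bin) > 0.   % crash (and fail) if the file is empty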

3.13  Execution environment

Each test case is executed by a dedicated Erlang process. The process is spawned when the test case starts, and terminated when the test case is finished. The configuration functions init_per_testcase and end_per_testcase execute on the same process as the test case.

The configuration functions init_per_suite and end_per_suite execute, like test cases, on dedicated Erlang processes.

3.14  Timetrap timeouts

The default time limit for a test case is 30 minutes, unless a timetrap is specified either by the suite info function or by a test case info function. The timetrap timeout value defined in suite/0 is the value that will be used for each test case in the suite (as well as for the configuration functions init_per_suite/1 and end_per_suite/1). A timetrap timeout value set with the test case info function will override the value set by suite/0, but only for that particular test case.

It is also possible to set/reset a timetrap during test case (or configuration function) execution. This is done by calling ct:timetrap/1. This function will cancel the current timetrap and start a new one.

Timetrap values can be extended with a multiplier value specified at startup with the multiply_timetraps option. It is also possible to let Test Server decide to scale up timetrap timeout values automatically, e.g. if tools such as cover or trace are running during the test. This feature is disabled by default and can be enabled with the scale_timetraps start option.

If a test case needs to suspend itself for a time that also gets multiplied by multiply_timetraps, and possibly scaled up if scale_timetraps is enabled, the function ct:sleep/1 may be called.
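
As a sketch, a test case could extend its own time limit and then suspend itself in a way that respects the timetrap multiplier and scaling (the durations are arbitrary):

      %% Sketch: reset the timetrap for a long-running step, then sleep with
      %% ct:sleep/1 so the pause is multiplied/scaled along with the timetraps.
      long_running_case(_Config) ->
          ct:timetrap({minutes, 5}),     % cancel the current timetrap, start a new one
          ok = ct:sleep({seconds, 30}),  % affected by multiply/scale_timetraps
          ok.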

3.15  Illegal dependencies

Even though it is highly efficient to write test suites with the Common Test framework, there will surely be mistakes made, mainly due to illegal dependencies. Noted below are some of the more frequent mistakes from our own experience with running the Erlang/OTP test suites.

  • Depending on current directory, and writing there:

    This is a common error in test suites. It is assumed that the current directory is the same as what the author used as current directory when the test case was developed. Many test cases even try to write scratch files to this directory. Instead data_dir and priv_dir should be used to locate data and for writing scratch files.

  • Depending on the Clearcase (file version control system) paths and files:

    The test suites are stored in Clearcase but are not (necessarily) run within this environment. The directory structure may vary from test run to test run.

  • Depending on execution order:

    During development of test suites, no assumption should (preferably) be made about the execution order of the test cases or suites. E.g. a test case should not assume that a server it depends on has already been started by a previous test case. There are several reasons for this:

    Firstly, the user/operator may specify the order at will, and maybe a different execution order is more relevant or efficient on some particular occasion. Secondly, if the user specifies a whole directory of test suites for the test, the order in which the suites are executed will depend on how the files are listed by the operating system, which varies between systems. Thirdly, if a user wishes to run only a subset of a test suite, there is no way one test case could successfully depend on another.

  • Depending on Unix:

    Running Unix commands through os:cmd/1 is likely not to work on non-Unix platforms.

  • Nested test cases:

    Invoking one test case from another not only tests the same thing twice, but also makes it harder to follow what exactly is being tested. Also, if the called test case fails for some reason, so will the caller. This way, one error gives rise to several error reports, which is less than ideal.

    Functionality common for many test case functions may be implemented in common help functions. If these functions are useful for test cases across suites, put the help functions into common help modules.

  • Failure to crash or exit when things go wrong:

    Making requests without checking that the return value indicates success may be ok if the test case will fail at a later stage, but it is never acceptable just to print an error message (into the log file) and return successfully. Such test cases do harm since they create a false sense of security when overviewing the test results.

  • Messing up for subsequent test cases:

    Test cases should restore as much of the execution environment as possible, so that subsequent test cases will not crash due to the order in which the test cases execute. The function end_per_testcase is suitable for this.