The test_server_ctrl module provides a low level interface to the Test Server. This interface is normally not used directly by the tester, but through a framework built on top of test_server_ctrl.
Common Test is such a framework, well suited for automated black box testing of target systems of any kind (not necessarily implemented in Erlang). Common Test is also a very useful tool for white box testing Erlang programs and OTP applications. Please see the Common Test User's Guide and reference manual for more information.
If you want to write your own framework, some more information can be found in the chapter "Writing your own test server framework" in the Test Server User's Guide. Details about the interface provided by test_server_ctrl follow below.
start() -> Result
start(ParameterFile) -> Result
Types:
Result = ok | {error, {already_started, pid()}}
ParameterFile = atom() | string()
This function starts the test server. If the parameter file is given, it indicates that the target is remote. In that case the target node is started and a socket connection is established between the controller and the target node.
The parameter file is a text file containing key-value tuples. Each tuple must be followed by a dot-newline sequence. The following key-value tuples are allowed:
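As an illustration of the file format only (the key names below are placeholders, since the list of allowed key-value tuples is not reproduced here), a parameter file could look like this:

```erlang
%% Each key-value tuple is terminated by a dot-newline sequence.
%% key1 and key2 are hypothetical keys, shown only to illustrate the format.
{key1, value1}.
{key2, "some string value"}.
```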
stop() -> ok

This function stops the test server (both controller and target) and all its activity. The currently running test suite, if any, is halted.
add_dir(Name, Dir) -> ok
add_dir(Name, Dir, Pattern) -> ok
add_dir(Name, [Dir|Dirs]) -> ok
add_dir(Name, [Dir|Dirs], Pattern) -> ok
Types:
Name = term()
Puts a collection of suites matching the pattern *_SUITE in the given directories into the job queue. Name is an arbitrary name for the job; it can be any Erlang term. If Pattern is given, only modules matching Pattern* are added.
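As a sketch (the directory paths and job names are examples only), jobs could be queued like this from the Erlang shell after the test server has been started:

```erlang
%% Queue all modules matching *_SUITE found in two directories,
%% under the job name "my_app_suites" (paths are examples only).
ok = test_server_ctrl:add_dir("my_app_suites",
                              ["/ldisk/test/app1", "/ldisk/test/app2"]).
%% Only suites whose names match foo* in a single directory:
ok = test_server_ctrl:add_dir("foo_suites", "/ldisk/test/app1", "foo").
```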
add_module(Mod) -> ok
add_module(Name, [Mod|Mods]) -> ok
Types:
Mod = atom()
Mods = [atom()]
This function adds a module, or a list of modules, to the test server's job queue. Name may be any Erlang term. When Name is not given, the job gets the name of the module.
add_case(Mod, Case) -> ok
Types:
Mod = atom()
This function will add one test case to the job queue. The job will be given the module's name.
add_case(Name, Mod, Case) -> ok
Types:
Name = string()
Equivalent to add_case/2, but the test job will get the specified name.
add_cases(Mod, Cases) -> ok
Types:
Mod = atom()
This function will add one or more test cases to the job queue. The job will be given the module's name.
add_cases(Name, Mod, Cases) -> ok
Types:
Name = string()
Equivalent to add_cases/2, but the test job will get the specified name.
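A minimal sketch (the suite, case, and job names are hypothetical):

```erlang
%% Queue two test cases from my_SUITE; the job is named "smoke".
ok = test_server_ctrl:add_cases("smoke", my_SUITE, [case_a, case_b]).
```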
add_spec(TestSpecFile) -> ok | {error, nofile}
Types:
TestSpecFile = string()
This function will add the content of the given test specification file to the job queue. The job will be given the name of the test specification file, e.g. if the file is called test.spec, the job will be called test.
See the reference manual for the test server application for details about the test specification file.
add_dir_with_skip(Name, [Dir|Dirs], Skip) -> ok
add_dir_with_skip(Name, [Dir|Dirs], Pattern, Skip) -> ok
add_module_with_skip(Mod, Skip) -> ok
add_module_with_skip(Name, [Mod|Mods], Skip) -> ok
add_case_with_skip(Mod, Case, Skip) -> ok
add_case_with_skip(Name, Mod, Case, Skip) -> ok
add_cases_with_skip(Mod, Cases, Skip) -> ok
add_cases_with_skip(Name, Mod, Cases, Skip) -> ok
Types:
Skip = [SkipItem]
These functions add test jobs just like the add_dir, add_module, add_case and add_cases functions above, but carry an additional argument, Skip. Skip is a list of items that should be skipped in the current test run. Test job items that occur in the Skip list will be logged as SKIPPED with the associated Comment.
add_tests_with_skip(Name, Tests, Skip) -> ok
Types:
Name = term()
This function adds various test jobs to the test_server_ctrl job queue. These jobs can be of different type (all or specific suites in one directory, all or specific cases in one suite, etc). It is also possible to get particular items skipped by passing them along in the Skip list (see the add_*_with_skip functions above).
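As a sketch of such a mixed job list (the exact shapes of the Tests and Skip items below are assumptions, not taken verbatim from this manual page):

```erlang
%% Sketch only: item shapes are assumed, paths and names are examples.
Tests = [{"/ldisk/test/app1", all, all},           % every suite in the dir
         {"/ldisk/test/app2", my_SUITE, [tc_1]}],  % one case in one suite
Skip  = [{other_SUITE, "not applicable on this host"}],
ok = test_server_ctrl:add_tests_with_skip("nightly", Tests, Skip).
```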
abort_current_testcase(Reason) -> ok | {error,no_testcase_running}
Types:
Reason = term()
When calling this function, the currently executing test case will be aborted. It is the user's responsibility to know for sure which test case is currently executing. The function is therefore only safe to call from a function which has been called (or synchronously invoked) by the test case.
set_levels(Console, Major, Minor) -> ok
Types:
Console = integer()
Determines where I/O from test suites and the test server will go. All text output from test suites and the test server is tagged with a priority value ranging from 0 to 100, where 100 is the most detailed (see the section about log files in the user's guide). Output from the test cases (using io:format/2) has a detail level of 50. Depending on the levels set by this function, this I/O is sent to the console, the major log file (for the whole test suite), or the minor log file (separate for each test case).
All output with detail level:
To view the currently set thresholds, use the get_levels/0 function.
get_levels() -> {Console, Major, Minor}
Returns the current levels. See set_levels/3 for types.
jobs() -> JobQueue
Types:
JobQueue = [{list(), pid()}]
This function will return all the jobs currently in the job queue.
multiply_timetraps(N) -> ok
Types:
N = integer() | infinity
This function should be called before starting a test that requires extended timetraps, e.g. if extensive tracing is used. All timetraps started after this call are multiplied by N.
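For example:

```erlang
%% Give all subsequently started timetraps 10 times their normal duration,
%% e.g. before a test run with extensive tracing:
ok = test_server_ctrl:multiply_timetraps(10).
%% Or disable timetrap timeouts altogether:
ok = test_server_ctrl:multiply_timetraps(infinity).
```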
cover(Application,Analyse) -> ok
cover(CoverFile,Analyse) -> ok
cover(App,CoverFile,Analyse) -> ok
Types:
Application = atom()
This function informs the test_server controller that the next test shall run with code coverage analysis. All timetraps are automatically multiplied by 10 when cover is run.
Application and CoverFile indicate what to cover compile. If Application is given, the default is that all modules in the ebin directory of the application are cover compiled. The ebin directory is found by adding ebin to code:lib_dir(Application).
A CoverFile can have the following entries:
{exclude, all | ExcludeModuleList}.
{include, IncludeModuleList}.
Note that each line must end with a full stop. ExcludeModuleList and IncludeModuleList are lists of atoms, where each atom is a module name.
If both an Application and a CoverFile are given, all modules in the application are cover compiled, except for the modules listed in ExcludeModuleList. The modules in IncludeModuleList are also cover compiled.
If a CoverFile is given, but no Application, only the modules in IncludeModuleList are cover compiled.
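Following the two entry forms above, a small cover specification file could look like this (all module names are examples):

```erlang
%% Cover compile all modules of the application except these two:
{exclude, [my_app_internal, my_app_debug]}.
%% Additionally cover compile these modules from outside the application:
{include, [lists_extra, my_util]}.
```

Given such a file, stored under a path of your choosing, the next test could then be started with e.g. test_server_ctrl:cover(my_app, CoverFile, details).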
Analyse indicates the detail level of the cover analysis. If Analyse = details, each cover compiled module is analysed with cover:analyse_to_file/1. If Analyse = overview, an overview of all cover compiled modules is created, listing the number of covered and uncovered lines for each module.
If the test following this call starts any slave or peer nodes with test_server:start_node/3, the same cover compiled code will be loaded on all nodes. If the loading fails, e.g. if the node runs an old version of OTP, the node will simply not be a part of the coverage analysis. Note that slave or peer nodes must be stopped with test_server:stop_node/1 for the node to be part of the coverage analysis, else the test server will not be able to fetch coverage data from the node.
When the test is finished, the coverage analysis is automatically completed, logs are created and the cover compiled modules are unloaded. If another test is to be run with coverage analysis, test_server_ctrl:cover/2/3 must be called again.
cross_cover_analyse(Level) -> ok
Types:
Level = details | overview
Analyse cover data collected from all tests. The modules analysed are the ones listed in the cross cover file cross.cover in the current directory of the test server.
The modules listed in the cross.cover file are modules that are heavily used by applications other than the one they belong to. This function should be run after all tests are completed, and the result is stored in a file called cross_cover.html in the run.<timestamp> directory of the application the modules belong to.
The cross.cover file contains elements like this:
{App,Modules}.
where App can be an application name or the atom all. The application (or all applications) will cover compile the listed Modules.
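As an illustration (the module names are examples only), a cross.cover file could contain:

```erlang
%% All applications cover compile lists and file_sorter when their
%% tests run:
{all, [lists, file_sorter]}.
%% The my_app application additionally cover compiles my_util:
{my_app, [my_util]}.
```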
trc(TraceInfoFile) -> ok | {error, Reason}
Types:
TraceInfoFile = atom() | string()
This function starts call trace on target and on slave or peer nodes that are started or will be started by the test suites.
Timetraps are not extended automatically when tracing is used. Use multiply_timetraps/1 if necessary.
Note that the trace support in the test server is in a very early stage of the implementation, and thus not yet as powerful as one might wish for.
The trace information file specified by the TraceInfoFile argument is a text file containing one or more of the following elements:
The trace result will be logged in a (binary) file called NodeName-test_server in the current directory of the test server controller node. The log must be formatted using ttb:format/1/2.
This is valid for all targets except the OSE/Delta target for which all nodes will be logged and automatically formatted in one single text file called allnodes-test_server.
stop_trace() -> ok | {error, not_tracing}
This function stops tracing on target, and on slave or peer nodes that are currently running. New slave or peer nodes will no longer be traced after this.
The following functions are supposed to be invoked from the command line using the -s option when starting the erlang node.
run_test(CommandLine) -> ok
Types:
CommandLine = FlagList
This function is supposed to be invoked from the command line. It starts the test server, interprets the arguments supplied from the command line, runs the tests specified, and when all tests are done, stops the test server and returns to the Erlang prompt.
The CommandLine argument is a list of command line flags, typically ['KEY1', Value1, 'KEY2', Value2, ...]. The valid command line flags are listed below.
Under a UNIX command prompt, this function can be invoked like this:
erl -noshell -s test_server_ctrl run_test KEY1 Value1 KEY2 Value2 ... -s erlang halt
Or make an alias (this is for UNIX/tcsh):
alias erl_test 'erl -noshell -s test_server_ctrl run_test \!* -s erlang halt'
And then use it like this:
erl_test KEY1 Value1 KEY2 Value2 ...
The valid command line flags are
A test server framework can be defined by setting the environment variable TEST_SERVER_FRAMEWORK to a module name. This module will then be the framework callback module, and it must export the following functions:
get_suite(Mod,Func) -> TestCaseList
Types:
Mod = atom()
Func = atom()
TestCaseList = [SubCase]
This function is called before a test case is started. The purpose is to retrieve a list of subcases. The default behaviour of this function should be to call Mod:Func(suite) and return the result from this call.
init_tc(Mod,Func,Args) -> {ok,Args}
Types:
Mod = atom()
Func = atom()
Args = [tuple()]
This function is called when a test case is started. It is called on the process executing the test case function (Mod:Func). Typical use of this function can be to alter the input parameters to the test case function (Args) or to set properties for the executing process.
end_tc(Mod,Func,Args) -> ok
Types:
Mod = atom()
Func = atom()
Args = [tuple()]
This function is called when a test case is completed. It is called on the process where the test case function (Mod:Func) was executed. Typical use of this function can be to clean up stuff done by init_tc/3.
report(What,Data) -> ok
Types:
What = atom()
Data = term()
This function is called in order to keep the framework up to date about the progress of the test. This is useful e.g. if the framework implements a GUI where the progress information is constantly updated. The following can be reported:
What = tests_start, Data = {Name,NumCases}
What = tests_done, Data = {Ok,Failed,Skipped}
What = tc_start, Data = {Mod,Func}
What = tc_done, Data = {Mod,Func,Result}
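Putting the callbacks described above together, a minimal framework callback module could be sketched like this. The module name and the logging side effects are examples only, and the completion and progress callbacks are assumed to be named end_tc/3 and report/2 as described above:

```erlang
-module(my_test_framework).
-export([get_suite/2, init_tc/3, end_tc/3, report/2]).

%% Return the list of subcases; the default behaviour is to ask
%% the suite itself.
get_suite(Mod, Func) ->
    Mod:Func(suite).

%% Called on the test case process before Mod:Func runs; the input
%% arguments may be altered here.
init_tc(_Mod, _Func, Args) ->
    {ok, Args}.

%% Called on the test case process after Mod:Func has completed;
%% clean up anything done in init_tc/3.
end_tc(_Mod, _Func, _Args) ->
    ok.

%% Progress callback, e.g. for updating a GUI; here it just logs.
report(tc_start, {Mod, Func}) ->
    io:format("starting ~w:~w~n", [Mod, Func]);
report(tc_done, {Mod, Func, Result}) ->
    io:format("done ~w:~w -> ~p~n", [Mod, Func, Result]);
report(_What, _Data) ->
    ok.
```

The module is selected by setting the environment variable, e.g. TEST_SERVER_FRAMEWORK=my_test_framework, before the node is started.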
error_notification(Mod, Case, Args, Error) -> ok
Types:
Mod = atom()
This function is called as the result of test case Mod:Case failing with Reason at Location. The function is intended mainly to aid specific logging or error handling in the framework application. Note that for Location to have relevant values (i.e. other than unknown), the line macro or test_server_line parse transform must be used. For details, see the section about test suite line numbers in the test_server reference manual page.
warn(What) -> true | false
Types:
What = processes | nodes
The test server checks the number of processes and nodes before and after the test is executed. This function is a question to the framework if the test server should warn when the number of processes or nodes has changed during the test execution. If true is returned, a warning will be written in the test case minor log file.
target_info() -> InfoStr
Types:
InfoStr = string() | ""
The test server will ask the framework for information about the test target system and print InfoStr in the test case log file below the host information.