
Common Test

version 1.3
Typeset in LaTeX from SGML source using the DocBuilder-0.9.8.4 Document System.
Contents

1 Common Test User’s Guide 1


1.1 Common Test Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Test Suite Organisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 Support Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.4 Scripting Suite and Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.5 External Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Unix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Writing Test Suites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Support for test suite authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Test suites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.3 Init and end per suite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.4 Init and end per test case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.5 Test cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.6 Test case info function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.7 Test suite default data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.8 Data and Private Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.9 Execution environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.10 Illegal dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Test Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.1 Test structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.2 Skipping test cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.3 Definition of terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.1 Test suite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6 Running Test Suites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.1 Using the Common Test Framework . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.2 Automatic compilation of test suites and help modules . . . . . . . . . . . . . . 12



1.6.3 Running tests from the UNIX command line . . . . . . . . . . . . . . . . . . . . 12
1.6.4 Running tests from the Web based GUI . . . . . . . . . . . . . . . . . . . . . . . 13
1.6.5 Running tests from the Erlang shell . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6.6 Running interactive shell mode . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6.7 Using test specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6.8 Log files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.9 HTML Style Sheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6.10 Repeating tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.11 Silent Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7 Config Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.2 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.3 Reading config values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.8 Code Coverage Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.8.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.8.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.8.3 The cover specification file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8.4 Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.9 Using Common Test for Large Scale Testing . . . . . . . . . . . . . . . . . . . . . . . . 25
1.9.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.9.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.9.3 Test Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.10 Event Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.10.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.10.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.11 Dependencies between Test Cases and Suites . . . . . . . . . . . . . . . . . . . . . . . . 30
1.11.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.11.2 Saving configuration data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.11.3 Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.12 Some thoughts about testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.12.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.12.2 What to test? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

2 Common Test Reference Manual 37
2.1 Common Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.2 The run_test shell script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.3 ct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4 ct_cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.5 ct_ftp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.6 ct_master . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
2.7 ct_rpc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.8 ct_snmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.9 ct_telnet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.10 unix_telnet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

Chapter 1

Common Test User’s Guide

Common Test is a portable application for automated testing. It is suitable for black-box testing of target
systems of any type (i.e. not necessarily implemented in Erlang), as well as for white-box testing of
Erlang/OTP programs. Black-box testing is performed via standard O&M interfaces (such as SNMP,
HTTP, Corba, Telnet, etc) and, if required, via user specific interfaces (often called test ports).
White-box testing of Erlang/OTP programs is easily accomplished by calling the target API functions
directly from the test case functions. Common Test also integrates usage of the OTP cover tool for code
coverage analysis of Erlang/OTP programs.
Common Test executes test suite programs automatically, without operator interaction. Test progress
and results are printed to logs in HTML format, easily browsed with a standard web browser. Common
Test also sends notifications about progress and results via an OTP event manager to event handlers
plugged in to the system. This way users can integrate their own programs for e.g. logging, database
storage or supervision with Common Test.
Common Test provides libraries that contain useful support functions to fill various testing needs and
requirements. There is for example support for flexible test declarations by means of so called test
specifications. There is also support for central configuration and control of multiple independent test
sessions (towards different target systems) running in parallel.
Common Test is implemented as a framework based on the OTP Test Server application.

1.1 Common Test Basics


1.1.1 Introduction
The Common Test framework (CT) is a tool which can support implementation and automated
execution of test cases towards different types of target systems. The framework is based on the OTP
Test Server. Test cases can be run individually or in batches. Common Test also features a distributed
testing mode with central control and logging. This feature makes it possible to test multiple systems
independently in one common session. This can be very useful e.g. for running automated large-scale
regression tests.
The SUT (system under test) may consist of one or several target nodes. CT contains a generic test
server which together with other test utilities is used to perform test case execution. It is possible to
start the tests from the CT GUI or from an OS- or Erlang shell prompt. Test suites are files (Erlang
modules) that contain the test cases (Erlang functions) to be executed. Support modules provide
functions that the test cases utilize in order to carry out the tests.


The main idea is that CT based test programs connect to the target system(s) via standard O&M
interfaces. CT provides implementations and wrappers of some of these O&M interfaces and will be
extended with more interfaces later. There are a number of target independent interfaces supported in
CT such as Generic Telnet, FTP etc. which can be specialized or used directly for controlling
instruments, traffic generators etc.
For white-box testing of Erlang code, the test programs can of course call Erlang API functions directly.
Black-box testing of Erlang code can use Erlang RPC as well as standard O&M interfaces if desired.
A test case can handle several connections towards one or several target systems, instruments and traffic
generators in parallel in order to perform the necessary actions for a test. The handling of many
connections in parallel is one of the major strengths of CT!

1.1.2 Test Suite Organisation


The test suites are organized in test directories and each test suite may have a separate data directory.
Typically, these files and directories are version controlled similarly to other forms of source code
(possibly by means of some CVS-like tool). However, CT itself puts no requirements on (and has no
awareness of) possible file and directory versions.

1.1.3 Support Libraries


Support libraries are functions that are useful for all test suites or for test suites in a specific functional
area or subsystem. So, in addition to the general support libraries provided by the CT framework, there
might be a need for customized support libraries on a subsystem level.

1.1.4 Scripting Suite and Test Cases


Testing is performed by running test suites (a set of test cases) or individual test cases. A test suite is
implemented as an Erlang module named <suite_name>_SUITE.erl which contains a number of test
cases. A test case is an Erlang function which tests one or more things. The test case is the smallest unit
that the CT test server deals with.
The test suite file must conform to a certain interface which is specified by the CT test server. See the
section on writing test suites for more information.
A test case is considered successful if it returns to the caller, no matter what the returned value is. A
few return values have special meaning however (such as {skip,Reason} which indicates that the test
case is skipped, {comment,Comment} which prints a comment in the log for the test case and
{save_config,Config} which makes the CT test server pass Config to the next test case). A test case
failure is specified as a runtime error (a crash), no matter what the reason for termination is. If you use
Erlang pattern matching effectively, you can take advantage of this property. The result will be concise
and readable test case functions that look much more like scripts than actual programs. Simple example:

session(_Config) ->
    {started,ServerId} = my_server:start(),
    {clients,[]} = my_server:get_clients(ServerId),
    MyId = self(),
    connected = my_server:connect(ServerId, MyId),
    {clients,[MyId]} = my_server:get_clients(ServerId),
    disconnected = my_server:disconnect(ServerId, MyId),
    {clients,[]} = my_server:get_clients(ServerId),
    stopped = my_server:stop(ServerId).


As a test suite runs, all information (including output to stdout) is recorded in several different log files.
A minimum of information is displayed in the user console (only start and stop information, plus a note
for each failed test case).
The result from each test case is recorded in an HTML log file which is created for the particular test
run. Each test case is represented by a row in a table that shows the total execution time, whether the
case was successful or if it failed or was skipped, plus a possible user comment. For a failed test case, the
reason for termination is printed. The HTML file has a link to the logfile for each test case (which may
also be viewed with an HTML browser).

1.1.5 External Interfaces


The CT test server requires some default functions in a test suite. Each suite module should define and
export the following functions (most are however optional):

all() Returns a list of all test cases in the suite. (Mandatory)

suite() Default suite configuration. (Optional)
sequences() Specifies dependencies between test cases. (Optional)
init_per_suite(Conf) Executed before the first test case in a suite. (Optional)
end_per_suite(Conf) Executed after the last test case in a suite. (Optional)
init_per_testcase(TC, Conf) Executed before each test case in the suite. (Optional)
end_per_testcase(TC, Conf) Executed after each test case in the suite. (Optional)

For each test case the CT test server needs these functions:

Testcasename() Returns a key-value list of test case configuration/information. (Optional)

Testcasename(Config) The actual test case function.
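
To make the callback structure concrete, below is a minimal suite skeleton (the module and test case
names are hypothetical; sequences/0 is omitted since it is only needed when test cases depend on
each other):

-module(minimal_SUITE).
-compile(export_all).
-include("ct.hrl").

%% Mandatory: return the list of all test cases in the suite.
all() -> [my_testcase].

%% Optional: default suite configuration.
suite() -> [{timetrap,{minutes,1}}].

%% Optional: executed before/after the first/last test case in the suite.
init_per_suite(Config) -> Config.
end_per_suite(_Config) -> ok.

%% Optional: executed before/after each test case.
init_per_testcase(_TestCase, Config) -> Config.
end_per_testcase(_TestCase, _Config) -> ok.

%% Optional test case info function, and the test case itself.
my_testcase() -> [{timetrap,{seconds,30}}].
my_testcase(_Config) -> ok.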

1.2 Installation
1.2.1 Unix
Copy the Common Test and Test Server application directories, common_test-<vsn> and
test_server-<vsn>, to a location of your choice. They do not have to be placed among the Erlang
applications under the OTP lib directory, nor do they need to have a particular path relative to your test
suite modules. In the Common Test directory you find the shell script install.sh. Execute this script
to generate the Common Test start script run_test in the sub directory
common_test-<vsn>/priv/bin.
install.sh takes one input parameter which specifies the absolute path to the top directory of
Common Test and Test Server. (This path is inserted in the run_test script so that when the script
starts Erlang, the Erlang code server will be able to load the Common Test and Test Server application
modules). Example (assuming Common Test and Test Server have been placed in
/usr/local/test_tools):

$ install.sh /usr/local/test_tools

Note that the common_test-<vsn> and test_server-<vsn> directories must be located under the
same top directory for the installation to work properly. Note also that the install script does not e.g.
copy files or update environment variables. It only generates the run_test script.
If the directories are later moved, make sure to run install.sh again or edit the run_test script
(Bourne shell) manually.


For more information on the run_test script, please see the reference manual.

1.2.2 Windows
On Windows it is very convenient to use Cygwin (www.cygwin.com) for running Common Test and
Erlang, since it enables you to use the run_test script for starting Common Test. If you are a Cygwin
user, simply follow the instructions for installing Common Test on Unix above.
If you do not use Cygwin, you have to rely on the API functions in the ct module (instead of run_test)
for running Common Test. In this case you do not need to install Common Test (i.e. no need to
generate the run_test script). Simply copy the common_test-<vsn> and test_server-<vsn>
directories to a location of your choice. They do not have to be placed among the Erlang applications
under the OTP lib directory, nor do they need to have a particular path relative to your test suite
modules.
When you start the Erlang node on which you will be running Common Test, make sure the Common
Test and Test Server ebin directories are included in the Erlang code server path (so the application
modules can be loaded). If you do copy the application directories to the OTP lib directory, there is no
need to explicitly update the code server path, as the code server will be able to locate the modules
automatically.
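
For example, assuming the applications have been copied to c:/test_tools (a hypothetical location),
the node could be started with the ebin directories added to the code server path like this:

$ erl -pa c:/test_tools/common_test-<vsn>/ebin c:/test_tools/test_server-<vsn>/ebin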

1.3 Writing Test Suites


1.3.1 Support for test suite authors
The ct module provides the main interface for writing test cases. This includes:

- Functions for printing and logging
- Functions for reading configuration data
- Function for terminating a test case with error reason
- Function for adding comments to the HTML overview page
- Tracing of line numbers in the test suite, i.e. if a test case fails, the last 10 executed line numbers
  are displayed

Please turn to the reference manual for the ct module for details about these functions.
The CT application also includes other modules named ct_<something> that provide support for the
use of communication mechanisms such as rpc, snmp, ftp, telnet etc. in test suites.

1.3.2 Test suites


A test suite is an ordinary Erlang module that contains test cases. It is recommended that the module
has a name on the form *_SUITE.erl. Otherwise, the directory function in CT will not be able to
locate the modules (by default).
ct.hrl shall be included in all test suite files.
Each test suite module must export the function all/0 which returns the list of all test cases in that
module.

4 Common Test
1.3: Writing Test Suites

1.3.3 Init and end per suite


Each test suite module may contain the functions init_per_suite/1 and end_per_suite/1.
If it exists, init_per_suite is called as the first test case of the suite. It typically contains initialization
which is common for all test cases in the suite, and which is only to be performed once.
end_per_suite is called as the last test case of the suite. This function should clean up after
init_per_suite.
The argument to init_per_suite is Config, the same as the argument to all test cases. init_per_suite
can modify this parameter with information that the other test cases need.
If init_per_suite fails, all test cases in the test suite will be skipped, including end_per_suite.
It is possible in end_per_suite to check if the last executed test case was successful or not (which
consequently may determine how cleanup should be performed). This is done by reading the value
tagged with tc_status from Config. The value is either ok, {failed,Reason}, or {skipped,Reason}.
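
As an illustration, a sketch of such a pair might look as follows (the application name my_app is
hypothetical):

init_per_suite(Config) ->
    %% initialization common for all test cases, performed once
    ok = application:start(my_app),
    Config.

end_per_suite(Config) ->
    %% tc_status holds the result of the last executed test case:
    %% ok, {failed,Reason} or {skipped,Reason}
    case ?config(tc_status, Config) of
        ok     -> ok;    %% normal cleanup is sufficient
        _Other -> ok     %% a place for extra cleanup after failure or skip
    end,
    application:stop(my_app).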

1.3.4 Init and end per test case


Each test suite module can contain the functions init_per_testcase/2 and end_per_testcase/2.
If it exists, init_per_testcase is called before each test case in the suite. It typically contains
initialization which must be done for each test case.
end_per_testcase/2 is called after each test case is completed, giving a possibility to clean up.
The first argument to these functions is the name of the test case. This can be used to do individual
initialization and cleanup for each test case.
The second argument is called Config. init_per_testcase/2 may modify this parameter or just return
it as is. Whatever is returned by init_per_testcase/2 is given as the Config parameter to the test case
itself.
The return value of end_per_testcase/2 is ignored by the test server (with the exception of the
save_config [page 32] tuple).
If init_per_testcase crashes, the test case itself is skipped and end_per_testcase is never called.
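
A sketch that uses the test case name argument for individual initialization and cleanup (the test case
name and connection helpers are hypothetical):

init_per_testcase(my_telnet_case, Config) ->
    {ok,Handle} = open_telnet_connection(),          %% hypothetical helper
    [{conn,Handle} | Config];                        %% pass the handle to the test case
init_per_testcase(_TestCase, Config) ->
    Config.

end_per_testcase(my_telnet_case, Config) ->
    close_telnet_connection(?config(conn, Config));  %% hypothetical helper
end_per_testcase(_TestCase, _Config) ->
    ok.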

1.3.5 Test cases


The smallest unit that the test server is concerned with is a test case. Each test case can in turn test
many things, for example make several calls to the same interface function with different parameters.
It is possible to put many or few tests into each test case. How many things each test case does is of
course up to the author, but here are some things to keep in mind:
Using many small test cases tends to result in extra and often duplicated code, as well as slow test
execution because of the large overhead for initializations and cleanups. Lots of duplicated code results
in high maintenance cost and bad readability.
Larger test cases make it harder to tell what went wrong when they fail, and large portions of test code
will be skipped if a specific part fails. Also, readability and maintainability suffer when test cases become
too extensive.
The test case function takes one argument, Config, which contains configuration information such as
data_dir and priv_dir. See Data and Private Directories [page 7] for more information about these.

Common Test 5
Chapter 1: Common Test User’s Guide

Note:
The test case function argument Config should not be confused with the information that can be
retrieved from configuration files (using ct:get_config/[1,2]). The Config argument should be used for
runtime configuration of the test suite and the test cases. A configuration file should contain data
related to the SUT (system under test). These two types of config data are handled differently!

All Config items can be extracted using the ?config macro, e.g. PrivDir =
?config(priv_dir, Config).
If the test case function crashes or exits, it is considered a failure. If it returns a value (no matter what
actual value) it is considered a success. An exception to this rule is the return value {skip,Reason}. If
this is returned, the test case is considered skipped and gets logged as such.
If the test case returns the tuple {comment,Comment}, Comment is printed out in the overview log (this is
equal to calling ct:comment(Comment)).
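
A small sketch combining these rules (the file name is hypothetical): a failed match crashes the
function and thereby fails the test case, while returning a comment tuple marks success and annotates
the overview log:

my_write_case(Config) ->
    PrivDir = ?config(priv_dir, Config),
    File = filename:join(PrivDir, "scratch.txt"),
    ok = file:write_file(File, <<"scratch data">>),  %% a crash here = failure
    {comment, "scratch file written to priv_dir"}.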

1.3.6 Test case info function


For each test case function there can be an additional function with the same name but with no
arguments. This is the test case info function. The test case info function is expected to return a list of
tagged tuples that specifies various properties regarding the test case.
The following tags have special meaning:

timetrap Set the maximum time the test case is allowed to take. If the timetrap time is exceeded, the
test case fails with reason timetrap_timeout. Note that init_per_testcase and
end_per_testcase are included in the timetrap time.
userdata Use this to specify arbitrary data related to the test case. This data can be retrieved at any
time using the ct:userdata/3 utility function.
silent_connections Please see the Silent Connections [page 21] chapter for details.
require Use this to specify configuration variables that are required by the test case. If the required
configuration variables are not found in any of the test system configuration files, the test case is
skipped.
It is also possible to give a required variable a default value that will be used if the variable is not
found in any configuration file. To specify a default value, start by adding a normal require tuple
to the list for the variable in question and then let the key-value definition for the variable follow.
The definition should have the same form as in the configuration file. Example:

my_testcase() ->
    [{require, unix_telnet, {unix, [telnet, username, password]}},
     {unix_telnet, {unix, [{telnet, "durin.du.uab"},
                           {username, "alladin"},
                           {password, "sesame"}]}}].

Note:
Giving a required variable a default value can result in a test case always being run. For many types
of configuration values this is not a desired behavior.


If timetrap and/or require is not set specifically for a particular test case, default values specified by
the suite/0 function are used.
Other tags than the ones mentioned above will simply be ignored by the test server.
Example:

reboot_node() ->
    [
     {timetrap,{seconds,60}},
     {require,interfaces},
     {userdata,
         [{description,"System Upgrade: RpuAddition Normal RebootNode"},
          {fts,"http://someserver.ericsson.se/test_doc4711.pdf"}]}
    ].

1.3.7 Test suite default data


The suite/0 function can be used in a test suite module to set the default values for the timetrap and
require tags. If a test case info function also specifies any of these tags, the default value is overruled.
See above for more information.
Other options that may be specified with the suite defaults list are:

- stylesheet, see HTML Style Sheets [page 18].
- userdata, see Test case info function [page 6].
- silent_connections, see Silent Connections [page 21].

Example:

suite() ->
    [
     {timetrap,{minutes,10}},
     {require,global_names},
     {userdata,[{info,"This suite tests database transactions."}]},
     {silent_connections,[telnet]},
     {stylesheet,"db_testing.css"}
    ].

1.3.8 Data and Private Directories


The data directory (data_dir) is the directory where the test module has its own files needed for the
testing. The name of the data_dir is the name of the test suite followed by "_data". For example,
"some_path/foo_SUITE.beam" has the data directory "some_path/foo_SUITE_data/".
The priv_dir is the test suite’s private directory. This directory should be used when a test case needs
to write to files. The name of the private directory is generated by the test server, which also creates the
directory.


Note:
You should not depend on the current working directory for reading and writing data files since this is
not portable. All scratch files are to be written in the priv_dir and all data files should be located in
the data_dir. If you do need to use the current working directory, you must set it explicitly with
file:set_cwd/1 for each individual test case before use. (The Common Test server sets the current
working directory to the test case log directory at the start of every case.)
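
A sketch of a test case that follows these rules, reading input from data_dir and writing scratch output
to priv_dir (the file names are hypothetical):

copy_input(Config) ->
    DataDir = ?config(data_dir, Config),
    PrivDir = ?config(priv_dir, Config),
    {ok,Bin} = file:read_file(filename:join(DataDir, "input.txt")),
    ok = file:write_file(filename:join(PrivDir, "output.txt"), Bin).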

1.3.9 Execution environment


Each test case, including init_per_testcase and end_per_testcase, is executed by a dedicated Erlang
process. The process is spawned when the test case starts, and terminated when the test case is finished.
init_per_suite and end_per_suite are separate test cases and will execute on their own processes.
The default time limit for a test case is 30 minutes, unless a timetrap is specified either by the test case
info function or by the suite/0 function.

1.3.10 Illegal dependencies


Even though it is highly efficient to write test suites with the Common Test framework, there will be
mistakes in the test suites. Noted below are some of the more frequent dependency mistakes from our
experience with running the Erlang/OTP test suites.

- Depending on the current directory, and writing there:
  This is a common error in test suites. It is assumed that the current directory is the same as what
  the author used as current directory when the test case was developed. Many test cases even try to
  write scratch files to this directory. If the current directory has to be set to something in particular,
  use file:set_cwd/1 to set it. And use the data_dir and priv_dir to locate data and scratch files.
- Depending on Clearcase (file version control tool) paths and files:
  The test suites are stored in Clearcase but are not (necessarily) run within this environment. The
  directory structure may vary from test run to test run.
- Depending on execution order:
  There is no way of telling in which order the test cases are going to be run, so a test case can’t
  depend on a server being started by a test case that runs “before”. This has to be so for several
  reasons: the user may specify the order at will, and maybe some particular order is better suited
  sometimes. Secondly, if the user just specifies a test directory, the order the suites are executed in
  will depend on how the files are listed by the operating system, which varies between systems.
  Thirdly, if a user wishes to run only a subset of a test suite, there is no way one test case could
  successfully depend on another.
- Depending on Unix:
  Running Unix commands through unix:cmd or os:cmd will likely not work on non-Unix platforms.
- Nested test cases:
  Invoking one test case from another not only tests the same thing twice, but also makes it harder
  to follow what exactly is being tested. Also, if the called test case fails for some reason, so will the
  caller. This way one error gives cause to several error reports, which is less than ideal.
  Functionality common to many test case functions may be implemented in common help
  functions. If these functions are useful for test cases across suites, put the help functions into
  common help modules.


- Failure to crash or exit when things go wrong:
  Making requests without checking that the return value indicates success may be ok if the test
  case will fail at a later stage, but it is never acceptable just to print an error message (into the log
  file) and return successfully. Such test cases do harm since they create a false sense of security
  when overviewing the test results.
- Messing up for following test cases:
  Test cases should restore as much of the execution environment as possible, so that the following
  test cases will not crash because of the execution order of the test cases. The function
  end_per_testcase is suitable for this.

1.4 Test Structure


1.4.1 Test structure
A test consists of a set of test cases. Each test case is implemented as an Erlang function. An Erlang
module implementing one or more test cases is called a test suite. One or more test suites are stored in
a test directory.

1.4.2 Skipping test cases


It is possible to skip certain test cases, for example if you know beforehand that a specific test case fails.
This might be functionality which isn’t yet implemented, a bug that is known but not yet fixed, or some
functionality which doesn’t work or isn’t applicable on a specific platform.
There are several different ways to state that one or more test cases should be skipped:

- Using skip_suites and skip_cases terms in test specifications [page 15].
- Returning {skip,Reason} from the init_per_testcase/2 or init_per_suite/1 functions.
- Returning {skip,Reason} from the execution clause of the test case.

The latter of course means that the execution clause is actually called, so the author must make sure
that the actual test is not performed before {skip,Reason} is returned.
When a test case is skipped, it will be noted as SKIPPED in the HTML log.
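
For example, a known-but-unfixed bug can be noted by skipping the affected case from
init_per_testcase/2 (the test case name is hypothetical):

init_per_testcase(my_buggy_case, _Config) ->
    {skip,"known bug, not yet fixed"};
init_per_testcase(_TestCase, Config) ->
    Config.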

1.4.3 Definition of terms


data_dir Data directory for a test suite. This directory contains any files used by the test suite, e.g.
additional Erlang modules, binaries or data files.
major log file An overview log file for one or more test suites.
minor log file A log file for one particular test case.
priv_dir Private directory for a test suite. This directory should be used when the test suite needs to
write to files.
test case A single test included in a test suite. A test case is implemented as a function in a test suite
module.
test suite An Erlang module containing a collection of test cases for a specific functional area.
test directory A directory that contains one or more test suite modules, i.e. a group of test suites.


1.5 Examples
1.5.1 Test suite
The example test suite shows some tests of an HTTP client that uses a proxy.

-module(httpc_proxy_SUITE).

%% Note: This directive should only be used in test suites.
-compile(export_all).

-include("ct.hrl").

-define(URL, "http://www.erlang.org").
-define(PROXY, "www-proxy.ericsson.se").
-define(PROXY_PORT, 8080).

%%--------------------------------------------------------------------
%% Test server callback functions
%%--------------------------------------------------------------------

%%--------------------------------------------------------------------
%% Function: suite() -> DefaultData
%% DefaultData: [tuple()]
%% Description: Require variables and set default values for the suite
%%--------------------------------------------------------------------
suite() -> [{timetrap,{minutes,1}}].

%%--------------------------------------------------------------------
%% Function: init_per_suite(Config) -> Config
%% Config: [tuple()]
%% A list of key/value pairs, holding the test case configuration.
%% Description: Initiation before the whole suite
%%
%% Note: This function is free to add any key/value pairs to the Config
%% variable, but should NOT alter/remove any existing entries.
%%--------------------------------------------------------------------
init_per_suite(Config) ->
application:start(inets),
http:set_options([{proxy, {{?PROXY, ?PROXY_PORT}, ["localhost"]}}]),
Config.

%%--------------------------------------------------------------------
%% Function: end_per_suite(Config) -> _
%% Config: [tuple()]
%% A list of key/value pairs, holding the test case configuration.
%% Description: Cleanup after the whole suite
%%--------------------------------------------------------------------
end_per_suite(_Config) ->
application:stop(inets),
ok.
%%--------------------------------------------------------------------


%% Function: all() -> TestCases
%% TestCases: [Case]
%% Case: atom()
%% Name of a test case.
%% Description: Returns a list of all test cases in this test suite
%%--------------------------------------------------------------------
all() ->
[options, head, get, trace].

%%-------------------------------------------------------------------------
%% Test cases starts here.
%%-------------------------------------------------------------------------

options() ->
[{userdata,[{doc,"Perform an OPTIONS request that goes through a proxy."}]}].

options(_Config) ->
{ok, {{_,200,_}, Headers, _}} =
http:request(options, {?URL, []}, [], []),
case lists:keysearch("allow", 1, Headers) of
{value, {"allow", _}} ->
ok;
_ ->
ct:fail(http_options_request_failed)
end.

head() ->
[{userdata,[{doc,"Perform a HEAD request that goes through a proxy."}]}].

head(_Config) ->
{ok, {{_,200, _}, [_ | _], []}} =
http:request(head, {?URL, []}, [], []).

get() ->
[{userdata,[{doc, "Perform a GET request that goes through a proxy."}]}].

get(_Config) ->
{ok, {{_,200,_}, [_ | _], Body = [_ | _]}} =
http:request(get, {?URL, []}, [], []),
check_body(Body).

trace() ->
[{userdata,[{doc, "Perform a TRACE request that goes through a proxy."}]}].

trace(_Config) ->
{ok, {{_,200,_}, [_ | _], "TRACE /" ++ _}} =
http:request(trace, {?URL, []}, [], []),
ok.

%%--------------------------------------------------------------------
%% Internal functions
%%--------------------------------------------------------------------


check_body(Body) ->
case string:rstr(Body, "</html>") of
0 ->
ct:fail(did_not_receive_whole_body);
_ ->
ok
end.

1.6 Running Test Suites


1.6.1 Using the Common Test Framework
The Common Test Framework provides a high level operator interface for testing. It adds the following
features to the Erlang/OTP Test Server:

- Automatic compilation of test suites (and help modules).
- Creation of additional HTML pages for better overview.
- Single command interface for running all available tests.
- Handling of configuration files specifying target nodes and other variables.
- Mode for running multiple independent test sessions in parallel with central control and
  configuration.

1.6.2 Automatic compilation of test suites and help modules


When Common Test starts, it will automatically attempt to compile any suites included in the specified
tests. If particular suites have been specified, only those suites will be compiled. If a particular test
object directory has been specified (meaning all suites in this directory should be part of the test),
Common Test runs make:all/1 in the directory to compile the suites.
If compilation fails for one or more suites, the compilation errors are printed to tty and the
operator is asked if the test run should proceed without the missing suites, or be aborted. If the
operator chooses to proceed, it is noted in the HTML log which tests have missing suites.
Any help module (i.e. a regular Erlang module with a name not ending in “_SUITE”) that resides in the
same test object directory as a suite which is part of the test will also be automatically compiled. A
help module will not be mistaken for a test suite (unless it has a “_SUITE” name, of course). All help
modules in a particular test object directory are compiled no matter if all or only particular suites in the
directory are part of the test.

1.6.3 Running tests from the UNIX command line


The script run_test can be used for running tests from the UNIX command line, e.g.

- run_test -config <configfilenames> -dir <dirs>
- run_test -config <configfilenames> -suite <suiteswithfullpath>
- run_test -config <configfilenames> -suite <suitewithfullpath> -case <casenames>

12 Common Test
1.6: Running Test Suites

Examples:

$ run_test -config $CFGS/sys1.cfg $CFGS/sys2.cfg -dir $SYS1_TEST $SYS2_TEST
$ run_test -suite $SYS1_TEST/setup_SUITE $SYS2_TEST/config_SUITE
$ run_test -suite $SYS1_TEST/setup_SUITE -case start_stop

Other flags that may be used with run_test:

- -logdir <dir>, specifies where the HTML log files are to be written.
- -refresh_logs, refreshes the top level HTML index files.
- -vts, start web based GUI (see below).
- -shell, start interactive shell mode (see below).
- -spec <testspecs>, use test specification as input (see below).
- -allow_user_terms, allows user specific terms in a test specification (see below).
- -silent_connections [conn_types], tells Common Test to suppress printouts for specified
  connections (see below).
- -stylesheet <css_file>, points out a user HTML style sheet (see below).
- -cover <cover_cfg_file>, to perform code coverage test (see Code Coverage Analysis [page 23]).
- -event_handler <event_handlers>, to install event handlers [page 27].
- -repeat <n>, tells Common Test to repeat the tests n times (see below).
- -duration <time>, tells Common Test to repeat the tests for a duration of time (see below).
- -until <stop_time>, tells Common Test to repeat the tests until stop_time (see below).
- -force_stop, on timeout, the test run will be aborted when the current test job is finished (see
  below).

1.6.4 Running tests from the Web based GUI


The web based GUI, VTS, is started with the run_test script. From the GUI you can load config files,
and select directories, suites and cases to run. You can also state the config files, directories, suites and
cases on the command line when starting the web based GUI.

- run_test -vts
- run_test -vts -config <configfilename>
- run_test -vts -config <configfilename> -suite <suitewithfullpath> -case <casename>

From the GUI you can run tests and view the result and the logs.
Note that run_test -vts will try to open the Common Test start page in an existing web browser
window or start the browser if it is not running. Which browser should be started may be specified with
the browser start command option:

run_test -vts -browser <browser_start_cmd>

Example:

$ run_test -vts -browser ’firefox-2.0.0.3&’

Note that the browser must run as a separate OS process or VTS will hang!


If no specific browser start command is specified, netscape will be the default browser on UNIX
platforms and Internet Explorer on Windows. For the VTS mode to work properly with netscape, make
sure the netscape program in your path starts version 7!

1.6.5 Running tests from the Erlang shell


Common Test provides an Erlang API for running tests. For documentation, please see the ct manual
page.
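
As a sketch (the paths and option values here are hypothetical; the ct manual page is the authoritative
reference for the option list), a test run similar to the run_test command line examples above could be
started from the Erlang shell like this:

1> ct:run_test([{config,"cfg/sys1.cfg"}, {dir,"../sys1_test"}]).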

1.6.6 Running interactive shell mode


You can start an Erlang shell with the necessary paths and with Common Test running in an interactive
mode with the run_test script and the -shell option:

- run_test -shell
- run_test -shell -config <configfilename>

If no config file is given with the run_test command, a warning will be displayed. If Common Test has
been run from the same directory earlier, the same config file(s) will be used again. If Common Test has
not been run from this directory before, no config files will be available.
From the interactive mode all test case support functions can be executed directly from the Erlang shell.
This is an experimentation mode useful during test suite development and debugging.
If any functions using “required config data” (e.g. telnet or ftp functions) are to be called from the
Erlang shell, config data must first be required with ct:require/[1,2]. This is equivalent to a require
statement in the Test Suite Default Data [page 7] or in the Test Case Info Function [page 6].
Example:

> ct:require(a,{unix,[telnet]}).
ok
> ct:cmd(a,"ls").
{ok,["ls","file1 ...",...]}

Everything that Common Test normally prints in the test case logs will in the interactive mode be
written to a log named ctlog.html in the ct_run.<timestamp> directory. A link to this file will be
available in the file named last_interactive.html in the directory from which you executed
run_test.
If you for some reason want to exit the interactive mode, use the function ct:stop_interactive/0.
This shuts down the running ct application. Associations between configuration names and data
created with require are consequently deleted. ct:start_interactive/0 will get you back into
interactive mode, but the previous state is not restored.


1.6.7 Using test specifications


The most expressive way to specify what to test is to use a so called test specification. A test
specification is a sequence of Erlang terms. The terms may be declared in a text file or passed to the test
server at runtime as a list (see run_testspec/1 in the manual page for ct). There are two general types
of terms: configuration terms and test specification terms.
With configuration terms it is possible to import configuration data (similar to run_test -config),
specify HTML log directories (similar to run_test -logdir), give aliases to test nodes and test
directories (to make a specification easier to read and maintain), enable code coverage analysis (see the
Code Coverage Analysis [page 23] chapter) and specify event handler plugins (see the Event Handling
[page 27] chapter).
With test specification terms it is possible to state exactly which tests should run and in which order. A
test term specifies either one or more suites or one or more test cases. An arbitrary number of test terms
may be declared in sequence. A test term can also specify one or more test suites or test cases to be
skipped. Skipped suites and cases are not executed and show up in the HTML test log as SKIPPED.
Below is the test specification syntax. Test specifications can be used to run tests both in a single test
host environment and in a distributed Common Test environment. Node parameters are only relevant
in the latter (see the chapter about running Common Test in distributed mode for information). For
details on the event_handler term, see the Event Handling [page 27] chapter.
Config terms:

{node, NodeAlias, Node}.

{cover, CoverSpecFile}.
{cover, NodeRef, CoverSpecFile}.

{config, ConfigFiles}.
{config, NodeRefs, ConfigFiles}.

{alias, DirAlias, Dir}.

{logdir, LogDir}.
{logdir, NodeRefs, LogDir}.

{event_handler, EventHandlers}.
{event_handler, NodeRefs, EventHandlers}.
{event_handler, EventHandlers, InitArgs}.
{event_handler, NodeRefs, EventHandlers, InitArgs}.

Test terms:

{suites, DirRef, Suites}.
{suites, NodeRefs, DirRef, Suites}.

{cases, DirRef, Suite, Cases}.
{cases, NodeRefs, DirRef, Suite, Cases}.

{skip_suites, DirRef, Suites, Comment}.
{skip_suites, NodeRefs, DirRef, Suites, Comment}.

{skip_cases, DirRef, Suite, Cases, Comment}.
{skip_cases, NodeRefs, DirRef, Suite, Cases, Comment}.

Types:

NodeAlias = atom()
Node = node()
NodeRef = NodeAlias | Node | master
NodeRefs = all_nodes | [NodeRef] | NodeRef
CoverSpecFile = string()
ConfigFiles = string() | [string()]
DirAlias = atom()
Dir = string()
LogDir = string()
EventHandlers = atom() | [atom()]
InitArgs = [term()]
DirRef = DirAlias | Dir
Suites = atom() | [atom()] | all
Cases = atom() | [atom()] | all
Comment = string() | ""

Example:

{logdir, "/home/test/logs"}.

{config, "/home/test/t1/cfg/config.cfg"}.
{config, "/home/test/t2/cfg/config.cfg"}.
{config, "/home/test/t3/cfg/config.cfg"}.

{alias, t1, "/home/test/t1"}.
{alias, t2, "/home/test/t2"}.
{alias, t3, "/home/test/t3"}.

{suites, t1, all}.
{skip_suites, t1, [t1B_SUITE,t1D_SUITE], "Not implemented"}.
{skip_cases, t1, t1A_SUITE, [test3,test4], "Irrelevant"}.
{skip_cases, t1, t1C_SUITE, [test1], "Ignore"}.

{suites, t2, [t2B_SUITE,t2C_SUITE]}.
{cases, t2, t2A_SUITE, [test4,test1,test7]}.

{skip_suites, t3, all, "Not implemented"}.

The example specifies the following:

- The specified logdir directory will be used for storing the HTML log files (in subdirectories tagged
  with node name, date and time).
- The variables in the specified test system config files will be imported for the test.
- Aliases are given for three test system directories. The suites in this example are stored in
  “/home/test/tX/test”.
- The first test to run includes all suites for system t1. Excluded from the test are however the t1B
  and t1D suites. Also test cases test3 and test4 in t1A as well as the test1 case in t1C are excluded
  from the test.
- Secondly, the test for system t2 should run. The included suites are t2B and t2C. Included are also
  test cases test4, test1 and test7 in suite t2A. Note that the test cases will be executed in the
  specified order.
- Lastly, all suites for system t3 are to be completely skipped and this should be explicitly noted in
  the log files.

It is possible for the user to provide a test specification that includes (for Common Test) unrecognizable
terms. If this is desired, the -allow_user_terms flag should be used when starting tests with run_test.
This forces Common Test to ignore unrecognizable terms. Note that in this mode, Common Test is not
able to check the specification for errors as efficiently as if the scanner runs in default mode. If
ct:run_test/1 is used for starting the tests, the relaxed scanner mode is enabled by means of the tuple
{allow_user_terms,true}

1.6.8 Log files


As the execution of the test suites goes on, events are logged in four different ways:

- Text to the operator’s console.
- Suite related information is sent to the major log file.
- Case related information is sent to the minor log file.
- The HTML overview log file gets updated with test results.
- A link to all runs executed from a certain directory is written in the log named “all_runs.html” and
  direct links to all tests (the latest results) are written to the top level “index.html”.

Typically the operator, who may run hundreds or thousands of test cases, doesn’t want to fill the screen
with details about/from the specific test cases. By default, the operator will only see:

- A confirmation that the test has started and information about how many test cases will be
  executed in total.
- A small note about each failed test case.
- A summary of all the run test cases.
- A confirmation that the test run is complete.
- Some special information like error reports and progress reports, printouts written with
  erlang:display/1 or io:format/3 specifically addressed to somewhere other than standard io.

This is enough for the operator to know, and if he wants to dig deeper into a specific test case result,
he can do so by following the links in the HTML presentation to take a look in the major or minor log
files. The “all_runs.html” page is a practical starting point usually. It’s located in logdir and contains a
link to each test run including a quick overview (date and time, node name, number of tests, test names
and test result totals).
An “index.html” page is written for each test run (i.e. stored in the “ct_run” directory tagged with node
name, date and time). This file gives a short overview of all individual tests performed in the same test
run. The test names follow this convention:

- TopLevelDir.TestDir (all suites in TestDir executed)


- TopLevelDir.TestDir:suites (specific suites were executed)
- TopLevelDir.TestDir.Suite (all cases in Suite executed)
- TopLevelDir.TestDir.Suite:cases (specific test cases were executed)
- TopLevelDir.TestDir.Suite.Case (only Case was executed)

On the test run index page there is a link to the Common Test Framework log file, in which information
about imported configuration data and general test progress is written. This log file is useful for getting
snapshot information about the test run during execution. It can also be very helpful when analyzing
test results or debugging test suites.
On the test run index page it is noted if a test has missing suites (i.e. suites that Common Test has failed
to compile). Names of the missing suites can be found in the Common Test Framework log file.
A detailed report of the test run is stored in the major log file (textual log file). This includes test suite
and test case names, execution time, the exact reason for failure, etc. There is an HTML log file that
corresponds to this textual file. The HTML file is a summary which gives a better overview of the test
run. It also has links to each individual test case log file for quick viewing with an HTML browser.
The minor log files contain full details of every single test case, each one in a separate file. This way the
files should be easy to compare with previous test runs, even if the set of test cases changes.
Which information goes where is user configurable via the test server controller. Three threshold values
determine what comes out on screen, and in the major or minor log files. See the OTP Test Server
manual for information. The contents that go to the HTML log file are fixed however and cannot be
altered.
The log files are written continuously during a test run and links are always created initially when a test
starts. This makes it possible to follow test progress simply by refreshing pages in the HTML browser.
Statistics totals are not presented until a test is complete however.

1.6.9 HTML Style Sheets


Common Test includes the optional feature to use HTML style sheets (CSS) for customizing user
printouts. The functions in ct that print to a test case HTML log file (log/3 and pal/3) accept
Category as first argument. With this argument it’s possible to specify a category that can be mapped
to a selector in a CSS definition. This is useful especially for coloring text differently depending on the
type of (or reason for) the printout. Say you want one color for test system configuration information, a
different one for test system state information and finally one for errors detected by the test case
functions. The corresponding style sheet may look like this:

<style>
div.ct_internal { background:lightgrey; color:black }
div.default { background:lightgreen; color:black }
div.sys_config { background:blue; color:white }
div.sys_state { background:yellow; color:black }
div.error { background:red; color:white }
</style>

To install the CSS file (Common Test inlines the definition in the HTML code), the name may be
provided when executing run_test. Example:

$ run_test -dir $TEST/prog -stylesheet $TEST/styles/test_categories.css


Categories in a CSS file installed with the -stylesheet flag are on a global test level in the sense that
they can be used in any suite which is part of the test run.
It is also possible to install style sheets on a per suite and per test case basis. Example:

-module(my_SUITE).
...
suite() -> [..., {stylesheet,"suite_categories.css"}, ...].
...
my_testcase(_) ->
    ...
    ct:log(sys_config, "Test node version: ~p", [VersionInfo]),
    ...
    ct:log(sys_state, "Connections: ~p", [ConnectionInfo]),
    ...
    ct:pal(error, "Error ~p detected! Info: ~p", [SomeFault,ErrorInfo]),
    ct:fail(SomeFault).

If the style sheet is installed as in this example, the categories are private to the suite in question. They
can be used by all test cases in the suite, but can not be used by other suites. A suite private style sheet,
if specified, will be used in favour of a global style sheet (one specified with the -stylesheet flag). A
stylesheet tuple (as returned by suite/0 above) can also be returned from a test case info function. In
this case the categories specified in the style sheet can only be used in that particular test case. A test
case private style sheet is used in favour of a suite or global level style sheet.
In a tuple {stylesheet,CSSFile}, if CSSFile is specified with a path, e.g.
"$TEST/styles/categories.css", this full name will be used to locate the file. If only the file name is
specified however, e.g. “categories.css”, then the CSS file is assumed to be located in the data directory,
data_dir, of the suite. The latter usage is recommended since it is portable compared to hard coding
path names in the suite!
The Category argument in the example above may have the value (atom) sys_config (white on blue),
sys_state (black on yellow) or error (white on red).
If the Category argument is not specified, Common Test will use the CSS selector div.default for the
printout. For this reason a user supplied style sheet must include this selector. Also the selector
div.ct_internal must be included. Hence a minimal user style sheet should look like this (which is
also the default style sheet Common Test uses if no user CSS file is provided):

<style>
div.ct_internal { background:lightgrey; color:black }
div.default { background:lightgreen; color:black }
</style>

1.6.10 Repeating tests


You can order Common Test to repeat the tests you specify. You can choose to repeat tests a certain
number of times, repeat tests for a specific period of time, or repeat tests until a particular stop time is
reached. If repetition is controlled by means of time, it is also possible to specify what action Common
Test should take upon timeout. Either Common Test performs all tests in the current run before
stopping, or it stops as soon as the current test job is finished. Repetition can be activated by means of
run_test start flags, or tuples in the ct:run_test/1 option list argument. The flags (options in
parentheses) are:


- -repeat N ({repeat,N}), where N is a positive integer.
- -duration DurTime ({duration,DurTime}), where DurTime is the duration, see below.
- -until StopTime ({until,StopTime}), where StopTime is the finish time, see below.
- -force_stop ({force_stop,true})

The duration time, DurTime, is specified as HHMMSS. Example: -duration 012030 or
{duration,"012030"}, means the tests will be executed and (if time allows) repeated, until timeout
occurs after 1 h, 20 min and 30 secs. StopTime can be specified as HHMMSS and is then interpreted as a
time today (or possibly tomorrow). StopTime can also be specified as YYMoMoDDHHMMSS. Example:
-until 071001120000 or {until,"071001120000"}, which means the tests will be executed and (if
time allows) repeated, until 12 o’clock on the 1st of Oct 2007.
When timeout occurs, Common Test will never abort the test run immediately, since this might leave
the system under test in an undefined, and possibly bad, state. Instead Common Test will finish the
current test job, or the complete test run, before stopping. The latter is the default behaviour. The
force_stop flag/option tells Common Test to stop as soon as the current test job is finished. Note that
since Common Test always finishes off the current test job or test session, the time specified with
duration or until is never definitive!
Log files from every single repeated test run are saved in normal Common Test fashion (see above).
Common Test may later support an optional feature to only store the last (and possibly the first) set of
logs of repeated test runs, but for now the user must be careful not to run out of disk space if tests are
repeated during long periods of time.
Note that for each test run that is part of a repeated session, information about the particular test run is
printed in the Common Test Framework Log. There you can read the repetition number, remaining
time, etc.
Example 1:

$ run_test -dir $TEST_ROOT/to1 $TEST_ROOT/to2 -duration 001000 -force_stop

Here the suites in test directory to1, followed by the suites in to2, will be executed in one test run. A
timeout event will occur after 10 minutes. As long as there is time left, Common Test will repeat the
test run (i.e. starting over with the to1 test). When the timeout occurs, Common Test will stop as soon
as the current job is finished (because of the force_stop flag). As a result, the specified test run might
be aborted after the to1 test and before the to2 test.
Example 2:

$ date
Fri Sep 28 15:00:00 MEST 2007

$ run_test -dir $TEST_ROOT/to1 $TEST_ROOT/to2 -until 160000

Here the same test run as in the example above will be executed (and possibly repeated). In this
example, however, the timeout will occur after 1 hour and when that happens, Common Test will finish
the entire test run before stopping (i.e. the to1 and to2 test will always both be executed in the same
test run).
Example 3:

$ run_test -dir $TEST_ROOT/to1 $TEST_ROOT/to2 -repeat 5

Here the test run, including both the to1 and the to2 test, will be repeated 5 times.

1.6.11 Silent Connections


The protocol handling processes in Common Test, implemented by ct_telnet, ct_ftp etc, do verbose
printing to the test case logs. This can be switched off by means of the -silent_connections flag:

run_test -silent_connections [conn_types]

where conn_types specifies telnet, ftp, rpc and/or snmp.


Example:

run_test ... -silent_connections telnet ftp

switches off logging for telnet and ftp connections.

run_test ... -silent_connections

switches off logging for all connection types.


Basic and important information, such as the opening and closing of a connection, fatal communication
errors and reconnection attempts, will always be printed even if logging has been suppressed for the
connection type in question. However, operations such as sending and receiving data may be performed silently.
It is possible to also specify silent connections in a test suite. This is accomplished by returning a
tuple, {silent_connections,ConnTypes}, in the suite/0 or test case info list. If ConnTypes is a list of
atoms (telnet, ftp, rpc and/or snmp), output for any corresponding connections will be suppressed.
Full logging is by default enabled for any connection of a type not specified in ConnTypes. Hence, if
ConnTypes is the empty list, logging is enabled for all connections.
The silent connections setting returned from a test case info function overrides, for the test case in
question, any setting made with suite/0 (which is the setting used for all cases in the suite). Example:

-module(my_SUITE).
...
suite() -> [..., {silent_connections,[telnet,ftp]}, ...].
...
my_testcase1() ->
    [{silent_connections,[ftp]}].
my_testcase1(_) ->
    ...
my_testcase2(_) ->
    ...

In this example, suite/0 tells Common Test to suppress printouts from telnet and ftp connections.
This is valid for all test cases. However, my_testcase1/0 specifies that for this test case, only ftp should
be silent. The result is that my_testcase1 will get telnet info (if any) printed in the log, but not ftp info.
my_testcase2 will get no info from either connection printed.
The -silent_connections flag (or silent_connections tagged tuple in the call to ct:run_test/1)
overrides any settings in the test suite.
Note that in the current Common Test version, the silent connections feature only works for telnet
connections. Support for other connection types will be added in future Common Test versions.

1.7 Config Files


1.7.1 General
The Common Test framework uses configuration files to describe data related to a test or a test plant.
The configuration data makes it possible to change properties without changing the test program itself.
Configuration data can for example be:

- Addresses to the test plant or other instruments
- Filenames for files needed by the test
- Program names for programs that shall be run by the test
- Any other variable that is needed by the test

1.7.2 Syntax
A config file can contain any number of elements of the type:

{Key,Value}.

where

Key = atom()
Value = term() | [{Key,Value}]

1.7.3 Reading config values


From inside a test suite, one can require a variable to exist before a test case can be run, and read the
value of a variable.
require is a statement that can be part of the test suite default data [page 7] or test case info function
[page 6]. If the required variable is not available, the test is skipped. There is also a function
ct:require/[1,2] which can be called from inside a test suite in order to check if a specific variable is
available. The return value from the function must then be checked and appropriate actions taken
explicitly by the test case.
A require statement should look like this: {require,Required} or {require,Name,Required}. The
arguments Name and Required are the same as the arguments to ct:require/[1,2] which are
described in the reference manual for ct [page 50].
To read the value of a config variable, use the function get_config/[1,2] which is also described in the
reference manual for ct [page 50].

1.7.4 Examples
A config file for using telnet to communicate with a unix host could look like this:

{unix,[{telnet,"durin.du.uab"},
       {username,"someuser"},
       {password,"somepassword"}]}.
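As an illustration, a test suite could require the unix variable above and read its values like this (a minimal sketch; the test case name and the default value are our own):

suite() -> [{require,unix}].

check_host(_Config) ->
    %% read a subvalue of the required unix variable
    Hostname = ct:get_config({unix,telnet}),
    %% supply a default in case the variable is undefined
    Username = ct:get_config({unix,username}, "guest"),
    {Hostname,Username}.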

1.8 Code Coverage Analysis


1.8.1 General
Although Common Test was created primarily for the purpose of black box testing, nothing prevents it
from working perfectly as a white box testing tool as well. This is especially true when the application
to test is written in Erlang. Then the test ports are easily realized by means of Erlang function calls.
When white box testing an Erlang application, it is useful to be able to measure the code coverage of
the test. Common Test provides simple access to the OTP Cover tool for this purpose. Common Test
handles all necessary communication with the Cover tool (starting, compiling, analysing, etc). All the
Common Test user needs to do is to specify the extent of the code coverage analysis.

1.8.2 Usage
To specify what modules should be included in the code coverage test, you provide a cover specification
file. Using this file you can point out specific modules or specify directories that contain modules which
should all be included in the analysis. You can also, in the same fashion, specify modules that should be
excluded from the analysis.
If you are testing a distributed Erlang application, it is likely that code you want included in the code
coverage analysis gets executed on an Erlang node other than the one Common Test is running on. If
this is the case you need to specify these other nodes in the cover specification file or add them
dynamically to the code coverage set of nodes. See the ct_cover page in the reference manual for
details on the latter.
In the cover specification file you can also specify the required level of the code coverage analysis:
details or overview. In details mode, you get a coverage overview page, showing per module
and total coverage percentages, as well as one HTML file printed for each module included in the
analysis that shows exactly what parts of the code have been executed during the test. In overview
mode, only the code coverage overview page gets printed.
Note: Currently, for Common Test to be able to print code coverage HTML files for the modules
included in the analysis, the source code files of these modules must be located in the same directory as
the corresponding .beam files. This is a limitation that will be removed later.
You can choose to export and import code coverage data between tests. If you specify the name of an
export file in the cover specification file, Common Test will export collected coverage data to this file at
the end of the test. You may similarly specify that previously exported data should be imported and
included in the analysis for a test (you can specify multiple import files). This way it is possible to
analyse total code coverage without necessarily running all tests at once. Note that even if you run
separate tests in one test run, code coverage data will not be passed on from one test to another unless
you specify an export file for Common Test to use for this purpose.
To activate the code coverage support, you simply specify the name of the cover specification file as you
start Common Test. You do this by using the -cover flag with run_test. Example:

$ run_test -dir $TESTOBJS/db -cover $TESTOBJS/db/config/db.coverspec

You may also pass the cover specification file name in a call to ct:run_test/1, by adding a
{cover,CoverSpec} tuple to the Opts argument. Also, you can of course enable code coverage in your
test specifications (read more in the chapter about using test specifications [page 15]).

1.8.3 The cover specification file


These are the terms allowed in a cover specification file:

%% List of Nodes on which cover will be active during test.
%% Nodes = [atom()]
{nodes, Nodes}.

%% Files with previously exported cover data to include in analysis.
%% CoverDataFiles = [string()]
{import, CoverDataFiles}.

%% Cover data file to export from this session.
%% CoverDataFile = string()
{export, CoverDataFile}.

%% Cover analysis level.
%% Level = details | overview
{level, Level}.

%% Directories to include in cover.
%% Dirs = [string()]
{incl_dirs, Dirs}.

%% Directories, including subdirectories, to include.
{incl_dirs_r, Dirs}.

%% Specific modules to include in cover.
%% Mods = [atom()]
{incl_mods, Mods}.

%% Directories to exclude in cover.
{excl_dirs, Dirs}.

%% Directories, including subdirectories, to exclude.
{excl_dirs_r, Dirs}.

%% Specific modules to exclude in cover.
{excl_mods, Mods}.

The incl_dirs_r and excl_dirs_r terms tell Common Test to search the given directories recursively
and include or exclude any module found during the search. The incl_dirs and excl_dirs terms
result in a non-recursive search for modules (i.e. only modules found in the given directories are
included or excluded).
Note: Directories containing Erlang modules that are to be included in a code coverage test must exist
in the code server path, or the cover tool will fail to recompile the modules. (It is not sufficient to
specify these directories in the cover specification file for Common Test).
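Putting some of these terms together, a small cover specification file could look like this (the paths and module names are hypothetical):

%% my_app.coverspec
{level, details}.
{incl_dirs, ["/home/test/my_app/ebin"]}.
{excl_mods, [my_app_test_lib]}.
{export, "/home/test/cover_data/my_app.coverdata"}.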

1.8.4 Logging
To view the result of a code coverage test, follow the “Coverage log” link on the test suite results page.
This takes you to the code coverage overview page. If you have successfully performed a detailed
coverage analysis, you find links to each individual module coverage page here.

1.9 Using Common Test for Large Scale Testing


1.9.1 General
Large scale automated testing requires running multiple independent test sessions in parallel. This is
accomplished by running a number of Common Test nodes on one or more hosts, testing different
target systems. Configuring, starting and controlling the test nodes independently can be a cumbersome
operation. To aid this kind of automated large scale testing, CT offers a master test node component,
CT Master, that handles central configuration and control in a system of distributed CT nodes.
The CT Master server runs on one dedicated Erlang node and uses distributed Erlang to communicate
with any number of CT test nodes, each hosting a regular CT server. Test specifications are used as
input to specify what to test on which test nodes, using what configuration.
The CT Master server writes progress information to HTML log files similarly to the regular CT server.
The logs contain test statistics and links to the log files written by each independent CT server.
The CT Master API is exported by the ct_master module.

1.9.2 Usage
CT Master requires all test nodes to be on the same network and share a common file system. As of this
date, CT Master can not start test nodes automatically. The nodes must have been started in advance
for CT Master to be able to start test sessions on them.
Tests are started by calling:
ct_master:run(TestSpecs) or ct_master:run(TestSpecs, InclNodes, ExclNodes)
TestSpecs is either the name of a test specification file (string) or a list of test specifications. In case of a
list, the specifications will be handled (and the corresponding tests executed) in sequence. An element
in a TestSpecs list can also be a list of test specifications. The specifications in such a list will be merged
into one combined specification prior to test execution. For example:
ct_master:run(["ts1","ts2",["ts3","ts4"]])
means first the tests specified by "ts1" will run, then the tests specified by "ts2" and finally the tests
specified by both "ts3" and "ts4".
The InclNodes argument to run/3 is a list of node names. The run/3 function runs the tests in
TestSpecs just like run/1 but will also take any test in TestSpecs that’s not explicitly tagged with a
particular node name and execute it on the nodes listed in InclNodes. By using run/3 this way it is
possible to use any test specification, with or without node information, in a large scale test
environment! ExclNodes is a list of nodes that should be excluded from the test. I.e. tests that have
been specified in the test specification to run on a particular node will not be performed if that node is
at runtime listed in ExclNodes.
If CT Master fails initially to connect to any of the test nodes specified in a test specification or in the
InclNodes list, the operator will be prompted with the option to either start over again (after manually
checking the status of the node(s) in question), to run without the missing nodes, or to abort the
operation.

When tests start, CT Master prints information to the console about the nodes that are involved. CT
Master also reports when tests finish, successfully or unsuccessfully. If the connection is lost to a node,
the test on that node is considered finished. CT Master will not attempt to reestablish contact with the
failing node. To get the current status of the test nodes at any time, call the function:
ct_master:progress()
To stop one or more tests, use:
ct_master:abort() (stop all) or ct_master:abort(Nodes)
For detailed information about the ct_master API, please see the manual page for this module.

1.9.3 Test Specifications


The test specifications used as input to CT Master are fully compatible with the specifications used as
input to the regular CT server. The syntax is described in the Running Test Suites [page 15] chapter.
All test specification terms can have a NodeRefs element. This element specifies which node or nodes a
configuration operation or a test is to be executed on. NodeRefs is defined as:
NodeRefs = all_nodes | [NodeRef] | NodeRef
where
NodeRef = NodeAlias | node() | master
A NodeAlias (atom()) is used in a test specification as a reference to a node name (so the actual node
name only needs to be declared once). The alias is declared with a node term:
{node, NodeAlias, NodeName}
If NodeRefs has the value all_nodes, the operation or test will be performed on all given test nodes.
(Declaring a term without a NodeRefs element actually has the same effect). If NodeRefs has the value
master, the operation is only performed on the CT Master node (namely setting the log directory or
installing an event handler).
Consider the example in the Running Test Suites [page 15] chapter, now extended with node
information and intended to be executed by the CT Master:

{node, node1, ct_node@host_x}.
{node, node2, ct_node@host_y}.

{logdir, master, "/home/test/master_logs"}.
{logdir, "/home/test/logs"}.

{config, node1, "/home/test/t1/cfg/config.cfg"}.
{config, node2, "/home/test/t2/cfg/config.cfg"}.
{config, "/home/test/t3/cfg/config.cfg"}.

{alias, t1, "/home/test/t1"}.
{alias, t2, "/home/test/t2"}.
{alias, t3, "/home/test/t3"}.

{suites, node1, t1, all}.
{skip_suites, node1, t1, [t1B_SUITE,t1D_SUITE], "Not implemented"}.
{skip_cases, node1, t1, t1A_SUITE, [test3,test4], "Irrelevant"}.
{skip_cases, node1, t1, t1C_SUITE, [test1], "Ignore"}.

{suites, node2, t2, [t2B_SUITE,t2C_SUITE]}.
{cases, node2, t2, t2A_SUITE, [test4,test1,test7]}.

{skip_suites, t3, all, "Not implemented"}.

This example specifies the same tests as the original example. But now if started with a call to
ct_master:run(TestSpecName), the t1 test will be executed on node ct_node@host_x (node1), the t2
test on ct_node@host_y (node2) and the t3 test on both node1 and node2. The t1 config file will only
be read on node1 and the t2 config file only on node2, while the t3 config file will be read on both
node1 and node2. Both test nodes will write log files to the same directory. (The CT Master node will
however use a different log directory than the test nodes).
If the test session is instead started with a call to ct_master:run(TestSpecName, [ct_node@host_z],
[ct_node@host_x]), the result is that the t1 test does not run on ct_node@host_x (or any other node)
while the t3 test runs on ct_node@host_y and ct_node@host_z.
A nice feature is that a test specification that includes node information can still be used as input to the
regular Common Test server (as described in the Running Test Suites [page 15] chapter). The result is
that any test specified to run on a node with the same name as the Common Test node in question
(typically ct@somehost if started with the run_test script), will be performed. Tests without explicit
node association will always be performed too of course!

Note:
It is recommended that absolute paths are used for log directories, config files and test directory
aliases in the test specifications so that current working directory settings are not important.

1.10 Event Handling


1.10.1 General
It is possible for the operator of a Common Test system to receive event notifications continuously
during a test session execution. It is reported e.g. when a test case starts and stops, what the current
count of succeeded, failed and skipped cases is, etc. This information can be used for different purposes
such as logging progress and results in other formats than HTML, saving statistics to a database for
report generation, and test system supervision.
Common Test has a framework for event handling which is based on the OTP event manager concept
and gen_event behaviour. When the Common Test server starts, it spawns an event manager. During
test execution the manager gets a notification from the server every time something of potential interest
happens. Any event handler plugged into the event manager can match on event notifications, take
some action or pass the information on. Event handlers are Erlang modules implemented by the
Common Test user according to the gen_event behaviour (see the OTP User's Guide and Reference
Manual for more information).
As already described, a Common Test server always starts an event manager. The server also plugs in a
default event handler which has as its only purpose to relay notifications to a globally registered CT
Master event manager (if a CT Master server is actually running in the system). The CT Master also
spawns an event manager at startup. Event handlers plugged into this manager will receive the events
from all the test nodes as well as information from the CT Master server itself.

1.10.2 Usage
Event handlers may be plugged in by means of test specification [page 15] terms:
{event_handler, EventHandlers}, or
{event_handler, EventHandlers, InitArgs}, or
{event_handler, NodeRefs, EventHandlers}, or
{event_handler, NodeRefs, EventHandlers, InitArgs}
EventHandlers is a list of modules (each having gen_event behaviour). An event handler module must
be precompiled and its location must be specified in the Erlang runtime system code path. Before a test
session starts, the init function of each plugged in event handler is called (with the InitArgs list as
argument or [] if no start arguments are given).
To plug a handler into the CT Master event manager, specify master as the node in NodeRefs.
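For example, the following test specification term would plug one handler into the CT Master event manager (the handler module name is hypothetical):

{event_handler, master, [my_master_evh], []}.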
For an event handler to be able to match on events, the module must include the header file
ct_event.hrl. An event is a record with the following definition:
#event{name, node, data}
name is the label (type) of the event. node is the name of the node the event has originated from (only
relevant for CT Master event handlers). data is data specific for the particular event.
Events:

#event.name = test_start
#event.data = {StartTime,LogDir}
StartTime = {date(), time()}
LogDir = string()

#event.name = test_done
#event.data = EndTime
EndTime = {date(), time()}

#event.name = start_make
#event.data = Dir
Dir = string()

#event.name = finished_make
#event.data = Dir
Dir = string()

#event.name = tc_start
#event.data = {Suite,Case}
Suite = atom()
Case = atom()

#event.name = tc_done
#event.data = {Suite,Case,Result}
Suite = atom()
Case = atom()
Result = ok | {skipped,Reason} | {failed,Reason}
Reason = term()

#event.name = tc_user_skip
#event.data = {Suite,Case,Comment}
Suite = atom()
Case = atom()
Comment = string()

#event.name = tc_auto_skip
#event.data = {Suite,Case,Comment}
Suite = atom()
Case = atom()
Comment = string()

#event.name = test_stats
#event.data = {Ok,Failed,Skipped}
Ok = Failed = Skipped = integer()

#event.name = start_logging
#event.data = CtRunDir
CtRunDir = string()

#event.name = stop_logging
#event.data = []

#event.name = start_write_file
#event.data = FullNameFile
FullNameFile = string()

#event.name = finished_write_file
#event.data = FullNameFile
FullNameFile = string()

The events are also documented in ct_event.erl. This module may serve as an example of what an
event handler for the CT event manager can look like.
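As a further illustration, a minimal event handler that only prints test case results might look like this (a sketch; the module name, the printout format and the exact include path are our own assumptions):

-module(my_evh1).
-behaviour(gen_event).
%% the include path may differ between installations
-include_lib("common_test/include/ct_event.hrl").

init(_InitArgs) ->
    {ok, []}.

%% print the result of every finished test case,
%% let all other events pass by unnoticed
handle_event(#event{name = tc_done, data = {Suite,Case,Result}}, State) ->
    io:format("~w:~w -> ~p~n", [Suite,Case,Result]),
    {ok, State};
handle_event(_Event, State) ->
    {ok, State}.

handle_call(_Request, State) ->
    {ok, ignored, State}.

handle_info(_Info, State) ->
    {ok, State}.

terminate(_Arg, _State) ->
    ok.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.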
Besides the event_handler test specification terms, it is also possible to install event handlers by means
of the run_test flag -event_handler, e.g:

$ run_test -dir my_testobj -event_handler my_evh1 my_evh2

Note that it is not possible to specify start arguments to the event handlers when using the run_test
script. You may however pass along start arguments if you use the ct:run_test/1 function. An
event_handler tuple in the argument Opts has the following definition (see also ct:run_test/1 in the
reference manual):

{event_handler,EventHandlers}

EventHandlers = EH | [EH]
EH = atom() | {atom(),InitArgs} | {[atom()],InitArgs}
InitArgs = [term()]

Example:

1> ct:run_test([{dir,"my_testobj"},{event_handler,[my_evh1,{my_evh2,[node()]}]}]).

This will install two event handlers for the my_testobj test. Event handler my_evh1 is started with [] as
argument to the init function. Event handler my_evh2 is started with the name of the current node in
the init argument list.

1.11 Dependencies between Test Cases and Suites


1.11.1 General
When creating test suites, it is strongly recommended to not create dependencies between test cases, i.e.
letting test cases depend on the result of previous test cases. There are various reasons for this, for
example:

- It makes it impossible to run test cases individually.
- It makes it impossible to run test cases in a different order.
- It makes debugging very difficult (since a fault could be the result of a problem in a different test case than the one failing).
- There are no good and explicit ways to declare dependencies, so it may be very difficult to see and understand these in test suite code and in test logs.
- Extending, restructuring and maintaining test suites with test case dependencies is difficult.

There are often sufficient means to work around the need for test case dependencies. Generally, the
problem is related to the state of the system under test (SUT). The action of one test case may alter the
state of the system and for some other test case to run properly, the new state must be known.
Instead of passing data between test cases, it is recommended that the test cases read the state from the
SUT and perform assertions (i.e. let the test case run if the state is as expected, otherwise reset or fail)
and/or use the state to set variables necessary for the test case to execute properly. Common actions can
often be implemented as library functions for test cases to call to set the SUT in a required state. (Such
common actions may of course also be separately tested if necessary, to ensure they are working as
expected). It is sometimes also possible, but not always desirable, to group tests together in one test
case, i.e. let a test case perform a “scenario” test (a test that consists of subtests).
Consider for example a server application under test. The following functionality is to be tested:

- Starting the server.
- Configuring the server.
- Connecting a client to the server.
- Disconnecting a client from the server.
- Stopping the server.

There are obvious dependencies between the listed functions. We can't configure the server if it hasn't
first been started, we can't connect a client until the server has been properly configured, etc. If we
want to have one test case for each of the functions, we might be tempted to try to always run the test
cases in the stated order and carry possible data (identities, handles, etc) between the cases and
therefore introduce dependencies between them. To avoid this we could consider starting and stopping
the server for every test. We would implement the start and stop action as common functions that may
be called from init_per_testcase and end_per_testcase. (We would of course test the start and stop
functionality separately). The configuration could perhaps also be implemented as a common function,

maybe grouped with the start function. Finally the testing of connecting and disconnecting a client may
be grouped into one test case. The resulting suite would look something like this:

-module(my_server_SUITE).

-compile(export_all).
-include_lib("ct.hrl").

%%% init and end functions...

suite() -> [{require,my_server_cfg}].

init_per_testcase(start_and_stop, Config) ->
    Config;

init_per_testcase(config, Config) ->
    [{server_pid,start_server()} | Config];

init_per_testcase(_, Config) ->
    ServerPid = start_server(),
    configure_server(ServerPid),
    [{server_pid,ServerPid} | Config].

end_per_testcase(start_and_stop, _) ->
    ok;

end_per_testcase(_, Config) ->
    ServerPid = ?config(server_pid, Config),
    stop_server(ServerPid).

%%% test cases...

all() -> [start_and_stop, config, connect_and_disconnect].

%% test that starting and stopping works
start_and_stop(_) ->
    ServerPid = start_server(),
    stop_server(ServerPid).

%% configuration test
config(Config) ->
    ServerPid = ?config(server_pid, Config),
    configure_server(ServerPid).

%% test connecting and disconnecting client
connect_and_disconnect(Config) ->
    ServerPid = ?config(server_pid, Config),
    {ok,SessionId} = my_server:connect(ServerPid),
    ok = my_server:disconnect(ServerPid, SessionId).

%%% common functions...

start_server() ->
    {ok,ServerPid} = my_server:start(),
    ServerPid.

stop_server(ServerPid) ->
    ok = my_server:stop(ServerPid),
    ok.

configure_server(ServerPid) ->
    ServerCfgData = ct:get_config(my_server_cfg),
    ok = my_server:configure(ServerPid, ServerCfgData),
    ok.

1.11.2 Saving configuration data


There might be situations where it is impossible, or at least infeasible, to implement independent test
cases. Maybe it is simply not possible to read the SUT state. Maybe resetting the SUT is impossible and
it takes much too long to restart the system. In situations where test case dependency is necessary, CT
offers a structured way to carry data from one test case to the next. The same mechanism may also be
used to carry data from one test suite to the next.
The mechanism for passing data is called save_config. The idea is that one test case (or suite) may save
the current value of Config - or any list of key-value tuples - so that it can be read by the next executing
test case (or test suite). The configuration data is not saved permanently but can only be passed from
one case (or suite) to the next.
To save Config data, return the tuple:
{save_config,ConfigList}
from end_per_testcase or from the main test case function. To read data saved by a previous test case,
use the config macro with a saved_config key:
{Saver,ConfigList} = ?config(saved_config, Config)
Saver (atom()) is the name of the previous test case (where the data was saved). The config macro
may be used to extract particular data also from the recalled ConfigList. It is strongly recommended
that Saver is always matched to the expected name of the saving test case. This way problems due to
restructuring of the test suite may be avoided. Also, it makes the dependency more explicit and the test
suite easier to read and maintain.
To pass data from one test suite to another, the same mechanism is used. The data should be saved by
the end_per_suite function and read by init_per_suite in the suite that follows. When passing data
between suites, Saver carries the name of the test suite.
Example:

-module(server_b_SUITE).
-compile(export_all).
-include_lib("ct.hrl").

%%% init and end functions...

init_per_suite(Config) ->
    %% read config saved by previous test suite
    {server_a_SUITE,OldConfig} = ?config(saved_config, Config),
    %% extract server identity (comes from server_a_SUITE)
    ServerId = ?config(server_id, OldConfig),
    SessionId = connect_to_server(ServerId),
    [{ids,{ServerId,SessionId}} | Config].

end_per_suite(Config) ->
    %% save config for server_c_SUITE (session id and server id)
    {save_config,Config}.

%%% test cases...

all() -> [allocate, deallocate].

allocate(Config) ->
    {ServerId,SessionId} = ?config(ids, Config),
    {ok,Handle} = allocate_resource(ServerId, SessionId),
    %% save handle for deallocation test
    NewConfig = [{handle,Handle}],
    {save_config,NewConfig}.

deallocate(Config) ->
    {ServerId,SessionId} = ?config(ids, Config),
    {allocate,OldConfig} = ?config(saved_config, Config),
    Handle = ?config(handle, OldConfig),
    ok = deallocate_resource(ServerId, SessionId, Handle).

It is also possible to save Config data from a test case that is to be skipped. To accomplish this, return
the following tuple:
{skip_and_save,Reason,ConfigList}
The result will be that the test case is skipped with Reason printed to the log file (as described in
previous chapters), and ConfigList is saved for the next test case. ConfigList may be read by means
of ?config(saved_config, Config), as described above. skip_and_save may also be returned from
init_per_suite, in which case the saved data can be read by init_per_suite in the suite that follows.
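For example, a test case could decide at runtime to skip itself but still pass its config on (a sketch; the helper functions resource_available/0 and allocate_resource/0 are hypothetical):

allocate(Config) ->
    case resource_available() of
        false ->
            %% skip this case, but still pass the config on
            {skip_and_save,"no resource available",Config};
        true ->
            {ok,Handle} = allocate_resource(),
            {save_config,[{handle,Handle}]}
    end.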

1.11.3 Sequences
It is possible that test cases depend on each other so that if one case fails, the following test(s) should
not be executed. Typically, if the save_config facility is used and a test case that is expected to save
data crashes, the following case can not run. CT offers a way to declare such dependencies, called
sequences.
A sequence of test cases is declared by means of the function sequences/0. This function should return
a list of tuples on the format: {SeqTag,TestCases}. SeqTag is an atom that identifies one particular
sequence. TestCases is a list of test case names. Sequences must be included in the list that all/0
returns. They are declared as: {sequence,SeqTag}.
For example, if we would like to make sure that if allocate in server_b_SUITE (above) crashes,
deallocate is skipped, we may declare the sequence:

sequences() -> [{alloc_and_dealloc,[alloc,dealloc]}].

Let's also assume the suite contains the test case get_resource_status, which is independent of the
other two cases. The all function could look like this:

all() -> [{sequence,alloc_and_dealloc}, get_resource_status].

If alloc succeeds, dealloc is also executed. If alloc fails however, dealloc is not executed but
marked as SKIPPED in the HTML log. get_resource_status will run no matter what happens to the
alloc and dealloc cases.
Test cases in a sequence will be executed in order until they have all succeeded or until one case fails. If
one fails, all following cases in the sequence are skipped. The cases in the sequence that have succeeded
up to that point are reported as successful in the log. An arbitrary number of sequence tuples may be
specified. Example:

sequences() -> [{scenarioA, [testA1, testA2]},
                {scenarioB, [testB1, testB2, testB3]}].

all() -> [test1,
          test2,
          {sequence,scenarioA},
          test3,
          {sequence,scenarioB},
          test4].

Note:
It is not possible to execute a test case which is part of a sequence as a regular (stand alone) test case.
It is also not possible to use the same test case in multiple sequences. Remember that you can always
work around these limitations if necessary by means of help functions common for the test cases in
question.

1.12 Some thoughts about testing


1.12.1 Goals
It’s not possible to prove that a program is correct by testing. On the contrary, it has been formally
proven that it is impossible to prove programs in general by testing. Theoretical program proofs or plain
examination of code may be viable options for those that wish to certify that a program is correct. The
test server, as it is based on testing, cannot be used for certification. Its intended use is instead to (cost
effectively) find bugs. A successful test suite is one that reveals a bug. If a test suite results in Ok, then
we know very little that we didn’t know before.

1.12.2 What to test?


There are many kinds of test suites. Some concentrate on calling every function or command (in the
documented way) in a certain interface. Some others do the same, but use all kinds of illegal
parameters, and verify that the server stays alive and rejects the requests with reasonable error codes.
Some test suites simulate an application (typically consisting of a few modules of an application), some
try to do tricky requests in general, some test suites even test internal functions with the help of special
load modules on target.
Another interesting category of test suites are the ones that check that fixed bugs don't reoccur. When a
bugfix is introduced, a test case that checks for that specific bug should be written and submitted to the
affected test suite(s).
Aim for finding bugs. Write whatever test has the highest probability of finding a bug, now or in
the future. Concentrate more on the critical parts. Bugs in critical subsystems are a lot more expensive
than others.
Aim for functionality testing rather than implementation details. Implementation details change quite
often, and the test suites should be long lived. Often implementation details differ on different
platforms and versions. If implementation details have to be tested, try to factor them out into separate
test cases. Later on these test cases may be rewritten, or just skipped.
Also, aim for testing everything once, no less, no more. It’s not effective having every test case fail just
because one function in the interface changed.

Common Test Reference Manual

Short Summaries

- Application common_test [page 44] – A framework for automatic testing of a variety of target nodes.
- Application run_test [page 48] – Shell script used for starting Common Test from the Unix command line.
- Erlang Module ct [page 50] – Main user interface for the Common Test framework.
- Erlang Module ct_cover [page 58] – Common Test Framework code coverage support module.
- Erlang Module ct_ftp [page 59] – Common Test specific layer on top of the OTP ftp client, ftp.erl.
- Erlang Module ct_master [page 62] – Distributed test execution control for Common Test.
- Erlang Module ct_rpc [page 65] – Common Test specific layer on Erlang/OTP rpc.
- Erlang Module ct_snmp [page 68] – Common Test specific layer on top of the OTP snmp application.
- Erlang Module ct_telnet [page 73] – Common Test specific layer on top of the telnet client ct_telnet_client.erl.
- Erlang Module unix_telnet [page 78] – Callback module for ct_telnet for talking telnet to a unix host.

common_test
The following functions are exported:

- Module:all() -> TestCases | {skip,Reason}
  [page 44] Returns the list of all test cases in the module.
- Module:sequences() -> Sequences
  [page 45] Returns a list of test cases that depend on each other.
- Module:suite() -> [Info]
  [page 45] Function for providing default test suite data.
- Module:init_per_suite(Config) -> NewConfig | {skip,Reason} | {skip_and_save,Reason,Config}
  [page 45] Test suite initialization.

- Module:end_per_suite(Config) -> void() | {save_config,Config}
  [page 45] Test suite finalization.
- Module:init_per_testcase(TestCase, Config) -> NewConfig | {skip,Reason}
  [page 46] Test case initialization.
- Module:end_per_testcase(TestCase, Config) -> void() | {save_config,Config}
  [page 46] Test case finalization.
- Module:testcase() -> [Info]
  [page 46] Test case info function.
- Module:testcase(Config) -> ok | {skip,Reason} | {comment,Comment} | {save_config,Config} | {skip_and_save,Reason,Config} | exit()
  [page 47] A test case.

run_test
No functions are exported.

ct
The following functions are exported:
- abort_current_testcase(Reason) -> ok | {error, no_testcase_running}
  [page 50] When calling this function, the currently executing test case will be aborted.
- comment(Comment) -> void()
  [page 50] Print the given Comment in the comment field of the table on the test suite result page.
- fail(Reason) -> void()
  [page 51] Terminate a test case with the given error Reason.
- get_config(Required) -> Value
  [page 51] Equivalent to get_config(Required, undefined).
- get_config(Required, Default) -> Value
  [page 51] Get the value of config data.
- get_status() -> TestStatus | {error, Reason}
  [page 51] Returns status of ongoing tests.
- get_target_name(Handle) -> {ok, TargetName} | {error, Reason}
  [page 52] Return the name of the target that the given connection belongs to.
- install(Opts) -> ok | {error, Reason}
  [page 52] Install config files and event handlers.
- listenv(Telnet) -> [Env]
  [page 52] Performs the listenv command on the given telnet connection and returns the result as a list of Key-Value pairs.
- log(Format) -> ok
  [page 52] Equivalent to log(default, Format, []).
- log(X1, X2) -> ok
  [page 52] Equivalent to log(Category, Format, Args).

- log(Category, Format, Args) -> ok
  [page 52] Printout from a testcase to the log.
- pal(Format) -> ok
  [page 53] Equivalent to pal(default, Format, []).
- pal(X1, X2) -> ok
  [page 53] Equivalent to pal(Category, Format, Args).
- pal(Category, Format, Args) -> ok
  [page 53] Print and log from a testcase.
- parse_table(Data) -> {Heading, Table}
  [page 53] Parse the printout from an SQL table and return a list of tuples.
- print(Format) -> ok
  [page 53] Equivalent to print(default, Format, []).
- print(X1, X2) -> term()
  [page 53] Equivalent to print(Category, Format, Args).
- print(Category, Format, Args) -> ok
  [page 53] Printout from a testcase to the console.
- require(Required) -> ok | {error, Reason}
  [page 54] Check if the required configuration is available.
- require(Name, Required) -> ok | {error, Reason}
  [page 54] Check if the required configuration is available, and give it a name.
- run(TestDirs) -> Result
  [page 55] Run all testcases in all suites in the given directories.
- run(TestDir, Suite) -> Result
  [page 55] Run all testcases in the given suite.
- run(TestDir, Suite, Cases) -> Result
  [page 55] Run the given testcase(s).
- run_test(Opts) -> Result
  [page 55] Run tests as specified by the combination of options in Opts.
- run_testspec(TestSpec) -> Result
  [page 56] Run test specified by TestSpec.
- start_interactive() -> ok
  [page 56] Start CT in interactive mode.
- step(TestDir, Suite, Case) -> Result
  [page 56] Step through a test case with the debugger.
- stop_interactive() -> ok
  [page 56] Exit the interactive mode.
- testcases(TestDir, Suite) -> Testcases | {error, Reason}
  [page 56] Returns all testcases in the specified suite.
- userdata(TestDir, Suite) -> SuiteUserData | {error, Reason}
  [page 57] Returns any data specified with the tag userdata in the list of tuples returned from Suite:suite/0.
- userdata(TestDir, Suite, Case) -> TCUserData | {error, Reason}
  [page 57] Returns any data specified with the tag userdata in the list of tuples returned from Suite:Case/0.

ct_cover
The following functions are exported:

- add_nodes(Nodes) -> {ok, StartedNodes} | {error, Reason}
  [page 58] Add nodes to current cover test (only works if cover support is active!).
- remove_nodes(Nodes) -> ok | {error, Reason}
  [page 58] Remove nodes from current cover test.

ct_ftp
The following functions are exported:

- cd(Connection, Dir) -> ok | {error, Reason}
  [page 59] Change directory on remote host.
- close(Connection) -> ok | {error, Reason}
  [page 59] Close the FTP connection.
- delete(Connection, File) -> ok | {error, Reason}
  [page 59] Delete a file on remote host.
- get(Name, RemoteFile, LocalFile) -> ok | {error, Reason}
  [page 59] Open an ftp connection and fetch a file from the remote host.
- ls(Connection, Dir) -> {ok, Listing} | {error, Reason}
  [page 60] List the directory Dir.
- open(Name) -> {ok, Handle} | {error, Reason}
  [page 60] Open an FTP connection to the specified node.
- put(Name, LocalFile, RemoteFile) -> ok | {error, Reason}
  [page 60] Open an ftp connection and send a file to the remote host.
- recv(Connection, RemoteFile) -> ok | {error, Reason}
  [page 60] Fetch a file over FTP.
- recv(Connection, RemoteFile, LocalFile) -> ok | {error, Reason}
  [page 60] Fetch a file over FTP.
- send(Connection, LocalFile) -> ok | {error, Reason}
  [page 61] Send a file over FTP.
- send(Connection, LocalFile, RemoteFile) -> ok | {error, Reason}
  [page 61] Send a file over FTP.
- type(Connection, Type) -> ok | {error, Reason}
  [page 61] Change file transfer type.

ct_master
The following functions are exported:

- abort() -> ok
  [page 62] Stops all running tests.
- abort(Nodes) -> ok
  [page 62] Stops tests on specified nodes.
- progress() -> [{Node, Status}]
  [page 62] Returns test progress.

- run(TestSpecs) -> ok
  [page 62] Equivalent to run(TestSpecs, false, [], []).
- run(TestSpecs, InclNodes, ExclNodes) -> ok
  [page 62] Equivalent to run(TestSpecs, false, InclNodes, ExclNodes).
- run(TestSpecs, AllowUserTerms, InclNodes, ExclNodes) -> ok
  [page 63] Tests are spawned on the nodes as specified in TestSpecs.
- run_on_node(TestSpecs, Node) -> ok
  [page 63] Equivalent to run_on_node(TestSpecs, false, Node).
- run_on_node(TestSpecs, AllowUserTerms, Node) -> ok
  [page 63] Tests are spawned on Node according to TestSpecs.
- run_test(Node, Opts) -> ok
  [page 63] Tests are spawned on Node using ct:run_test/1.

ct_rpc
The following functions are exported:

- app_node(App, Candidates) -> NodeName
  [page 65] From a set of candidate nodes determines which of them is running the application App.
- app_node(App, Candidates, FailOnBadRPC) -> NodeName
  [page 65] Same as app_node/2, only the FailOnBadRPC argument will determine if the search for a candidate node should stop or not if badrpc is received at some point.
- app_node(App, Candidates, FailOnBadRPC, Cookie) -> NodeName
  [page 65] Same as app_node/2, only the FailOnBadRPC argument will determine if the search for a candidate node should stop or not if badrpc is received at some point.
- call(Node, Module, Function, Args) -> term() | {badrpc, Reason}
  [page 66] Same as call(Node, Module, Function, Args, infinity).
- call(Node, Module, Function, Args, TimeOut) -> term() | {badrpc, Reason}
  [page 66] Evaluates apply(Module, Function, Args) on the node Node.
- call(Node, Module, Function, Args, TimeOut, Cookie) -> term() | {badrpc, Reason}
  [page 66] Evaluates apply(Module, Function, Args) on the node Node.
- cast(Node, Module, Function, Args) -> ok
  [page 66] Evaluates apply(Module, Function, Args) on the node Node.
- cast(Node, Module, Function, Args, Cookie) -> ok
  [page 67] Evaluates apply(Module, Function, Args) on the node Node.

ct_snmp
The following functions are exported:

- get_next_values(Agent, Oids, ConfName) -> SnmpReply
  [page 70] Issues a synchronous snmp get next request.
- get_values(Agent, Oids, ConfName) -> SnmpReply
  [page 70] Issues a synchronous snmp get request.

- load_mibs(Mibs) -> ok | {error, Reason}
  [page 70] Load the mibs into the agent 'snmp_master_agent'.
- register_agents(ConfName, ManagedAgents) -> ok | {error, Reason}
  [page 70] Explicitly instruct the manager to handle this agent.
- register_users(ConfName, Users) -> ok | {error, Reason}
  [page 71] Register the manager entity (=user) responsible for specific agent(s).
- register_usm_users(ConfName, UsmUsers) -> ok | {error, Reason}
  [page 71] Explicitly instruct the manager to handle this USM user.
- set_info(Config) -> [{Agent, OldVarsAndVals, NewVarsAndVals}]
  [page 71] Returns a list of all successful set requests performed in the test case in reverse order.
- set_values(Agent, VarsAndVals, ConfName, Config) -> SnmpReply
  [page 71] Issues a synchronous snmp set request.
- start(Config, ConfName) -> ok
  [page 71] Starts an snmp manager and/or agent.
- stop(Config) -> ok
  [page 72] Stops the snmp manager and/or agent and removes all files created.
- unregister_agents(ConfName) -> ok | {error, Reason}
  [page 72] Removes information added when calling register_agents/2.
- unregister_users(ConfName) -> ok | {error, Reason}
  [page 72] Removes information added when calling register_users/2.
- update_usm_users(ConfName, UsmUsers) -> ok | {error, Reason}
  [page 72] Alters information added when calling register_usm_users/2.

ct_telnet
The following functions are exported:

- close(Connection) -> ok | {error, Reason}
  [page 74] Close the telnet connection and stop the process managing it.
- cmd(Connection, Cmd) -> {ok, Data} | {error, Reason}
  [page 74] Equivalent to cmd(Connection, Cmd, DefaultTimeout).
- cmd(Connection, Cmd, Timeout) -> {ok, Data} | {error, Reason}
  [page 74] Send a command via telnet and wait for prompt.
- cmdf(Connection, CmdFormat, Args) -> {ok, Data} | {error, Reason}
  [page 74] Equivalent to cmdf(Connection, CmdFormat, Args, DefaultTimeout).
- cmdf(Connection, CmdFormat, Args, Timeout) -> {ok, Data} | {error, Reason}
  [page 74] Send a telnet command and wait for prompt (uses a format string and list of arguments to build the command).
- cont_log(Str, Args) -> term()
  [page 74]
- end_log() -> term()
  [page 74]
- expect(Connection, Patterns) -> term()
  [page 74] Equivalent to expect(Connections, Patterns, []).

- expect(Connection, Patterns, Opts) -> {ok, Match} | {ok, MatchList, HaltReason} | {error, Reason}
  [page 74] Get data from telnet and wait for the expected pattern.
- get_data(Connection) -> {ok, Data} | {error, Reason}
  [page 76] Get all data which has been received by the telnet client since the last command was sent.
- open(Name) -> {ok, Handle} | {error, Reason}
  [page 76] Equivalent to open(Name, telnet).
- open(Name, ConnType) -> {ok, Handle} | {error, Reason}
  [page 76] Open a telnet connection to a node.
- open(Name, ConnType, TargetMod) -> {ok, Handle} | {error, Reason}
  [page 76] Equivalent to open(Name, ConnType, TargetMod, []).
- open(Name, ConnType, TargetMod, Extra) -> {ok, Handle} | {error, Reason}
  [page 76] Open a telnet connection to the specified target.
- send(Connection, Cmd) -> ok | {error, Reason}
  [page 76] Send a telnet command and return immediately.
- sendf(Connection, CmdFormat, Args) -> ok | {error, Reason}
  [page 77] Send a telnet command and return immediately (uses a format string and a list of arguments to build the command).

unix_telnet
No functions are exported.

Common Test
Application

The Common Test framework is an environment for writing and executing automatic
and semi-automatic test cases. The framework is based on the underlying
implementation as is used with great success in the Visual Test Server (AXD301) and
by Erlang/OTP. The main function in this framework is the test server which runs all
the test cases.
In brief the test server supports:

- Running multiple test suites
- Logging of the events in a test suite, on both suite and case levels
- HTML presentation of test suite results
- HTML presentation of test suite code
- Support functions for test suite authors
- Step by step execution

The following sections describe the callback functions the test server expects to find in
a test suite. For more details see the Common Test User's Guide [page 4].

TEST CASE CALLBACK FUNCTIONS


The following functions define the callback interface for a test suite.

Exports

Module:all() -> TestCases | {skip,Reason}

Types:
- TestCases = [atom() | {sequence,SeqName}]
- Reason = term()
- SeqName = atom()

MANDATORY
This function must return the list of all test cases in the test suite module. Each test
case is represented by an atom, the name of the test case function.
If {skip,Reason} is returned, all test cases in the module will be skipped, and the
Reason will be printed on the HTML result page.
For details on sequences, see Dependencies between Test Cases and Suites [page 33] in
the User's Guide.
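For example (test case and sequence names are hypothetical):

all() -> [my_test_case, {sequence,alloc_and_dealloc}].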

Module:sequences() -> Sequences

Types:
- Sequences = [{SeqName,Testcases}]
- SeqName = atom()
- Testcases = [atom()]

OPTIONAL
See Dependencies between Test Cases and Suites [page 33] in the User's Guide for
details.

Module:suite() -> [Info]

Types:
- Info = {timetrap,Time} | {require,Required} | {require,Name,Required} | {userdata,UserData} | {silent_connections,Conns} | {stylesheet,CSSFile}
- Time = MilliSec | {seconds,integer()} | {minutes,integer()} | {hours,integer()}
- MilliSec = integer()
- Required = Key | {Key,SubKeys}
- Key = atom()
- SubKeys = SubKey | [SubKey]
- SubKey = atom()
- Name = atom()
- UserData = term()
- Conns = [atom()]
- CSSFile = string()

OPTIONAL
Use this function to set default data for the test suite.

Module:init_per_suite(Config) -> NewConfig | {skip,Reason} | {skip_and_save,Reason,Config}

Types:
- Config = NewConfig = [{Key, Value}]
- Key = atom()
- Value = term()
- Reason = term()

OPTIONAL
This function is called as the first test case in the suite. It typically contains initialization
which is common for all test cases in the suite, and which shall only be done once. The
Config parameter is the configuration which can be modified here. Whatever is
returned from this function is given as Config to all test cases in the suite. If
{skip,Reason} is returned, all test cases in the suite will be skipped and Reason printed
in the overview log for the suite.
For information on save_config and skip_and_save, please see Dependencies between
Test Cases and Suites [page 32] in the User's Guide.

Module:end_per_suite(Config) -> void() | {save_config,Config}

Types:
- Config = [{Key, Value}]
- Key = atom()
- Value = term()

OPTIONAL
This function is called as the last test case in the suite. It is meant to be used for
cleaning up after init_per_suite. For information on save_config, please see
Dependencies between Test Cases and Suites [page 32] in the User's Guide.

Module:init_per_testcase(TestCase, Config) -> NewConfig | {skip,Reason}

Types:
- Config = NewConfig = [{Key, Value}]
- Key = atom()
- Value = term()
- Reason = term()

OPTIONAL
This function is called before each test case. The TestCase argument is the name of the
test case, and Config is the configuration which can be modified here. Whatever is
returned from this function is given as Config to the test case. If {skip,Reason} is
returned, the test case will be skipped and Reason printed in the overview log for the
suite.

Module:end_per_testcase(TestCase, Config) -> void() | {save_config,Config}

Types:
- Config = [{Key, Value}]
- Key = atom()
- Value = term()

OPTIONAL
This function is called after each test case, and can be used to clean up whatever the test
case has done. The return value is ignored. For information on save_config, please see
Dependencies between Test Cases and Suites [page 32] in the User's Guide.

Module:testcase() -> [Info]

Types:
- Info = {timetrap,Time} | {require,Required} | {require,Name,Required} | {userdata,UserData} | {silent_connections,Conns}
- Time = MilliSec | {seconds,integer()} | {minutes,integer()} | {hours,integer()}
- MilliSec = integer()
- Required = Key | {Key,SubKeys}
- Key = atom()
- SubKeys = SubKey | [SubKey]
- SubKey = atom()
- Name = atom()
- UserData = term()
- Conns = [atom()]

OPTIONAL
This is the test case info function. It shall return a list of tagged tuples that specify
various properties regarding the test case.
The timetrap tag sets the maximum time the test case is allowed to take. If the
timetrap time is exceeded, the test case fails with reason timetrap_timeout.
init_per_testcase and end_per_testcase are included in the timetrap time.
The require tag specifies configuration variables that are required by the test case. If
the required configuration variables are not found in any of the configuration files, the
test case is skipped. For more information about the 'require' functionality, see the
reference manual for the function ct:require/[1,2].
If timetrap and/or require is not set, the default values specified in the suite/0
return list will be used.
Apart from the above mentioned tags, there is no limitation on which tags can be
specified in the test case info function.
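For example, a test case info function could look like this (the values are hypothetical):

my_test_case() ->
    [{timetrap,{seconds,30}},
     {require,my_server_cfg}].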

Module:testcase(Config) -> ok | {skip,Reason} | {comment,Comment} | {save_config,Config} | {skip_and_save,Reason,Config} | exit()

Types:
- Config = [{Key, Value}]
- Key = atom()
- Value = term()

MANDATORY
This is the implementation of a test case. Here you must call the functions you want to
test, and do whatever you need to check the result. If something fails, make sure the
process crashes or call ct:fail/[0,1] (which also will cause the process to crash).
Elements from the Config parameter can be read with the ?config macro. The config
macro is defined in ct.hrl.
You can return {skip,Reason} if you decide not to run the test case after all. Reason
will then be printed in the 'Comment' field on the HTML result page.
You can return {comment,Comment} if you wish to print some information in the
'Comment' field on the HTML result page.
If the function returns anything else, it is considered a success.
For information on save_config and skip_and_save, please see Dependencies between
Test Cases and Suites [page 32] in the User's Guide.
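A test case built on these return values could look like this (a sketch; the server module and its functions are hypothetical):

my_test_case(Config) ->
    case ?config(server_pid, Config) of
        undefined ->
            {skip,"server not running"};
        Pid ->
            ok = my_server:ping(Pid),
            {comment,"server responded to ping"}
    end.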

The run_test shell script

Application

The run_test script is automatically generated as Common Test is installed (please see
the Installation chapter in the Common Test User's Guide for more information). The
script accepts a number of different start flags. Some flags trigger run_test to start the
Common Test application and pass on data to it. Some flags start an Erlang node
prepared for running Common Test in a particular mode.
run_test also accepts Erlang emulator flags. These are used when run_test calls erl to
start the Erlang node (making it possible to e.g. add directories to the code server path,
change the cookie on the node, start additional applications, etc).
If run_test is called without parameters, it prints all valid start flags to stdout.
Run tests from command line

run_test [-dir TestDir1 TestDir2 .. TestDirN] | [-suite Suite1 Suite2 .. SuiteN]
  [-step]
  [-config ConfigFile1 ConfigFile2 .. ConfigFileN]
  [-logdir LogDir]
  [-silent_connections [ConnType1 ConnType2 .. ConnTypeN]]
  [-stylesheet CSSFile]
  [-cover CoverCfgFile]
  [-event_handler EvHandler1 EvHandler2 .. EvHandlerN]
  [-repeat N [-force_stop]] |
  [-duration HHMMSS [-force_stop]] |
  [-until [YYMoMoDD]HHMMSS [-force_stop]]
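
For example, a typical invocation might look like this (the paths are hypothetical):

run_test -dir ./my_app/test -logdir ./logs -config ./cfg/nodes.cfg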
Run tests using test specification

run_test -spec TestSpec1 TestSpec2 .. TestSpecN
  [-config ConfigFile1 ConfigFile2 .. ConfigFileN]
  [-logdir LogDir]
  [-allow_user_terms]
  [-silent_connections [ConnType1 ConnType2 .. ConnTypeN]]
  [-stylesheet CSSFile]
  [-cover CoverCfgFile]
  [-event_handler EvHandler1 EvHandler2 .. EvHandlerN]
  [-repeat N [-force_stop]] |
  [-duration HHMMSS [-force_stop]] |
  [-until [YYMoMoDD]HHMMSS [-force_stop]]
Run tests in web based GUI

run_test -vts [-browser Browser]
  [-config ConfigFile1 ConfigFile2 .. ConfigFileN]
  [-dir TestDir1 TestDir2 .. TestDirN] |
  [-suite Suite [-case Case]]

Refresh the HTML index files

run_test -refresh_logs [-logdir LogDir]

Run CT in interactive mode

run_test -shell [-config ConfigFile1 ConfigFile2 ... ConfigFileN]

Start an Erlang node with a given name

run_test -ctname NodeName

Start a Common Test Master node

run_test -ctmaster

See also
Please read the Running Test Suites [page 12] chapter in the Common Test User's Guide for information about the meaning of the different start flags.
ct
Erlang Module

Main user interface for the Common Test framework.

This module implements the command line interface for running tests and some basic functions for common test case issues such as configuration and logging.

Test Suite Support Macros
The config macro is defined in ct.hrl. This macro should be used to retrieve information from the Config variable sent to all test cases. It is used with two arguments, where the first is the name of the configuration variable you wish to retrieve, and the second is the Config variable supplied to the test case.
Possible configuration variables include:

  data_dir - Data file directory.
  priv_dir - Scratch file directory.
  Whatever is added by init_per_suite/1 or init_per_testcase/2 in the test suite.
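
For example, a test case can retrieve its scratch directory like this (a sketch; the file name scratch.txt is hypothetical):

my_test_case(Config) ->
    PrivDir = ?config(priv_dir, Config),
    ok = file:write_file(filename:join(PrivDir, "scratch.txt"), <<"temporary data">>).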
DATA TYPES
handle() = handle() (see module ct_gen_conn) | term()
  The identity of a specific connection.
target_name() = var_name()
  The name of a target.
var_name() = atom()
  A variable name which is specified when ct:require/2 is called, e.g. ct:require(mynodename,{node,[telnet]}).
Exports

abort_current_testcase(Reason) -> ok | {error, no_testcase_running}

Types:
  Reason = term()
When calling this function, the currently executing test case will be aborted. It is the user's responsibility to know for sure which test case is currently executing. The function is therefore only safe to call from a function which has been called (or synchronously invoked) by the test case.
Reason, the reason for aborting the test case, is printed in the test case log.
comment(Comment) -> void()

Types:
  Comment = term()
Print the given Comment in the comment field of the table on the test suite result page.
If called several times, only the last comment is printed. The comment printed by comment/1 is also overwritten by the return value {comment,Comment} or by the function fail/1 (which prints Reason as a comment).
fail(Reason) -> void()

Types:
  Reason = term()
Terminate a test case with the given error Reason.
get_config(Required) -> Value

Equivalent to get_config(Required, undefined) [page 51].

get_config(Required, Default) -> Value

Types:
  Required = KeyOrName | {KeyOrName, SubKey}
  KeyOrName = atom()
  SubKey = atom()
  Default = term()
  Value = term() | Default
Get the value of config data.
This function returns the value of the requested config element.
Example, given the following config file:

{unix,[{telnet,IpAddr},
       {username,Username},
       {password,Password}]}.

get_config(unix,Default) -> [{telnet,IpAddr}, {username,Username}, {password,Password}]
get_config({unix,telnet},Default) -> IpAddr
get_config({unix,ftp},Default) -> Default
get_config(unknownkey,Default) -> Default

If you want to access a config variable which has been given a name by require/2, the name must be used instead of the key when reading the value:

require(myhost,unix) -> ok
get_config(myhost,Default) -> [{telnet,IpAddr}, {username,Username}, {password,Password}]

See also: get_config/1 [page 51], require/1 [page 54], require/2 [page 54].
get_status() -> TestStatus | {error, Reason}

Types:
  TestStatus = term()
  Reason = term()
Returns the status of ongoing tests.
get_target_name(Handle) -> {ok, TargetName} | {error, Reason}

Types:
  Handle = handle()
  TargetName = target_name()
Return the name of the target that the given connection belongs to.
install(Opts) -> ok | {error, Reason}

Types:
  Opts = [Opt]
  Opt = {config, ConfigFiles} | {event_handler, Modules}
  ConfigFiles = [ConfigFile]
  ConfigFile = string()
  Modules = [atom()]
Install config files and event handlers.
Run this function once before the first test.
Example:

install([{config,["config_node.ctc","config_user.ctc"]}]).

Note that this function is automatically run by the run_test script.
listenv(Telnet) -> [Env]

Types:
  Telnet = term()
  Env = {Key, Value}
  Key = string()
  Value = string()
Performs the listenv command on the given telnet connection and returns the result as a list of Key-Value pairs.
log(Format) -> ok

Equivalent to log(default, Format, []) [page 53].

log(X1, X2) -> ok

Types:
  X1 = Category | Format
  X2 = Format | Args
Equivalent to log(Category, Format, Args) [page 53].

log(Category, Format, Args) -> ok

Types:
  Category = atom()
  Format = string()
  Args = list()
Printout from a test case to the log.
This function is meant for printing directly from a test case (i.e. not from within the CT framework) to the test log.
Default Category is default and default Args is [].
pal(Format) -> ok

Equivalent to pal(default, Format, []) [page 53].

pal(X1, X2) -> ok

Types:
  X1 = Category | Format
  X2 = Format | Args
Equivalent to pal(Category, Format, Args) [page 53].

pal(Category, Format, Args) -> ok

Types:
  Category = atom()
  Format = string()
  Args = list()
Print and log from a test case.
This function is meant for printing from a test case, both in the log and on the console.
Default Category is default and default Args is [].
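For example (a sketch; Bin and Phase stand for values bound earlier in the test case):

ct:log("received ~w bytes", [byte_size(Bin)])
ct:pal(info, "test phase ~w done", [Phase])
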
parse_table(Data) -> {Heading, Table}

Types:
  Data = [string()]
  Heading = tuple()
  Table = [tuple()]
Parse the printout from an SQL table and return a list of tuples.
The printout to parse would typically be the result of a select command in SQL. The returned Table is a list of tuples, where each tuple is a row in the table.
Heading is a tuple of strings representing the headings of each column in the table.
print(Format) -> ok

Equivalent to print(default, Format, []) [page 54].

print(X1, X2) -> term()

Equivalent to print(Category, Format, Args) [page 54].

print(Category, Format, Args) -> ok

Types:
  Category = atom()
  Format = string()
  Args = list()
Printout from a test case to the console.
This function is meant for printing from a test case on the console.
Default Category is default and default Args is [].
require(Required) -> ok | {error, Reason}

Types:
  Required = Key | {Key, SubKeys}
  Key = atom()
  SubKeys = SubKey | [SubKey]
  SubKey = atom()
Check if the required configuration is available.
Example: require the variable myvar:

ok = ct:require(myvar)

In this case the config file must at least contain:

{myvar,Value}.

Example: require the variable myvar with subvariable sub1:

ok = ct:require({myvar,sub1})

In this case the config file must at least contain:

{myvar,[{sub1,Value}]}.

See also: get_config/1 [page 51], get_config/2 [page 51], require/2 [page 54].
require(Name, Required) -> ok | {error, Reason}

Types:
  Name = atom()
  Required = Key | {Key, SubKeys}
  Key = atom()
  SubKeys = SubKey | [SubKey]
  SubKey = atom()
Check if the required configuration is available, and give it a name.
If the requested data is available, the main entry will be marked as allocated. An allocated element can only be used if the correct name is given. This means that to read the value of the element with get_config/1,2, you need to provide the Name instead of the Key.
Example: Require one node with a telnet connection and an ftp connection. Name the node a:

ok = ct:require(a,{node,[telnet,ftp]}).

All references to this node must then use the node name. E.g. you can fetch a file over ftp like this:

ok = ct:ftp_get(a,RemoteFile,LocalFile).

For this to work, the config file must at least contain:

{node,[{telnet,IpAddr},
       {ftp,IpAddr}]}.

See also: get_config/1 [page 51], get_config/2 [page 51], require/1 [page 54].
run(TestDirs) -> Result

Types:
  TestDirs = TestDir | [TestDir]
Run all test cases in all suites in the given directories.
See also: run/3 [page 55].

run(TestDir, Suite) -> Result

Run all test cases in the given suite.
See also: run/3 [page 55].

run(TestDir, Suite, Cases) -> Result

Types:
  TestDir = string()
  Suite = atom()
  Cases = atom() | [atom()]
  Result = [TestResult] | {error, Reason}
Run the given test case(s).
Requires that ct:install/1 has been run first.
Suite (*_SUITE.erl) files must be stored in TestDir or TestDir/test. All suites will be compiled when the test is run.
run_test(Opts) -> Result

Types:
  Opts = [OptTuples]
  OptTuples = {config, CfgFiles} | {dir, TestDirs} | {suite, Suites} | {testcase, Cases} |
              {'spec', TestSpecs} | {allow_user_terms, Bool} | {logdir, LogDir} |
              {silent_connections, Conns} | {cover, CoverSpecFile} | {event_handler, EventHandlers} |
              {repeat, N} | {duration, DurTime} | {until, StopTime} | {force_stop, Bool}
  CfgFiles = [string()] | string()
  TestDirs = [string()] | string()
  Suites = [string()] | string()
  Cases = [atom()] | atom()
  TestSpecs = [string()] | string()
  LogDir = string()
  EventHandlers = EH | [EH]
  EH = atom() | {atom(), InitArgs} | {[atom()], InitArgs}
  InitArgs = [term()]
  Conns = all | [atom()]
  CoverSpecFile = string()
  N = integer()
  DurTime = string(HHMMSS)
  StopTime = string(YYMoMoDDHHMMSS) | string(HHMMSS)
  Result = [TestResult] | {error, Reason}
Run tests as specified by the combination of options in Opts. The options are the same as those used with the run_test script. Note that here a TestDir can be used to point out the path to a Suite. Note also that the option testcase corresponds to the -case option in the run_test script. Configuration files specified in Opts will be installed automatically at startup.
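For example (a sketch; the directories and the config file are hypothetical):

ct:run_test([{dir,"./my_app/test"},
             {logdir,"./logs"},
             {config,["./cfg/nodes.cfg"]}]).
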
run_testspec(TestSpec) -> Result

Types:
  TestSpec = [term()]
Run the test specified by TestSpec. The terms are the same as those used in test specification files.
start_interactive() -> ok

Start CT in interactive mode.
From this mode all test case support functions can be executed directly from the Erlang shell. The interactive mode can also be started from the unix command line with run_test -shell [-config File...].
If any functions using "required config data" (e.g. telnet or ftp functions) are to be called from the Erlang shell, config data must first be required with ct:require/2.
Example:

> ct:require(a,{unix,[telnet]}).
ok
> ct:cmd(a,"ls").
{ok,["ls","file1 ...",...]}
step(TestDir, Suite, Case) -> Result

Types:
  Case = atom()
Step through a test case with the debugger.
See also: run/3 [page 55].

stop_interactive() -> ok

Exit the interactive mode.
See also: start_interactive/0 [page 56].
testcases(TestDir, Suite) -> Testcases | {error, Reason}

Types:
  TestDir = string()
  Suite = atom()
  Testcases = list()
  Reason = term()
Returns all test cases in the specified suite.
userdata(TestDir, Suite) -> SuiteUserData | {error, Reason}

Types:
  TestDir = string()
  Suite = atom()
  SuiteUserData = [term()]
  Reason = term()
Returns any data specified with the tag userdata in the list of tuples returned from Suite:suite/0.

userdata(TestDir, Suite, Case) -> TCUserData | {error, Reason}

Types:
  TestDir = string()
  Suite = atom()
  Case = atom()
  TCUserData = [term()]
  Reason = term()
Returns any data specified with the tag userdata in the list of tuples returned from Suite:Case/0.
ct_cover
Erlang Module

Common Test Framework code coverage support module.

This module exports help functions for performing code coverage analysis.

Exports

add_nodes(Nodes) -> {ok, StartedNodes} | {error, Reason}

Types:
  Nodes = [atom()]
  StartedNodes = [atom()]
  Reason = cover_not_running | not_main_node
Add nodes to the current cover test (only works if cover support is active!). To have effect, this function should be called from init_per_suite/1 before any actual tests are performed.
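
For example (a sketch; the node names are hypothetical):

init_per_suite(Config) ->
    {ok,_Started} = ct_cover:add_nodes(['node1@host','node2@host']),
    Config.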
remove_nodes(Nodes) -> ok | {error, Reason}

Types:
  Nodes = [atom()]
  Reason = cover_not_running | not_main_node
Remove nodes from the current cover test. Call this function to stop the cover test on nodes previously added with add_nodes/1. Results on the remote node are transferred to the Common Test node.
ct_ftp
Erlang Module

Common Test specific layer on top of the OTP ftp client, ftp.erl.

DATA TYPES
connection() = handle() | target_name() (see module ct)
handle() = handle() (see module ct_gen_conn)
  Handle for a specific ftp connection.
Exports

cd(Connection, Dir) -> ok | {error, Reason}

Types:
  Connection = connection()
  Dir = string()
Change directory on the remote host.

close(Connection) -> ok | {error, Reason}

Types:
  Connection = connection()
Close the FTP connection.

delete(Connection, File) -> ok | {error, Reason}

Types:
  Connection = connection()
  File = string()
Delete a file on the remote host.
get(Name, RemoteFile, LocalFile) -> ok | {error, Reason}

Types:
  Name = target_name()
  RemoteFile = string()
  LocalFile = string()
Open an ftp connection and fetch a file from the remote host.
RemoteFile and LocalFile must be absolute paths.
The config file must be as for put/3.
See also: put/3 [page 60].
ls(Connection, Dir) -> {ok, Listing} | {error, Reason}

Types:
  Connection = connection()
  Dir = string()
  Listing = string()
List the directory Dir.

open(Name) -> {ok, Handle} | {error, Reason}

Types:
  Name = target_name() (see module ct)
  Handle = handle()
Open an FTP connection to the specified node.
put(Name, LocalFile, RemoteFile) -> ok | {error, Reason}

Types:
  Name = target_name()
  LocalFile = string()
  RemoteFile = string()
Open an ftp connection and send a file to the remote host.
LocalFile and RemoteFile must be absolute paths.
If the target host is a "special" node, the ftp address must be specified in the config file like this:

{node,[{ftp,IpAddr}]}.

If the target host is something else, e.g. a unix host, the config file must also include the username and password (both strings):

{unix,[{ftp,IpAddr},
       {username,Username},
       {password,Password}]}.
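
For example (a sketch; the target name myhost and the paths are hypothetical, and myhost must first have been required and configured as above):

ok = ct_ftp:put(myhost,"/tmp/data.bin","/home/tester/data.bin")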
recv(Connection, RemoteFile) -> ok | {error, Reason}

Fetch a file over FTP.
The file will get the same name on the local host.
See also: recv/3 [page 61].

recv(Connection, RemoteFile, LocalFile) -> ok | {error, Reason}

Types:
  Connection = connection()
  RemoteFile = string()
  LocalFile = string()
Fetch a file over FTP.
The file will be named LocalFile on the local host.

send(Connection, LocalFile) -> ok | {error, Reason}

Send a file over FTP.
The file will get the same name on the remote host.
See also: send/3 [page 61].

send(Connection, LocalFile, RemoteFile) -> ok | {error, Reason}

Types:
  Connection = connection()
  LocalFile = string()
  RemoteFile = string()
Send a file over FTP.
The file will be named RemoteFile on the remote host.

type(Connection, Type) -> ok | {error, Reason}

Types:
  Connection = connection()
  Type = ascii | binary
Change the file transfer type.
ct_master
Erlang Module

Distributed test execution control for Common Test.

This module exports functions for running Common Test nodes on multiple hosts in parallel.

Exports

abort() -> ok

Stops all running tests.

abort(Nodes) -> ok

Types:
  Nodes = atom() | [atom()]
Stops tests on the specified nodes.
progress() -> [{Node, Status}]

Types:
  Node = atom()
  Status = finished_ok | ongoing | aborted | {error, Reason}
  Reason = term()
Returns test progress. If Status is ongoing, tests are running on the node and have not yet finished.

run(TestSpecs) -> ok

Types:
  TestSpecs = string() | [SeparateOrMerged]
Equivalent to run(TestSpecs, false, [], []) [page 63].
run(TestSpecs, InclNodes, ExclNodes) -> ok

Types:
  TestSpecs = string() | [SeparateOrMerged]
  SeparateOrMerged = string() | [string()]
  InclNodes = [atom()]
  ExclNodes = [atom()]
Equivalent to run(TestSpecs, false, InclNodes, ExclNodes) [page 63].

run(TestSpecs, AllowUserTerms, InclNodes, ExclNodes) -> ok

Types:
  TestSpecs = string() | [SeparateOrMerged]
  SeparateOrMerged = string() | [string()]
  AllowUserTerms = bool()
  InclNodes = [atom()]
  ExclNodes = [atom()]
Tests are spawned on the nodes as specified in TestSpecs. Each specification in TestSpecs will be handled separately. It is however possible to also specify a list of specifications that should be merged into one before the tests are executed. Any test without a particular node specification will also be executed on the nodes in InclNodes. Nodes in the ExclNodes list will be excluded from the test.
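
For example (a sketch; the specification file and node names are hypothetical):

ct_master:run("all_tests.spec", ['ct1@host1','ct2@host2'], []).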
run_on_node(TestSpecs, Node) -> ok

Types:
  TestSpecs = string() | [SeparateOrMerged]
  SeparateOrMerged = string() | [string()]
  Node = atom()
Equivalent to run_on_node(TestSpecs, false, Node) [page 63].

run_on_node(TestSpecs, AllowUserTerms, Node) -> ok

Types:
  TestSpecs = string() | [SeparateOrMerged]
  SeparateOrMerged = string() | [string()]
  AllowUserTerms = bool()
  Node = atom()
Tests are spawned on Node according to TestSpecs.
run_test(Node, Opts) -> ok

Types:
  Node = atom()
  Opts = [OptTuples]
  OptTuples = {config, CfgFiles} | {dir, TestDirs} | {suite, Suites} | {testcase, Cases} |
              {'spec', TestSpecs} | {allow_user_terms, Bool} | {logdir, LogDir} |
              {event_handler, EventHandlers} | {silent_connections, Conns} | {cover, CoverSpecFile}
  CfgFiles = string() | [string()]
  TestDirs = string() | [string()]
  Suites = atom() | [atom()]
  Cases = atom() | [atom()]
  TestSpecs = string() | [string()]
  LogDir = string()
  EventHandlers = EH | [EH]
  EH = atom() | {atom(), InitArgs} | {[atom()], InitArgs}
  InitArgs = [term()]
  Conns = all | [atom()]
Tests are spawned on Node using ct:run_test/1.
ct_rpc
Erlang Module

Common Test specific layer on Erlang/OTP rpc.

Exports

app_node(App, Candidates) -> NodeName

Types:
  App = atom()
  Candidates = [NodeName]
  NodeName = atom()
From a set of candidate nodes, determines which of them is running the application App. If none of the candidate nodes is running the application, the function will make the test case calling this function fail. This function is the same as calling app_node(App, Candidates, true).

app_node(App, Candidates, FailOnBadRPC) -> NodeName

Types:
  App = atom()
  Candidates = [NodeName]
  NodeName = atom()
  FailOnBadRPC = true | false
Same as app_node/2, only the FailOnBadRPC argument will determine if the search for a candidate node should stop or not if badrpc is received at some point.

app_node(App, Candidates, FailOnBadRPC, Cookie) -> NodeName

Types:
  App = atom()
  Candidates = [NodeName]
  NodeName = atom()
  FailOnBadRPC = true | false
  Cookie = atom()
Same as app_node/2, only the FailOnBadRPC argument will determine if the search for a candidate node should stop or not if badrpc is received at some point. The cookie on the client node will be set to Cookie for this rpc operation (use to match the server node cookie).
call(Node, Module, Function, Args) -> term() | {badrpc, Reason}

Same as call(Node, Module, Function, Args, infinity).

call(Node, Module, Function, Args, TimeOut) -> term() | {badrpc, Reason}

Types:
  Node = NodeName | {Fun, FunArgs}
  Fun = fun()
  FunArgs = term()
  NodeName = atom()
  Module = atom()
  Function = atom()
  Args = [term()]
  Reason = timeout | term()
Evaluates apply(Module, Function, Args) on the node Node. Returns whatever Function returns, or {badrpc, Reason} if the remote procedure call fails. If Node is {Fun, FunArgs}, applying Fun to FunArgs should return a node name.
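For example (a sketch; SomeNode stands for a node name bound earlier in the test case):

Apps = ct_rpc:call(SomeNode, application, which_applications, [])
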
call(Node, Module, Function, Args, TimeOut, Cookie) -> term() | {badrpc, Reason}

Types:
  Node = NodeName | {Fun, FunArgs}
  Fun = fun()
  FunArgs = term()
  NodeName = atom()
  Module = atom()
  Function = atom()
  Args = [term()]
  Reason = timeout | term()
  Cookie = atom()
Evaluates apply(Module, Function, Args) on the node Node. Returns whatever Function returns, or {badrpc, Reason} if the remote procedure call fails. If Node is {Fun, FunArgs}, applying Fun to FunArgs should return a node name. The cookie on the client node will be set to Cookie for this rpc operation (use to match the server node cookie).

cast(Node, Module, Function, Args) -> ok

Types:
  Node = NodeName | {Fun, FunArgs}
  Fun = fun()
  FunArgs = term()
  NodeName = atom()
  Module = atom()
  Function = atom()
  Args = [term()]
Evaluates apply(Module, Function, Args) on the node Node. No response is delivered, and the process which makes the call is not suspended until the evaluation is completed, as in the case of call/[3,4]. If Node is {Fun, FunArgs}, applying Fun to FunArgs should return a node name.
cast(Node, Module, Function, Args, Cookie) -> ok

Types:
  Node = NodeName | {Fun, FunArgs}
  Fun = fun()
  FunArgs = term()
  NodeName = atom()
  Module = atom()
  Function = atom()
  Args = [term()]
  Cookie = atom()
Evaluates apply(Module, Function, Args) on the node Node. No response is delivered, and the process which makes the call is not suspended until the evaluation is completed, as in the case of call/[3,4]. If Node is {Fun, FunArgs}, applying Fun to FunArgs should return a node name. The cookie on the client node will be set to Cookie for this rpc operation (use to match the server node cookie).
ct_snmp
Erlang Module

Common Test specific layer on top of the OTP snmp application.

The purpose is to make snmp configuration easier for the test case writer. Many test cases can use default values for everything, and then no snmp configuration files need to be supplied at all. When it is necessary to change some configuration, it can be done for the subset of snmp configuration files that are relevant, and still all of this can be put into the common-test configuration file or, for the more specialized configuration parameters, a "simple snmp configuration file" can be placed in the test suite's data directory. ct_snmp will also perform a type check on all supplied configuration. In the manager case, the common_test application will also keep track of some manager information, so that the test case writer does not have to keep track of as many input parameters as when using the OTP snmp manager directly.
The following parameters are configurable:

{snmp,
 %%% Manager config
 [{start_manager, boolean()},   % Optional - default is true
  {users, [{user_name(), [call_back_module(), user_data()]}]}, %% Optional
  {usm_users, [{usm_user_name(), usm_config()}]}, %% Optional - snmp v3 only
  % managed_agents is optional
  {managed_agents, [{agent_name(), [user_name(), agent_ip(), agent_port(), [agent_config()]]}]},
  {max_msg_size, integer()},    % Optional - default is 484
  {mgr_port, integer()},        % Optional - default is 5000
  {engine_id, string()},        % Optional - default is "mgrEngine"

  %%% Agent config
  {start_agent, boolean()},     % Optional - default is false
  {agent_sysname, string()},    % Optional - default is "ct_test"
  {agent_manager_ip, manager_ip()}, % Optional - default is localhost
  {agent_vsns, list()},         % Optional - default is [v2]
  {agent_trap_udp, integer()},  % Optional - default is 5000
  {agent_udp, integer()},       % Optional - default is 4000
  {agent_notify_type, atom()},  % Optional - default is trap
  {agent_sec_type, sec_type()}, % Optional - default is none
  {agent_passwd, string()},     % Optional - default is ""
  {agent_engine_id, string()},  % Optional - default is "agentEngine"
  {agent_max_msg_size, string()}, % Optional - default is 484

  %% The following parameters represent the snmp configuration files
  %% context.conf, standard.conf, community.conf, vacm.conf,
  %% usm.conf, notify.conf, target_addr.conf and target_params.conf.
  %% Note that all values in agent.conf can be altered by the parameters
  %% above. All these configuration files have default values set up
  %% by the snmp application. These values can be overridden by
  %% supplying a list of valid configuration values, or a file located
  %% in the test suite's data dir that can produce a list of valid
  %% configuration values if you apply file:consult/1 to the file.
  {agent_contexts, [term()] | {data_dir_file, rel_path()}},   % Optional
  {agent_community, [term()] | {data_dir_file, rel_path()}},  % Optional
  {agent_sysinfo, [term()] | {data_dir_file, rel_path()}},    % Optional
  {agent_vacm, [term()] | {data_dir_file, rel_path()}},       % Optional
  {agent_usm, [term()] | {data_dir_file, rel_path()}},        % Optional
  {agent_notify_def, [term()] | {data_dir_file, rel_path()}}, % Optional
  {agent_target_address_def, [term()] | {data_dir_file, rel_path()}}, % Optional
  {agent_target_param_def, [term()] | {data_dir_file, rel_path()}}    % Optional
 ]}.

The ConfName parameter in the functions should be the name you allocated in your test suite using a require statement. Example:

suite() -> [{require, ConfName, {snmp,[users, managed_agents]}}].

or

ct:require(ConfName,{snmp,[users, managed_agents]}).

Note that usm_users are needed for snmp v3 configuration and are not to be confused with users.
Snmp traps, inform and report messages are handled by the user callback module. For more information about this, see the snmp application.
Note: It is recommended to use the .hrl files created by the Erlang/OTP mib compiler to define the oids. For example, to get the erlang node name from the erlNodeTable in the OTP-MIB:

Oid = ?erlNodeEntry ++ [?erlNodeName, 1]
DATA TYPES
agent_config() = {Item, Value}
agent_ip() = ip()
agent_name() = atom()
agent_port() = integer()
call_back_module() = atom()
error_index() = integer()
error_status() = noError | atom()
ip() = string() | {integer(), integer(), integer(), integer()}
manager_ip() = ip()
oid() = [byte()]
oids() = [oid()]
rel_path() = string()
sec_type() = none | minimum | semi
snmpreply() = {error_status(), error_index(), varbinds()}
user_data() = term()
user_name() = atom()
usm_config() = string()
usm_user_name() = string()
value_type() = o('OBJECT IDENTIFIER') | i('INTEGER') | u('Unsigned32') | g('Unsigned32') | s('OCTET STRING')
var_and_val() = {oid(), value_type(), value()}
varbind() = term()
varbinds() = [varbind()]
varsandvals() = [var_and_val()]
Exports

get_next_values(Agent, Oids, ConfName) -> SnmpReply

Types:
  Agent = agent_name()
  Oids = oids()
  ConfName = atom()
  SnmpReply = snmpreply()
Issues a synchronous snmp get next request.

get_values(Agent, Oids, ConfName) -> SnmpReply

Types:
  Agent = agent_name()
  Oids = oids()
  ConfName = atom()
  SnmpReply = snmpreply()
Issues a synchronous snmp get request.

load_mibs(Mibs) -> ok | {error, Reason}

Types:
  Mibs = [MibName]
  MibName = string()
  Reason = term()
Load the mibs into the agent 'snmp_master_agent'.
register_agents(ConfName, ManagedAgents) -> ok | {error, Reason}

Types:
  ConfName = atom()
  ManagedAgents = [agent()]
  Reason = term()
Explicitly instruct the manager to handle this agent. Corresponds to making an entry in agents.conf.

register_users(ConfName, Users) -> ok | {error, Reason}

Types:
  ConfName = atom()
  Users = [user()]
  Reason = term()
Register the manager entity (=user) responsible for specific agent(s). Corresponds to making an entry in users.conf.

register_usm_users(ConfName, UsmUsers) -> ok | {error, Reason}

Types:
  ConfName = atom()
  UsmUsers = [usm_user()]
  Reason = term()
Explicitly instruct the manager to handle this USM user. Corresponds to making an entry in usm.conf.
set_info(Config) -> [{Agent, OldVarsAndVals, NewVarsAndVals}]

Types:
  Config = [{Key, Value}]
  Agent = agent_name()
  OldVarsAndVals = varsandvals()
  NewVarsAndVals = varsandvals()
Returns a list of all successful set requests performed in the test case, in reverse order. The list contains the involved user and agent, the value prior to the set, and the new value. This is intended to facilitate the clean up in the end_per_testcase function, i.e. the undoing of the set requests and their possible side effects.

set_values(Agent, VarsAndVals, ConfName, Config) -> SnmpReply

Types:
  Agent = agent_name()
  Oids = oids()
  ConfName = atom()
  Config = [{Key, Value}]
  SnmpReply = snmpreply()
Issues a synchronous snmp set request.
start(Config, ConfName) -> ok

Types:
  Config = [{Key, Value}]
  Key = atom()
  Value = term()
  ConfName = atom()
Starts an snmp manager and/or agent. In the manager case, registrations of users and agents, as specified by the configuration ConfName, will also be performed. When using snmp v3, so called usm users will also be registered. Note that users, usm users and managed agents may also be registered at a later time using ct_snmp:register_users/2, ct_snmp:register_agents/2, and ct_snmp:register_usm_users/2. The agent started will be called snmp_master_agent. Use ct_snmp:load_mibs/1 to load mibs into the agent.

stop(Config) -> ok

Types:
  Config = [{Key, Value}]
  Key = atom()
  Value = term()
Stops the snmp manager and/or agent, and removes all files created.
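For example, starting the manager and/or agent from init_per_suite/1 (a sketch; snmp1 is a hypothetical name given to the snmp configuration in a require statement):

init_per_suite(Config) ->
    ok = ct_snmp:start(Config, snmp1),
    Config.
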
unregister_agents(ConfName) -> ok | {error, Reason}

Types:
  ConfName = atom()
  Reason = term()
Removes information added when calling register_agents/2.

unregister_users(ConfName) -> ok | {error, Reason}

Types:
  ConfName = atom()
  Reason = term()
Removes information added when calling register_users/2.

update_usm_users(ConfName, UsmUsers) -> ok | {error, Reason}

Types:
  ConfName = atom()
  UsmUsers = usm_users()
  Reason = term()
Alters information added when calling register_usm_users/2.
ct_telnet
Erlang Module

Common Test specific layer on top of the telnet client ct_telnet_client.erl.

Use this module to set up telnet connections, send commands and perform string matching on the result. (See the unix_telnet manual page for information about how ct_telnet may be used specifically with unix hosts.)
The following default values are defined in ct_telnet:

Connection timeout = 10 sec (time to wait for connection)
Command timeout = 10 sec (time to wait for a command to return)
Max no of reconnection attempts = 3
Reconnection interval = 5 sec (time to wait in between reconnection attempts)

These parameters can be altered by the user with the following configuration term:

{telnet_settings, [{connect_timeout,Millisec},
                   {command_timeout,Millisec},
                   {reconnection_attempts,N},
                   {reconnection_interval,Millisec}]}.

Millisec = integer(), N = integer()

Enter the telnet_settings term in a configuration file included in the test, and ct_telnet will retrieve the information automatically.

DATA TYPES
connection() = handle() | {target_name() (see module ct), connection_type()} | target_name() (see module ct)
connection_type() = telnet | ts1 | ts2
handle() = handle() (see module ct_gen_conn)
  Handle for a specific telnet connection.
prompt_regexp() = string()
  A regular expression which matches all possible prompts for a specific type of target. The regexp must not have any groups, i.e. when matching, re:run/3 shall return a list with one single element.
Exports

close(Connection) -> ok | {error, Reason}

Types:
  Connection = connection() (see module ct_telnet)
Close the telnet connection and stop the process managing it.

cmd(Connection, Cmd) -> {ok, Data} | {error, Reason}

Equivalent to cmd(Connection, Cmd, DefaultTimeout) [page 74].

cmd(Connection, Cmd, Timeout) -> {ok, Data} | {error, Reason}

Types:
  Connection = connection() (see module ct_telnet)
  Cmd = string()
  Timeout = integer()
  Data = [string()]
Send a command via telnet and wait for prompt.

cmdf(Connection, CmdFormat, Args) -> {ok, Data} | {error, Reason}

Equivalent to cmdf(Connection, CmdFormat, Args, DefaultTimeout) [page 74].

cmdf(Connection, CmdFormat, Args, Timeout) -> {ok, Data} | {error, Reason}

Types:
  Connection = connection() (see module ct_telnet)
  CmdFormat = string()
  Args = list()
  Timeout = integer()
  Data = [string()]
Send a telnet command and wait for prompt (uses a format string and list of arguments to build the command).
cont_log(Str, Args) -> term()

end_log() -> term()

expect(Connection, Patterns) -> term()

Equivalent to expect(Connection, Patterns, []) [page 75].
expect(Connection, Patterns, Opts) -> {ok, Match} | {ok, MatchList, HaltReason} | {error, Reason}

Types:
  Connection = connection() (see module ct_telnet)
  Patterns = Pattern | [Pattern]
  Pattern = string() | {Tag, string()} | prompt | {prompt, Prompt}
  Prompt = string()
  Tag = term()
  Opts = [Opt]
  Opt = {timeout, Timeout} | repeat | {repeat, N} | sequence | {halt, HaltPatterns} | ignore_prompt
  Timeout = integer()
  N = integer()
  HaltPatterns = Patterns
  MatchList = [Match]
  Match = RxMatch | {Tag, RxMatch} | {prompt, Prompt}
  RxMatch = [string()]
  HaltReason = done | Match
  Reason = timeout | {prompt, Prompt}
Get data from telnet and wait for the expected pattern.
Pattern can be a POSIX regular expression. If more than one pattern is given, the function returns when the first match is found.
RxMatch is a list of matched strings. It looks like this: [FullMatch, SubMatch1, SubMatch2, ...] where FullMatch is the string matched by the whole regular expression and SubMatchN is the string that matched subexpression number N. Subexpressions are denoted with '(' ')' in the regular expression.
If a Tag is given, the returned Match will also include the matched Tag. Otherwise, only RxMatch is returned.
The function will always return when a prompt is found, unless the ignore_prompt option is used.
The timeout option indicates that the function shall return if the telnet client is idle (i.e. if no data is received) for more than Timeout milliseconds. Default timeout is 10 seconds.
The repeat option indicates that the pattern(s) shall be matched multiple times. If N is given, the pattern(s) will be matched N times, and the function will return with HaltReason = done.
The sequence option indicates that all patterns shall be matched in a sequence. A match will not be concluded until all patterns are matched.
Both repeat and sequence can be interrupted by one or more HaltPatterns. When sequence or repeat is used, there will always be a MatchList returned, i.e. a list of Match instead of only one Match. There will also be a HaltReason returned.
Examples:

expect(Connection,[{abc,"ABC"},{xyz,"XYZ"}],
       [sequence,{halt,[{nnn,"NNN"}]}]).

will try to match "ABC" first and then "XYZ", but if "NNN" appears the function will return {error,{nnn,["NNN"]}}. If both "ABC" and "XYZ" are matched, the function will return {ok,[AbcMatch,XyzMatch]}.

expect(Connection,[{abc,"ABC"},{xyz,"XYZ"}],
       [{repeat,2},{halt,[{nnn,"NNN"}]}]).

will try to match "ABC" or "XYZ" twice. If "NNN" appears the function will return with HaltReason = {nnn,["NNN"]}.
The repeat and sequence options can be combined in order to match a sequence multiple times.
get_data(Connection) -> {ok, Data} | {error, Reason}

Types:
  Connection = connection() (see module ct_telnet)
  Data = [string()]
Get all data which has been received by the telnet client since the last command was sent.

open(Name) -> {ok, Handle} | {error, Reason}

Equivalent to open(Name, telnet) [page 76].

open(Name, ConnType) -> {ok, Handle} | {error, Reason}

Types:
  Name = target_name()
  ConnType = connection_type() (see module ct_telnet)
  Handle = handle() (see module ct_telnet)
Open a telnet connection to a node.

open(Name, ConnType, TargetMod) -> {ok, Handle} | {error, Reason}

Equivalent to open(Name, ConnType, TargetMod, []) [page 76].

open(Name, ConnType, TargetMod, Extra) -> {ok, Handle} | {error, Reason}

Types:
  Name = target_name() (see module ct)
  ConnType = connection_type()
  TargetMod = atom()
  Extra = term()
  Handle = handle()
Open a telnet connection to the specified target.
The target must exist in a config file, and Name is the name allocated to the target - either with ct:require/2 or with a require statement in the test suite default data or the test case info function.
TargetMod is a module which exports the functions connect(Ip,Port,Extra) and get_prompt_regexp() for the given TargetType.
send(Connection, Cmd) -> ok | {error, Reason}

Types:
  Connection = connection() (see module ct_telnet)
  Cmd = string()
Send a telnet command and return immediately.
The resulting output from the command can be read with get_data/1 or expect/2/3.

sendf(Connection, CmdFormat, Args) -> ok | {error, Reason}

Types:
  Connection = connection() (see module ct_telnet)
  CmdFormat = string()
  Args = list()
Send a telnet command and return immediately (uses a format string and a list of arguments to build the command).
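
For example (a sketch; the target name myhost is hypothetical and must exist in a config file):

{ok,Handle} = ct_telnet:open(myhost),
ok = ct_telnet:send(Handle, "ls"),
{ok,Data} = ct_telnet:get_data(Handle),
ok = ct_telnet:close(Handle).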
See also
unix_telnet [page 78]
unix_telnet
Erlang Module

Callback module for ct_telnet, for talking telnet to a unix host.

It requires the following entry in the config file:

{unix,[{telnet,HostNameOrIpAddress},
       {port,PortNum},
       {username,UserName},
       {password,Password}]}.

To talk telnet to the host specified by HostNameOrIpAddress, use the interface functions in ct_telnet, e.g. open(Name), cmd(Name,Cmd), ....
Name is the name you allocated to the unix host in your require statement. E.g.

suite() -> [{require,Name,{unix,[telnet,username,password]}}].

or

ct:require(Name,{unix,[telnet,username,password]}).

Note that the {port,PortNum} tuple is optional and if omitted, the default telnet port 23 will be used.

See also
ct [page 50], ct_telnet [page 73]
Index of Modules and Functions

Common Test (suite callback functions)
  Module:all/0, 44
  Module:end_per_suite/1, 45
  Module:end_per_testcase/2, 46
  Module:init_per_suite/1, 45
  Module:init_per_testcase/2, 46
  Module:sequences/0, 45
  Module:suite/0, 45
  Module:testcase/0, 46
  Module:testcase/1, 47

ct
  abort_current_testcase/1, 50
  comment/1, 50
  fail/1, 51
  get_config/1, 51
  get_config/2, 51
  get_status/0, 51
  get_target_name/1, 52
  install/1, 52
  listenv/1, 52
  log/1, 52
  log/2, 52
  log/3, 52
  pal/1, 53
  pal/2, 53
  pal/3, 53
  parse_table/1, 53
  print/1, 53
  print/2, 53
  print/3, 53
  require/1, 54
  require/2, 54
  run/1, 55
  run/2, 55
  run/3, 55
  run_test/1, 55
  run_testspec/1, 56
  start_interactive/0, 56
  step/3, 56
  stop_interactive/0, 56
  testcases/2, 56
  userdata/2, 57
  userdata/3, 57

ct_cover
  add_nodes/1, 58
  remove_nodes/1, 58

ct_ftp
  cd/2, 59
  close/1, 59
  delete/2, 59
  get/3, 59
  ls/2, 60
  open/1, 60
  put/3, 60
  recv/2, 60
  recv/3, 60
  send/2, 61
  send/3, 61
  type/2, 61

ct_master
  abort/0, 62
  abort/1, 62
  progress/0, 62
  run/1, 62
  run/3, 62
  run/4, 63
  run_on_node/2, 63
  run_on_node/3, 63
  run_test/2, 63

ct_rpc
  app_node/2, 65
  app_node/3, 65
  app_node/4, 65
  call/4, 66
  call/5, 66
  call/6, 66
  cast/4, 66
  cast/5, 67

ct_snmp
  get_next_values/3, 70
  get_values/3, 70
  load_mibs/1, 70
  register_agents/2, 70
  register_users/2, 71
  register_usm_users/2, 71
  set_info/1, 71
  set_values/4, 71
  start/2, 71
  stop/1, 72
  unregister_agents/1, 72
  unregister_users/1, 72
  update_usm_users/2, 72

ct_telnet
  close/1, 74
  cmd/2, 74
  cmd/3, 74
  cmdf/3, 74
  cmdf/4, 74
  cont_log/2, 74
  end_log/0, 74
  expect/2, 74
  expect/3, 74
  get_data/1, 76
  open/1, 76
  open/2, 76
  open/3, 76
  open/4, 76
  send/2, 76
  sendf/3, 77