TESSY UserManual 40
Windows is a registered trademark of Microsoft. TESSY and CTE are registered trademarks of
Razorcat Development GmbH.
All other registered or unregistered trademarks referenced herein are the property of their respective
owners, and no trademark rights to the same are claimed.
Liability exclusion
Razorcat Development GmbH assumes no liability for damage that is caused by improper installation
or improper use of the software or the non-observance of the handling instructions described in this
manual.
Thanks
Various contents are based on application notes and publications on TESSY written by Frank Büchner,
Hitex Development Tools GmbH. We would like to thank Frank for his valuable contribution
and commitment in supporting TESSY and spotlighting its functionality and features.
Preface xvii
About TESSY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
How to use this manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Subject matter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Helpers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Safety Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Core workflow and registration for safety information . . . . . . . . . . . . xxiii
Verification and certification of TESSY . . . . . . . . . . . . . . . . . . . . xxiv
Instrumentation for coverage measurement . . . . . . . . . . . . . . . . . . xxiv
Adaptation to target environment . . . . . . . . . . . . . . . . . . . . . . . xxv
Operating limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
New features in TESSY 4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
C++ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Software variant management . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Optimizations in command line mode . . . . . . . . . . . . . . . . . . . . xxviii
Support for test driven development . . . . . . . . . . . . . . . . . . . . . xxviii
UUID for all TESSY objects . . . . . . . . . . . . . . . . . . . . . . . . . xxviii
Auto-reuse in command line execution . . . . . . . . . . . . . . . . . . . . xxix
Excluding individual tests . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
3 General handling 18
3.1 Creating databases and working with the file system . . . . . . . . . . . . . 19
3.1.1 Creating a project database . . . . . . . . . . . . . . . . . . . . . . 20
3.1.2 Creating, importing, cloning, editing, deleting a project . . . . . . . 25
3.1.3 Creating a template project . . . . . . . . . . . . . . . . . . . . . . 26
3.1.4 Moving the project directory . . . . . . . . . . . . . . . . . . . . . 26
3.1.5 Handling with equally named projects . . . . . . . . . . . . . . . . 27
3.1.6 Using a specific environment setting . . . . . . . . . . . . . . . . . 29
3.1.7 Updating the database . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Understanding the graphical user interface . . . . . . . . . . . . . . . . . . 31
3.2.1 Menu bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.2 Tool bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2.3 Perspectives and perspective (tool) bar . . . . . . . . . . . . . . . . 32
3.2.4 Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.5 Status bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3 Using the context menu and shortcuts . . . . . . . . . . . . . . . . . . . . 37
3.3.1 Context menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3.2 Shortcuts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4 Basic knowledge 40
4.1 Unit testing of embedded software . . . . . . . . . . . . . . . . . . . . . . 41
4.1.1 Standards that require testing . . . . . . . . . . . . . . . . . . . . 41
4.1.2 About unit testing . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.1.2.1 What is unit testing? . . . . . . . . . . . . . . . . . . . . 42
4.1.2.2 What are the benefits? . . . . . . . . . . . . . . . . . . . 42
4.1.3 Considerations for unit testing . . . . . . . . . . . . . . . . . . . . 43
4.1.3.1 Which units are good test candidates? . . . . . . . . . . . 43
4.1.3.2 What is not in the scope of unit testing? . . . . . . . . . 43
4.1.3.3 Why is regression testing necessary? . . . . . . . . . . . . 44
4.1.3.4 Who should conduct the tests? . . . . . . . . . . . . . . . 44
4.1.3.5 What is special for testing embedded software? . . . . . . 45
4.1.4 Methods for unit testing . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.4.1 a. Test application . . . . . . . . . . . . . . . . . . . . . 45
4.1.4.2 b. Original binary test . . . . . . . . . . . . . . . . . . . 46
4.1.4.3 Pros and cons . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5 Practical exercises 67
5.1 Quickstart 1: Unit test exercise is_value_in_range . . . . . . . . . . . . . 69
5.1.1 Creating a test project . . . . . . . . . . . . . . . . . . . . . . . . 70
5.1.2 Specifying the target environment . . . . . . . . . . . . . . . . . . 72
5.1.3 Adding the test object and analyzing the C-source file . . . . . . . . 73
5.1.4 Editing the test object interface . . . . . . . . . . . . . . . . . . . 76
5.1.5 Designing test cases . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.1.6 Adding test cases and test steps . . . . . . . . . . . . . . . . . . . 77
5.1.7 Entering test data . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.1.8 Executing the test . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.1.9 Repeating the test run with coverage instrumentation . . . . . . . . 83
5.1.10 Analyzing the coverage . . . . . . . . . . . . . . . . . . . . . . . . 85
5.1.10.1 The flow chart view . . . . . . . . . . . . . . . . . . . . . 86
5.1.10.2 The Branch (C1) Coverage view . . . . . . . . . . . . . . 86
5.1.10.3 The MC/DC Coverage view . . . . . . . . . . . . . . . . 86
5.1.10.4 Analyzing . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.1.11 Creating a report . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.1.12 Repeating the test run with requirements . . . . . . . . . . . . . . 92
5.1.12.1 Importing requirements . . . . . . . . . . . . . . . . . . . 92
5.1.12.2 Committing the requirements document . . . . . . . . . . 94
5.1.12.3 Linking test cases with requirements . . . . . . . . . . . . 95
5.1.12.4 Creating a planning coverage report . . . . . . . . . . . . 96
5.1.12.5 Executing the test and examining the coverage . . . . . . 99
5.1.13 Reusing a test object with a changed interface . . . . . . . . . . . . 102
5.1.13.1 Changing the interface of the test object . . . . . . . . . 102
7 Troubleshooting 398
7.1 Contacting the TESSY support . . . . . . . . . . . . . . . . . . . . . . . . 399
7.2 Solutions for known problems . . . . . . . . . . . . . . . . . . . . . . . . . 403
7.2.1 TESSY does not start or gives errors when starting . . . . . . . . . 403
7.2.2 TESSY gives errors when quitted . . . . . . . . . . . . . . . . . . . 404
7.2.3 License server does not start or gives errors . . . . . . . . . . . . . 405
7.2.4 Working with constant variables . . . . . . . . . . . . . . . . . . . 406
7.2.4.1 Assignment of read-only variables . . . . . . . . . . . . . 407
Appendix 411
A Abbreviations 412
B Glossary 414
Index 433
About TESSY
The test system TESSY was developed by the Research and Technology Group of Daimler.
The former developers of the method and tool at Daimler were:
Klaus Grimm
Matthias Grochtmann
Roman Pitschinetz
Joachim Wegener
TESSY has proven itself in practice at Daimler and has been applied successfully ever since. TESSY
has been commercially available since spring 2000 and is further developed by Razorcat Development
GmbH.
TESSY offers an integrated graphical user interface that guides you comfortably through the
unit test. There are dedicated tools for every testing activity as well as for all organizational
and management tasks.
Dynamic testing is indispensable when testing a software system. Today, up to 80% of the
development time and costs go into unit and integration testing. It is therefore of urgent
necessity to automate testing processes in order to minimize required time and costs for
developing high-quality products. The test system TESSY automates the whole test cycle;
unit testing for programs in C/C++ is supported in all test phases. The system also takes
care of the complete test organization as well as test management, including requirements
coverage measurement and traceability.
If you want to be informed about new versions of TESSY and this manual, subscribe to
our e-mail list by sending an e-mail to support@razorcat.com.
Subject matter
The structure of the manual guides you through working with TESSY from the start to
the specific activities possible. In order:
Preface: Describes New features in TESSY 4.0 and contains important information about the
Safety Manual and Troubleshooting.
1 Installation and registration: Lists all technical requirements for working with TESSY and
describes how to install the software.
2 Migrating from TESSY 3.x to 4.x: Lists the changed and new functions and handling within
the new version. This will help you when switching from TESSY version 3.x to the new TESSY 4.x.
3 General handling: Explains the workflow of Creating databases and working with the file
system. Check this section carefully to know how to handle your project data! The TESSY
interface and basic handling are explained in the following sections Understanding the graphical
user interface and Using the context menu and shortcuts.
4 Basic knowledge: Contains a brief introduction to unit testing with TESSY and the
classification tree method.
5 Practical exercises: In this chapter you will get to know TESSY with the help of exercises
that are prepared so that you can easily follow along through most functions of TESSY.
6 Working with TESSY: Explains in detail the unit test activities possible with TESSY.
As you will notice, the headlines of the sections follow the actions taken during a test.
TESSY provides different editors and windows (“perspectives” and “views”) for the different
configurations and steps taken during and after a test. You will find the name of the perspective
or view as well as the description of the step within the headline, e.g.
6.7 CTE: Designing the test cases.
Therefore, if you need help at some point, ask either “How do I handle
. . . ?” or “Where am I?” and follow the headlines.
7 Troubleshooting: Contains information about Solutions for known problems and how to get in
touch with the TESSY support if needed.
Table 0.1: Subject matter of the individual parts of the TESSY manual
Helpers
• The Index at the very end of this manual helps you look up a topic by a
keyword.
• Various information is clearly represented within tables, e.g. icons and indicators
(symbols of the interface) and their meanings. For fast access to all tables, consult
the List of Tables in the appendix of this manual. (A side arrow in the margin shows
where to find information and references.)
• Figures are used to illustrate information described before. You may as well check
the List of Figures to find those figures.
• All cross references as well as the table of contents are active links (colored blue),
so you can easily switch to the referenced chapter or section.
Information box
To help you to work with this manual, different font characters and signs are used to mark
specific information:
typewriter / italic: input (information you need to type) or output (a message from the
system), e.g. Enter Test Example.
Ctrl+C: Keyboard shortcuts are not specially marked, on the assumption that they are
commonly known, e.g. Ctrl+C for pressing Control and C.
Warning: Your data might be damaged if you do not operate the software
correctly! Please follow the instructions carefully.
A light bulb provides hints, references and additional information on handling
TESSY, for better usability.
Safety Manual
TESSY can be used for testing of safety-relevant software. Therefore, the core workflow
of TESSY as well as the release and test process of the TESSY product has been certified
according to ISO 26262-08:2011 and IEC 61508-3:2010. Our quality management system
ensures proper handling of all development processes for the TESSY product and constantly
improves all procedures concerning quality and safety.
The figure above shows the core workflow of TESSY that is fully automated and subject
to tool qualification. All other tool capabilities like editing or environment and interface
settings are additional features out of scope of the tool qualification. The core workflow of
TESSY has been certified according to ISO 26262:2011 and IEC 61508:2010. Starting from
editing of test data, the core workflow covers test execution, evaluation of test results and
report generation. Additionally, the coverage measurements have been verified according
to our certified safety plan. Please note that the Classification Tree Editor (CTE), which
covers test preparation activities, is not part of the certified core workflow of TESSY.
Safety-relevant problems arising in released TESSY versions will be reported (once they
are detected) and investigated closely in order to fix them as fast as possible. If you work
with TESSY in a safety-related environment, please register for our safety customer
e-mail list:
You will be informed about current and newly arising “known problems” as well as
workarounds.
The “Tool Qualification Pack” (TQP) is an additional purchase of documents and tests
for TESSY, provided as baseline for the certification process in order to qualify TESSY as
a software verification tool according to DO-178B/C.
Please contact us via support@razorcat.com.
Additionally, TESSY has been qualified by the German certification authority TÜV SÜD
Rail GmbH as a testing tool for use in safety-related software development according to
ISO 26262 and IEC 61508. The TÜV certificate and the certification report are available at
http://www.razorcat.com.
The TQPack contains tests for ANSI-C compliant source code using the GNU GCC compiler
that is part of the TESSY installation. Using an embedded compiler/debugger for a specific
microcontroller requires adaptation of the TQPack for this specific target environment.
This can be provided as an engineering service by Razorcat.
When executing tests using coverage measurements, it is recommended that all tests are
executed once with and once without coverage instrumentation. TESSY uses a copy of
the original source file when creating the test application. This copy of the source file
will be instrumented for coverage measurements. Usually both test runs yield the same
result, indicating that the instrumentation did not change the functional behavior of the
test objects.
Please note that the source code will be instrumented even if no coverage measurement
has been selected in the following cases:
Some extra code will be added at the end of the copied source file in the following cases:
Please keep this behavior in mind when preparing and executing tests with TESSY.
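The idea behind the duplicate test run can be sketched in plain C. This is only a conceptual illustration with hypothetical probe counters; the probes TESSY actually inserts into the copied source file are internal and look different:

```c
#include <assert.h>

/* Hypothetical probe counters; real instrumentation is tool-internal. */
static unsigned branch_hits[2];

/* Function under test with conceptual branch (C1) instrumentation. */
int is_positive(int v)
{
    if (v > 0) {
        branch_hits[0]++;   /* probe: true branch executed  */
        return 1;
    }
    branch_hits[1]++;       /* probe: false branch executed */
    return 0;
}

/* Accessor for the recorded coverage data. */
unsigned hits(int index) { return branch_hits[index]; }
```

The probes only record which branches were taken; they must not change the function's results. Running the same test data once with and once without the probes and comparing the outcomes is exactly what the recommended duplicate test run verifies.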
When running tests on a specific target platform, adaptations of compiler options and
target debugger settings may be needed within the respective target environment. The
verification of the TESSY core workflow covers tests conducted on a Windows host system
using the GNU GCC compiler. In order to verify the transmission of test data and expected
results to and from the target device, there are tests available that may be executed using
the adapted target environment. These tests check the communication layers of the test
driver application.
For details on how to run these tests refer to the application note “048 Using Test
Driver Communication Tests.pdf” within the TESSY installation directory.
Operating limits
TESSY is designed for use as a unit testing tool to verify the functional
correctness of the function under test. The following restrictions and prerequisites for
TESSY apply:
• The source code to be tested shall be compilable without errors and warnings
by the compiler of the respective microcontroller target. TESSY may fail
analyzing the interface of the module to be tested, if there are syntactical
errors within the source code.
• TESSY does not check any runtime behavior or timing constraints of the
function under test.
• The test execution on the target system highly depends on the correct
configuration of the target device itself, the correct compiler/linker settings
within the TESSY environment and other target device related settings within
TESSY (if applicable). Any predefined setup of the TESSY tool for the
supported devices requires manual review by the user to ensure proper
operation of the unit test execution.
• The usage of compiler specific keywords and compiler command line settings
may require additional tests for tool qualification. Correct operation of the
TESSY toolset with respect to the Qualification Test Suite (QTS) test results
is only provided for ANSI compliant C code.
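The caveat about compiler-specific keywords can be illustrated with a small, hypothetical sketch. The vendor macro `__TARGET_COMPILER__` and the `__interrupt` keyword are assumptions for illustration only; the point is that guarding non-ANSI keywords keeps a unit compilable both with the target compiler and with the host GCC:

```c
#include <assert.h>

/* Hypothetical guard: __interrupt is not ANSI C, so hide it from
   any compiler that does not define the (assumed) vendor macro. */
#ifdef __TARGET_COMPILER__
#define ISR __interrupt
#else
#define ISR                 /* expands to nothing on the host */
#endif

static volatile unsigned ticks;

/* With the guard in place this unit stays ANSI-compliant C
   and can be analyzed and executed on the host as well. */
ISR void timer_tick(void)
{
    ticks++;
}

unsigned get_ticks(void) { return ticks; }
```

On the GCC host run the macro expands to nothing, so the unit remains plain ANSI C; on the target the vendor keyword is restored.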
Since TESSY v4.x the test driver code will be generated and attached at the end of (a
copy of) each source file. The following restrictions apply:
• All types used within usercode must be available within the source file of the
respective test object.
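A hypothetical example of this restriction (the file, type and function names are invented for illustration): because the generated test driver is appended to a copy of the test object's source file, any type referenced by usercode must be visible in that file.

```c
#include <assert.h>

/* Hypothetical source file of a test object. The type below is
   defined here, in the source file itself, so usercode that
   mentions range_t will compile once the driver is appended. */
typedef struct {
    int lower;
    int upper;
} range_t;

int in_range(const range_t *r, int v)
{
    return v >= r->lower && v <= r->upper;
}

/* If range_t were defined only in a header this file does NOT
   include, usercode referencing range_t would fail to compile. */
```

Moving such type definitions into the source file (or into a header that the source file includes) satisfies the restriction.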
For backward compatibility, you can disable the usage of the new CLANG parser and test
driver generation by setting the attribute “Enable CLANG” to “false” within the project
configuration using the environment editor TEE.
Note the chapter Migrating from TESSY 3.x to 4.x and check as well the Practical
exercises to learn about TESSY's new features!
C++
TESSY now integrates the CLANG parser for analysis of C++ source files and provides
a new test driver generator to support testing of all C++ features including templates
and derived classes. The interface and constructors of each C++ class are available for
testing and the TDE provides convenient editing of test data. Powerful reuse of test data
by assigning the new to the old C++ interface elements allows easy regression testing.
Warning: If you open an existing module created with TESSY v3.2 or earlier,
you will be warned that this module will be converted to the new interface data-
base format. Please refer to Analyzing modules for more information about the
available options.
Please note that Component testing of C++ modules is not yet supported.
Software variant management
Testing different but related configurations of software is now supported by the variant
management of modules and test cases: Starting from a base module with test cases for
all variants you can derive sub modules for each individual variant configuration. Such
variant modules can overwrite or delete the base test cases or provide additional test cases
specific for the given variant. Changes to the base module and tests will be propagated to
all sub modules automatically.
Refer to Creating variant modules and Test cases and steps inherited from a variant module
for details about creating and editing test cases within variant modules.
Optimizations in command line mode
The test execution (building the test driver and execution on the target) can now run in
parallel depending on the number of available cores of the host PC. Also the restore of
TMB files now runs in parallel. These optimizations greatly reduce the time required for
automated test runs, especially on continuous integration servers (e.g. Jenkins).
Support for test driven development
Support for test driven development has been added: You can now add test objects to
modules without having a source file in place. For such test objects you can also add
variables for values being used for calculations by the test object according to the specification.
In this way, you can prepare your tests before any code is available. Later when the first
code is implemented, all tests can be assigned to the real implementation modules using
the normal IDA assign operation.
Refer to Quickstart 5: Test driven development (TDD) for an example on how to use
this feature.
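The TDD idea can be sketched with the `is_value_in_range` example from the practical exercises. In TESSY you would first enter the expected results as test data for a test object without source code; the implementation below is a hypothetical sketch of the code that would be written afterwards to satisfy those expectations:

```c
#include <assert.h>

/* Hypothetical specification, written down before any code exists:
   is_value_in_range shall return 1 if lower <= value <= upper,
   and 0 otherwise. */
int is_value_in_range(int value, int lower, int upper)
{
    return (value >= lower && value <= upper) ? 1 : 0;
}
```

Once such an implementation exists, the previously prepared tests can be assigned to the real implementation module using the normal IDA assign operation.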
UUID for all TESSY objects
All the following model elements of TESSY will get a UUID assigned to uniquely identify
them even after save and restore via TMB files on different computers:
• Modules
• Test objects
Existing projects will be updated automatically when opened using TESSY v4.
Auto-reuse in command line execution
Enhanced auto-reuse in command line execution mode provides more stable regression test
results in case of slightly changed interfaces as follows:
All those interface changes will be ignored when opening modules in command line mode.
Excluding individual tests
Tests that cannot be executed for some reason (e.g. which would fail due to null pointer
access) can be excluded from the test execution. Such tests are displayed faded and they
are automatically skipped when executing all tests of a test object.
• TESSY 4.0 can be installed and run in parallel to any previous major TESSY
version.
• To run TESSY 4.0 you need at least a 1.5 GHz CPU and 2 GB RAM for
TESSY.
1.2 Setup
TESSY allows you to have multiple TESSY installations with different versions on the same
computer. You do not have to uninstall an older version.
Þ Insert the TESSY DVD into the DVD drive and wait for the setup program to
start, OR start the installation program manually (“tessy[version number].exe”).
The InstallAware Wizard will start. This will take a few moments.
Þ Read the license agreement carefully. Check the box to accept and click “Next”.
Þ Now select the setup type: “Complete” (default) is recommended. Click “Next”.
Þ Select the TESSY testarea folder (“Folder for temporary files:”; default
“C:\tessy”) and click “Next”.
Þ Select the program folder (default “TESSY 4.0”) and decide whether the
installation on the system is for “all users” or for a certain user. Click “Next”.
1.3 Registration
Þ Start TESSY by selecting in Windows “Start” > “All Programs” > “TESSY
4.0” > “TESSY 4.0” (see fig. 1.7).
If a valid license is found, TESSY will start. If there is no valid license, the License Manager
will start with a License Key Request popup window (see figure 1.9).
Þ Start the License Manager by selecting in Windows “Start” > “All Programs”
> “TESSY 4.0” > “Floating License Manager”.
The License Key Request popup window will appear (see figure 1.9).
Þ Click on “Online Request” and fill out the form for a license key request.
You will get a license key via e-mail within a license key file. The license key file is a plain
text file with the ending .txt.
Important: The license key file is not generated and sent out automatically;
therefore it can take up to a workday until you receive it!
When you have received the license key file (*.txt file), proceed as follows:
The Configure window for the License Server will open (see figure 1.11).
Þ Under “License Key File” choose your license key file (*.txt).
Þ Tick the box “Autostart” to start the license server in the background from then on.
The system will try to set the autostart. If you do not have administrator
privileges, the autostart is set for your Windows user.
Þ Optional: You can run the license server as a service (select “Service” under
“Run Local Server”). The default is “Application”. If you wish to remove the
service from the registry, tick the corresponding box under “Settings”.
Þ Click on “OK”.
The license server will start automatically. If not, click on (Start Local
Server).
Þ Close the License Manager and start TESSY by selecting “Start” > “All
Programs” > “TESSY 4.0” > “TESSY 4.0”.
Þ To start the License Manager manually, select “Start” > “All Programs” >
“TESSY 4.0” > “Floating License Manager”.
Þ To start the server manually click on the icon (Start Local Server).
Þ To check your license key file click on (Check). This can be useful to
determine the error if the server does not start.
Figure 1.12: License key check successful: this license key is correct
1.4 Uninstallation
Important: Uninstalling TESSY deletes neither the project root nor your project
data or configuration file. Nevertheless, make sure that your data is saved.
All components of TESSY will be removed. This will take a few seconds.
Important: For using your TESSY license part-time on another computer, you
need a “Floating License”.
You can check-out your TESSY floating license on your computer and import it on another
computer to work with TESSY on another system.
• The check-out is for 30 days altogether: You can check out the license for one, for two or
up to 30 days each time, but overall for 30 days at the most.
Þ In Windows click on “Start” > “All Programs” > “TESSY 4.0” > “Floating
License Manager”.
Þ Next to “State” the number of days left for possible check-outs will be displayed
(see figure 1.13).
Figure 1.13: The license info shows the possible days for checking out the license
Þ In the menu bar click on “License” > “Check Out. . . ” (see figure 1.14).
Þ Choose the number of days. The Registration Information will be filled out
automatically (see figure 1.15).
You can now use this license file on your second computer. To register the
license refer to section 1.3.2 Registering the license.
You do not have to check in your license again on computer 1! The license will be
activated automatically.
To be able to use the TESSY floating license on another computer with no internet
connection, the procedure described in section 1.5 Using license part-time on another computer
needs to be modified.
Þ Choose the number of days, click “OK”, and save the license file.
Þ You can now register this license file on your computer with NO internet
connection (see section 1.3.2 Registering the license).
Figure 1.16: Transmitting the license file to a computer with no internet connection
In the new version of TESSY you will find some new functions as mentioned in section
New features in TESSY 4.0.
You will have to convert your projects to use them with the new TESSY 4.x version. When
you open a project, TESSY will ask you if you want to convert your project. By clicking
“Yes” TESSY will convert the project automatically.
Warning: Once you have converted your project, it cannot be used by TESSY 3.x
anymore! If you want to keep using your project with TESSY 3.x, make a copy of the
project.
This chapter explains how to create databases for your tests, how to work with the different
files and the graphical user interface of TESSY, and provides some information about useful
shortcuts for working more efficiently.
The following table provides a fast overview of TESSY’s file system and databases:
tessy.pdbx (file): Project file for the location of a TESSY project. The project can be
opened via double-click. Contains the basic settings of a project and
can be renamed.
Project root: Specifies the root directory of your project, so that all paths (e.g.
sources, includes, etc.) can be related to this root. Every project has
its own project root. The project root is saved in the project file
(tessy.pdbx), which allows you to transfer projects to other computers.
Source root: Optional directory to specify source and include paths relative to this source
root independently from the project root (e.g. if you want source files
to reside in another directory outside the project root). The source
root as an absolute path is intentionally not saved in the project file
(tessy.pdbx); only its existence is indicated. Therefore the source root
needs to be selected when opening a project on a different computer.
When opening such a project using the command line, the source root
needs to be provided as a command line argument. For more information
about the CLI mode please refer to section 6.13 Command line
interface.
persist (folder): Contains the databases for the project, one for requirements and test
collections, the other one for test data.
work (folder): Contains all temporary files generated during the test process. This
entire directory can be deleted without losing any data of the TESSY
project.
Þ Start TESSY by selecting “All Programs” > “TESSY 4.0” > “TESSY 4.0”.
The Select Project dialog will open. At the top you can see the path of your
workspace (see figure 3.1).
Creating a new project
Þ Click on (Create Project).
Þ Select a project root: Click on “. . . ” and select a directory where your
development project resides, i.e. where source and header files are located and
where a sub directory “tessy” containing the test project shall be created (see
figure 3.3).
All the project-related TESSY databases containing information on
the test environment, referenced source files, compiler, debugger,
etc. will be stored within a sub folder of the project root and all
paths into your project will relate to this root.
Þ Optional: Extend the “Advanced Settings” by clicking on the plus (see figure
3.4).
Configuration File: Enter the path to a specific configuration file or leave the field blank
to use the default configuration. TESSY will create a new
configuration file containing only the GNU/GCC compiler. Refer to
section 6.5.6 Configuration files about how to customize this
configuration file.
Project Location: You can choose a different location if you would like to place the test
project files into another sub directory of the project root.
If you tick the box “Store database in user profile”, the database will
be stored in a directory named using the project identifier GUID,
located within the “.tessy_persist” directory in the user
profile.
Backup Location: This directory will be used to store all backup and settings files of
your project, which are vital in order to restore it on
another computer. Refer to section 6.12 Backup, restore, version
control for information about files that are relevant for version control.
It is recommended to use the default location, but you can also choose
a different location, preferably within the project root. It will be used
as the default for the backup and restore dialog.
By default the project root contains your development project, i.e. the source files,
and one sub folder “tessy” that contains all TESSY related files.
Additionally you can specify the source root to locate source and header files outside
the project root.
TESSY will use paths relative to those root paths for all files, e.g. references to
source and config files. This ensures that you can move whole projects to new
locations.
Please keep in mind that the source root location will always be remembered locally
on each computer and the given absolute path will not be stored into the TESSY
project file (tessy.pdbx). If you transfer a project with an indicated source root
to another computer, you need to provide the source root (e.g. as command line
argument when running in CLI mode). When opening such a project with the GUI,
TESSY will remind you and ask for the source root location.
Þ Click “OK”.
Now TESSY automatically creates a sub folder “tessy” within the project root
directory. This folder contains (within sub folders) the configuration file and
the persistence databases. This will take a few seconds. Afterwards TESSY
will restart (if another project was open before) and open the newly created
project automatically.
In the Select Project dialog you can create, import, clone, edit and remove or delete your
project by selecting the project and clicking on the respective icon in the tool bar:
Clones a project: TESSY creates a complete copy of a project and adds it to the
project list. A new name is required.
Removes a project from the workspace. If you want to delete all data including
project and database location, tick the box “Delete project contents on disk”.
With a right-click you can open the context menu for further options:
With a right-click on a project in the project list you can mark a project as “Template
Project” (see figure 3.5).
• Double-clicking the project, or marking the project and clicking “Open Project”,
will start the Project Configuration Dialog and the “Clone Project” command
(see section 3.1.2 Creating, importing, cloning, editing, deleting a project).
• Double-clicking the PDBX file of the template project will start the Project
Configuration Dialog and the “Clone Project” command.
At anytime you can remove the mark as template project. The project will then
be a normal project and you can open it as usual.
You can move your whole project directory and then import the project again:
Þ Either double-click on the tessy.pdbx file or use the Import Project button.
TESSY will ask you whether the project was moved or copied (see figure 3.6).
If the project was copied, e.g. because you want to create a new project as a
copy of the original one, a new project identifier needs to be assigned to
distinguish the new project from the original one. TESSY will do this automatically.
In the “Select Project” dialog all projects are listed with name and local path.
It is possible to handle projects with equal names. The cases below explain how
TESSY handles projects within the project list if they have identical names:

• A project named ’Alpha’ exists in location ’xy’, and you create a new project
’Alpha’ in another location: You will get a warning: “Project with identical
name will be removed from the project list.” The new project appears in the
project list; the old project will be removed from the list (but not deleted!).

• A project ’Alpha’ exists in location ’xy’, and you try to create a new project
’Alpha’ in the same location: You will get an error; it is not possible to
create the project, because two projects cannot share the same location.
Important: This section is only recommended for advanced users who have
already worked with TEE. For basic handling we recommend continuing with
section 3.2 Understanding the graphical user interface and the following sections,
and then returning to this section.
TESSY will create a specific configuration file for each project database. This way you
can share the environment settings with other members of your development team. The
configuration file is stored within your project root together with other project related
files. Such a configuration file contains only the compiler/target environments you want
to use. All other environment configurations are not visible for the user as long as this file
is assigned to a given project database.
The configuration file can be customized within the Test Environment Editor (TEE).
After the update you cannot open the project in previous versions of TESSY!
TESSY will recognize whether an update of the database is necessary (see figure 3.8).
When you want to open the project, you will be asked to update the database
(see figure 3.9).
Þ Click “OK”.
When TESSY starts for the first time, the graphical user interface (GUI) will open within the
Overview perspective.
Please check the terminology shown in the figure “TESSY interface” and the explanations
beneath. This terminology will be used throughout this manual.
The menu bar provides actions such as “File”, “Windows” etc. Within the “Help” menu
you find the online help of TESSY. Many of these actions may also be available within
the context menu of individual views, if the actions are applicable to the items within the view.
In the global tool bar of the TESSY interface you can select a project, save changes etc.
By hovering over an icon with the cursor, a tooltip will tell you the purpose of each icon.
There may also be individual tool bars within the views. Here you find the tools for
operating the view, e.g. to start the test execution.
Save all changes in all views and editors by clicking the save icon.
TESSY contains several perspectives to present information based on different tasks in the
test workflow (“Requirement Management”, “Overview”, “TIE” etc.). Each perspective
offers several views. In the perspective bar (containing the perspective names) you can
switch between the different perspectives. The perspectives - from the left to the right -
follow the actions taken while preparing, running and evaluating a test.
Every perspective name has several right click menu options (the context menu).
By clicking on the left symbol you can open other perspectives (see figure 3.11):
3.2.4 Views
Notice that the views appear differently combined with other views, e.g. the view Test
Results within the Overview perspective is combined with the Test Items view, but within
the TDE perspective it is combined with the Test Project view. The reason for the different
combinations is to give you a fast overview and comparison between various information
within each project step.
Adding views to a perspective: You can change the appearance of views for your own
needs and open views of one perspective in another perspective:
Þ Activate (open) the perspective where you want to add a view.
Þ Click “OK”.
Changing view position: Change the position of views with drag and drop:
Þ Click on a name of a view and move it where you like: You can move views
to another location within the same group of views or into another group of
views or even outside the current window.
Þ Right-click on the perspective switch and choose “Reset” to switch back to the
original positions of all views of the respective perspective (see figure 3.14).
Figure 3.14: Move the views separately. To switch back, use “Reset”.
Figure 3.15: To switch back all positions of views and perspectives use “Reset
Workbench”.
Maximize and minimize views: You can maximize and minimize views for better visibility.
To maximize a view,
Þ use the button within the right corner (see figure 3.16) or double click on the
tab of the view.
The view will be maximized and the other views of the perspective will be minimized,
indicated by the view symbols on the left and the right (see figure 3.17).
Figure 3.17: Maximized view with minimized views on the right and the restore-button
on the left
There are navigation views that present hierarchical structured data. Selections on such
tree items may cause other views or the editor pane to change the information being
displayed.
All views are context sensitive: If you select an item within one view, another view
might display other information. If something is not displayed correctly, make sure
you selected the desired item.
The status bar provides status information about the application, e.g.
the directory of the project root and the configuration file.
Most contents, tabs etc. have options that are displayed in the context menu, which is
opened with a right click. It shows main operations such as “Copy”, “Paste”, “Delete” etc.
The context menu is context sensitive and changes as different items are selected (see
figure 3.18).
3.3.2 Shortcuts
You can operate within TESSY with several shortcuts by using the keyboard:
Important: To use shortcuts, make sure to be in the editing mode of the current
view! Otherwise the shortcut will not work.
Warning: You cannot reverse the deleting of data. Before deleting make sure
this database is really not needed anymore.
copy: Ctrl+C
cursor positioning right: Tab (alternatively Ctrl+right arrow key; only within TDE).
Moves the cursor to the next input section on the right side of the line.
cursor positioning left: Shift+Tab (alternatively Ctrl+left arrow key; only within TDE).
Moves the cursor to the input section on the left side of the line.
delete: Del. Only possible if the item to delete does not contain any data, folders
or modules! Only manually created test cases can be deleted! You cannot delete test
cases created by the CTE; this prevents possible inconsistencies within the CTE document.
paste: Ctrl+V
rename: F2
Drag & Drop You can also operate with “drag & drop”, which is the same as “cut & paste”.
Shortcuts for certain views are described in the related section within chapter 6
Working with TESSY.
This chapter offers a brief introduction about unit testing with TESSY and the classification
tree method. It provides basic knowledge for organizing and executing a unit test in general
and in particular with TESSY. The chapter about the classification tree method helps you
to understand the logical system and to use the CTE.
Testing is an integral part of the software development process for embedded sys-
tems and its necessity to attain software quality is undisputed. This section gives a
comprehensive overview on aspects of unit tests of embedded software.
International standards like IEC 61508 require module tests. According to part 3 of IEC
61508, the module test shall show that the module under test performs its intended
function, and does not perform unintended functions. The results of the module testing
shall be documented.
IEC 61508 classifies systems according to their safety criticality. There are four safety
integrity levels (SIL), where 1 is the lowest level and 4 the highest, i.e. systems at level 4
are considered to be the most critical to safety. Even for applications of low criticality (i.e.
at SIL 1), a module test is already “highly recommended”. The tables contained in the
annexes of IEC 61508, Part 3 specify the techniques that should be used, e.g. for module
testing the technique “functional and black box testing” is highly recommended at SIL 1
already. Other techniques, such as dynamic analysis and testing are recommended at SIL
1 and highly recommended at SIL 2 and higher.
Part 4 of IEC 61508 defines a (software) module as a construction that consists of pro-
cedures and/or data declarations, and that can interact with other such modules. If we
consider embedded software which is written in the C programming language, we can take
a C-level function as a module. To prevent a mix-up between C-level functions and C
source modules, we will refer to the C-level functions as units from now on.
Also other standards like the British Def Stan 00-55, ISO 15504 or DO-178B require
module testing (where the nomenclature ranges from “module” to “unit” to “component”).
However, all standards have more or less the same requirements for that kind of test: the
tests have to be planned in advance, test data has to be specified, the tests have to be
conducted, and the results have to be evaluated and documented.
During unit testing of C programs, a single C-level function is tested rigorously, and is
tested in isolation from the rest of the application. Rigorous means that the test cases are
specially made for the unit in question, and they also comprise input data that may be
unexpected by the unit under test. Isolated means that the test result does not depend on
the behavior of the other units in the application. Isolation from the rest of the application
can be achieved by directly calling the unit under test and replacing the calls to other units
by stub functions.
Unit testing tests at the interface of the unit and does not consider the internal
structure of the unit; therefore unit testing is considered black-box testing. The
interface of the unit consists of the input variables to the unit (i.e. variables read by the
unit) together with the output variables (i.e. variables written by the unit). A variable can
both be an input and an output (e.g. a variable that is incremented by the unit), and the
return value of the unit - if present - is always an output. The structure of a test case
follows from the structure of the interface.
Unit testing is conducted by executing the unit under test with certain data for the input
variables. The actual results are compared to those predicted, which determines if a test
case has passed or failed.
Unit testing (of C-level functions, as described) is well suited to fulfill the requirements of
module testing for IEC 61508, because unit testing is
• a black-box test, because the internals of the unit are not taken into account,
and
• Finding errors early: Unit testing can be conducted as soon as the unit to
be tested compiles. Therefore, errors inside the unit can be detected very
early.
• Saving money: It is general knowledge that errors which are detected late
are more expensive to correct than errors that are detected early. Hence, unit
testing can save money.
• Reducing complexity: Instead of trying to create test cases that test the
whole set of interacting units, the test cases are specific to the unit under
test. Test cases can easily comprise input data that is unexpected by
the unit under test, or even random input data, which is rather hard
to achieve if the unit under test is called from within the fully-functioning
application. If a test fails, the cause of the failure can be easily identified,
because it must stem from the unit under test, and not from a unit further
down the calling hierarchy.
• Giving confidence: After the unit tests, the application is made up of single,
fully tested units. A test for the whole application will be more likely to
pass, and if some tests fail, the reason will probably stem from the
interaction of the units (and not from an error inside a unit). The search for
the failure can concentrate on that, and must not doubt the internals of the
units.
Unit testing verifies that certain input data generates the expected output data. Therefore,
units that do data processing in its widest sense (e.g. generation of data, analysis of data,
sorting, making complex decisions, difficult calculations) are best suited for unit testing. To
find such units, the application of metrics (e.g. the cyclomatic complexity according to
McCabe) may be appropriate.
Other criteria for selecting units to test may be how critical the functionality of the unit
is, or how often the unit is used in the application.
The interaction of the units is not tested during the unit test. This includes the semantics of
the parameters passed between units (e.g. the physical unit of the values) and the timing
relationships between units (e.g. does a unit fulfill its task fast enough to let a calling
unit also fulfill its tasks at the required speed?). In addition, the interrupt behavior of
the application is not in the scope of unit testing. Questions like “Does my interrupt
really occur every 10 ms?” or “Which interrupt prolonged my unit unacceptably?” are
not addressed by unit testing, because unit testing explicitly aims at testing the functional
behavior of the unit isolated from environmental effects such as interrupts.
Regression testing is the repetition of tests that have already passed after the implementa-
tion of bug fixes or improvements in the software. Regression testing proves that a change
in the software did not result in any unexpected behavior. Regression testing is a key
to software quality. Obviously, the practice of regression testing requires the automation
of the tests, because the effort to repeat the tests manually is too high. Even for
non-repetitive unit tests, proper tool support will save you lots of time, but tool support is
indispensable for the repetition of the unit tests.
The dilemma: It is commonly accepted that a software developer is badly suited to test
his own software, especially if the complete implementation, or the compliance of the
implementation with the specification is an issue (blindness against own faults). If the
developer has forgotten to implement a certain functionality, it is likely he will also forget
a test that will reveal the missing functionality. If the developer has misinterpreted the
specification, it is likely that his tests will pass in spite of the wrong functionality.
On the other hand, experience has shown that a tester who tests code not written
by himself must put a lot of effort into understanding the function's interface. The tester
must find out the meaning of the variables, and which values to use to conduct certain
tests. E.g., if the test specification requires the test of something “green”: which variable
(or variables) represents the color, and which value of the variable represents green? The
prediction of the expected results poses similar problems.
If the developer does not do tests himself, this gives rise to additional effort, because each
failed test has to be passed to the developer, he has to reproduce the failure, correct the
problem, and then normally a concluding external regression test has to take place.
Furthermore, additional effort arises from the fact that the developer will not hand out his
software to the QA department without having done at least some tests. This duplicated
test effort could be saved if the developer immediately starts testing by using the externally
predefined test cases.
The way out: A way out of that dilemma could be that a tester, who has not written the
code, specifies the test cases according to the functional specification of the unit, including
the expected results. He can use abstract data for this (e.g. color = green). The set of test
cases is handed over to the developer of the software. For him, it should be no problem to
set the input variables to the required values (e.g. the appropriate RGB value for green).
If a test fails, the developer can immediately correct the problem and re-run all tests that
have passed so far (regression testing). Testing is seen as an additional step during the
implementation of software, in comparison to the compiling step, where the compiler finds
all syntax errors, and the developer corrects them interactively, verifying his changes by
subsequent compiler runs.
However, standards require the organizational separation of development and test, due
to the initially mentioned blindness against one's own faults. Possibly it could be
sufficient to only separate the specification of the test cases from the development, and to
consider that the conduction of predefined test cases does not suffer from the
above-mentioned blindness.
For embedded software it is essential that the unchanged source code with all the non-
ANSI keywords and non-ANSI peculiarities is used for testing. For instance, some cross
compilers for embedded systems allow for bit fields that are smaller than the integer size,
e.g. 8-bit wide bit fields in a 16-bit application. This is forbidden by the ANSI C standard,
but justifiable by the perfect adaptation to the embedded system. Naturally, the unit test
results are worthless, if this illegal size cannot be maintained during the tests. This requires
specialized tools. Furthermore, it is also essential that the concluding tests at least execute
on the actual hardware, i.e. the embedded microcontroller. This is a challenge, but there
are ways to attenuate this. Using a cross compiler for the microcontroller in question is a
prerequisite, preferably in the exact version that will also be used for the user application.
Unit test tools can follow two technical approaches towards unit test: The test application
approach uses a special application for conducting the unit tests. This is the usual approach.
The original binary test uses the unchanged user application for testing.
The usual method for unit test tools to conduct unit tests is to generate a test driver (also
called test harness) and compile the test driver together with the source code of the unit
under test. Together they form the test application. The test driver includes startup code
for the embedded microcontroller, the main() function entry, and a call to the unit under
test. If required, the test driver also contains code for stub functions and the like. For each
unit to test, a separate test application is created. This test application is used to conduct the
unit tests. For that, the test application is loaded into an execution environment capable
of executing the test application. This execution environment is normally a debugger
connected to an (instruction set) simulator, an in-circuit emulator stand-alone or connected
to a target system, a JTAG or BDM debugger or the like. After test data is transferred to
the execution environment, (the test data may already be included in the test application),
tests are conducted and the results are evaluated.
To execute the test application on the actual hardware, the test application must not only
be compiled using a cross compiler for the microcontroller in question, but also the test
application must fit into the memory present on the actual hardware. Also, the startup
code of the test application must take into account peculiarities of the actual hardware,
e.g. the enabling of chip selects and the like. Making the test application fit into memory
can be simplified by using an in-circuit emulator, which provides emulation memory, and
serves as a kind of generalized hardware platform for the microcontroller in question.
When the actual hardware has to be used and if memory on this hardware is very limited, the
test application must be minimized to fit into this memory. This is especially challenging for
single chip applications, where only the internal memory of the microcontroller is available.
If test data is included in the test application (and memory is limited), a single test
application can only include a few test cases, which in turn means several test applications
for the test of one unit, which is cumbersome. An approach which avoids this keeps the
test data separated from the test application, which allows not only for a minimized test
application, but also allows you to change the test data without having to regenerate the
test application.
Another approach is to use the unchanged user application for unit testing. This resembles
the manual test that is usually done by a developer after the application is completed.
The complete application is loaded into the execution environment, and the application is
executed until the unit to be tested is eventually reached. Then the input variables are set
to the required values, and the test is conducted.
The advantage of the Original Binary Test approach is that the unit under test is tested
exactly in its final memory location. There is no extra effort (or hassle) for compiling and
linking a test application, because the user application is used, which is already compiled
and linked or had to be compiled and linked anyway. Because the user application must
fit in the memory anyway, problems regarding the size of the application can be neglected.
Even applications that already reside in the ROM of the hardware can be tested. Even if
the cross compiler used to compile the user application is no longer at hand, tests are still
feasible.
However, this Original Binary Test approach has some disadvantages compared to using a
test application:
• There is no control over the test execution. It depends on the user applica-
tion, when the unit under test is reached. It may be the case that the unit
under test is never reached, or only after some special external event has
happened, e.g. the push of a button of the actual hardware and an interrupt
resulting from this.
• During the Original Binary Test, stub functions cannot be used. This is clear
because the application is already linked using the actual functions that are
called by the unit under test. A unit is always tested using the other units of
the application. Therefore, the unit under test is not isolated from the rest
of the application, and errors of called units may show up during the test of
the unit under test.
• It is not possible to use arbitrary test data for the unit test. For instance, if
the unit under test gets its test data via a pointer pointing to a memory area,
the amount of test data must fit into this memory area, which was allocated by
the user application.
Apart from its easy usage, and although it possibly could be the only means to do some
unit testing at all, the Original Binary Test has strong disadvantages concerning aspects
that are essential for proper unit testing; therefore one could even insist that it is not a
unit test in its strictest sense.
4.1.5 Conclusion
Besides being required by standards, unit testing reduces the complexity of testing, finds
errors early, saves money, and gives confidence for the test of the whole application. If
used in the right way, unit testing can reduce development/test time and therefore reduce
costs.
4.2.1 General
Testing is a compulsory step in the software development process. The planning of such
testing often raises the same questions:
Anyone who has been confronted with such issues will be glad to know that the CTM offers
a systematic procedure to create test case specifications based on a problem definition.
The CTM is applied by a human being. Therefore, the outcome of the method depends
on the experience, reflections, and appraisals of the user of the CTM. Most probably,
two different users will arrive at different sets of test case specifications for the
same functional problem. Both sets could be considered correct, because there is no
absolute correctness; but it should be clear that there are sets of test cases that are
definitively wrong or incomplete. Because of the human user, errors cannot be avoided.
One remedy is the systematics inherent in the method, which guide the user and stimulate
his creativity. The user shall specify test cases with a high probability to detect a fault in
the test object. Such test cases are called error-sensitive test cases. On the other hand,
the user shall avoid specifying too many superfluous test cases, i.e. test cases that do not
increase test intensity or test relevance. Such test cases are called “redundant” test
cases. It is advantageous if the user is familiar with the field of application the method is
applied in.
The CTM is a general method: It can not only be applied to module/unit testing of embedded
software, but to software testing in general and also to functional testing of problems
that are not software related. The prerequisite to apply the method is to have a functional
specification of the behaviour of the test object available. The CTM incorporates several
well-known approaches for test case specification, e.g. equivalence partitioning and
boundary value analysis.
The CTM stems from the former software research laboratory of Daimler in Berlin, Germany.
The first step is to describe the expected behaviour of the test object, e.g. “If the button is
pushed, the light will go on; if the button is released, the light will go off”. Data processing
software normally solves functional problems, since input data is processed according to an
algorithm (the function) to become output data (the solution).
Analyse the functional specification. This means you think about this specification with
the objective to figure out the test-relevant aspects of the specification. An aspect is
considered relevant if the user expects that aspect to influence the behaviour of the test
object during the test. In other words, an aspect is considered relevant if the user wants
to use different values for this aspect during testing. To draw the tree, these aspects are
worked on separately. This reduces the complexity of the original problem considerably,
which is one of the advantages of the CTM.
Consider systems that measure distances in a range of some meters, e.g. the distance to
a wall in a room. Such systems usually send out signals and measure the time until they
receive the reflected signal. They can be based on two different physical effects: One
can use sonar to determine the distance, whereas the other can use radar.
The question is now: Is the temperature of the air in the room a test relevant aspect for the
test of these measurement systems? The answer is yes for one system and no for the other:
The speed of sound in air (sonar) depends on the temperature of the air. Therefore, to
get exact results, the sonar system takes this temperature into account during the calculation
of the distance. To test whether this works correctly, you have to do some tests at different
temperatures. Therefore, the temperature is a test-relevant aspect for the sonar system. On
the other hand, we all know that the speed of a radar signal, which travels at the speed of light,
is independent of the temperature of the air it travels in (it does not even need air to travel).
Therefore, the temperature of the air is not a test-relevant aspect for the testing of the radar
system. It would be superfluous to do testing at different temperatures.
This example shows that careful thinking is needed to figure out (all) test-relevant aspects.
It would lead to poor testing if someone simply takes the test cases for the radar system
and applies them to the sonar system without adding some temperature-related test cases.
Additionally, this example illustrates that it is advantageous to have some familiarity with the
problem field at hand when designing test cases.
After all test relevant aspects are determined, the values that each aspect may take are
considered. The values are divided into classes according to the equivalence partitioning
method: Values are assigned to the same class, if the values are considered equivalent for
the test. Equivalent for the test means that if one value out of a certain class causes a test
case to fail and hence reveals an error, every other value out of this class will also cause
the same test to fail and will reveal the same error.
In other words: It is not relevant for testing which value out of a class is used,
because all values of a class are considered to be equivalent. Therefore, you may take an
arbitrary value out of a class for testing, even the same value for all tests, without
decreasing the value of the tests. However, the prerequisite for this is that the equivalence
partitioning was done correctly, which is the responsibility of the (human) user of the CTM.
Please note:
• Equivalent for the test does not necessarily mean that the result of the test
(e.g. a calculated value) is the same for all values in a class.
An ice warning indication in the dashboard of a car shall be tested. This ice warning indication
depends on the temperature reported by a temperature sensor at the outside of the car, which
can report temperatures from -60°C to +80°C. At temperatures above 3°C the ice warning
shall be off, at lower temperatures it shall be on.
It is obvious that the temperature is the only test-relevant aspect. To keep the testing
effort reasonable, we do not want to have a test case for every possible temperature value.
Therefore, all possible temperature values need to be classified according to the equivalence
partitioning method.
It is best practice to find out if invalid values may be possible. In our case a short circuit or
an interruption of the cable could result in an invalid value. Therefore, we should first
divide the temperature values into valid and invalid ones. The invalid values can relate to temperatures
that are too high (higher than 80°C) and to ones that are too low (lower than -60°C). It is
tempting to form two classes out of the valid temperatures: The first class shall contain all
values that result in the ice warning display being on (from -60°C to 3°C) and the other class
shall contain all values that result in the ice warning display being off (from 3°C to 80°C).
The equivalence partitioning in the figure above leads to at least four test cases, because we
need to take a value out of each class for the tests.
you can consider each class in isolation from the other classes and decide if and how it
needs to be sub-divided. Furthermore, this equivalence partitioning on several levels
documents the thoughts and stages of work leading to the final equivalence partition. This
serves the understandability and traceability of the result. It also allows you to easily
revert steps if the final equivalence partition has become too fine-grained.
For the ice warning example, the classification of the valid values is not detailed enough,
because according to the equivalence partitioning method it would be sufficient to use a
single, arbitrary value out of a class for all the tests. This could for instance be the value
2°C out of the class of temperatures for which the ice warning display is on. In consequence,
no test with a temperature below zero would check whether the ice warning display is on. To
avoid this consequence, you could divide this class further according to the sign of the
temperature:
Using the CTM, the result of repeating the equivalence partitioning for all test-relevant
aspects is depicted in the CT. The root represents the functional problem. Test-relevant
aspects (classifications) are drawn as nodes depicted by rectangles.
Classes are depicted by ellipses.
The idea behind using boundary values is that values at the borders of a range of values are
better suited to form error-sensitive test cases than values in the middle. The idea behind
boundary value analysis is contrary to equivalence partitioning, because one method takes
all values in a set as equivalent while the other method prefers special values in such a set.
Despite the fact that the idea behind boundary value analysis is exactly the opposite of
equivalence partitioning, both approaches can be expressed in the CTM.
f. Testing a hysteresis
The current problem specification of the ice warning example does not mention hysteresis.
We could extend the problem specification such that fast changes in
the state of the ice warning display shall be avoided. For instance, the ice warning display
shall be switched off only after the temperature has risen to more than 4°C. This could be
realized by a hysteresis function. The necessary test cases for such a hysteresis function
can be specified by the CTM.
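Such a hysteresis function could look like the following C sketch (hypothetical code; the thresholds 3°C and 4°C follow the extended specification above):

```c
#include <stdbool.h>

/* Hysteresis sketch: switch the warning on at or below 3 C, switch it
   off only after the temperature has risen to more than 4 C; between
   these thresholds the previous state is kept. */
bool ice_warning_next(bool currently_on, int celsius)
{
    if (celsius <= 3) return true;    /* cold: warning on            */
    if (celsius > 4)  return false;   /* clearly warm: warning off   */
    return currently_on;              /* hysteresis band: keep state */
}
```

Note that testing a hysteresis requires sequences of inputs, because the output depends on the previous state and not only on the current temperature.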
Test cases are specified in the so-called combination table below the CT. The leaf classes
of the CT form the head of the combination table. A line in the combination table depicts
a test case. The test case is specified by selecting the leaf classes from which values for the
test case shall be used. This is done by the user of the method by setting markers in the
line of the respective test case in the combination table.
Figure 4.4: Result of the CTM: tree (above) with combination table (below)
It may be tempting to combine every class with every other class during the specification
of the test cases. Besides the fact that not every combination might be possible for logical
reasons, it is not the intention of the CTM to do so; otherwise, it could be done automatically
by a tool. This would lead to many test cases, with the disadvantages of losing the overview
and spending too much effort on executing the test cases.
The objective of the CTM is to find a minimal, non-redundant but sufficient set of test
cases by trying to cover several aspects in a single test case whenever possible. As with
the drawing of the tree, how many and which test cases are specified depends on the
judgment and experience of the user of the method.
Obviously the size of the tree influences the number of test cases needed:
A tree with more leaf classes naturally results in more test cases than a tree with fewer leaf
classes. The minimum number of test cases needed for a given tree is called the minimum
criterion. It can be calculated from the consideration that each leaf class should be marked
in at least one test case, and that some leaf classes cannot be combined in a single test
case, because the classes exclude each other.
Similarly, a maximum criterion can be calculated, which gives the maximal number of test
cases for a given CT. A rule of thumb states that the number of leaf classes of the tree gives
the order of magnitude for the number of test cases required for a reasonable coverage of
the given tree.
Problem definition:
A start value and a length define a range of values. Determine if a given value is within the
defined range or not. Only integer numbers are to be considered.
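Read naively, the problem definition could be implemented in C roughly as follows (a sketch under the assumption that both borders, the start value and start + length, belong to the range; the open questions about negative lengths and overflow are discussed later in this section):

```c
#include <stdbool.h>

/* Naive sketch of the functional problem: a range is defined by a
   start value and a length; decide whether v lies within it.
   Assumption: both borders belong to the range. */
bool is_value_in_range(int range_start, int range_length, int v)
{
    return range_start <= v && v <= range_start + range_length;
}
```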
It is obvious that exhaustive testing is practically impossible, because we get 65536 * 65536 *
65536 = 281,474,976,710,656 test cases, even if we assume only 16-bit integers. If we were to
assume 32-bit integers . . . well, we better do not.
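The count above can be verified quickly, since 65536 * 65536 * 65536 is 2 to the power of 48:

```c
/* Number of exhaustive test cases for three independent 16-bit
   inputs: 65536 * 65536 * 65536 = 2^48. */
unsigned long long exhaustive_cases(void)
{
    return 65536ULL * 65536ULL * 65536ULL;
}
```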
The start of the range and the length can be regarded as test relevant aspects. This is
convenient since, according to the problem definition, a range of values is defined by a
start value and a length. It reflects the intention to use different values for the start and
the length during testing.
We should have some test cases, which result in inside, and other test cases which result in
outside. We call the corresponding aspect position, because the position of the value under
test with respect to the range determines the result. So the three test-relevant aspects
to be used for classifications are start value, length and position, and they thus form the
basis of the CT:
Now classes are formed for the base classifications according to the equivalence partitioning
method. Usually, the problem specification gives us hints on how to form the classes. For
example, if the problem specification stated: "If the start value is greater than 20, the length
value doubles", then we should form a class for start values greater than 20 and a class for
start values smaller than or equal to 20.
Unfortunately, the problem specification at hand is too simple to give us similar hints.
However, since the start value can take on all integer numbers, it would be reasonable to
form a class for positive values, a class for negative values, and another class for the value
zero. It would also be reasonable to form just two classes, e.g. one class for positive start
values including zero and the other class for negative start values. This depends on one's
emphasis on having zero as a value for the start of the range in a test case or not.
Because of the systematic nature of the CTM, and because range_length is an integer
just like range_start, it is consistent to use for range_length the same classes as for
range_start. This results in the following tree:
To specify a first range (to be used in the first test case), we have to insert a line in the
combination table and to set markers on that line:
Figure 4.9: A first specification for the range in the combination table
Two markers are set on the line for the first specification. One marker selects the class
positive for the start of the range. The other marker selects the class positive for the
length of the range. A range with the start value of, say, 5 and a length of 2 would accord
to the specification. This first specification was named trivial.
We can insert a second line in the combination table and specify a much more interesting
test case:
Figure 4.10: A second specification for the range in the combination table
For the second specification, again two markers are set. They specify that a negative value
shall be used both for the start and for the length of the range. Hence a range with the start
value of -5 and a length of -2 would accord to the second specification. But this value
pair raises some questions: Shall the value -6 lie inside the range? Or shall the value -4 lie
inside the range? Or shall no value at all lie inside the range, if the length of the range is
negative? Each opinion has its supporters and it is hard to decide what is to be considered
correct. Actually, at this point it is out of our competence to decide what is correct. We
have found a problem of the specification!
Probably a test case using a negative length would not have been devised if the test case
specification had been done spontaneously and non-systematically. But a negative length
is completely legal for the functional problem specification that was given above. If you
consider that the problem specification at hand was a very simple one, you may imagine
how likely it is to overlook a problem in a more comprehensive and complicated problem
specification.
In case we are not satisfied with the fact that a fixed single positive value, e.g. 5, may serve
as value for the start of the range in all test cases, we can sub-divide the class positive
according to a suitable classification. In our example, we classify according to the size.
The idea behind this is to have a class containing only a single value, in our case the
highest positive value existing in the given integer range. We use this value because it is
an extreme value, and as we know, using extreme values (or boundary values) in test cases
is well-suited to produce error-sensitive (or interesting) test cases.
In the figure above, the positive values for the start of the range are subdivided according to
their size. This results in the two classes normal positive and maximal positive. The class
maximal positive holds the highest possible positive value (i.e. MAX_INT), and the class
normal positive holds all other positive values. This satisfies mathematical completeness.
Remark 1: Another possibility to classify the positive start values would have been, for
instance, to classify into odd and even values. This would have been completely legal. It
would probably also have been sensible for e.g. a problem of number theory, but it is not
target-oriented for the problem at hand.
Remark 2: Please note that for the moment we do not know, and need not know,
the size (in bits) of the integers used in the problem at hand. We simply specify "the
highest positive value in the given integer range". This keeps our test case specification
abstract! E.g. our test case specification is appropriate for any integer size. As soon as we
assume we use e.g. 16-bit integers, and therefore parameterize our test case by specifying
32767 as the value in the class maximal positive, we lose this abstraction. E.g. if we port the
parameterized test case to a 32-bit integer system, the test case loses its meaning. This is
not the case if we port the abstract test case specification.
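In C code, this abstraction has a direct counterpart: instead of hard-coding 32767, a test harness can refer to INT_MAX from <limits.h>, which denotes "the highest positive value in the given integer range" on any platform (an illustration only; TESSY's own test data notation is not shown here):

```c
#include <limits.h>

/* "Maximal positive" kept abstract: INT_MAX is 32767 on a platform
   with 16-bit int and 2147483647 with 32-bit int, so no porting of
   the value is needed. */
int maximal_positive_start(void)
{
    return INT_MAX;
}
```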
With the CT extended according to figure 4.11 (The CT for is_value_in_range, 4th step),
we can insert an additional line in the combination table and specify again an interesting
range for a third test case:
The third range specification in the figure above combines the highest positive number for
the start value of the range with a positive length, i.e. the range exceeds the given integer
range.
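In C, the addition range_start + range_length that this test case provokes is undefined behavior on signed overflow, so a defensively written test object would have to detect the situation before adding (a hedged sketch, not taken from the manual):

```c
#include <stdbool.h>
#include <limits.h>

/* True if computing range_start + range_length would exceed INT_MAX.
   The comparison is rearranged so that it cannot overflow itself. */
bool range_end_overflows(int range_start, int range_length)
{
    return range_length > 0 && range_start > INT_MAX - range_length;
}
```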
The situation with the third range specification is similar to the situation depicted in the
figure above. The situation raises some questions: Will the situation be handled sensibly
and gracefully by the test object? Or will it crash due to the overflow? Will the negative
values on the left-hand side be considered to lie inside the range or not? And what is correct
with respect to the last question? The problem specification above does not give an answer
to the latter question; again we have found a weak point in the problem specification.
To sum up, designing test cases according to the CT method has revealed two problems
of the problem specification and has led to interesting test cases so far.
In the figure above, one possible completed CT is depicted. Classifications are depicted by
rectangles, classes by ellipses. The “range” node is a composition with two classifications
as child elements. This tree is discussed in the following:
• Analogous to the class maximal positive for the start value of the range, a
class maximal negative is introduced. The idea behind this class is to combine
the maximal negative start value with a negative length of the range, which
shall provoke an underflow or negative wrap-around. This idea comes from
the systematic nature of the CTM: If a positive wrap-around is seen as an
interesting test case, a negative wrap-around should also be exercised.
• The final tree still features the three initial classes positive, zero, and negative
for the length of the range. It is important to note that the tree reveals at
a glance that nothing like maximal positive length or similar is considered to
be useful for the testing problem at hand.
• It is obvious that a position can either be inside or outside the range, hence
this classification suggests itself. Furthermore, it is obvious that there are
two different areas outside the range: below the range and above the range.
This is reflected in the classification position outside. (If the tree lacked
such a classification, it might well be considered incorrect.)
• The class inside of the classification position could well be a leaf class of the
classification tree. However, in the CT in the figure above, this class is
subdivided further into the sub-classes range_start, opposite_border, and inlying.
This is done to force the use of boundary values in the test cases. If a test
case specification selects the class range_start, the value to be checked
for being inside the range shall take the value of the start of the range,
that is the lowest value that is considered to be inside the range, a boundary
value. The class opposite_border is intended to create an analogous test
case specification, but using the highest value that is considered to be inside
the range. The class range_start and the class opposite_border both contain
only a single value. All other values inside the range are collected in the class
inlying; this class exists mainly because of the requirement for completeness
of equivalence partitioning. A similar approach to using boundary values is
visible in the classes at border for positions outside the range.
In the next figure, the same CT is depicted with a completed combination table, which
results in a complete test case specification for the functional problem:
The test case specification above lists 14 test cases. Please note that these are specified
by the user and depend on the user's judgment. Based on the CT, some values can be
determined that provide clues to the number of test cases required.
The first value is the number of test cases that results if each leaf class is included at least
once in a test case specification. This number is known as the minimum criterion. In our
example, the largest number of leaf classes, namely seven, belongs to the base classification
position. Seven is thus the value of the minimum criterion. The maximum criterion is the number
of test cases that results when all permitted combinations of leaf classes are considered.
In our example, the maximum criterion amounts to 105 (i.e. 5 * 3 * 7). The maximum
criterion takes into account that it is not possible to select e.g. a negative length and
a positive length for the same test case specification, because this is precluded by the
construction of the tree. The maximum criterion does not take into account that it is also
not possible to select e.g. a zero length and inlying; this combination is precluded not by the
construction of the tree, but by the semantics of the functional problem.
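The two criteria follow from simple arithmetic over the leaf classes per base classification (5 for the start value, 3 for the length, 7 for the position):

```c
/* Minimum criterion: the largest number of leaf classes in any base
   classification (each leaf must appear in at least one test case).
   Maximum criterion: the product of the leaf class counts, i.e. all
   tree-compatible combinations. */
int minimum_criterion(void) { return 7; }
int maximum_criterion(void) { return 5 * 3 * 7; }
```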
A reasonable number of test case specifications obviously lies somewhere between the
minimum and the maximum criterion. As a rule of thumb, the total number of leaf classes
gives an estimate of the number of test cases required to get sufficient test coverage. In
the test case specification, the CT has 15 leaf classes, which fits well with the 14 test cases.
From the test case specification in the figure above, you can deduce how the functional
problem specification was extended with respect to the questions raised in the sections "A
second range specification" and "Another interesting test case specification":
• “If the length of the range is negative, are there values that can be inside the
range?” The answer is “yes”, because in test case specification no. 5 and
no. 6 a negative length shall be used and the position of the value shall be
inside the range.
• “If the length of the range exceeds the given integer range, shall negative
values be inside the range?” Test case specification no. 12 clarifies that this
should not be the case.
The leaf class inlying is selected for only one test case specification (no. 1). This reflects the
fact that this class exists only because of the requirement for mathematical completeness
of equivalence partitioning, and not because the inlying values are considered to produce
error-sensitive test cases.
Here an alternative test case specification for the functional problem specification at hand
is depicted:
What are the differences compared to the more elaborate test case specification in the
section above?
• The start value of the range is not mentioned in the CT. This means, the
start value is not considered to be a test-relevant aspect by the user of the
CTM. In consequence, any arbitrary value can be used as the start value in the
four test cases. This value can be fixed for all test cases, but does not have to
be.
• The usage of boundary values is not enforced by the alternative test case
specification. This is questionable, because boundary values produce error-sensitive
test cases. The alternative test case specification minimizes the testing effort (by
specifying only four test cases), but at the cost of testing thoroughness.
But the point is not which test case specification is better. The main point is:
This chapter will show you on the basis of prepared exercises how to work with TESSY:
In this exercise we will get to know the basic functionality of testing with TESSY. We will
work with the example "is_value_in_range", which will give you a quick introduction and
an overview, as well as the most important terms.
Central test activities
A unit test in TESSY is divided into the following central test activities:
• Determining test cases.
Usually you would import your requirements first. To keep this exercise understandable
for beginners, we will first exercise a simple project, then import some basic
requirements and restart the test!
We will now follow a simple source code example to show how to exercise those activities
with TESSY.
Example “is_value_in_range”
To understand TESSY´s file system and databases, consult section 3.1 Creating
databases and working with the file system.
Þ Start TESSY by selecting “All Programs” > “TESSY 4.x” > “TESSY 4.x”.
If another project is opened within TESSY, click “File” > “Select Project” >
“New Project” and then click on .
TESSY now creates the project “Example1” (see figure 5.3). This will take a
few seconds.
TESSY now opens your project. This will take a few seconds.
Organizing "Example1"
The project "Example1" is opened within the Overview perspective. You can create different
folders within a test collection, each containing modules with various test objects. To
keep it simple, we will now create one test collection with one folder.
Þ In the Test Project view click on the icon (New Test Collection) in the tool
bar of the view.
The module relates to one or more source files which are to be tested.
Figure 5.4: Test collection “Is_value_in_range” with an example folder and module
Rename or delete a module or a folder by using the context menu (right click >
“rename” or “delete”) or the key F2.
Usually at this point you will have to specify the target environment, that is to determine
the compiler, the target and the microcontroller. You will do that in the “Test Environment
Editor” which we will get to know later.
Please notice below in the Properties view, on the "General" tab, that the GNU GCC compiler
is already selected for this module (see figure 5.5), which is sufficient for our example.
C-source file
Now we will add the source file to the module. The source file contains the C function to
be tested:
5.1.3 Adding the test object and analyzing the C-source file
We will use the example C-source file “is_val_in_range.c” which is stored under
“C:\Program files\Razorcat\TESSY_4.x\Examples\IsValueInRange”.
Copy the C-source file, paste it into the project root and add it to the module:
Adding the
C-source
Þ Select the source file “is_val_in_range.c” from the folder where you just
pasted the source.
Þ In the Test Project view above click on (Analyze Module) to start the
module analysis (see figure 5.8).
TESSY now analyzes the C-source file; this will take a few seconds.
TESSY will also analyze the C-source file if you just click on the white arrow
after adding the C-source file.
After successful processing, all functions defined in the C-source file are displayed as
children of the module within the Test Project view (see figure 5.9).
Figure 5.9: The function of the C-source is displayed as child of the module.
Our sample C-source file contains only one function, our test object “is_value_in_range”.
The term "test object" denotes the functions within the module that we intend
to test.
Determine passing directions
Now we can edit the interface information for every test object and determine which values
are input and which ones are output variables. Input values are all interface elements that
are read by the test object. Output values are written by the test object.
Upon opening the module, TESSY will try to set the default passing directions (input or
output) automatically. You can change these default interface settings to your needs.
In our sample the passing directions are already defined; you do not have to take
any action.
Þ In the Interface view open the Parameter section to see the input and
output values that are already defined in our example (see figure 5.11).
Usually now you would design the test cases, either manually or within the Classification
Tree Editor (CTE), based on specifications of your test object.
Since the CTE is a subject of its own, we will not make use of the CTE in this example,
but simply enter some ad-hoc test data manually.
To learn about the CTE refer to section 6.7 CTE: Designing the test cases or follow
the Quickstart 2: The Classification Tree Editor (CTE).
Now we will add three test cases each with one test step within the Test Items view:
The first test case is created and a test step is automatically added.
Þ Expand the test cases by clicking on the arrows in front of the test cases.
Figure 5.13: Three test cases were added in the Test Items view
• The first number is the number of the test case, the number in brackets shows the
quantity of the test steps included.
• Test case numbers will be counted continuously: If you delete test cases, new test
cases will get a new number and existing test cases will not be renumbered.
• If you cannot click on “New Test Case” or “New Test Step” because the icons are
inactive, you might be in the wrong selection: Select the test object within the
Test Project view, then select the Test Items view.
Þ Switch to the perspective “TDE - Test Data Editor”. The TDE will also open
with a double click on a test case or a test step.
In the Test Data view you can see the test cases and steps in tabular form.
After saving, the symbol of the test object in the Test Project view as well as
the symbol of the test case in the Test Items view turns yellow to indicate that
the test case is ready to run (see figure 5.14).
Figure 5.14: Data is entered, test step turns yellow and test case is ready to run.
Please notice the changes of the test object icon to indicate different conditions:
Þ Now enter data for the other two test cases as shown in table 5.18.
                Test case 1.1   Test case 2.1   Test case 3.1
range_start:          3              20               0
range_length:         2               8               5
v1:                   4              22               6
• Test case 1.1: The range starts at 3 and has a length of 2. Therefore, the range
ends at 5 and the given value 4 is supposed to be inside of the range (yes).
• Test case 2.1: The range starts at 20 and has a length of 8. Therefore, the range
ends at 28 and the given value 22 is supposed to be inside of the range. Because we
want to force an incorrect output, we state this to be not inside of the range (no).
• Test case 3.1: The range starts at 0 and has a length of 5. Therefore, the range
ends at 5 and the given value 6 is supposed NOT to be inside of the range (no).
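The reasoning behind the three expected results can be cross-checked with a small reference sketch (a hypothetical helper, assuming the range ends at range_start + range_length with both borders inclusive, as the bullet points above describe):

```c
#include <stdbool.h>

/* Reference logic for the tutorial's test data: the range covers
   range_start .. range_start + range_length, borders inclusive. */
bool in_range(int range_start, int range_length, int v)
{
    return range_start <= v && v <= range_start + range_length;
}
```

Note that in_range(20, 8, 22) yields true, which is exactly why the deliberately wrong expected result "no" will make test case 2.1 fail in the next step.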
The test step icons in the Test Items view will now turn yellow (see figure 5.16). This
indicates that we are now ready to run the test.
Þ Click on (Start Test Execution) in the tool bar of the Test Project view.
A progress dialog will be shown while TESSY generates, compiles and links the
test driver and runs the test. This will take a few seconds.
After the test run, test case icons (within TDE) should be (see figure 5.17):
• Within the Test Data view the second test step is marked with a red cross
and the expected result "no" is marked red to indicate that the result did
not match the expected result (the actual result is "yes").
• Within the Test Project view the test collection, folder, module and test
object are marked with a red cross to indicate that not all results matched
the expected results.
• The Test Items view indicates with a red cross that test case 2 did not match
the expected result.
You can see the results of every test step within the Test Results view.
To analyze the source code coverage of the test, repeat the test run with the branch,
MC/DC and MCC-coverage instrumentation:
Þ In the tool bar of the Test Project view click on the arrow next to the Execute
Test icon and select “Edit Test Execution Settings . . . ”.
Þ In the following dialog tick the boxes “Run” (default) and “Create New Test
Run”.
Þ Choose the instrumentation “Test Object” and untick the box “Use preselected
coverage”.
Þ Tick the boxes for “Branch Coverage (C1)” and “Modified Condition / Decision
Coverage (MC/DC)” (see figure 5.19).
Þ Click on “Execute”.
Figure 5.19: Selecting Branch and MC/DC Coverage for test run
A progress dialog will be shown while TESSY generates, compiles and links the test driver
and runs the test. This will take a few seconds.
Analyzing with the CV
The CV shows the results of the coverage measurement of a previously executed test.
You can also select a default coverage measurement for your whole project or
any specific module or test object. Refer to section 6.2.3.4 Coverage tab.
The flow chart view displays the code structure and the respective coverage in graphical
form. Within each flow chart, you will see the decisions and branches of the function being
displayed. Green and red colors indicate whether a decision has been fully covered or a
branch has been reached.
The Branch C1 Coverage view displays the branch coverage for each individual test case
and test step as well as the total coverage for all test cases and test steps.
The MC/DC-Coverage view displays the coverage of the currently selected decision within
the flow chart view (see figure 5.23). If no decision is selected (as initially when starting
the CV), the MC/DC Coverage view is empty.
The current example is_value_in_range has only simple decisions, for which MC/DC is
the same as branch coverage.
5.1.10.4 Analyzing
• three test cases were executed (each with one test step).
• the first decision has an else branch on the right, that was executed three
times.
• the if branch on the left of the first decision was not reached and is therefore
marked red.
• the second decision was fully covered and is therefore marked green.
• the if branch on the left of the second decision was reached two times,
the else branch was reached once.
The respective code section is highlighted within the source code view (see
figure 5.24). This allows you to trace the execution path of the selected test
step.
The respective code section is highlighted within the source code view (see
figure 5.25).
Þ In the Test Project view of the Overview perspective click on the arrow next
to the Generate Report icon and select "Edit Test Details Report Settings. . . ".
Þ In the dialog click on the button "Browse. . . " and select your project folder
"Example1".
Þ Click on “Make New Folder” and create a folder “Reports” (see figure 5.26).
TESSY creates the report within the new folder. This will take a few seconds.
The selection of the folder is stored and will be used for any further
report generation.
Important: If you get an error “No matching program found for the
file . . . ”, then the Adobe Reader is not installed, which you need to open
the reports (PDF files). Download and install the Adobe Reader from
http://get.adobe.com/reader/ and generate the report again.
If you use a Version Control System (VCS) providing keyword expansion to embed
version control information in your source files, TESSY will display such expanded
keywords within the test report. The following keywords are available: $Revision$
(Revision number), $Author$ (User who checked in the revision) and $Date$ (Date
and time stamp for the revision).
You can change the default Razorcat logo within the reports to your own company
logo: In the menu bar select “Window” > “Preferences” > “Test Report Options”.
Click on "Browse" and select your logo image file (PNG, JPG or GIF).
Requirement Management
We will now import some very basic requirements and repeat some steps of this exercise.
This way you get to know the requirement management feature and you can consolidate
the workflows you just learned.
Þ Right click within the blank RQMT Explorer view and select “Import” from
the context menu (see figure 5.28).
In the following Import dialog you can import various file formats. In our example we select
the file we just copied into our project:
Þ Click on “. . . ” and select the file “Is Value In Range Requirements.txt” from
your project.
Þ Leave the File Content Type and the Target Document as it is and click “OK”
(see figure 5.29).
The newly imported requirement document will be displayed in the RQMT Explorer view
(see figure 5.30).
Þ Right click the document and select “Properties” from the context menu.
Þ Change the alias to “IVIR” and click “OK” (see figure 5.31).
The document alias will be used for reporting, in order to have an abbreviation of
the document name when building the requirement identifier, e.g. IVIR-[1.0] in
our example.
Committing
Before linking any tests to a requirement, the respective requirements document needs to
be checked in as an initial revision:
Þ Select the document and click on (Commit Changes) in the global tool bar.
Þ Enter “Initial revision” as commit comment and click “OK” (see figure 5.32).
The view shows the imported requirements and the module, test object and
test cases in a tree-based arrangement.
Use the toggle buttons on the right to link modules, test objects or test cases
to requirements:
Þ Link the second test case with the second requirement (see figure 5.33).
• Test case 1.1: range start 3, length 2, given value 4, supposed to be inside of the
range (yes)
• Test case 2.1: range start 20, length 8, given value 22. Because we wanted to force
an incorrect output, we stated this to be not inside of the range (no).
Þ In the Test Item view select the first test case and have a look at the Test
Definition view: It shows the requirements we just linked with our test cases.
Figure 5.34: Test Definition view within TDE with linked requirement
At this stage we can already generate a report showing the planned test case for our
requirements:
Þ Switch to the Test Project view of the Overview perspective and click on the
arrow next to the Generate Report icon .
Þ A dialog for the settings for the Planning Coverage Report will open.
Figure 5.36: Dialog of the settings for the Planning Coverage Report
Þ Click on “Generate”.
Planning
Coverage Report
The report shows the available requirements and the linked test cases. It provides an
overview of the planned tests, i.e. whether all requirements are covered by at least one test case.
Since we have links to two of our requirements, the resulting requirement coverage should
be as shown above.
Notice the usage of the requirement document name and alias within the report!
It is important to select an appropriate alias in order to get useful report outputs.
We have planned test cases for the first two requirements, whereas the third requirement
is not yet linked with any test case, because there are no tests available to validate this
requirement.
We will now execute our tests again to see the results of the test cases with respect to the
linked requirements within the execution coverage report.
Þ Generate a test details report to review the results on test object level: Click
on the arrow next to the Generate Report icon (see figure 5.38).
The report will show additional paragraphs with the linked requirements after
the overview pages and for each test case (see figure 5.39):
Test Details
Report
Execution Coverage Report
Now we will generate a coverage report showing the test case results with respect to our
requirements:
Þ In the global tool bar click on the arrow next to the Generate Report icon
and select “Generate Execution Coverage Report” (see figure 5.40).
TESSY creates the coverage report showing the available requirements and the
results of the linked test cases. It provides an overview of the current test
status, e.g. whether tests for any requirements have failed.
Since one of our test cases passed while the other one failed, the
resulting requirement coverage should be as in figure 5.41.
The first requirement has one test case linked which was successfully executed, the second
requirement has also one test case linked, but this one failed. The third requirement has
still no test case assigned.
If the interface of the test object changes, TESSY will indicate the changes with specific
test readiness states. With the Interface Data Assigner (IDA) you can assign the elements
of a changed (new) interface to the elements of the old one.
In this section we will change the interface of the test object by editing the C-source and
exercise a reuse operation within the IDA.
The target of this section is to show you the three different test readiness states
“changed”, “deleted” and “new”.
Therefore we will first change a test object and add two new test objects called
“delete” and “new”. In a second step we will remove the “delete” object so that it
appears as deleted. The names are chosen to illustrate the test readiness states.
Þ Select the module and “Edit Source” from the context menu (see figure 5.43).
Þ Select the line “result is_value_in_range(struct range r1, value v1)”
(see figure 5.44).
Þ Edit the line as shown in figure 5.45.
Þ Now add a “delete” object and a “new” object as shown in figure 5.46.
Þ Save the changes with “File” > “Save” and close the file.
In the Test Project view you can now see three test objects with different test
readiness states (see figure 5.47):
The test object is_value_in_range has changed. You see the test object, but
there is no operation possible. You have to start a reuse operation.
The test objects “deleted” and “new” are newly available since the last in-
terface analysis. You have to add test cases, test steps and enter data for a
test.
Deleted test objects that did not contain any test cases and test steps are no longer
displayed, because they are considered not important. If you want a deleted test
object to be displayed, you have to add at least one test case and one test step
first!
Before deleting the test object “deleted”, we will have to add some test cases with test
steps:
Þ Switch to the Test Item view and add a test case and a test step.
Þ Select the module and “Edit Source” from the context menu.
Þ Save the changes with “File” > “Save” and close the file.
In the Test Project view you can now see three test objects with three different
test readiness states (see figure 5.49):
The test object “deleted” has been removed. You still see the object, but there
is no operation possible.
The test object “new” is not shown anymore as “newly available”, because the
last interface analysis already detected the object as new.
Important: Note that removed and changed test objects require a reuse operation
before you can further operate on them!
Warning: If you do not assign the interface object, you will lose the test data
entered for parameter v1, and the global variable v1 will have no values after the
reuse operation!
Þ Double-click the test object “is_value_in_range” in the Test Project view to
assign its interface.
• On the right side within the IDA perspective you see the Compare view with the
test object is_value_in_range.
• Within the Compare view you can see the old interface of our test object
is_value_in_range and the new one. The red exclamation mark within the new
interface indicates the need to assign this interface object before starting the reuse.
• The title of the view shows the old name versus the newly assigned name. In our
case the names are the same, since only the interface changed.
Þ Assign the interface object “long v1” either by using the context menu or just
drag and drop from the left side (see figure 5.51).
The data of all test cases and test steps will be copied from the old interface
to the current test object interface.
The test object changes to yellow to indicate that all test cases are ready
to be executed again.
• Removed and changed test objects require a reuse operation before you can further
operate on them.
• Test objects that remained unchanged will be reused automatically, i.e. they
will be ready to use without any further activity required.
• Removed test objects will only be displayed as “removed” if they contained any
test cases and test steps.
To understand the handling and create a simple classification tree we consider some aspects
from the Quickstart 1: Unit test exercise is_value_in_range.
After editing the test object interface and switching to the CTE, the root “is_value_in_range”
of the classification tree appears on the draw pad.
Important: If you cannot see the root on the draw pad, move the scrollbar at
the bottom or enlarge the window.
Example is_value_in_range
Our three test-relevant aspects are the start value of the range, the length and the
position, which is a given value under test (v1). We should have test cases which result in
“inside the range”, and other test cases which result in “outside”, to have a high coverage
of possibilities.
So the three test-relevant aspects to be used for classifications are start value, length and
position and they thus form the basis of a classification tree.
In the ’Advanced Knowledge’ part of this manual you will find detailed information on using
the Classification Tree Method (CTM).
Þ Create a new classification either using the context menu (“New” > “Classification”)
or press Ins (see figure 5.53).
Þ Double-click the new classification or press F2 to start the inline editor for the
tree item.
Within the draw pad you can move the classifications with drag and drop: Select
either a classification, a sub tree (click on ) or select all (click on ), hold down
the mouse button until the cursor turns into a cross with four arrows , then
move the selection.
Þ Now create the classes “positive”, “zero” and “negative” for the classifications
range_start and range_length: Right-click the classification and select “New”
> “Class”, or select the classification and press Ins.
Þ Create the classes “inside” and “outside” for the classification “position”.
Þ In the tool bar click on (Arrow Layout) to rearrange the tree for a better
overview.
Try the various layouts for the tree, e.g. leftdown or horizontal
may give you a better overview.
Þ In the Test Data view on the right enter the value 3 as range start (see figure
5.57).
Now enter more test data for the other classes of the classification “range_start”:
When selecting a tree item within the classification tree you will see
the test data entered for this tree item within the Test Data view.
• The name of the selected item will be displayed as column header in the Test Data
view (see red marked sections in figure 5.57, here: “positive”).
• The class will be marked with a blue dot (when selected) to indicate that test data
is assigned.
• All tree items with test data assigned will be marked by a yellow dot when not
selected (e.g. select the root “is_value_in_range”).
Now enter test data for the classes of the classification “range_length”:
Þ Select the class “positive” of the classification “range_length” and enter 2 as
range_length.
• All tree items with assigned test data are marked with a yellow dot, when not
selected.
• When selecting a tree item, you will see the test data entered for this item within
the Test Data view.
• When selecting any interface element within the Test Data view, all classification
tree elements that contain test data for this interface element will be marked with
a blue dot.
Figure 5.59: Blue dots indicate that “range_start” elements contain data
Þ Create the test cases either using the context menu (“New” > “Testcase”, see
figure 5.60) or press Ins.
Test cases are defined by setting markings in the combination table: If you move the mouse
pointer over the combination table, connecting lines to the classes of the classification tree
are drawn. If the mouse pointer is placed over a point of intersection it is changed to a
circle.
Þ Move the mouse over the line of the first test case.
Þ Click on the circles that connect the first test case with the two positive classes.
The test data for test case 1 is displayed within the Test Data view (see
figure 5.61). The test data is read-only because it is defined by the marks set
within the combination table.
Þ Create more marks within the combination table for the other test cases to
completely cover all classes of the classifications “range_start” and
“range_length” with your test cases.
Figure 5.62: Completed table with all test cases for example “is_value_in_range”
Certainly there are various possibilities of combining the classes within the test
cases.
As you can see, the classification “position” is still not marked for any test item. This would
be rated as an error, because each class of the classification tree should be marked in at least
one test item.
We also chose all possible combinations for the first two classifications. In a
real example you would select only the most interesting combinations in order to get
a reasonable number of test cases.
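The combinatorial growth mentioned above is easy to quantify; a small sketch (the function name is ours, not TESSY’s):

```c
#include <assert.h>

/* Number of test cases needed to cover every combination of classes:
   the product of the class counts of all classifications. */
int full_combination_count(int range_start_classes,
                           int range_length_classes,
                           int position_classes)
{
    return range_start_classes * range_length_classes * position_classes;
}
```

For this example (3 × 3 × 2 classes) the full cross product would already yield 18 test cases, which is why only the most interesting combinations are usually selected.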
Þ Select the test cases one after another and review the test data resulting from
your mark settings being displayed within the Test Data view (see figure 5.63).
Figure 5.63: Test data is displayed when selecting a test case in the combination table
You will see the test cases updated with the test data values entered within the CTE
perspective.
• Test items with values stemming from the CTE perspective are marked with new
status indicators: (test case) and (test steps).
• Values stemming from the CTE are read-only. If you want to change them, switch
back to the CTE perspective and do your changes there.
To understand the handling in this chapter we assume you have worked with
TESSY in unit testing. If not, please proceed with the Quickstart 1: Unit test
exercise is_value_in_range first and come back to this chapter afterwards.
If you have already worked with former versions of TESSY, this chapter may help
you to learn the differences in handling with TESSY 3.x!
The calls to the component functions stimulate the component. As with the test
of a single function, a test case for a component comprises input and output
data. A component may have internal functions that cannot be called from
outside the component, but only from functions inside the component.
Relevant for the result of a component test is the sequence of calls from within
the component to (callable) functions in other components: the number of calls,
their order, and the parameters passed with each call.
Obviously, component testing also tests the functionality of the functions in a
component and the interfaces between them, at least for the most part. Hence,
component testing can well be considered integration testing for the functions in
the component.
You already know the basic functionality of a unit test. We will now follow a simple source
code example to show how to exercise component testing with TESSY.
Example interior_light
The interior light of a car shall be controlled by two inputs: The door and the
ignition of the car.
The functional specification comprises three simple requirements:
The specification above is obviously not complete. In particular, the initial state of door,
ignition, and light is not given, and it is not specified what shall happen if, e.g., the ignition
is switched off after it was switched on. But this simple specification is sufficient to
demonstrate temporal component testing with TESSY.
• Door = open
• Ignition = off
• Light = off
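To make the expected behavior concrete, the component can be modeled roughly as follows. This is our own minimal reconstruction from the exercise text, not the interior_light.c shipped with TESSY; all names and implementation details are assumptions:

```c
#include <assert.h>

#define TIMEOUT_TICKS 500                 /* 5 s at one tick() per 10 ms */

enum door_state     { DOOR_OPEN, DOOR_CLOSED };
enum ignition_state { IGNITION_OFF, IGNITION_ON };

enum door_state     sensor_door;
enum ignition_state sensor_ignition;
enum door_state     prev_door;
int  light_on;                            /* 1 while the light is on */
long timer;                               /* ticks until light-off   */

void init(void)                           /* assumed initial state */
{
    sensor_door     = DOOR_OPEN;
    sensor_ignition = IGNITION_OFF;
    prev_door       = DOOR_OPEN;
    light_on        = 0;
    timer           = 0;
}

void set_sensor_door(enum door_state d)         { sensor_door = d; }
void set_sensor_ignition(enum ignition_state i) { sensor_ignition = i; }

void tick(void)                           /* assumed to run every 10 ms */
{
    if (sensor_ignition == IGNITION_ON) {
        light_on = 0;                     /* ignition on: light off now */
        timer    = 0;
    } else if (prev_door == DOOR_OPEN && sensor_door == DOOR_CLOSED) {
        light_on = 1;                     /* door closed: light on, one */
        timer    = TIMEOUT_TICKS;         /* tick after the event       */
    } else if (timer > 0 && --timer == 0) {
        light_on = 0;                     /* off 5 s after door closed  */
    }
    prev_door = sensor_door;
}
```

The scenarios built later in this section (light on one tick after the door closes, off after 5 seconds, off immediately when the ignition is switched on) can be checked against such a model.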
Þ Create a test project “interior_light” (if you need any help, consult section
Quickstart 1: Unit test exercise is_value_in_range).
The C-source code will now be analyzed; afterwards the Test Project view
displays the functions of the source (see figure 5.67).
Þ In the Properties view switch the Kind of Test to “Component” (see figure
5.68).
Now the only test object displayed has the default name “Scenarios” (see figure 5.69).
This differs from unit testing with TESSY, where the names of the possible test objects
in interior_light.c (i.e. the functions) would be listed instead.
You can open the C-source file by right-clicking on the module and choosing “Edit
Source” from the context menu. The C++ perspective will open and the file will be
displayed (see figure 5.70).
This implementation features a heartbeat function, the tick() function. The implementation
assumes that tick() is called every 10 ms. Based on this assumption, the
value “500” is calculated for the timer.
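The derivation of the value 500 can be written down directly; the constant names below are ours, not from interior_light.c:

```c
#include <assert.h>

#define TICK_PERIOD_MS   10    /* tick() is assumed to be called every 10 ms */
#define LIGHT_TIMEOUT_MS 5000  /* the light shall go off 5 s after the event */

/* number of tick() calls until the timeout expires: 5000 / 10 = 500 */
enum { TIMER_RELOAD = LIGHT_TIMEOUT_MS / TICK_PERIOD_MS };
```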
In the section “External Functions” of the interface, the two functions LightOff()
and LightOn() are listed. These two functions are used (called) by the component
“Interior Light”, but these two functions are not implemented in interior_light.c.
The component “Interior Light” expects these two functions to be provided by another
component of the application.
However, we want to test the component “Interior Light” without that other com-
ponent, i.e. isolated from the rest of the application. Therefore, we direct TESSY
to provide replacements, i.e. stub functions for these two functions.
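Conceptually, such a stub replaces the missing function with a minimal body that records the call, so the test can later check the number and order of calls. TESSY generates its stubs automatically; the sketch below only illustrates the idea, with counter names of our own choosing:

```c
#include <assert.h>

int light_on_calls;     /* how often the component called LightOn()  */
int light_off_calls;    /* how often the component called LightOff() */

void LightOn(void)  { light_on_calls++;  }   /* stub: record the call */
void LightOff(void) { light_off_calls++; }   /* stub: record the call */
```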
Þ Right-click the function “LightOff()” and choose “Create Stub” (see figure
5.74).
The dots with the blue border will change to blue filled dots .
The passing direction of the variable timer is “Irrelevant”, because it is not used by
init(). Since we are only interested in the variables “sensor_door”, “sensor_ignition”,
and “state_door” as input, we manually set the passing direction for these variables to
“In”:
Þ Switch to the SCE perspective (or double-click a test case; in this case the SCE
perspective will open automatically).
Þ The bracket of the scenarios in the Test Project view will open to indicate that
it contains test cases but no data (see figure 5.76).
You can add a description to the test cases (see figure 5.77):
Þ Double-click the value cells in the INIT column to open the inline editor (see
figure 5.78).
sensor_door = open
sensor_ignition = off
state_door = open
Þ Save by clicking on .
The view External Function Calls displays the two functions LightOn() and LightOff()
that the component “Interior Light” expects to be provided by another component.
TESSY provides stub functions for these two functions during component testing.
Please notice that you can rename the test cases:
Þ Right-click a test case and choose “Rename” from the context menu.
The new name will be displayed in the center of the SCE perspective (see figure
5.81).
Figure 5.81: The names of the test cases are displayed in tabs of the view Work Task
As we know, the implementation of the component Interior Light assumes that the function
tick() is called every 10 ms, i.e. the function tick() is the work task (or handler task or
heartbeat) of the component. To enable TESSY for temporal component testing, TESSY must
know about this, i.e. we must manually specify tick() as work task for the component.
Þ Click on (Set as Work Task) in the tool bar (see figure 5.82).
Since we want to test the temporal behavior of the component, we need to establish a
time base:
Þ Within the Scenario view in the center of the SCE perspective click on
(Insert Time Steps).
The scenario consists of 4 calls to the work task, in our case tick(). The calls occur at 0
ms, 10 ms, 20 ms, 30 ms simulated time.
To stimulate the component besides the calls to tick(), you can simply drag and drop a
function to the scenario.
Þ Drag the init() from the Component Function view to the Scenario view (see
figure 5.83).
Now we actually stimulate the component. We drag the component function set_sensor_door()
to 30 ms simulated time:
Now you have to specify a value for the parameter of set_sensor_door() in the properties
of set_sensor_door():
Þ Open the Properties view and click in the cell under “value”.
Þ Enter “closed”.
The value will be displayed in the parameter of the time step function (see
figure 5.85).
In the scenario above, after the fourth call to tick(), TESSY calls the component function
set_sensor_door() with the parameter value “closed”. This should cause the component
to react by calling LightOn(). We know from the implementation of Interior Light that
this call will happen one tick later, i.e. after TESSY has called tick() a fifth time, at 40
ms simulated time.
Þ Specify the expected result by dragging the function LightOn() from the Ex-
ternal Function Calls tab to 30 ms simulated time (see figure 5.86).
The scenario above expects the call to LightOn() to happen at 30 ms simulated time, i.e.
prior to the fifth call to tick(). This is indicated by the time frame “30 - 30 ms”. We know
that the call will occur one tick later, and we want to treat this behavior as correct:
Notice that the icon of the test object has changed to yellow , indicating that
the scenario is executable (the purple sign indicates a comment or description).
Besides the call to LightOn(), we also expect a call to LightOff(). Since the ignition will
not be operated in this scenario, we expect the call to LightOff() to occur 5 seconds after
the call to set_sensor_door() at the latest:
Þ Drag LightOff() to 30 ms and extend the time frame by 5000 ms (see figure
5.88).
Figure 5.88: Setting the call “LightOff” and extending the time frame
In the scenario above, we have specified that we expect the call to LightOff() after
5 seconds at the latest, i.e. we would take a call to LightOff() after, say, 4 seconds
as a correct result.
If we wanted to accept only a call to LightOff() at exactly 5030 ms
of simulated time as correct, we could use the context menu’s Insert Time
Step At. . . command to create this point in time and assign the expected call to
LightOff() to 5030 ms of simulated time, of course with the property “Time Frame”
set to 0 and not to 5000.
Þ To specify the expected behavior, drag and drop the function LightOn() from
the External Function Calls tab to 10 ms simulated time.
Þ Select the 40 ms time step: The test data view will show a new column named
“40 ms”. Set the value of the variable “sensor_ignition” to “on” and set the
other values to be ignored (see figure 5.89).
The time step will now get a small yellow icon indicating that test data is fully
available.
Þ To specify the expected behavior, drag and drop the function LightOff() from
the External Function Calls tab to 40 ms simulated time.
The expected result of this action is the interior light going off immediately, i.e. after
the next call to the work task tick(). This is specified in the scenario by dragging
and dropping the function LightOff() to 40 ms simulated time and by increasing the
time frame by 10 ms.
This is done very similarly to the first scenario.
After the scenarios have been executed, the color of the scenario icons changes to green.
This indicates an overall “passed” verdict (see figure 5.91).
Follow the steps below to import a C++ project with sample test cases:
“C:\Program files\Razorcat\tessy_4.0\Examples\C++”
Þ Click “Open”.
TESSY will show an example project within the project list. Such an example
project will be copied completely to a user-defined location when it is opened
(see figure 5.93).
The project root will be displayed within the bottom line of TESSY.
Þ Click “OK”.
Figure 5.95: The project root is displayed within the bottom line of TESSY.
Þ Click “OK”.
Þ After a few seconds the new project will be displayed within the Test Project
view of TESSY.
Please review the test cases defined for some selected test objects. You will find more
explanation and comments on the specifics of C++ testing within the application note
“Using C++”.
The test driven development support in TESSY allows you to prepare tests before
any source code is available. If you have a specification of what the individual software
functions should do, you can manually create (synthetic) test objects and their (synthetic)
interface variables. Afterwards you can create test cases and fill them with data as for
normal test objects. Once the first version of the software is available, you assign
the test objects to their respective implementation functions and can start running
the tests against the implementation. To create synthetic test objects for test driven
development:
Þ Create a module without assigning a source file. Only modules in this initial
state allow creating synthetic test objects.
Þ Add new test objects by clicking on the icon (New Test Object). Rename
the newly created test objects.
Each synthetic test object has an initially empty interface. To add variables:
Þ Click on the icon (New Variable) to create input and output variables.
You can create variables of all basic types to build the necessary interface.
Afterwards you can create test cases.
The test cases will get a yellow icon if they are completely filled with test data. You cannot
execute the tests yet, because no source file to be tested is available. But you can
create a test details report showing the specification of all test cases:
When a first version of the source file is available, add this file to your module. Now you
can analyze the module and you will see the functions contained within the source file and
the test objects that you prepared for test driven development. It does not matter if the
implemented functions have other names; you can assign the test objects in the next
step.
In the example shown within figure 5.104 you can see that the implementation uses a
struct to hold all parameters whereas the synthetic test object has all values within scalar
variables. Such differences can be resolved using the IDA assignment as it would be done
for normal interface changes.
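The mismatch described above can be pictured like this; the names follow figure 5.104 (“calc_add”), but the concrete types and member names are our assumption:

```c
#include <assert.h>

/* Implementation side: the parameters arrive packed in a struct. */
struct add_params { int a; int b; };

int calc_add(struct add_params p)
{
    return p.a + p.b;
}

/* The synthetic test object was prepared with two scalar inputs,
   e.g. "a" and "b"; during the IDA assignment these scalars are
   mapped onto the struct members p.a and p.b. */
```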
The test object “calc_add” is now ready for execution; all prepared test cases and data
are available.
This chapter provides detailed information about the test activities possible with TESSY. The headlines
of the sections follow the actions taken during a test and refer to the corresponding perspectives
and views, e.g. “CTE: Designing the test cases”.
The subsections describe the views of each perspective, explaining the icons and status indicators
used and quickly answering the questions “What can I do within this view?” and “How do
I do . . . ?”. So if you need help at some point, ask “Where am I?”; you should find the answer
easily within this chapter. If you have questions about the workflow, consult chapter 5 Practical
exercises.
Some views are displayed within various perspectives. Because views are context
sensitive, not every operation is possible within every perspective. In this case the
manual will refer to the respective section and perspective, where all operations of
the view are possible.
The menu bar provides global operations such as project handling commands, editing
commands, window settings, TESSY preferences and the help contents.
“Select Project” Opens the dialog “Select Project”. If you select another project,
TESSY closes the current project, restarts and opens the
selected project.
“New Project” Opens the dialog “Create Project”. Refer to section 3.1.1
Creating a project database.
“Import Project” Opens the Windows Explorer. Choose a project and click “Open”.
“Edit Project” Opens the dialog “Project Configuration”. Refer to section 3.1.1
Creating a project database.
“Close Project” Closes the current project. TESSY will restart and show the
dialog “Select Project”.
“Edit Environment” Opens the TEE, the Test Environment Editor. Refer to chapter
6.5 TEE: Configuring the test environment.
Here you will find common actions such as “Delete”, “Undo”, “Redo” etc. You can also use
the context menu. Refer to section 3.3 Using the context menu and shortcuts.
“Reset Workbench” With a click you reset the positions of all perspectives and views
to the default setting.
Within the section “Preferences” of the Window menu you will find many options for adapting basic
functions to your needs:
“Preferences” > “Test Execution Settings”: Within this section you can choose whether
selections and settings should be remembered. E.g. if you tick the option “Globally for
all test objects” under “Remember test instrumentation settings”, the last used coverage
selection will be used; see section 6.2.2.7 Instrumentation settings.
The following preferences will be stored within backup files when saving the whole project
database as described within Backup, restore, version control:
• Coverage settings
• Dialog settings
The static analysis settings are used for calling the respective static analyzer tool. You need
to enter the path to the binary and change the command line options to your needs.
For a selection of the most applicable safety standards, there are pre-defined coverage
settings available according to the recommendations given within those standards. You can
choose a coverage setting for the appropriate standard and level as default for all modules
of your project. When running tests with coverage instrumentation, the respective settings
will be applied automatically.
You can also define your own coverage settings if the standard you are using is not
available within the list.
“Create Support File” Creates a support file. Refer to section 7.1 Contacting the
TESSY support.
“Start Shell” Starts a bash shell that can be used to try out the command line
execution of TESSY. The PATH variable is already set to the
bin directory of the currently running TESSY installation, so
that you can run tessycmd immediately. Refer to section 6.13
Command line interface.
Test Project view (upper left): To organize the project: create test collections, modules
and test objects; execute the tests, create reports and get a quick overview of your project.
Properties view (lower left): To edit all properties, e.g. adding sources or include
paths to your modules.
Requirement Coverage view (lower left): To select and link the requirements that you
manage within the Requirement Management perspective.
Test Items view (upper right): To create test cases and test steps manually.
Evaluation Macros view (upper right): To view evaluation macro results if the usercode
of the test object contains any.
Console view (lower right): To display messages of sub processes invoked during test
execution, e.g. compiler calls.
Exports files.
Generates various test reports. The test details report for a test object will be
generated as default (Ctrl + R).
Inserts a new folder (Shift + Ins), optional for organizing your test project.
Inserts a new module (Ins). Modules will contain the test objects available
within the C-source files to be tested, i.e. C functions.
Variant module that was created with the “Create Variant” option.
The test object interface has changed. A reuse operation within IDA is required.
The C-source of the test object has changed. A new function has been added.
The test object has been removed. You still see the object, but there is no
operation possible. Only displayed when the test object contained any test
cases before the removal.
The test execution has been aborted for this test object.
The result of a test run is “failed”. This may be due either to a mismatch
of actual and expected results or to the coverage not achieving the required minimum
coverage.
Test results and coverage of the test run are both OK.
The coverage did not achieve the required minimum coverage. The red part
of the bar indicates the missing percentage of coverage, i.e. more red means
less achieved coverage.
The coverage achieved the minimum coverage, but the minimum coverage
was less than 100 %.
You need at least one test collection to organize your test, and within it at least one module
and one test object. Folders and further test collections are optional and serve only to
organize your test project.
Þ Click on the icon (New Test Collection) on the tool bar of the view.
TESSY now analyzes the C-source file; this will take a few seconds.
TESSY will also analyze the C-source file if you just click on the white arrow
next to the module after adding the C-source file.
Now all functions which were defined in the C-source file are displayed as children of the
module above within the Test Project view (see figure 6.7).
Figure 6.7: The function of the C-source is displayed as child of the module.
When using the CLANG parser (which is the default since TESSY v4.0), opening older
modules created with TESSY v3.2 and before will show the following warning:
Figure 6.8: Warning when opening older modules with CLANG enabled
• If you want to convert the module and start working with the CLANG mode,
select “OK” which will do all necessary conversion. Refer to the restrictions
that apply: Operating limits. Please create a backup of your project before
converting the modules.
• If you have a legacy project and just want to execute tests, select “Cancel”
and change the attribute “Enable CLANG” to false within your project
configuration using the TEE (see Adjusting enabled configurations). If you did this
successfully, there will be no more warning when opening the module.
When working with the CLANG mode there are several parser options which can all be set
in the TEE:
Enable Create Default Constructors: If set to true, the TESSY parser creates a default
constructor if it is missing.
Enable Create Function Stubs: If set to true, external functions that are called are by
default marked to create stub code, unless they are listed in the attribute “Function Stub
Exclude List”.
Enable Create Method Stubs: If set to true, undefined called methods are marked to
create stub code, unless they are listed in the attribute “Method Stub Exclude List”.
Enable Define Variables: If set to true, external variables that are used are marked to
be defined, unless they are listed in the attribute “Variable Exclude List”.
Enable Exceptions: If set to true, TESSY enables exceptions for the test object and
generates a try-catch block around it. The TIE also displays an artificial global variable
called “throws exception” which can be set to OUT in order to test an exception thrown
by the test object. By default the attribute is set to true.
Function Stub Exclude List: The comma-separated list of functions excluded from
automatic stub creation.
Method Stub Exclude List: The comma-separated list of methods excluded from
automatic stub creation.
Variable Exclude List: The comma-separated list of variables excluded from being
automatically defined.
General information about editing the environment can be looked up in the section TEE:
Configuring the test environment.
Important: Changes to the parser options made in the TEE become effective
when a module is analyzed. Some of the options only apply when a module is
initially analyzed; in this case it is necessary to reset the module before starting
the analysis to see the effects of the latest changes.
For more information about the parser options and more attributes available within
the environment editor TEE please refer to the application note “Environment
Settings (TEE)” in TESSY (“Help” > “Documentation”).
Module testing of software variants often requires very similar tests that differ only in small
parts. Therefore the convenient reuse and adaptation of existing tests reduces the testing
effort.
A base module serves as parent for all variant sub modules. It contains the information
that will be shared with all variants. Any module can be used as base module (i.e. it can be
the parent of sub modules). If you have, for example, created a module with some test cases,
you can create a variant:
Þ Right-click the folder and select "New Variant Modules" from the context
menu (see figure 6.9).
Þ Choose the parent module and click on “OK” (see figure 6.10).
Þ The variant will be displayed within the Test Project view with the icon .
Figure 6.11: Test Project view with a module and a variant module.
If the parent module has changed, TESSY will mark the children with an exclamation
mark. The mouseover states that the module needs to be synchronized with its parent
(see figure 6.12).
Figure 6.12: The variant module needs to be synchronized with the parent.
Þ Right-click the module and select "Synchronize Module" from the context menu.
The Synchronize Module dialog is displayed. The parent and all child modules
will be shown and can be synchronized in one step.
For the first synchronization, or if a module's interface has changed, the IDA will be opened
to assign the parent module's interface to the child module interface. You will be asked if
the IDA perspective shall be opened:
The assignments made within IDA will be saved for future synchronizations of the child
module(s). When invoking the “Synchronize Module” operation again (e.g. after changes
to the parent module), the last used assignments will be applied without showing the
“Synchronize Module” dialog.
If you want to force a new assignment of the parent module interface to the child module
interface, you can analyze the child module which will reset the interface assignments. A
subsequent module synchronization will then show the IDA again.
The following indicators display the status of the inherited test cases and test steps within
the Test Items view:
icon meaning
The small triangle indicates that the test case or test step is inherited.
The filled triangle indicates that the inherited data of the test case and test
step was overwritten.
The test case or test step was added for this variant test object.
Important: Deleted test cases/steps are only faded out within the child module.
They can be made available again using “Restore Deleted” from the context menu.
Figure 6.16: Test cases and test steps that were inherited in a variant module
After entering test data for a particular test object you are ready to execute the test.
During this process, TESSY will perform the following steps:
• Generate the test driver based on the interface information and user code
provided.
• Link the test driver to the test object to create an executable file.
The test driver is required to call the function under test and will be generated
automatically. The test driver and the function under test form a complete
(embedded) test application, including its startup code, and will be built with an
appropriate compiler for the particular embedded microcontroller architecture. If
the function under test uses external variables that are not defined, the test driver
generated by TESSY can define those variables.
Once the test driver has been compiled, it can be run as often as required.
You can select a subset of your test cases and run the test again by just selecting
the run option. Changes to test data and expected results might require building a
new test driver. TESSY will check this automatically and generate a new driver if necessary.
Stub functions
If the function under test itself calls other functions (subroutines), TESSY
can provide replacement functions (stubs) for the missing subroutines with the test
driver. TESSY features two types of stub functions:
• Stub functions for which you may provide the C source code for the
bodies of the stub functions.
• Stub functions for which TESSY is able to check if the expected value for
a parameter is passed into the stub function and for which TESSY is able to
provide any value specified by the user as return value of the stub function.
A progress dialog will be shown while TESSY generates, compiles and links the
test driver and runs the test. This will take a few seconds.
Action: "Abort On Missing Stub Code"
Meaning: Building the test driver application will be aborted with an error if there are
non-void stub functions without any code provided to return a value. You can uncheck
this action to ignore this error if you are sure that the return values of your stub
functions are not used. (For more information please refer to Defining stubs for functions.)
Option: "Create New Test Run"
Meaning: A new history in the module will be generated.
Option: "Test Cases Separately"
Meaning: The download and execution process of the test driver will be started separately
for each test case. This provides an initial state of memory (and variables) for each test
case and is useful if the test cases shall be executed independently. The disadvantage of
this approach is an increased execution time (due to start/stop of the debugger and
download of the test driver). It is recommended to set this option for dedicated test
objects only.
Within the Test Execution Settings dialog you can select the various possible coverage
instrumentation for this test run (see figure 6.17):
Þ Select from the pull-down menu if the coverage shall be used for the test object
or the test object and the called functions.
The coverage instrumentation is now used for this test run, even if you have
selected a different coverage instrumentation as default for your project (see
section 6.1.4 Windows > Preferences menu) or for the module or test object
within the Properties view (see section 6.2.3.4 Coverage tab).
If you tick the box “Use preselected Coverage”, coverage selection will be
applied according to the following rules:
• If a coverage selection is set in the Properties view (see section 6.2.3.4 Coverage
tab), that selection will be used.
• If no coverage selection is set in the Properties view, but the option "Remember
instrumentation settings" is set in the Test Execution Settings of the Windows >
Preferences menu, the last used selection will be used.
For more information about the coverage measurements refer to the application
note “Coverage Measurement” in TESSY (“Help” > “Documentation”).
Viewing test results: After a test run, the Test Project view gives an overview about the
coverage, if selected:
• The actual results will be compared with the expected values according to
the evaluation mode. The result will be either failed or passed.
• The last step of test execution is the generation of an XML result file. This
file contains all test data and the actual results. It will be used for reporting.
The results of every coverage measurement can be reviewed in the CV (Coverage Viewer)
as soon as the test has been carried out. For details refer to section 6.9 CV: Analyzing the
coverage.
• A green tick will indicate that all actual values comply with the expected values with
respect to the evaluation modes and the coverage reached at least the minimum
coverage.
• A red cross will indicate that either some actual values yield failed results or the
coverage did not reach the minimum coverage.
• If the interface has changed, the test object will indicate changes with test readiness
states (see Status indicators).
• The time of the test run is stated within the Test Project view:
Important: The results of the coverage measurement are also part of the
test result for a test object, e.g. if all outputs yield the expected result but the
coverage was less than the minimum coverage, the test result will be failed.
Warning: Using the option “Reset Module” from the context menu will delete
the module with all test results!
In the Test Project View test objects that are not relevant for the project can be hidden
by setting a test object filter:
Þ Select the test objects you wish to hide in the dialog that is shown.
Þ Once a filter has been set, you can toggle it on and off by clicking on .
Þ To modify an existing test object filter, click on the arrow next to the button
and select the “Select Test Object Filter” command.
Figure 6.20: A filter has been set but is currently disabled (filtered test objects appear
faded).
Figure 6.21: The filter is enabled; the affected test objects are hidden.
Figure 6.22: Clicking on the arrow opens the “Select Test Object Filter” dialog.
Important: The filter setting will be saved for each filtered test object within
the test database. When saving and restoring modules as TMB files, these filter
settings will also be persisted and restored.
The search filter helps to find and select elements by their name. Typing into the search
field will result in the Test Project View being updated after a small delay such that
only matching elements and their ancestors and descendants are displayed. Elements are
automatically expanded to be visible, with the exception of modules that need to be
analyzed.
Important: All reports are generated as PDF files. You need the Adobe
Reader to open the files. Download and install the Adobe Reader from
http://get.adobe.com/reader/.
Test Details Report: Contains information about the test cases, their properties and
values, linked requirements, and whether test cases failed or passed.
All reports are created as PDF documents based on XML data files. These XML
data files can also be used for generating reports or further processing if desired.
Þ Click in the Test Project view (i.e. within the Overview perspective) on the
arrow next to the Generate Report icon .
TESSY creates the report within the new folder. This will take a few seconds.
Important: The first time you create a report, the “Edit Settings” dialog will
be opened automatically. These settings are memorized and used for the following
reports.
You can also change basic settings, e.g. output directories, filenames and the
logo on the reports. Refer to section 6.1.4 Windows > Preferences menu.
Batch Test: TESSY provides a batch test feature with various operations for test execution
and reporting. You can define which operations shall be performed and which settings shall be
used. This setup can be saved into batch script (TBS) files for test automation using the
command line interface of TESSY (see chapter 6.13 Command line interface).
Þ In the Test Project view right click a project, a module or a test object.
Þ Select “Define Batch Operation. . . ” from the context menu (see figure 6.26).
Þ Under “Test Objects” choose the project or modules or test objects for the
batch test. Click “Select All” to select all at once (see figure 6.28).
Þ Switch to a setting by either marking the operation on the left side or using
the tabs on the upper right side (see figure 6.29). The optional settings for
this operation will then be shown on the right side of the window.
You can create a TBS file for command line execution by saving the batch test settings:
Þ In the batch operation settings window click on (Save batch file as. . . ).
Þ Choose the type of the file and click “Save” (see figure 6.30).
The Test Project view provides the import and export of test data and module backup files
(*.TMB):
When importing test data for individual test objects, the following options are available
(see figure 6.31):
• "Update passing directions": If you tick the box, the passing directions of
all interface variables will be set according to the passing directions specified
within the import file. All other interface variables will be set to
IRRELEVANT. The test object will be ready to execute when using this option
because all variables with passing directions IN, OUT or INOUT will be filled
with values.
• “Overwrite/append test cases”: Either delete existing test cases before im-
porting or append any imported test cases at the end of the test case list.
When exporting data, the following options are available (see figure 6.32):
Properties view
The Properties view is divided into several tabs on the left and provides various settings
which are explained in the following:
The General tab (see figure 6.33) is used to determine the test environment. The following
options are available:
Option: Test Directory
Function: The path has been specified during database creation and is not adjustable
here.
Figure 6.34: The Compiler pane in the Sources tab of the Properties view
Adding the C-source file: In the upper pane of the Sources tab the source files to be
tested are added. All exported functions will be displayed if the module is opened. Some
additional compiler options can be specified on module level by selecting the module entry;
other options can be specified for each source file in the list.
Þ Select a source file and choose "Remove File" from the context menu.
Þ Select a source file and choose "Replace File" from the context menu.
The lower Compiler pane of the Sources tab displays information about the item
selected from the upper Sources pane. Some of the displayed options (e.g. Includes) in the
lower Compiler pane can be specified in the Test Environment Editor and will be inherited
from there.
Module options apply to all source files unless otherwise specified on file level.
File options apply to one selected source file and will overwrite options that are
specified on module level.
Includes: Add an include path for the headers which are included within the
source file.
Options: Specify additional directives for your target compiler as needed. Note
that macros for the preprocessor and include paths have to be specified
within the Defines tab and the Includes tab, respectively.
All compiler options added here will be used for the compilation of the
source file when building the test driver application.
Settings: Depending on the selected item in the Sources pane, the following features
can be enabled (box is checked) or disabled (box is unchecked) in the
Compiler pane:
• Enable User Includes: When enabled, all included header files of the
source file(s) are included in the user code.
Table 6.48: Optional functions of the Sources tab of the Properties view
Figure 6.35: The Setting tab of the Properties view with module selected
Any linker options like object files or libraries can be added here. You can use predefined
variables like $(PROJECTROOT) or $(SOURCEROOT) as described in section Creating
databases and working with the file system. It is recommended to add such linker options
using the environment editor TEE: Configuring the test environment.
While you have probably chosen an instrumentation for coverage measurements as default
for your whole project, as described in section 6.1.4 Windows > Preferences menu,
within the Properties view you can enable a different coverage measurement on folder or
test collection level or for a single module:
Þ In the Test Project view select the module for which you want to choose the
coverage measurement.
Þ In the Properties view select your coverage selection in the pull-down menu
(see figure 6.38).
The pre-selected coverage instrumentations according to the selected safety standard level
will be displayed.
TESSY supports the following instrumentations:
You can choose a different instrumentation for each test run. The options will be shown
within the Execute Test dialog (refer to section 6.2.2.6 Executing tests).
To analyze the coverage refer to section 6.9 CV: Analyzing the coverage. For more
information about coverage measurements and usage of coverage analysis refer to the
application note "Coverage Measurement" in TESSY ("Help" > "Documentation").
The Attributes tab specifies settings required by the compiler or the target environment of
the module. Most attributes are preset and inherited from the Test Environment Editor
(TEE).
Insert attributes: You can change the default values or add new attributes in the Attributes
pane. Changes are carried out only locally and do not influence other modules.
Þ Enter an attribute name and select an appropriate type, e.g. String. Available
types are String, Boolean, Integer, Real, File, Folder and Url.
You can remove user-defined attributes. Default attributes cannot be removed; you can
only reset their values to the default state if they were changed before.
Þ Click on .
Those tabs provide editable textboxes to be used for specifications, descriptions and com-
ments by the tester.
Within the Requirements Coverage view you can link the requirements with your test cases.
We will describe this view in section 6.4 Requirement management > 6.4.16 Requirements
Coverage view.
Test Items view: In the Test Items view you get an overview of your test cases and test
steps, and you can also create test cases and test steps manually without using the
Classification Tree Editor (CTE, see section 6.7). This is useful for simple test objects
with a few test cases that can be documented manually in a few words.
Test case passed: The actual results did match the expected results.
Test case failed: The actual result of at least one test step did not match the
expected results.
Test Case Generator: This test case generates test steps automatically, e.g. if
you enter a range. It does not contain any data yet.
Test Case Generator with data: This test case has automatically generated
test steps.
Test step passed: The actual result did match the expected results.
Test step failed: The actual result did not match the expected results.
The test case has been created by the CTE and therefore can be changed only
within CTE. The test case does not contain any data.
The test case has been created by the CTE and therefore can be changed only
within CTE. The test case does contain some data.
The test case has been created by the CTE and therefore can be changed only
within CTE. At least one test step is ready to be executed.
The test step has been created by the CTE and therefore can be changed only
within CTE. It does not contain any data.
The test step has been created by the CTE and therefore can be changed only
within CTE. It does contain some data.
The test step has been created by the CTE and therefore can be changed only
within CTE. It is ready to be executed.
The first test case is created and a test step is automatically added.
• The first number is the number of the test case, the number in brackets shows the
quantity of the test steps included.
• Test case numbers will be counted continuously: If you delete test cases, new test
cases will get a new number and existing test cases will not be renumbered.
• If you cannot click on "New Test Case" or "New Test Step" because the icons
are inactive, you might have the wrong selection: Select the test object within
the Test Project view, then select the Test Items view.
• If you double click a test case, the TDE will be opened to enter test data. Make
sure to adjust or review the passing directions first in the TIE.
Every test step contains a complete set of test data. For instance, the mechanism of test
steps can be used to achieve an initialization of the test object before executing the test
step that checks the actual test condition of the current test case.
You can generate test steps automatically, e.g. with ranges of input values:
A new test case will be created. The star symbol indicates that this test case is generated
and you cannot add any test steps, because these will be generated automatically (see
figure 6.46).
To fill the data and generate the test steps, you will use the Test Data view within the
TDE perspective:
After generating the test steps, the icon of the test case within the Test Items view will
change to yellow as well as test steps (see figure 6.47).
Figure 6.47: The test steps were generated and are ready to be executed
The test steps are read only! You can change the type of the test case and test steps to
“normal”. That way you can edit the test steps as usual.
To change the status to normal,
Þ Right-click the test case and select "Change Test Case Type to Normal" (see
figure 6.48).
Changing test case to type normal: The test case and test steps are changed to type
"normal", but a status will indicate that they were originally generated (see figure 6.49).
Figure 6.49: The test case and test steps that were originally generated
You can reverse the action with a right-click, choosing "Change Test Case Type to
Generator" from the context menu.
If test cases and test steps were assigned within CTE, the icons of test cases and test
steps within the Test Items view are displayed with a CTE symbol to indicate that you can
change those test cases only within CTE (see figure 6.50).
If test cases and test steps were inherited in a variant module as described in section
6.2.2.5 Creating variant modules, you can add, delete and overwrite the test steps and
data. The icons will indicate each status:
Figure 6.51: Test cases and test steps that were inherited in a variant module
Important: Deleted test cases/steps are only faded out within the child module.
They can be made available again using “Restore Deleted” from the context menu.
After deleting test cases or test steps, you can renumber the existing test cases and steps:
A notice will appear that all test cases will be renumbered (see figure 6.52).
Þ Click “OK”.
After a test run the Test Results view will display the coverage measurement results and
the results of expected outputs, evaluation macros and call traces, if applicable.
Important: The view is context sensitive: If the Test Results view is empty,
make sure a test run is selected within the Test Project view!
This view lists the detailed results of the evaluation macros if the usercode of the test
object contains any evaluation macros, see 6.8.10.3 Using evaluation macros. The results
are displayed wherever they occur within the usercode, e.g. within stub functions or test
step epilogs. You can select the filter items on the left side to show only the evaluation
macro results, e.g. for the first test step. The list of results on the right will be filtered
accordingly.
The Console view displays messages of sub processes invoked during the compilation and
execution process of the test driver application. It provides a quick overview about any
error messages.
6.2.8.2 Handling
You can enable the console view to be shown whenever an error occurs during C-source
analysis or test driver compilation:
Þ Click on "Test Execution Settings" and check the setting "Show console on
error" (see figure 6.56).
Since the view refers to changes of requirements, this issue is discussed in section 6.4.8
Suspicious Elements view.
The Variants view supports the variant management in TESSY: You can create a variant
tree according to the software variant structure you are going to test. These testing variants
are useful for tagging TESSY modules with certain software variants, which facilitates
filtering and the creation of variant TESSY modules.
Tip: You do not need to create a variant tree in order to create variant modules.
Any module can be a parent module of another. The variant tree just helps to
keep the module variant tree in sync with the actual inheritance structure of the
software variants being tested.
Let’s assume you have the following structure of base tests that shall be cloned as variant
modules in order to test all software variants:
To create variant modules for each of the base modules do the following:
Þ Create a new test collection for each variant and choose (New Variant
Modules...) from the context menu.
Þ Within the “Create Variant Modules” dialog select all base modules that shall
be cloned as variant modules.
Þ You can filter the modules being displayed by selecting the desired variant. Only
potential parent modules according to the variant hierarchy will be displayed.
The new variant modules will be created within the selected test collection (including the
folder hierarchy if the option "Take over folder hierarchy" was checked). The properties of
a variant module show the assigned variant and the parent module, which can be edited
using the edit button (see figure 6.62).
Figure 6.62: Properties view variants tab for editing the parent module
Important: All test data of a variant module will be deleted if you select
another parent module.
Within the C/C++ perspective you can edit your C-source file.
Þ Within the Test Project view right click the desired test object or module.
Project Explorer view (left): To view the includes and the functions of the C-source file.
Console view (lower middle): Same view as within the Overview perspective.
Properties view (lower middle): Same view as within the Overview perspective.
Table 6.70: Structure of the C/C++ perspective
Important: Most parts of this perspective are standard Eclipse functionality! Please refer
to the Eclipse documentation: http://help.eclipse.org/
Editor view
Important: The Editor view is not a normal view in the Eclipse sense, therefore
you cannot move it like the other views of the perspectives!
Þ Open the C/C++ perspective with a right click on the desired test object or
module within the Test Project view.
Within this view you can browse easily between the includes and have an overview of all
functions of the C-source file.
The view is context sensitive: If you choose a function within the Project Explorer view,
the function will be highlighted within the Editor view.
The Outline view displays all functions of the C-source.
The view is context sensitive: If you choose a function within the Outline view, the function
will be highlighted within the Editor view (see figure 6.68).
Hides fields.
The Properties view displays all the properties which you organized within the Overview
perspective (see section 6.2.3 Properties view). Most operations are possible here.
For changing a source, switch to the Properties view within the Overview perspective.
The Console view displays messages of sub processes invoked during the compilation and
execution process of the test driver application. It provides a quick overview about any
error messages. Same view as within the Overview perspective, see section 6.2.8 Console
view.
The basis for all testing activities should be a precise functional specification of the system
under test. All testing activities should be driven by the requirements described within the
functional specification, and each change of the requirements needs to be tracked in order
to adjust the tests if necessary. That is the reason why TESSY incorporates a requirement
management solution that provides a basic environment for requirements engineering with
the following features:
There is a plugin available for the integration of Polarion. For more information
please refer to the application note "Polarion Export" in TESSY ("Help" >
"Documentation").
You will use different views and perspectives for your requirement management:
1. To create and import requirements, track changes and versionize your requirements,
use the Requirement Management perspective (see section 6.4.1 Structure of the
Requirement Management perspective).
2. To link requirements with test cases, use the Link Matrix view or the Requirements
Coverage view of the Overview perspective (see section 6.4.16 Requirements
Coverage view).
Requirements List view (upper center): To view imported requirements as a list for a
selected document or folder.
Requirement Editor view (upper center): To organize the requirements, e.g. adding
information as text or images; opens only after double clicking a requirement in the
RQMT Explorer view.
Test Means view (lower center): To list the available test means, up to unit test and
component test.
Validation Matrix view / VxV Matrix view (upper center): To assign requirements to test
means; only visible when there is a validation matrix and after double clicking on it.
Link Matrix view (lower center): To link requirements with modules, test objects, test
cases and other requirements.
Suspicious Elements view (lower center): To have a quick look over all suspicious
(modified) elements.
History view (right): To display the version history of the selected requirement or
document.
Related Elements view (right): To display linked elements for a selected requirement
and compare these versions.
Document Preview (right): To edit the HTML contents of the requirements; only visible
after double clicking the respective requirement.
RQMT Explorer view: The RQMT Explorer view displays an overall view of all requirements
of a requirement document. If you double click a requirement, the requirement editor will
open to display all information of the specific requirement (see figure 6.71).
Adds filter.
Please note that a right-click on an element usually opens a context menu. It
contains the same buttons as shown in table 6.76; depending on the circumstances
there might be more options available.
The RQMT Explorer view also offers the option to organize the document structure. You
can adapt it to your needs and thereby gain a better overview of the sometimes numerous
elements. This includes the opportunity to add chapters and text elements to documents
as well as text elements to chapters (see figure 6.72).
To create new elements of all kinds in the document structure, use the RQMT Explorer tool
bar (see table 6.76) or the context menu after a right-click on the respective element. By
default, new elements are placed at the end of the document or chapter column and new
documents appear on document level. It is possible to drag chapters, text elements and
requirements into the desired position, even into other documents.
Figure 6.72: Example for the document structure within the RQMT Explorer view
In the following you find a brief overview of importing requirements. For
more detailed information please refer to the application note "Importing
Requirements" in TESSY ("Help" > "Documentation").
Þ Right click a document or right click within the blank RQMT Explorer view and
select “Import” from the context menu (see figure 6.73). When no document
is selected, the import will create a new document.
Þ Select the File Content Type. For the possible types see table below.
*.txt: Simple ASCII format where each line is recognized as a requirement. This is
a very basic format that allows importing all sorts of text as requirements.
*.csv, *.tsv: Comma or tab separated ASCII text with a headline indicating the column
names. This format allows specifying requirement id, version and all other
available requirement properties.
*.xml: TESSY-specific XML format which allows specifying the chapter structure of
a requirement document. All available requirement properties may be specified
within this format. It is the recommended exchange format when importing
requirements from other requirement management systems.
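A minimal *.csv requirement file following the description above might look like this. The column names are an assumption for illustration; the exact headline keywords are given in the application note "Importing Requirements".

```csv
id,version,name,text
REQ-1,1.0,Speed limit,The system shall limit the speed to 50 km/h.
REQ-2,1.0,Error signal,The system shall raise an error signal on sensor failure.
```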
The newly imported requirement document will be displayed in the RQMT Explorer view
(see figure 6.75).
The asterisk (*) indicates that the requirement is new and not committed yet. A mouseover
shows a tooltip (see figure 6.76).
Figure 6.76: The asterisk and a mouseover tooltip show the status "new".
You can commit all changes or changes of selected elements (see figure 6.77).
By ticking the box “Increment major version” a major version with a new
version number will be created.
If requirements have been changed, every commit creates a new requirement version.
The reason for this is traceability: only with these different requirement versions can
changes in the requirements be traced.
You can also discard the changes you made by clicking on (Discard Changes)
in the global tool bar. This will restore the last checked in status. With a click
on the little arrow next to the icon you can set whether you want to discard all
changes or changes of selected elements only.
You can rename a requirement document and assign an alias, which is useful for the
reporting because it gives you an abbreviation of the document name when building the
requirement identifier. The identifier will be: [document alias]:[id]-[version]. To rename or
assign an alias:
Þ Right click the document and select “Properties” from the context menu.
Þ Change the name or choose an alias (in this project “Example1” it is “IVIR”)
and click “OK”.
The new alias “IVIR” will be used within the Requirements List view and the document
preview (see figure 6.81). (The document preview will only be visible after double clicking
the respective requirement.)
The Requirement Editor will also open by double clicking a requirement in the
Requirement List view.
Within the Requirement Editor the requirements are displayed with text, figures (if available),
versions and IDs (see figure 6.84).
Every requirement has an explicit ID and a version number. TESSY provides the following
two mechanisms for assigning requirement version numbers:
When using external version numbers, the following checks will be performed on the
imported data:
• If any requirement content is changed but the version number is not changed,
TESSY will change the minor version number (e.g. from 1.0 to 1.1).
• If the version number was changed but no requirement content was changed,
a warning will be reported.
• If the new version number is less than the highest existing version number
for a requirement, a warning will be reported.
To edit a requirement:
The requirement will be opened within the requirement editor in the center of
the Requirement Management perspective.
TESSY will now create a locally modified version of the requirement, which is
indicated by a “>” in front of the requirement name (see figure 6.85).
Þ A new version of the requirement will be created and you need to decide whether
to increment the major or the minor version number within the check-in dialog
(see figure 6.78).
If you made only minor changes or want to commit a draft update of a require-
ment, you can decide to increment only the minor version. In all other cases,
it is recommended to increment the major version.
Figure 6.86: The first requirement has the version number 2.0
The VxV matrix supports the assignment of requirements to the test means used for
validation of the requirements. This helps to filter out those requirements that are to be
tested with unit and component testing. The assignments within the VxV matrix will be
used for requirement filtering for reporting.
Requirements will be tested using different test means, e.g. unit test, system test or review.
The default test means used within TESSY are for unit and component testing. You can
filter your requirements by test means for later reporting.
Deletes the selected test mean (Del). Only test means that are not used can
be deleted.
Within the Link Matrix view you can link modules, test objects and test cases with
requirements. It shows the link relationship of the elements currently contained within the
matrix.
Transposes the matrix, i.e. changes the rows and columns (Ctrl+T).
Adds all currently linked elements based on the elements selected within the
first column of the matrix.
Removes selected elements (Delete) from the Link Matrix. Does NOT delete
the element but removes it from the view.
Removes all elements from the Link Matrix. Does NOT delete the element.
Þ Drag & drop requirements, modules, test objects or test cases into the matrix.
The elements will be shown within one of the rows in the first column if they
are dropped there. If they are dropped in one of the right columns, they will
appear on top of the respective rows of the matrix.
Þ Use the context menu entry “Add to Link Matrix” within the RQMT Explorer
view, Test Project view or Test Items view.
Þ Click on (Remove All Elements) in the tool bar to remove all currently
displayed elements.
Þ Click on (Remove Selected Element) in the tool bar to remove the currently
selected element within a row of the matrix (if any element is selected).
This will only remove the elements from the matrix view, no changes
will be made to the elements themselves. They are not deleted in
the process and set links remain unchanged.
Important: Test cases cannot be added to the Link Matrix view in the
Requirement Management perspective. To do so you have to switch to the Overview
perspective (see figure 6.91). Test cases can also be added to the Link Matrix in
the TDE perspective or the SCE perspective.
Figure 6.91: Adding Test Cases to the Link Matrix view in the Overview perspective
• The Link Matrix view is available within the Overview perspective and within the
Requirement Management perspective.
• The Link Matrix view will also be visible within the TDE perspective and the SCE
perspective if elements had already been added.
• The current contents of the Link Matrix are remembered when restarting TESSY,
but the matrix is only a view: adding or removing elements does not cause any
changes to the elements themselves.
• The search button “Add All Elements Linked to Elements in Rows” allows finding
and adding the elements that are linked to the elements currently displayed within
the rows of the matrix.
• Setting links or changing elements will cause dependent elements to become
suspicious. Please refer to section 6.4.8 Suspicious Elements view for details.
If requirements have changed, the links within the Link Matrix view will be marked as
suspicious with an exclamation mark (see figure 6.92).
Þ A double click on a link within the matrix will delete the link and another
double click will create the link again.
Þ Right click a cell and select “Update Selected Suspicious Link” from the context
menu. The selected link will be updated.
Þ Right click a row and select “Update Suspicious Links” from the context menu.
All links within the selected row will be updated.
The Suspicious Elements view allows finding out the reason why an element is suspicious.
In this case the version number has changed and a short description has been added.
During the testing process modules, test objects and test cases will be linked to
requirements indicating that the respective requirements are tested by the linked
elements.
Whenever a requirement changes because of modifications or because a new version
has been checked in, the linked elements will become suspicious and need to be
reviewed. The suspicious status is indicated by an exclamation mark icon
decorator, e.g. on a suspicious test object.
“Set elements semantic equal” should only be used in situations where the change
of a requirement does not change its meaning, such as spelling corrections,
formatting etc. In all other cases the link should be updated.
When you have linked the test object and some test cases, any changes to the linked
requirements will cause the linked elements to become suspicious. Please switch to the
Overview perspective to be able to see that.
Figure 6.94: Suspicious test object and test cases in the Overview perspective
To determine within the Suspicious Elements view which modified requirements cause
the suspicious status of a test object:
Þ Select the suspicious test object within the Test Project view. (Again you have
to do that within the Overview perspective.)
The Suspicious Elements view will display the changed requirements (see figure
6.95).
As you can see in figure 6.95 above, the requirement text of the requirement “[IVIR:3-
2.2]:Zero” has been edited. Therefore it has the addition “MODIFIED”.
If you select a test case in the Overview perspective, the Suspicious Elements view will also
show the changed requirement(s) (see figure 6.96).
Figure 6.96: Selecting the suspicious test case shows the modified requirement(s)
Within the Differences view you can determine the exact differences:
The Differences view shows all changes of the requirement (see figure 6.97).
You need to determine if the change of the requirement affects the linked test cases and
adapt the test data if necessary.
If no changes to the test cases are required, update the link to acknowledge the requirement
change: click on (Update Link). The suspicious icon will then disappear for
the respective test case.
For more information about the Differences view go to section 6.4.12 Differences
view / Reviewing changes.
You can also update requirement links in the Link Matrix view.
The Attached Files view allows adding arbitrary files to the selected requirement. You can
add additional documents with detailed information about the requirement. The files will
be stored within the TESSY database.
The Attributes view allows adding arbitrary attributes for the selected requirement or
requirement document.
There are three predefined attributes named “Content Type”, “Enable Evaluation” and
“Enable Suspicious Calculation” on document level that control the behavior of the
requirement evaluation and suspicious calculation for elements linked to requirements.
Figure 6.100: Editing the requirement settings within the Attributes view
To edit an attribute:
The Attributes view will display the attributes for the selected element.
Þ Right click the desired attribute and select “Edit” from the pull-down menu.
Please note that not all properties can always be edited.
Usually it is possible to edit the value, but name and description can only be edited
where the attribute was originally created. Type and version of a requirement cannot
be edited.
For example, you can edit the “Content Type” attribute of a requirement document within
the Attributes view. This is necessary to enable the HTML document view and HTML
editing: the “Content Type” of the document needs to be “HTML” instead of “PLAIN”.
Þ Select the desired requirement document within the RQMT Explorer view.
Þ Right click the attribute “Content Type” within the Attributes view and select
“Edit”.
Þ Change the value to “HTML” and click “OK” (see figure 6.101).
For more information about editing requirements in the HTML editor go to section
6.4.15.2 Editing the requirement as HTML version.
The Differences view will be displayed within the lower pane; it provides a direct
comparison of the respective requirement versions printed as text (see figure 6.103).
Each requirement has a version history showing all of its changes.
To review the changes between any two versions of the history or between a historic version
and the current version,
Þ select either two versions within the history view to compare these versions or
select only one version within the view if you want to compare it against the
current version.
In this view you can see the links of requirements to other requirements, e.g. when creating
refined requirements based on a given requirements document.
After selecting a requirement in RQMT Explorer this view presents all linked elements of
the respective requirement. It shows all sub requirements or the linked main requirements
divided into Incoming Links and Outgoing Links.
Figure 6.106: Related Elements view with Incoming and Outgoing Links
This view displays information about errors that appear e.g. during test execution.
It is divided into four columns: Message, Location, File and Line. The
first column gives you a detailed error message with all necessary information; the other
three contain all available information about where the error is located.
Within the pull-down menu of the Problems view it is possible to select the respective
module in the Test Project view of the Overview perspective or to copy the message to
the clipboard:
icon action
Copies to Clipboard.
Toggles to the HTML inline editor (only available if the “Content Type” of
the document is “HTML”).
After you have created or imported requirements, you can edit them as HTML:
Þ If a note is displayed that the “Content Type” must be set to HTML, refer
to section 6.4.10.2 Editing attributes of a requirement. After changing the
content type to HTML, refresh the Document Preview with a click on .
Þ With a click on the icon you can switch between the WYSIWYG editor
and plain HTML (see figure 6.109).
Figure 6.109: HTML editing within the inline editor (WYSIWYG and plain HTML)
Important: By default you will find the Requirements Coverage view within
the Overview perspective!
Within the Requirements Coverage view you will link the test cases with the requirements.
You will also have an overview of the requirements coverage. This is the reason why
you will find this view within the Overview perspective.
6.4.16.1 Icons of the view tool bar
Refreshes the view in the Planning and Execution tab. With a click on the
little arrow next to the icon you can set on which selection you want to auto
refresh. You can also disable the auto refresh function (see figure 6.111).
Filters requirements in the Planning tab (component test, unit test, requirements
without assigned test means).
The current status of the links between modules, test objects, test cases and requirements
reflects the current state of your requirements coverage. This coverage can be examined
on arbitrary levels of your test project.
You can also create a report that shows the currently achieved planning coverage in the
Test Project view.
icon meaning
After execution of any tests, the test results are stored within test runs. The test result of
a test run covers the requirements that were linked to modules, test objects or test cases at
the time the test was executed. Therefore, the actual execution coverage result may differ
from the planning coverage result. The execution coverage view is read-only, because this
just displays the results. Any changes to requirement links need to be carried out within
the planning coverage view.
You can create a report that shows the currently achieved execution coverage.
icon meaning
Total number of test cases with achieved test results for linked requirements.
Passed test cases with achieved test results for linked requirements.
Failed test cases with achieved test results for linked requirements.
The idea behind linking requirements to modules and test objects is based on the following
process:
• First the complete list of requirements is gathered.
• For a further breakdown of the assignment, individual test objects are linked
to the requirements. This especially makes sense if the module has a large
number of linked requirements.
Important: Please note that only linked requirements of test cases will be ana-
lyzed. Unlinked requirements on test case level will not be taken into consideration.
For this process TESSY provides the Requirement Coverage view within the Overview
perspective. It is divided into two tabs:
• The Planning tab (see section 6.4.16.2 Planning tab) is the editor for all
requirement links to modules, test objects and test cases.
• The Execution tab (see section 6.4.16.3 Execution tab) provides a quick
overview of the achieved test results for linked requirements.
Important: When selecting objects on upper levels of the test project, the
calculation of the test planning/execution links can take a moment.
The content that is displayed in the Planning tab or the Execution tab of the Requirement
Coverage view depends on the current selection in the Test Project view of the Overview
perspective. If not already linked with any requirement, all available requirements will be
displayed; otherwise only the linked requirements will be displayed.
If you want to display the requirements on test case level, you need to select the respective test case.
You can choose to display all available requirements by clicking on “Always show un-
linked requirements” in the Requirement Coverage view. Once chosen, this remains active
for other selections as well.
To execute a test, you need to create and configure a new module. The necessary settings,
besides the source files that you want to test, are the following:
• the compiler or microcontroller target and debugger, i.e. the desired test
environment
• debugger settings
For a complete list of all the available attributes and possible values refer to the
application note “Environment Settings (TEE)”.
With the installation of TESSY, the configurations for all supported compiler and
target environments (including necessary settings and files) were copied to the TESSY
installation directory. You need to enable the compilers and targets that you want to
use and add them to your configuration file as described in the following sections.
Their default settings may need to be adapted to your needs, e.g. the installation
path of the compiler or target debugger is one of the settings that normally need
to be changed to your local values. Settings which have already been used with a
previous version of TESSY were also taken over during installation.
The TEE configuration management allows you to create variants of compiler and
target settings and assign them to a module. We recommend saving your settings
in a specific configuration file, which is the default when creating a new project
database (see section 6.5.6 Configuration files). This allows easy sharing of specific
environment configurations between developers of the same development team.
As a result you have all your basic settings at one central place, i.e. include
paths, additional compiler options, etc. Once configured, you can start testing
immediately using the required configuration for all your modules.
Þ In the menu bar click on “File” > “Edit Environment” (see figure 6.113).
The TEE will start with the custom configuration file assigned to this project
database.
Important: TEE is not an Eclipse-based program and does not provide views
and perspectives! Therefore you cannot drag and drop the panes.
General (left) Lists all supported compiler and target environments, which
are configured with default settings.
Configurations (left) Lists all supported compiler and target combinations. The
settings of these combinations are inherited from the General
section. The section displays a configuration file with default
environments.
When you have created your project database with the default settings, you
will already have a configuration file assigned to the project database. The name of
this file will be displayed within the lower right side in the status bar of TESSY.
This configuration file will be edited when opening TEE.
Copies a compiler, including the target and environment sub tree. Icon is only
active when a compiler is selected.
Copies a target, including the environment sub tree. Icon is only active when
a target is selected.
The following table shows the status indicators used by TEE and their meaning.
Item added as Windows environment variable for all processes spawned using
this test environment, e.g. the make call or the slave call.
Item disabled.
Error.
Error, but the cause is somewhere within the attributes inherited by this item.
Check for errors upwards or downwards within the configuration hierarchy and
fix the error there.
Information.
The information comes from another item upwards or downwards within the
configuration hierarchy.
Compiler active.
Target active.
Environment active.
Warning of an error within the attributes of the item. This needs to be resolved
somewhere else within the configuration hierarchy.
Warning, but the cause is somewhere within the attributes inherited by this
item. Check for warnings and errors upwards within the configuration hierarchy
and fix the errors there.
Example:
The directory is added to the Windows path variable and also added as Win-
dows environment variable. The attribute has been inherited.
TEE will display the attributes in different fonts to indicate the following situations:
Normal letters Represent factory settings or default settings from the “General”
section that have been inherited.
Bold The value has been defined for the first time for this attribute.
The TEE provides predefined configurations for all supported compiler and target
environments. By default, at least the GNU GCC and GNU GCC (C++) environments
will be enabled, as well as any compiler/target environments used in previously
installed versions of TESSY. Only basic attributes are visible to the user.
Under “Configurations” more sections are now visible, e.g. “System”, which
contains the compilers and targets that can be used (see figure 6.116).
You can see the enabled GNU GCC and GNU GCC (C++) environments. All other prede-
fined configurations are disabled (see figure 6.117).
When you select any item on the left pane, the attributes will be shown in the right pane
(see figure 6.118).
A system default configuration file contains the settings for all supported compiler
and target environments and has been installed with TESSY into the installation
directory. The configuration file assigned to the project database contains all settings
that are changed compared to the system default configuration. The contents of
this file are displayed within section “File”.
Configuration files of the respective sections will be stored in the following default folders:
The System section is read only! Change your settings by dragging and dropping
compilers and targets onto the File section. Changes are possible within the General section.
The following steps are necessary to customize the configuration file of the currently open
project:
• Enable the compiler/target environments that you want to use for testing
(see section 6.5.7.1 Enabling configurations).
• Review and adjust the attributes (see section 6.5.7.3 Adjusting enabled con-
figurations).
You need administrator privileges to change the contents of the “General” section!
If you do not have enough privileges, TEE will inform you and save your changes
in a user specific file.
Þ Either in section General or under “Configuration > System” select the compiler
or target.
Þ Select “Enable Compiler” or “Enable Target”, respectively, from the context menu
or press Ctrl+E (see figure 6.119).
TEE will remove the red cross from the icon to indicate that the item is
enabled.
You make a configuration available as a compiler/target environment for usage within TESSY
by adding it to the configuration file. To add an environment, copy an available
configuration environment from section “System” onto section “File” by either using the
context menu or drag and drop:
Þ Right click the compiler and choose “Add to File Section” from the context
menu (see figure 6.120), or grab the compiler with the mouse and drop it
onto the file section (see figure 6.121).
Þ Change the settings to your needs, e.g. delete some debuggers if they are not
necessary for your test environment.
Figure 6.120: Adding an environment to a configuration file by using the context menu
Normally you need to change some settings for your specific environment. Some of the
settings will be checked for validity: the TEE checks all enabled configurations and
displays error and warning signs as soon as an error has been found, e.g. if the “Compiler
Install Path” needs to be corrected.
If you do as explained above, computer specific path settings are kept out of the
configuration file, which you will probably share with other testers on different computers.
On the other hand, your customizations made in the “Configurations” part of the TEE are
saved in the configuration file, so this part of your customizations will automatically be
available to other testers as well.
Important: Be aware that customized files in the upper part (“General”) of the
TEE, e.g. the “Makefile Template” and the “Linker File” amongst others that
sometimes need to be customized, have to be copied into the TESSY project.
The best way is to copy customized files from the “General” part of the TEE into
the “config” folder of the project root. After that is done, the respective attributes
need to be updated in the TEE.
TEE preserves all default settings. You can revert the default values with “Restore
Factory Value” in the context menu (right click the attribute).
Þ If you want to change only an attribute value, select “Edit Attribute Value”
from the context menu.
Depending on the attribute type, either a standard selection dialog for that
kind of information will appear (e.g. Browse for Folder in case of directories)
or the inline editor for the value will be activated.
If you have changed a default value other than the factory setting, the attribute
will be displayed in italics.
Þ If you want to delete the attribute value, select “Reset Attribute” from the
context menu or press Del.
This will either remove the local value and show the inherited value or delete
the whole attribute entry, if it is only defined locally in this section.
You see different attribute types available: String, Boolean, Number, Float,
File and Directory.
Þ Check the desired specific attribute flags. This depends on the type used. For
description see table below.
Þ Click “OK”.
flag description
Inheritable This flag will always be ticked by default. It controls the inheritance of
the attribute: The attribute will be available in all (child) section nodes.
Some basic attributes are defined at the main nodes, e.g. compiler. Each
supported compiler will inherit these basic attributes.
Validate This flag may be important for directory or file types. The attribute value
will be validated by TEE, e.g. whether the path or file exists. An error
sign will indicate that the file or directory could not be found.
Read only This flag makes it impossible to change a default value by using the attribute
pane of the module properties dialog.
As List Using this flag, the attribute value will be handled as list of values (comma
separated). The values may be edited using a special list dialog. This is
useful for file or directory types.
Hex Format This flag is useful in combination with the number type. TEE will convert
all inputs (e.g. a decimal value) to a hex value, e.g. 1 > 0x01.
Visible This flag makes the attribute visible in the attribute pane of the module
properties dialog (and within the test report).
Not Empty Checks whether the value is not empty. An error sign will indicate that the
attribute does not have a value.
Environment Variable This flag is useful during test execution and during the make
process: TESSY will create an environment variable within the process space of the
process that will be used for test execution (e.g. running the slave process)
and for make (e.g. building the test driver).
Add to PATH Variable This flag is useful for attributes of type directory. As described
above for the flag “Environment Variable”, the respective directory value will be
added to the PATH variable of the process space used for test execution
and make.
Makefile Variable Adds this variable to the generated makefile for compilation/linking of the
test driver application. You can use this variable within the makefile for
include paths or other settings required during the make process. A variable
named “My Include Path” will be added to the generated makefile as
MY_INCLUDE_PATH with the respective value.
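As a sketch of this mechanism, suppose an attribute named “My Include Path” has the “Makefile Variable” flag set; the generated makefile could then contain and use the variable roughly as follows (the exact layout of the generated makefile and the path value are assumptions here):

```make
# hypothetical fragment of a generated makefile
MY_INCLUDE_PATH = C:/project/include

# the variable can then be used for include paths during the make process
CFLAGS += -I$(MY_INCLUDE_PATH)
```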
If you want to assign an existing customized configuration file to your project, do as follows:
Þ Under “Configuration File” click on “. . . ”, choose your file and click “Open”.
Click “OK”.
Within the TIE you determine which values are input and which ones are output variables.
Input values are all interface elements that have to be set before execution of a test
object. Output values are compared with the expected values after test execution.
After configuring the test environment of a module and opening the module the
analysis of the respective source files starts. The functions found within the source
files will be available as test objects, TESSY will try to assign useful default passing
directions automatically.
You need to specify missing information that TESSY could not determine automatically,
e.g. array dimensions or values of enumeration constants. This can happen
due to the usage of the “sizeof” operator when declaring arrays or enumeration
constants.
Test Project view (upper left) Same view as within the Overview perspective.
Properties view (lower left) Same view as within the Overview perspective.
Interface view (upper right) Displays all interface elements of the test object and
provides the edit fields to enter passing directions of
variables as well as additional information.
Plot Definitions view (right) To create and configure plots (same view as within the
TDE perspective).
The Test Project view displays your test project, which you organized within the Overview
perspective.
The Properties view is context sensitive: you can view the passing direction of a variable
(e.g. IN, OUT, IRRELEVANT) by selecting the variable within the Interface view. The
Properties view will then display the passing direction and the type information (see figure
6.123).
External functions.
Internal functions.
External variables.
Global Variables.
Function.
Stub function.
6.6.4.3 Handling
You can browse through the interface items of the currently selected test object. An arrow
in front indicates further levels (see figure 6.125).
Important: In some Windows versions you need to move the mouse over the
view to see the arrows! They fade out when the view is not active.
Figure 6.125: White arrow indicating further levels, black arrow when expanded
The variables are either read within the function (IN), written within the function (OUT),
both read and written (INOUT), to be altered by usercode (EXTERN), or simply not
used within the function (IRRELEVANT).
The TIE classifies all recognized interface elements of the test object into the following
sections:
External Functions All functions which are not defined within the source file(s) of
the module. These functions are called from the test object.
Local Functions All functions defined within the source file(s). These functions
are called from the test object.
External Variables External declared variables which are not defined within the
source file(s).
Global Variables Global variables and module local static variables which are
defined within the source file(s).
Parameter Parameters of the test object. You can browse through the
structures by clicking on the plus sign to see the basic
components.
Unused Contains all sections and the related interface elements which
are not used in the current test object.
The passing direction reflects the kind of usage for each variable while testing the
test object. You can specify how TESSY treats a value for an interface variable either
to provide the value before test execution (IN) or to keep the value for evaluation
and reporting after test execution (OUT).
A drop-down menu will be displayed with the available options IN, OUT, IN-
OUT and IRRELEVANT.
You have to specify one of the following passing directions for each interface element:
• provide an input value for that interface element, because the element is only
read by the test object (IN),
• evaluate and report the results of that interface element, because the element
is only written by the test object (OUT),
• both provide a value and evaluate the result, because the interface element
is both read and written by the test object (INOUT),
• not use the interface element at all (IRRELEVANT). In this case, you will
not see this variable for all further testing activities.
The following table shows possible passing directions of the different types of interface
elements:
element             IN    OUT   INOUT   EXTERN   IRRELEVANT
external variable   x     x     x       x        x
global variable     x     x     x       x        x
parameter           x           x                x
return                    x             x        x
When opening the module, TESSY automatically analyzes the passing directions and
stores its findings in the interface database. This information is available in
the TIE as default values of the passing directions. TESSY analyzes the usage of individual
interface elements by the test object.
Depending on that usage, the following passing directions will be set as default:
usage by the test object    default passing direction
read only                   IN
written only                OUT
read and written            INOUT
not used                    IRRELEVANT
In case the passing directions or any other interface information could not be deter-
mined, the respective fields in the TIE will be empty; e.g. if TESSY could not calculate
the size of an array dimension (indicated with a question mark), you have to set the
values manually. You can change the passing direction of an interface element to suit
your needs and reset it to the default determined by TESSY at any time.
Reset the passing direction either for all interface elements of one section or for a
single interface element:
Þ Select the respective section and “Reset to Default Passing” from the context
menu (see figure 6.126).
Þ Alternatively, select the respective interface element and “Reset to Default Passing”
from the context menu.
Important: If you change the data format, all newly entered values within the
Test Data view of the TDE will be formatted into the new format. Existing data
will not be formatted!
Pointers and complex data types are treated slightly differently from normal data types.
Pointers
Complex data types such as “Structure” and “Union” have a dependency between the passing
direction of the overall structure/union and the passing directions of their components. To
avoid invalid combinations, the TIE checks the setting of passing directions for these data
types in the following manner:
• When the passing direction of one component is set, TIE determines the
resulting passing direction for the overall structure/union and sets it au-
tomatically.
• When the passing direction for the overall structure is set, all components
are automatically set to the same passing direction.
Arrays
The passing direction of the data type “Array” is set for the entire array: only one
passing direction is defined for the whole array and all its elements. If
the array is made up of structured array elements (e.g. structures), it is possible to define
different passing directions for the components of these structures.
Arrays as parameters will be shown as pointers within the interface. They can be initialized
with NULL or point to a dynamic object (see figure 6.128).
The TIE displays all functions used by the test object either in section External Functions
or Local Functions, and it provides an interface to define stubs for these functions that will
be executed instead of the original function. TIE distinguishes two different kinds of stub functions:
You can define stubs globally for all test objects of the module or create a stub
independently of the global module setting.
• Advanced stub variables cannot be created for arrays and pointers to arrays.
• Multiple calls to advanced stubs will always use the same input values, and
the results of the last call will be taken as output values.
You can create stubs either for external or local functions which will be executed instead
of the original functions. There are several options available:
• Create stubs for all functions at once for all test objects of the module (global
setting).
• Create stubs for a single function for all test objects of the module (global
setting).
Advanced stub functions are an enhancement of normal stub functions: they allow you to
provide values for parameters and return values of stub functions like normal variables.
TESSY checks whether the stub is called by the test object with the specified parameter values;
otherwise the test fails. You can also provide a return value for further processing by the
test object. This reveals whether the test object handles the return values of the stub function
correctly.
You can create advanced stubs either for external or local functions. There are several
options available:
• Create advanced stubs for all functions at once for all test objects of the
module (global setting).
For test execution the information on data types of the test object interface has to be
complete. The dimensions for arrays, the values of the enumeration constants for enu-
merations, and the dimensions for bitfields have to be defined. If these values have been
automatically recognized by TESSY while opening the module, the respective text field
will show the calculated value for every data type. In this case, it is not possible to change
these values.
If a value for an interface element has not been recognized automatically, the respective
text field will be empty or contain the value -1. In case of arrays TIE will also use question
marks to indicate this issue, i.e. array[?]. In all those cases you have to add values
manually.
You can create new (synthetic) variables for usage within your test cases based on all basic
C/C++ types as well as based on all types available within your source files. To create a
new variable:
Þ Click “OK”.
The new variable will be shown within the TIE view with the default passing direction
“INOUT”. Adjust the passing direction to your needs.
TESSY provides an alias name mechanism to mirror the usage of #define to access
variables during the whole testing cycle (e.g. access to individual bits of common bitfield
structures). You will see your variables within TESSY named exactly as the defines you
are using in your code to access these variables.
Þ Change the value of the TEE attribute “Use Alias Names” to “true” (Refer to
chapter 6.5.7.3).
Instead of the real variable name “door_light_c.b.b0” that you are not using within your
code, you will now see the virtual name “door_light_left_b” given through the define
within the TESSY interface and within the test reports.
Important: In TESSY version 3.2 and higher the single Plots view has been split into
two separate views, the Plots view and the Plot Definitions view!
The Plot Definitions view displays the plots for a selected test object or test run. Within
the view you can create or configure plots for a selected test object.
Deletes the selected plot or removes the selected variable from its
plot.
A test case plot spans all values of all test cases of the selected vari-
ables.
A test step plot provides one curve for each test case spanning over all values of
the test steps of this test case. This requires at least two test steps for each test
case to define a valid curve.
An array plot creates plots for array type variables. There will be one curve
spanning over the array values for each test step.
Variables can be dragged from the TIE or TDE onto the Plot Definitions view, and plots
and variables can be dragged and dropped within the Plot Definitions view.
dragged item               drop target    result
scalar or array variable   empty area     A new plot containing the variable is created.
from TIE or TDE            plot           The variable is added to the plot if possible. (1)
variable from Plot         empty area     The variable is moved to a new plot. (2)
Definitions view           another plot   The variable is moved to the other plot. (1, 2)
plot from Plot             empty area     A copy of the plot is created. (3)
Definitions view
Table 6.162: Drag and drop handling with the Plots and Plot Definitions view
(1) Restrictions apply: A scalar variable cannot be added to an array plot, and whole
arrays cannot be added to a test case or test step plot (whereas single array elements can
be added to test case or test step plots).
(2) If CTRL is being pressed while dropping the variable, it will be copied instead of moved
to the other plot.
(3) Only applies if CTRL is being pressed while dropping the plot.
Only the plots that are ticked with “Use in Report” will be displayed within the reports
(see figure 6.135).
After preparing a test in the TIE, you need to create well-designed test case specifications.
The Classification Tree Method provides a systematic approach to create test case defini-
tions based on the functional specification of the function or system to be tested. TESSY
includes the specialized Classification Tree Editor CTE, which assists you in creating test
cases that are low in redundancy and sensitive to errors.
The basic concept of the Classification Tree Method is to first partition the set of possible
inputs for the test object separately and from different aspects, and then to combine them
to obtain redundancy-free test cases covering the complete input domain.
Test Project view upper left Displays your test project. For editing your test project
switch to the Overview perspective.
Properties view lower left Displays the properties of your test project, e.g. sources
to the test object.
Test Data view right Allows to assign test data to classification tree elements.
The Test Project view displays your test project which you organized within the Overview
perspective.
The Properties view displays all the properties which you organized within the Overview
perspective. Most operations are possible.
For changing any module related settings, switch to the Properties view within the Overview
perspective.
Classification
Tree view
Maximize the CTE window within the Classification Tree view to avoid additional
scroll bars and to always show the whole CTE window contents within the view.
Undoes the last move or edit operation within the classification tree pane.
Delete (Del).
Selects all leaves that are children of the current selection (Ctrl + L).
Zooms in.
Zooms out.
test item list lower left Defining test cases, test sequences and test steps. Every test
item creates a new line in the table pad.
draw pad upper right Drawing the classification tree with a root, classifications and
classes.
table pad lower right Marking classes of the classification tree in order to define
test cases, test sequences and test steps. Every test item
creates a new line in the table pad.
A test case is formed through the combination of classes from different classifications.
For each test case exactly one class of each classification is considered. The combined
classes must be logically compatible; otherwise the test case is not executable. You should
choose enough test cases so that all aspects are considered sufficiently, not only individually
but also in combination.
Þ Use the context menu (“New” > “Classification”, see figure 6.138) or press
Ins.
Þ Double click the new classification or press F2 to start the inline editor for the
tree item.
Within the draw pad you can move the classifications with drag and drop: Select
either a classification, a sub tree or all elements, then hold the mouse button
until the cursor turns into a cross with four arrows and move the selection.
Þ Right click the classification and select “New” > “Class” or select the classifi-
cation and press Ins.
Try the various tree layouts for a better overview, e.g. arrow, left-down or
horizontal!
You can assign test data to all interface variables for each tree node of the clas-
sification tree. This speeds up testing because the test data will be assigned
automatically to the test cases via the marked class nodes (refer to section 6.7.7
Test Data view).
Þ Create the test cases either using the context menu (“New” > “Testcase”, see
figure 6.139) or press Ins.
Connecting lines to the classes of the classification tree are drawn. If the mouse
pointer is placed over a point of intersection it is changed to a circle.
If you connect a test case with a class, the respective test data assignments of the
class will be assigned to the test case. If you want to review the resulting test data
assignments for the whole test case, select the test case within the test item list.
The Test Data view will now display the assignments for the test case.
The test data is displayed read-only because it is defined by the marks set
within the combination table and cannot be changed here.
• All tree items with assigned test data are marked with a yellow dot, when not
selected.
• When selecting a tree item, you will see the test data entered for this item within
the Test Data view.
• When selecting any interface element within the Test Data view, all classification
tree elements that contain test data for this interface element will be marked with
a blue dot.
Figure 6.141: The first two test cases are read only and created within the CTE
• Test items with values stemming from the CTE perspective are marked with new
status indicators: (test case) and (test steps).
• Values stemming from the CTE are read-only. If you want to change them, switch
back to the CTE perspective and do your changes there.
Whether using the CTE or creating the test cases manually within the TDE perspective,
you will use the Test Data view to enter the values. Because some operations and overviews
are only possible within the TDE perspective, refer to chapter 6.8.7 Test Data view to
learn how to use the Test Data view.
Instead of assigning test data directly to all variables of the test object interface for each
test case, you can assign them using the tree nodes of the classification tree. For each tree
node you can assign values to variables.
Child nodes inherit the values from their parent nodes, but you can also overwrite
inherited variable values for a child tree node. Inherited values are marked (see
also figure 6.142).
When combining leaf classes of the classification tree to test cases, the variable assignments
of the marked tree nodes will be assigned to the respective test case. In this way, you can
assign all test data within the classification tree and get your test cases automatically filled
by setting marks within the combination table.
The Test Data view on the right hand will show the test object interface with
the value assignments for this tree node as well as inherited values of parent
nodes of the tree node.
When selecting a test case within the test item list you will see the resulting test data
assignments according to the marks of the test case within the Test Data view.
When assigning test data to tree nodes of the classification tree, the same variable can be
assigned within different locations of the tree and each assignment can have another value
for the variable. The resulting value for such a variable (for a given test case) depends on
the classes being marked for a test case.
When calculating the variable assignments for a test case, CTE collects all marked tree
branches where the variable is assigned. A tree path is defined as the list of tree nodes up
to the root starting at the tree node where the variable is assigned. The tree paths are
sorted by position of their leaf nodes: The sort order is from left to right.
1. Left to right precedence: Tree paths are compared starting from the root until they
diverge. The rightmost diverged node wins.
2. Longer path precedence: If one tree path is a full sub path of the other, the longer
path wins.
The example below (see figure 6.143) shows different assignments of variable “x” within a
classification tree. The resulting value for “x” is indicated for each test case.
The resulting value for the test cases will be calculated as follows:
1. For the first test case the variable is assigned in class “b” which is a longer path than
the assignment within the root, so the value of class "b" will be taken.
2. For the second test case we have values within class “b” and class “e”. The tree
paths diverge below the root node and the classification “O” is on the right side so
that the value of class “e” will be taken.
3. In the third test case there are values within the root node and within classes “b” and
“c”. Both tree paths of the classes are longer than the root path and the classification
“B” is on the right side so that the value of class “c” will be taken.
4. In the fourth test case we have the tree paths of classes “b” and “g” that diverge at
the root. Because classification “O” is on the right side, the value of class “g” will
be taken.
5. In this test case all marked classes refer to the value defined within the root node so
that the value of the root node will be taken.
Test Project view upper left Same view as within the Overview perspective.
Test Results view upper left Same view as within the Overview perspective.
Test Items view lower left Same view as within the Overview perspective.
Properties view lower left Same view as within the Overview perspective.
Test Data view upper right To enter test data and expected values and, after the
test execution, to review passed, failed and undefined values.
Test Definition view lower center To display the test case specification, the optional
description and linked requirements of the current test case.
Declaration view lower center To define own variables for the user code.
Usercode Outline view lower right To display the usercode that will be executed at a
certain point in time during the test execution.
Plot Definitions view lower right To create and configure plots (same view as within
the TDE perspective).
The Test Project view displays your test project which you organized within the Overview
perspective.
The Test Results view displays the coverage measurement results and the results of a test
run of expected outputs, evaluation macros and call traces, if applicable. It is the same
view as within the Overview perspective.
The Evaluation Macros view lists the detailed results of the evaluation macros if the
usercode of the test object contains any evaluation macros. The results are displayed
wherever they occur within the usercode, e.g. within stub functions or test step epilogs.
It is the same view as within the Overview perspective.
Within the Test Items view you get an overview of your test cases and test steps which
you organized within the Overview perspective or the CTE (see section 6.7).
To create test cases and test steps manually without using the Classification Tree Editor,
switch to the Test Items view within the Overview perspective.
The Properties view displays all the properties which you organized within the Overview
perspective. Most operations are possible.
For changing a source switch to the Properties view within the Overview perspective.
The view is context sensitive: You can view the passing direction and all type information
of a variable (i.e. the basic type, the size as well as any modifiers and pragmas) if you
select the variable within the Test Data view (see figure 6.145).
Whether using the CTE or creating the test cases manually within the TDE perspective,
you will use the Test Data view to enter or review the input values and expected results of
all test cases and test steps.
Important: CTE exported values are read-only within the TDE perspective;
the cells cannot be edited. Switch to the CTE perspective (respectively the
underlying document) to change such values if necessary.
The following table shows the indicators of status and their meaning which are used by the
Test Data view.
Test step passed: The actual result did match the expected results.
Test step failed: The actual result did not match the expected results.
Test step generated: The test step was generated by the test case generator
but has no executable data yet.
Test step generated with data: The test step was generated by the test case
generator and executable test steps were generated.
The Test Data view displays the interface of the test object. On the left side of the Test Data view
you see the following interface elements and icons:
Inputs Input values are all interface elements that are read by the test object.
Outputs Output values are written by the test object and represent the expected
results.
Within the TIE you determine which values are Inputs and which are
Outputs. TESSY tries to determine the default passing directions (input or
output) automatically when analyzing the source files.
Globals Globals are the global variables referenced by your test object. Global
variables can contain global static variables and static local variables that
are defined within functions.
Dynamics Pointer targets, referenced through a pointer of the test object interface.
(white arrow) The arrow is displayed when an element has child elements. Click on the
arrow to expand.
If you want to expand all child elements, use the context menu (“Expand
all”).
Table 6.181: Interface elements and icons of the Test Data view
Every variable will be assigned to one of the interface elements described above, e.g.
Parameter, Global etc. Initially, the Dynamics section will always be empty. The columns
on the right represent the test steps where the values of the variables are defined.
• Select a column by clicking on the number of the test step. The selected column is
marked in blue (compare figure 6.147).
• Move the mouse pointer over the number of the test step to see the name of the
test step within a tool tip (compare figure 6.147).
• Select all values for a variable by clicking on the variable in the left column.
• If you select the icon “Highlight Undefined Values” in the tool bar, all variables that
do not contain any data are marked in yellow (compare figure 6.147).
Figure 6.147: Test step 1.1 is selected and undefined values are highlighted in yellow.
To choose the test steps you want to see in the Test Data view you need to select them in
the Test Item view (Ctrl + click) first. Make sure that (Link with the Test Item View)
is enabled in the Test Data view tool bar. After that only the selected test steps will be
displayed (see figure 6.148).
Entering values
Values for interface elements are entered into the cells of the Test Data view. The values
will be checked and/or converted when leaving the cell or when changing to any neighboring
cell.
The TDE provides undo/redo functionality for all changes within the Test Data view!
By default, all imported or manually entered test data values are checked for syn-
tactical correctness, truncated to the type size and optionally formatted. The truncation of
values depends on which kind of number format was used:
• Decimal numbers will be checked against the minimum and maximum value of the
respective data type. When entering -10 for an unsigned type you will see a value
of 0 as test data. If the value is less than the minimum, the minimum value will be
used, if it is more than the maximum, the maximum value will be used.
• Hexadecimal and binary numbers will be truncated to the number of bytes available
for the respective data type, regardless if the type is signed or unsigned. When
entering 0xF11 for an 8 bit type you will see a value of 0x11 as test data. Also when
entering a binary 0b1100001111 you will see a value of 0b00001111 as test data.
• Missing leading zeros will be filled up for hexadecimal and binary values. If you enter
0x12 for a 16 bit value, you will see a value of 0x0012 as test data.
After the truncation of the value to the available data type size, it will be formatted according
to the data format configured within TIE. Suppose you have an 8 bit signed value with data
format “Decimal” and you enter a value of 0xF05: The value will firstly be truncated to 0x05
and then formatted as decimal number so that you will see 5 as test data value.
Important: If you change the data format within TIE, only newly entered test
data will be formatted according to the new format. If you want to change the
format of the available test data for a certain variable, you need to use the “Con-
vert to Data Format” menu entry within TDE. Make sure the box “Enable Value
Checking and Conversion” is checked within the menu “Window” > “Preferences”
> “Test interface Settings”.
Important: When running the test with undefined values, the initial value passed
to the test object depends on the default initialization of the compiler.
Clicking into a cell activates the inline editing mode and you can enter arbitrary values:
You can navigate between the cells with CTRL + cursor left/right.
You can apply the available operations of the context menu to multiple cells depending on
the current selection within the Test Data view:
• If you select a single variable of the interface tree, all values of all test steps
for this variable will be affected.
• If you select a test step column, all variables of this test step will be affected.
• If you select an array, a struct or a union, all components of this element will
be affected.
The current selection is highlighted in blue. You need to select a test step column
before right clicking for the context menu, because the right click will not select
the test step column.
Enums
Þ Click in a cell.
A dropdown menu will open showing the available enum constants (see figure
6.150).
Þ Choose any constant or click into the inline editor field to enter any other
suitable value.
Input values
Input values are all interface elements that need to have a defined value at the beginning
of the execution of a test object. These values are included in the calculation of the output
values during test execution or will be used as a predicate of a condition.
External called functions can be defined as advanced stub functions to provide the return
value and the expected parameter values within the Test Data view. If a test object calls
an external function multiple times the same return value would be returned for each
invocation and also the parameters would be checked against the same parameter values
as specified within TDE. In order to provide different values for each invocation of the
advanced stub, you can enter multiple values as a vector written within braces, e.g. 1,2
(see figure 6.151). In this example the return value of the stub will be 1 for the first
call, 2 for the second call. You can also specify a vector value for the expected parameter
values.
Expected values
Expected values are the calculated results regarding the input values for the test object after
test execution. TESSY will compare both expected and actual values after test execution.
The result will be either failed or passed.
Important: The values are compared with the evaluation mode equal (==).
To change the evaluation mode refer to section Entering evaluation modes.
Þ Rightclick the variable and choose “Initialize Test Data. . . ” from the context
menu.
option                        means
Random                        a range of generated values for the initialization. The random
                              values will adhere to the min/max limits of each interface
                              variable type.
Ignore values                 all input and expected values will be set to “*none*”.
Initialize all array elements all array elements will be initialized; otherwise only visible
                              array elements will be initialized.
The following table shows the initialization values for certain data types:
type     contents
Integer  0x00000000; a given pattern is repeated across the type,
         i.e. if 0x42 is entered as pattern, all int variables will be initialized
         with 0x42424242.
Float    0.0
Array    all array elements are initialized according to their type if option
         “Initialize all Array Elements” is used.
Pointers initialized with NULL provided that they do not point to
         dynamic objects.
It is possible to set the passing direction of variables that are not needed for testing any
more to “IRRELEVANT” in the Test Data Editor. Right click the variables you want to
hide and choose “Set Passing to IRRELEVANT” in the context menu. You can undo this
by choosing “Restore Passing” in the same menu if necessary.
The chosen variable will still be displayed and marked as “[IRRELEVANT]”. When saving
the test data the passing direction of this variable is updated and the variable will no longer
appear within the Test Data view.
Important: Passing directions set to irrelevant that have been saved can only
be restored in the Test Interface Editor (TIE) (see section 6.6.4.5 Setting passing
directions).
Using the evaluation mode allows you to specify how to compare the actual value (calculated
during the test run) with your specified expected value. The evaluation mode together
with the expected value will be used to process the test results after the test run.
Þ Click in a cell.
Þ Enter the desired evaluation mode within the inline editor mode (see figure
6.153).
Figure 6.153: Entering evaluation mode “unequal” within the inline editor
equal            ==          Checks the expected value and actual value for equality.
                             This is the default setting.
unequal          !=          Checks the expected value and actual value for inequality.
greater          >           Checks if the actual value is greater than the expected value.
less             <           Checks if the actual value is less than the expected value.
greater or equal >=          Checks if the actual value is greater than or equal to the
                             expected value.
less or equal    <=          Checks if the actual value is less than or equal to the
                             expected value.
range            [1:10]      Checks if the actual value is within a range, here: range 1 to
                             10.
deviation        100 +/- 1   Checks if the actual value equals the expected value but takes
                 100 +/- 1%  a deviation into account. The deviation can either be an
                             absolute value or a percentage; e.g. for 100 +/- 1 the actual
                             values 99, 100 and 101 would yield OK.
By default, values have to be assigned for all variables with passing directions “IN” or
“INOUT”. It can be useful to not overwrite a value calculated in the last test step. In this
case you can use the special value “*none*”:
You can generate test cases and steps automatically, e.g. test steps for a range of input
values which you enter in the TDE:
Þ Create a generator test case within the Test Items view as described in section
6.2.5.4 Creating test steps automatically.
Þ Enter your values and a range of an input value, i.e. [6:9] as in our example
(see figure 6.154).
TESSY can generate the test cases stepwise: Enter a semicolon and the step size
behind the range, e.g. [6:15;3] would give you the values 6, 9, 12 and 15.
You can also enter specific values, e.g. [1,5,8] would be the values 1, 5 and 8.
Combinations are as well possible: [2:8;2,11,15,20:22] would be 2, 4, 6, 8, and 11,
15, 20, 21 and 22.
Figure 6.154: Generator test case 4 has a range value from 6 to 9 for parameter v1
TESSY will now automatically create a test step for every value within the
range you entered (see figure 6.155).
You might need to expand or scroll the Test Data view to see all the
test steps!
Figure 6.155: Four test steps are generated for every value within the range “6 to 9”.
The test steps are read only because they were generated!
You can change the type of the test case and test steps to “normal”. That way you can
edit the test steps as usual.
Þ Rightclick the test case and select “Change Test Case Type to Normal” (see
figure 6.156).
The test case and test steps are changed to type “normal” but will indicate originally being
generated with a status within the Test Items view (see figure 6.157).
Figure 6.157: The test case and test steps originally being generated.
You can reverse the action with a rightclick and choose “Change Test Case Type to Generated”.
Inherited modules and their test objects need to be synchronized (see Creating variant
modules) to get the inherited test cases and test steps with all inherited values. The Test
Data view shows inherited and overwritten values with different colors.
• Dynamic objects will be inherited from the parent test object. Additional dynamic
objects cannot be created within the inherited test object.
• CTE test cases cannot be edited within the inherited test object. Any changes need
to be done within the parent test object.
• Inherited user code (e.g. prolog/epilog) cannot be overwritten with “empty” user
code. It is recommended to add a comment stating why the inherited usercode has
been overwritten instead.
See figure 6.158 for the color coding of values displayed within Test Data view:
• New variables of the variant test object are shown like normal values without
special highlighting. The variable “border_size” was introduced within the
variant source code, therefore there are no inherited values.
• The test step 3.1 has been deleted: The inherited values are displayed for
information only. The test step will be skipped when executing the test.
• The additional test step 4.1 cannot have any inherited values. All values are
displayed as normal.
6.8.7.10 Pointers
The context menu offers the following possibilities to assign a value for a pointer:
option means
Set Pointer NULL: The value of the selected pointer will be set to NULL. The text box will be filled with NULL.
Set Pointer Target: You can select another interface element or a component of a structure or union and assign its address to the pointer. The cursor will change when you move the mouse pointer over a variable:
• The object type fits the pointer’s target type: you can assign the pointer.
• The object type does not match the pointer’s target type: you cannot assign the pointer.
When you click on an element, the variable name of that element will be entered into the field next to the pointer. During test execution, the address of that variable will be assigned to the pointer.
Create pointer target value: Allows creating a new object as the target object for the pointer. The address of the object will be assigned to the pointer. The type of the created object depends on the target type of the pointer.
A new target object will be listed in the dynamic objects section of the TDE.
Array as target value: It is also possible to create an array as the target value using the Dimension option of the Create Pointer Target dialog:
Þ Tick the check box “As Array of Size” and enter an appropriate size into the input field.
Þ Click “OK”.
The name of the new object appears in the input field of the pointer value. TDE will
create an array of the pointer’s target type. The pointer will point to the first array
element.
Within the Dynamics section, you will see the newly created target object.
You can enter values, like for every other interface element.
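Conceptually, creating an array as target value results in a setup like the following C sketch (the names are made up for illustration; TESSY creates the actual dynamic object internally):

```c
#include <stddef.h>

#define TARGET_ARRAY_SIZE 4   /* corresponds to the "As Array of Size" input */

/* The new dynamic target object as it would appear in the Dynamics section. */
int target_array[TARGET_ARRAY_SIZE];

/* The pointer under test receives the address of the first array element. */
int *pointer_under_test = &target_array[0];
```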
Variables defined as static local variables within the test object or called functions can also
be tested. Since such variables are not accessible from outside the location where they
are defined, TESSY instruments the source code and adds some code after the variable
definition to get a pointer to the memory location of the variable. All static local variables
can only be accessed after the code containing the variable definition has been executed.
You need to keep this in mind when providing input values or checking results for such
variables. The following restrictions apply for static local variables:
• The first time when the code of a static local variable definition is executed,
the variable will get the initialization value assigned from the source code.
It is not possible to set the initial value from TESSY. You need at least one
test step to initialize the variable by executing the definition code. The next
test step can then supply an input value for the variable.
• The same applies for expected values: If the source code of the variable
definition has not been executed, the result value of the respective variable is
not accessible and will be displayed as *unknown* in this case. This situation
may arise when the variable definition is located within a code block which
has not been executed, e.g. within an if statement block.
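The instrumentation idea can be sketched as follows (hedged: the code TESSY actually inserts may look different; ts_count_ptr and count_calls are hypothetical names):

```c
/* Hypothetical capture pointer a test driver could use to access the
 * static local variable; it stays null until the definition code runs. */
int *ts_count_ptr = 0;

int count_calls(void)
{
    static int count = 0;   /* initialized only on first execution */
    ts_count_ptr = &count;  /* sketch of the inserted capture code */
    return ++count;
}
```

Before count_calls() has executed at least once, ts_count_ptr is still null, which mirrors the *unknown* result described above.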
Figure 6.159: Test Definition view within TDE with linked requirement
The Test Definition view displays the test case specification, the optional description and
linked requirements of the current test case in individual input fields. The test case
specification should enable the tester to provide concrete input values and expected results.
The Test Definition view is context sensitive! To display the specifications, definitions and
requirements for a test case:
Þ Select a test case within the Test Items view (see figure 6.159).
Important: The contents are not editable if the test cases have been created
and exported using the CTE!
Within the Declarations/Definitions view you can define your own helper variables that may
then be used within the user code. If you just want to declare a variable that is already
available within linked object files, you do this within the declarations section. If you want
to create a new variable, you need to enter the declaration into the declarations section
and the respective definition into the definitions section. The variable can then be used
within the prolog/epilog and stub function code.
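As a sketch, a hypothetical helper variable could look like this (helper_calls is an invented name; in TESSY the two parts would go into the declarations and definitions sections respectively):

```c
/* Declarations section: makes the helper known to the usercode. */
extern int helper_calls;

/* Definitions section: creates the variable itself. */
int helper_calls = 0;
```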
Important: TESSY provides the means to add new variables within the TIE
perspective (see section 6.6 TIE: Preparing the test interface). Such variables
can be used like normal interface variables of the test object which is much more
convenient than defining them here in the Declarations/Definition view.
Within the Prolog/Epilog view you can specify usercode that will be executed at a certain
point in time during the test execution. The C part of the usercode will be integrated into
the test driver and executed at the places specified. The following figure outlines the call
sequence of the usercode parts.
The figure shows the interaction of the usercode sections with the assignment of test data
provided within TDE and the result values that are saved into the test database and
evaluated against the expected results.
During the test object call, the code specified for the stub functions (if any functions are
called from your test object) will be executed depending on the code logic of your test
object.
Within the prolog/epilog code you can reference the global variables used by your test
object that have one of the passing directions IN, OUT, INOUT or EXTERN. The following
special macros are available within the prolog:
• TS_THIS is available for C++ methods only and provides access to members
of the current “this” object (e.g. “TS_THIS.member = 5;”).
Example
Have a look at figure 6.161 Prolog/Epilog view at the beginning of this section. The
test step 1.1 prolog contains the code TS_REPEAT_COUNT=2, and the Repeat Count for
this prolog/epilog section was set to 5.
The whole prolog / test object call / epilog sequence will be repeated five times and the test
object will be called twice in every repetition of this loop. Since there are 5 loops, the test
object will be called 10 times in total.
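The resulting call count can be sketched as two nested loops (illustrative only; this is not the actual TESSY driver code):

```c
/* Outer loop: the Repeat Count of the prolog/epilog section.
 * Inner loop: TS_REPEAT_COUNT as set within the prolog.
 * Returns how often the test object would be called in total. */
int total_calls(int section_repeat_count, int ts_repeat_count)
{
    int calls = 0;
    for (int i = 0; i < section_repeat_count; i++) {  /* prolog ... epilog */
        for (int j = 0; j < ts_repeat_count; j++)
            calls++;                                  /* test object call */
    }
    return calls;
}
```

With the values from the example above, total_calls(5, 2) yields the ten calls mentioned in the text.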
In some cases it is useful to specify a common prolog/epilog for all test steps or for the
test steps of a certain test case. For this reason, you can enter a prolog/epilog on test object
or on test case level. Such a default prolog/epilog will be inherited by the respective child
test steps. In this way you avoid copying the same prolog/epilog multiple times to each
test step. A default prolog/epilog can be overwritten on test case/step level for individual
test cases/steps if desired.
Figure 6.163: TESSY provides default prolog/epilog on test object level to be inherited
to test cases and test steps
For both prolog and epilog, there are up to three tabs available, depending on the selected
object:
Test objects:
• “Default for Test Cases” - The default prolog/epilog that will be inherited to
all test cases of this test object
• “Default for Test Steps” - The default prolog/epilog that will be inherited to
all test steps of this test object
Test cases:
• “Default for Test Steps” - The default prolog/epilog for the test steps belonging
to that test case
Test steps:
Figure 6.164: TESSY allows Prolog/Epilog being inherited from test case or test object
The Prolog/Epilog view provides a popup menu containing variables for convenient
editing.
Þ Use the Usercode Outline view to mark the test case or test step for which you
want to set the usercode.
Þ Click into the Prolog or Epilog section of the Prolog/Epilog view and enter the
usercode.
Þ Press CTRL + Space, or type the first letters and press CTRL + Space.
The popup menu appears (see figure 6.165), showing all available names or
the list filtered according to the characters you already typed.
Þ Use the Usercode Outline view to navigate and select a test case or test step
from the tree.
Within the test step epilog or within stub functions, you can evaluate any variable or
expression using the evaluation macros. These predefined macros allow checking an
expression against an expected value. The result is stored within the test report like the
evaluation of normal output variables of the test object.
Evaluation macros can only be used within the following Usercode sections:
A popup menu contains all available interface variables and symbolic constants for
convenient editing as well as the available evaluation macros, e.g. TESSY_EVAL_U8 for
unsigned character values:
Þ Select the evaluation macro for the specific data type of the variable which
shall be evaluated. The only difference between the evaluation macros is the
type of argument for the actual and expected value; see the table below for a
description of the available types.
Example: Below is an example showing the template in the second row and the edited
evaluation macro underneath.
Both value arguments given to the evaluation macro may be any value that fits
the specified eval macro type. By convention, the first (left side) value should be
the actual value that shall be checked and the second (right side) value should be
the expected result. This way you will get the same order of values within the test
report as for normal output values.
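The actual/expected convention can be illustrated with a hedged stand-in (eval_u8 is an invented helper, not the real TESSY_EVAL_U8 macro, whose exact expansion is internal to TESSY):

```c
/* Stand-in mimicking the convention: first argument actual, second expected. */
typedef struct {
    unsigned char actual;
    unsigned char expected;
    int passed;
} eval_entry;

eval_entry eval_u8(unsigned char actual, unsigned char expected)
{
    eval_entry e = { actual, expected, actual == expected };
    return e;
}
```

Keeping the actual value on the left reproduces the actual/expected column order of the test report.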
macro argument type
TESSY_EVAL_FLOAT float
TESSY_EVAL_DOUBLE double

operator meaning
== equal
!= unequal
< less
> greater
Each invocation of an evaluation macro results in an additional entry within the test report.
All evaluation macros will be added to the list of actual/expected values of the current test
step. The results will be displayed within the Usercode Outline view and the Evaluation
Macros view.
It is possible to format the output of the evaluation macros as binary value, decimal or
hexadecimal (default setting) by appending one of the following format specifiers at the
end of the evaluation macro name:
The report shown below contains all possible evaluation macro name formats. The format
specifier itself will be omitted within the final evaluation macro name.
The Stub Functions view displays the code for all stub functions. Normally all stub code
is defined on test object level.
In the Stub Functions view you can insert stub code for test steps, test cases, and test
objects. The code fragments will be combined into a single stub function implementation
and will be called in the order shown in figure 6.171. If you don’t want to execute the
parent fragments for specific test cases or test steps, you need to add a return statement
within the respective stub code fragment.
Important: Stub code must be provided for all non-void stub functions in
order to return a valid value as result of the stub function call. If there are stub
functions without stub code, the test execution will be aborted with an error. If
the return value of a stub function is not used by the test object, you should add
at least a comment here.
Please note the error icon at the stub function name in figure 6.170, indicating that stub
code is missing. You can switch off this check by unchecking the respective test execution
preference. This preference setting will be stored within the preferences backup file as
described within the Windows > Preferences menu.
Within the stub function code you can reference the parameters passed to the stub function
and also global variables used by your test object. The following special macros are available
within the stub body:
Figure 6.173: Stub Functions view with code using TS_CALL_COUNT macro
It is recommended to define test case/step specific stub code instead of using the
macros TS_CURRENT_TESTCASE/TS_CURRENT_TESTSTEP.
Example for the use of test object, test case and test step specific stub code:
If only the stub code of e.g. the test step should be executed, you need to set a return at the
end of the inserted code on test step level. If only the stub code of the test step and the test
case should be executed, you need to set a return at the end of the inserted code on test
case level. Stub code on test object level will then be skipped.
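Conceptually, the combined stub function behaves like the following sketch (the function names are invented; the fragment order follows figure 6.171):

```c
int executed[3];
int executed_n = 0;

/* Invented fragment markers: 1 = test step, 2 = test case, 3 = test object. */
static void fragment_step(void)   { executed[executed_n++] = 1; }
static void fragment_case(void)   { executed[executed_n++] = 2; }
static void fragment_object(void) { executed[executed_n++] = 3; }

/* Sketch of the single combined stub function implementation. */
void combined_stub(int return_after_step)
{
    fragment_step();
    if (return_after_step)
        return;            /* a return skips the parent fragments */
    fragment_case();
    fragment_object();
}
```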
TESSY will automatically generate the code to execute a test including all the stub code
you inserted in the Stub Functions view. Below you can see brief examples of inserted
stub code on test object level, test case level and test step level on the left along with the
automatically generated code resulting from that on the right.
Figure 6.175: Stub code examples on test object, test case and test step level
In the automatically generated test code (see figure 6.176) you can recognize the test
execution direction as shown in figure 6.171.
For more information about the Usercode Outline view and navigating within the test items
see section Usercode Outline view.
The Usercode Outline view displays the usercode and stub function code that will be exe-
cuted at a certain point in time during the test execution and that you just defined in the
Prolog/Epilog view or Stub Functions view. Use this view to navigate within the test items
when editing prolog/epilog or stub function code.
The view shows entries for each location where usercode is defined. Click on a test case
or test step to see the inherited stub function code for the selected test item.
The Stub Functions view shows the stub code to be executed for the test step 1.1 that is
currently selected within the Usercode Outline view. Please note the hint within the text
field title indicating that the stub code is inherited from the test object level.
Now there is an inserted stub function code entry selected within the Usercode Outline
view. The entry indicates that the stub code is inserted for test step 2.1 which is also
indicated within the text field title.
Important: In TESSY version 3.2 and higher, the former single Plots view has been split
into two separate views, the Plots view and the Plot Definitions view!
The Plot view displays the included test items and chart(s) for a plot selected in the Plot
Definitions view. The number of different charts per plot depends on the plot mode:
• test case plot: one chart for all included test items
For test step and array plots the chart can be selected by navigating the test item tree.
Important: If you use other evaluation modes than equal (e.g. <, <=, >,
>=, !=, [Range]), it is not possible to display the expected values within the plot
chart. Displaying the expected values is only possible when using the evaluation
mode == (equal). See section 6.8.7.6 Entering evaluation modes.
The test item tree on the left-hand side of the Plot view shows all test items included in
the selected plot, as defined in the Plot Definitions view via the “Set Included Test Items”
command.
This tree is for navigating the different charts of a plot, if there is more than one chart
available.
6.8.13.2 Chart
The chart displays the values of the variables included in the selected plot. The values are
color-coded:
For expected values, dotted blue lines represent the upper and lower bound of expected
values such as 10 ± 5.
Only variables that have “Use in Report” checked in the Plot Definitions view are shown in
the chart. Selecting a variable in the Plot Definitions view will highlight the corresponding
value series in the Plot view.
The Plot Definitions view allows creating and configuring plots from within the TIE and
TDE perspective. For details refer to section 6.6.5 Plot Definitions view within chapter
TIE: Preparing the test interface.
The Coverage Viewer (CV) displays the results of the coverage measurement of a previously
executed test, which is either
• an instrumentation which you selected within the Properties view for your
module or test object under test (see section 6.2.3.4 Coverage tab), or
• an instrumentation which you selected for your test run (see section 6.2.2.6
Executing tests).
The information displayed and the sub windows shown within the CV depend
on the coverage options selected during the test run. The CV will be updated with the
coverage information of the currently selected test object whenever you switch to the CV
or when you select another test object within the Test Project view.
Test Project view (upper left): Same view as within the Overview perspective.
Called Functions view (middle left): Contains the test object itself and the functions called from the test object.
Flow chart view (upper middle): Displays the graphical representation of the control structure of the currently selected function (only displayed when a coverage measurement was selected for a test run!).
Coverage views (upper/middle right): Display the results for the selected coverage instrumentation.
Code view (lower pane): Displays the source code of the currently selected function (and highlights selected decisions/branches).
Report views (lower pane): Display the ASCII based coverage summary reports for the selected instrumentation.
• C0 (Statement Coverage)
• C1 (Branch Coverage)
• DC (Decision Coverage)
For more information about coverage measurements and usage of coverage
analysis refer to the application note “Coverage Measurement” in TESSY (“Help” >
“Documentation”).
There are no views for the Entry Point Coverage (EPC) and the Function Coverage
(FC)! The results are displayed only within the Test Overview Report (see section
6.2.2.11 Creating reports) or the Test Project view (see figure 6.182).
Figure 6.182: Results of the EPC are displayed within the Test Project view
The following figure 6.183 displays a component test with a Function Coverage
instrumentation result (amongst others).
Please note:
• If you move the mouse over the result within the Test Project view, the percentage
of the coverage for the respective item will be displayed.
• The Called Functions view displays the coverage result for every function.
The Test Project view displays your test project which you organized within the Overview
perspective (see section 6.2.2 Test Project view).
After a test run you will see columns added to the Test Project view for each applied
coverage measurement. The coverage icons provide an overview of the reached coverage
for each test object as well as cumulated for modules, folders and test collections.
The Called Functions view contains the test object itself and all called functions of the test
object. It displays the achieved coverage of the current test run. By clicking on a function,
you can review the source code within the Code view and see the code structure within
the flow chart view.
Þ Click on (Toggle Code Coverage Highlighting) in the tool bar of the Code
view.
The respective source code lines will be marked within the Code view.
Þ Within the Called functions view move the mouse over the function.
The flow chart view displays the code structure and the respective coverage in graphical
form.
You might want to learn the functions of the flow chart view with an easy example:
Consult section 5.1.10 Analyzing the coverage of the Practical exercises.
The code structure of the function will be displayed in the flow chart view.
Zoom in or out using the tool bar icons (Zoom in / Zoom out) or the entries from the
chart menu.
Within each flow chart, you will see the branches and decisions of the function displayed
in green and red colors, which indicates whether the respective decision has been fully
covered or the respective branch has been reached:
If none of DC, MC/DC or MCC coverage has been selected for the last test execution, the
decision elements remain grey, but they are still selectable in order to find the respective
line of code in the source code view.
The following elements are displayed within the flow chart of the CV:
if decision
switch statement
do while loop
You can select decisions, branches and code statement elements within the flow chart. The
respective code section will then be highlighted within the source code view. Since not all
connection lines within the flow chart are branches in terms of the C1 branch definition,
some of the connection lines may not be selectable.
If an element (e.g. the ? operator and statements containing boolean expressions) also
appears in green or red, the element contains sub flow charts that can be visualized with
a right click on the respective element. CV will insert a new tab for the condition.
The CV provides search functionality for decisions that are not fully covered and branches
that are not reached by the executed test cases. The decisions and branches are
already marked in red, but the search function can assist in finding all uncovered decisions
or unreached branches.
The chart will change into the search result mode, marking the found element
in blue.
The Statement (C0) Coverage view displays the statement coverage for each individual
test case and test step as well as the total coverage for all test cases (and test steps). The
coverage information in this view is calculated for the selected function within the Called
Functions view.
If you selected only the C0 coverage instrumentation for test execution, you will see the
code branches marked in red and green within the flow chart; “else” branches, that do not
exist within the code, will be displayed in the Flow Chart view in grey.
Also the loop branches of while, for and do statements that are irrelevant for C0 coverage
will be displayed in grey.
The flow chart shows code branches rather than individual statements; blocks of
statements are shown as one block instead of as individual items for each statement.
If you select individual test cases or test steps within the test case list, the respective
statements covered by those test steps will be marked within the flow chart, i.e. the
code branch containing these statements will be marked. This allows finding out the
execution path of the selected test step. By selecting multiple test steps, you can review
the resulting cumulated statement coverage within the flow chart. The total coverage
number will also be updated with the C0 statement coverage for the selected test cases /
test steps.
The coverage percentage is the ratio of the number of reached statements to the total
number of statements of the currently selected function. This coverage calculation
includes the currently selected test cases and test steps within the test case / test step
list. By default, all test cases are selected when opening the CV.
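The percentage described above reduces to a simple ratio (sketch only; TESSY computes this internally):

```c
/* Coverage percentage: reached statements relative to total statements. */
double coverage_percent(int reached, int total)
{
    return total > 0 ? 100.0 * reached / total : 0.0;
}
```

The same ratio applies to branch (C1) coverage, with branches in place of statements.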
The Branch (C1) Coverage view displays the branch coverage for each individual test case
and test step as well as the total coverage for all test cases (and test steps). The coverage
information in this view is calculated for the selected function within the Called Functions
view.
If you selected only the C1 coverage instrumentation for test execution, you will see only
the C1 branches marked in red and green within the flow chart.
If you select individual test cases or test steps within the test case list, the respective
branches covered by those test steps will be marked within the flow chart. This allows
finding out the execution path of the selected test step. By selecting multiple test steps,
you can review the resulting cumulated branch coverage within the flow chart. The total
coverage number will also be updated with the C1 branch coverage for the selected test
cases / test steps.
The coverage percentage is the ratio of the number of reached branches to the total
number of branches of the currently selected function. This coverage calculation includes
the currently selected test cases and test steps within the test case / test step list. By
default, all test cases are selected when opening the CV.
Refer to the description of the MC/DC Coverage view. The only difference is the calculation
according to the definition of the decision coverage.
The MC/DC Coverage view displays the coverage of the currently selected decision within
the flow chart view (see figure 6.190). If no decision is selected (as initially when starting
the CV), the MC/DC Coverage view is empty.
When selecting a decision, the respective combination table according to the MC/DC
coverage definition will be displayed within the MC/DC-Coverage view (see figure 6.191).
The combination table contains all atomic conditions of the decision. The conditions are
the basic atoms of the decision which remain after removing the or, and and not operators
from the decision. TESSY calculates the MC/DC set of true/false combinations of the
condition atoms that best fits the test steps executed during the test run.
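As an illustration (this decision is invented, not taken from the manual), a decision with three atomic conditions a, b and c could look like this; MCC would enumerate all eight true/false combinations, while MC/DC needs only a subset in which each atom is shown to independently affect the outcome:

```c
/* Invented example decision with the atomic conditions a, b and c. */
int decision(int a, int b, int c)
{
    return a && (b || !c);
}
```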
The last table column contains the test step that caused the execution of the decision with
the true/false combination of the respective table row. If one or more of the condition
combinations were not reached during the test run, the test step column of those rows will
be marked in red.
Þ Select a decision by clicking on the respective control flow element within the
flow chart view.
The code fragment will be marked within the source code view.
The decisions are either green or red depending on the degree of coverage. If no coverage
information is available, i.e. when you ran the test without any of DC, MC/DC or MCC
instrumentation selected, the decisions within the flow chart will appear in grey and the
Coverage view will not be available (N/A).
Refer to the description of the MC/DC Coverage view. The only difference is the calculation
according to the definition of the MCC coverage.
There are up to five coverage reports available depending on the instrumentation mode
selected for test execution. They contain the summarized coverage information of the last
test execution:
• The statement (C0) coverage report contains some meta information (e.g.
number of statements, reached statements, total statement coverage) and
the source code of the test object.
• The branch (C1) coverage report contains some meta information (e.g. num-
ber of branches, reached branches, total branch coverage) and the source
code of the test object.
• The decision coverage report lists all decisions of the test object code including
the coverage tables with the respective decision condition combinations.
• The MC/DC coverage report lists all decisions of the test object code including
the coverage tables with the respective MC/DC condition combinations.
• The MCC coverage report also lists all decisions of the test object code including
the coverage tables with the respective MCC condition combinations.
For coherent testing it is essential to recognize changes of the interface of test objects and to
re-execute previously passed tests to assure that any changes of the source do not cause
the previously passed tests to fail. This is often summed up with the keyword “regression
testing”.
If the interface of a test object changes, TESSY will indicate the changes with specific
status indicators at the test object icon. With the Interface Data Assigner (IDA) you can
assign the elements of a changed (new) interface to the respective elements of the old one
and start a reuse. The reuse operation will copy all existing test data to the newly assigned
interface elements.
To appropriately react to changes, TESSY needs to know the current structure of the
interface. Therefore it determines for each module the available functions and their
interfaces by analyzing the source code. This information is stored in the interface
database so that TESSY knows about any changes and can keep track of test data
assignments made for a whole module or just for individual test objects.
Test Project view (upper left): Displays your test project. For editing your test project switch to the Overview perspective.
Properties view (lower left): Displays the properties of your test project, e.g. the sources of the test object.
Compare view (right): Displays two interfaces, either of the same test object (old and new interface) or of different test objects. You can assign the changes by drag & drop.
The following test object status indicators are relevant when reusing test data.
The test object has changed. You see these test objects, but there is no operation
possible. You have to start a reuse operation.
The test object is newly available since the last interface analysis. You have to
add test cases and test steps and enter data for a test.
The test object has been removed or renamed. You still see these test objects,
but there is no operation possible. You have to assign such a test object to another one
and start the reuse operation.
The Test Project view displays your test project which you organized within the Overview
perspective (see section 6.2.2 Test Project view).
The Properties view displays all the properties which you organized within the Overview
perspective (see section 6.2.3 Properties view). Most operations are possible.
For changing a source switch to the Properties view within the Overview perspective.
The Compare view shows two versions of an interface depending on the TESSY objects
selected for comparison:
• For a single module or test object, it shows the old interface on the left side
and the new interface on the right side.
• When assigning two different modules or test objects, it shows the interface
of the source object on the left side and the interface of the target object on
the right side.
The Compare view will be used for reuse operations of whole modules or individual test
objects as well as when assigning test data from one test object (or module) to another
test object of the same or different module.
Within the Compare view you can see the old interface of your test object and the new
one. The red exclamation mark within the new interface indicates the need to assign this
interface object before starting the reuse.
The title of the view shows the old name versus the newly assigned name.
To assign changes:
Þ Use the context menu or just drag and drop from the left side (see figure
6.194).
You can assign single functions and just commit the assignments for this function
(the other functions will stay in state “changed” and can be reused later). Or you
can assign and reuse whole modules (which is convenient when there are just minor
changes within the function interfaces).
To commit assignments:
The data of all test cases and test steps will be copied from the old interface to the current
test object interface. The test object changes to yellow to indicate that all test cases
are ready to be executed again. If there is missing test data within the new interface
(e.g. due to additional variables being used by the test object), the icon will show an
intermediate test object state. In this case you need to add any missing test data within
the test data editor.
• Removed and changed test objects require a reuse operation before you can further
operate on them.
• Unchanged test objects have been automatically reused when opening a module, i.e.
they will be ready to use without further activities required.
• Removed test objects will only be displayed as “removed”, if they did contain any
test cases and test steps.
You can use the IDA to assign test cases from one test object to another test object
within the current project. Both test objects can be either from the same or from different
modules. It is also possible to assign the contents of whole modules to other modules.
Important: When assigning test cases to another test object, the target test
object contents will be overwritten completely!
Þ First drag the target test object into the right side of the Compare view (or
placeholder view).
Þ Then drag the source test object into the left side of the Compare view.
Þ Assign the interfaces as needed. Variables that cannot be assigned can
be left out of scope (they will just not be used). Additional variables of the
target test object that cannot be assigned from the source test object will be
left empty after the assignment.
To commit assignments:
The data of all test cases and test steps will be copied from the source test object to
the target test object. The target test object changes to yellow if every variable of
the interface could be assigned from the source test object. Otherwise it will display an
intermediate test object state indicating that only parts of the test data are available.
The component test feature is only used for integration testing. You do not need
this feature for unit testing.
The component test feature within TESSY supports testing of several functions (representing
the software component) that interact with each other as well as with underlying
called functions (of other components). The main difference to unit testing of individual
functions is that testing focuses on the external interface of the component instead of
on internal variables or control flow. You should be familiar with the overall usage of TESSY
for normal unit testing. Some features like usercode and stub functions are still available
for component testing, but the respective editors are found at different locations.
The component test feature allows creating calling scenarios of functions provided by a
software component. Within these scenarios, the internal values of component variables and
any calls to underlying software functions can be checked. TESSY provides the Scenario
Editor (SCE) for this purpose. All scenario-related inputs are available through the SCE.
Instead of having individual test objects and test cases for the component functions, the
component test itself provides a special node called “scenarios” seen as one test object. The
test cases belonging to the scenarios node are the different scenarios for the component.
Within one scenario, you can set global input variables, call component functions, check
the calling sequence of underlying software functions and check global output variables.
The content of each scenario may be divided into the following parts:
The Usercode Editor (UCE) is not available for component testing, because the prolog/epilog
code and the definitions/declarations sections can be edited directly within the SCE. You
will find C code fragments that can be added into the scenario control flow. The code
for stub functions can also be edited directly within the SCE.
The component test management is based on TESSY modules, just like unit testing. In contrast
to unit testing, you will probably use multiple source files instead of only one. Other
parts of the testing process stay basically the same:
Þ Include all the source files, include paths and defines necessary to analyze the
source code of the component.
By default, the GNU GCC compiler is used as the environment. This means the component
tests will be executed on the Windows PC, using the microprocessor of the PC as
execution environment. If you use a cross compiler for an embedded microcontroller,
you can run the tests either on the actual microcontroller hardware or on a simulation
of the microcontroller in question.
In contrast to normal unit tests, you will only see one special test object called
“Scenarios” (see figure 6.197).
The interface of the component is a summarized interface of all the non-static component
functions:
The External Functions section marked with the icon lists the interface to the underlying
software functions, if any external function is called from the component. These
external functions can be replaced by stub functions just as within normal unit testing. The
Component Functions section marked with the icon lists all the component functions,
i.e. the functions visible from outside the component. Local static functions are not
listed here. The meaning of the status indicators for component functions is as follows:
The variables used by this function are not available within the component
test interface of the scenario. These variables are set to IRRELEVANT.
The variables used by this function will be available within the scenario and
the passing direction may be adjusted.
The time-based scenario description within the SCE is based on time steps that represent
the cyclic calls to a special handler function of the component. Such a handler function
controls the behavior of a time-sliced component implementation.
The handler function needs to be selected as work task prior to executing any scenarios:
The icon of the function will change from to (see figure 6.199).
Figure 6.199: Two component functions were set as work task within the Component
Functions view
You can select several component functions as work tasks. This is useful
when testing several components together that each have a handler function.
The Work Task Configuration view allows more detailed settings for the work tasks.
You can drop component functions directly into this view to configure them as work tasks.
The view provides the following global settings:
• Mode Variable Name (not used by default): optionally makes the calling
of the work tasks depend on the value of the selected variable. All scalar
variables can be selected here.
For each work task, you can specify the following settings:
• Start Time: Determines the point in time at which this work task shall be called
for the first time in each scenario. The default is 0 ms, which causes the work
task to be called immediately, starting with the first time step.
• Cycle Time: Determines the elapsed amount of time after which the work
task shall be called again. The default is 10 ms, which causes the work task
to be called in every 10 ms time step.
• Mode: If a global Mode Variable Name is selected, you can specify for which
value of this variable the respective work task shall be called. During test
execution, this work task will only be called at its specified start and cycle
times if the mode variable has the specified value.
The order of appearance within the Work Task Configuration view reflects the actual calling
sequence of the work tasks for each time step of the scenario. You can reorder the work
tasks via drag and drop.
Another global setting is the calculated cycle time, which is derived automatically
from the cycle times of the given work tasks.
Within the example in figure 6.201, the resulting global cycle time (i.e. the step width of
the time steps of the scenarios) will be 10 ms, because this is the greatest common divisor
of all the given work task cycle times (i.e. 20 and 50 ms in this example).
Testing a component requires a set of scenarios that stimulate the component and check
the behavior of the component. Such a scenario contains calls to component functions
and other possible actions and expected reactions of the component. A scenario can be
seen as a test case for the component. Therefore, TESSY displays the list of scenarios
within the Test Item view like normal test cases, but with a different icon. There are two
possibilities for creating scenarios: either by creating them ad hoc or by developing them
systematically using the classification tree method supported by the CTE within TESSY.
After synchronizing the CTE test cases there will be the respective number of scenarios
within TESSY. You can add additional scenarios using the context menu within the scenario
list. To edit the scenario, start the scenario editor SCE. The (empty) scenarios will be
displayed within SCE providing the specification and description of the designed scenario
test cases.
• The stimulation of the component, just as any external application would do.
This includes normal behavior as well as abnormal behavior, which should
check the error handling of the component.
We will examine the different possibilities to check expected behavior of the component
under test. There are at least the following methods available:
The following sections describe the required settings for the above-mentioned check
methods.
Þ Drag and drop the functions from the component functions onto the desired
time step (see figure 6.202).
There are some settings required for the function calls depending on the kind of function:
• component function: The return value has to be checked directly (for scalar
types) or assigned to a variable for later evaluation.
• external called function: The expected time frame of the call to these functions
needs to be specified. This defines the time range, starting from the
current time step, within which a call to this function is rated as successful
with respect to the calling sequence.
You can set input values or check output values of any variable at every time step of the
scenario. According to your settings within TIE you have access to all variables available
within the component interface. The test data can be entered within the Test Data view
of the scenario perspective. When you select a time step, the Test Data view provides a
column named after the time step for entering new test data values or editing existing
ones (see figure 6.203).
The Test Data view provides most of the editing features known from normal unit testing.
After entering any values, the icon of the respective time step will change indicating the
test data status. The Test Data view shows columns for all time steps that contain test
data plus one column for the currently selected time step.
Time step indicator icons for test data (see also figure 6.204):
• Grey indicator: Some input values are assigned but some are still missing and
need to be provided. Select “*none*” for input values of time steps that you
do not want to assign.
• Yellow indicator: At least all input values are assigned for this time step. The
output values do not need to be assigned to execute a scenario.
Important: All time steps with test data need to have a yellow indicator before
the scenario can be executed!
The icon of the scenario will change to yellow if there are no more time steps with a grey
indicator.
When dragging component functions into the scenario, you need to provide the parameter
values. For scalar values, you can simply add decimal or floating point numbers depending
on the type of variable. You can also provide a symbolic name of a variable with the
corresponding type. This name will be added into the test application without any checking.
If the symbolic name does not exist, there will be error messages when compiling the test
application. Either provide a value (for scalar return value types) or specify the symbolic
name of a variable to which the return value shall be assigned (in this case, the provided
variable should be of the same type as the return value).
The calling sequence of calls to underlying external functions may be checked on an abstract
level within the scenario. It is not the absolute calling sequence that is evaluated, but the
existence of function calls within a given period of time in the scenario. This provides
a robust mechanism for call trace checking that ignores the internal implementation of the
component. How does it work? You specify the following information within the scenario
for each expected function call:
• The time step where you expect the call at the earliest.
• Optionally a period of time (the time frame) from that starting point where
the invocation of the function call is still successful with respect to the ex-
pected behavior of the component.
Both these settings are available for each expected call to an external function. The time
frame is zero by default indicating that the expected function call shall take place within
the same time step.
If you specify the time frame as 60, as in the example above, the expected call may take
place within time step 20 ms, 30 ms, or any step up to 80 ms to be
successful.
The exact sequence of the calls to those functions will not be examined, any of them
may be called within the given time frame interval. The report shows the result of the
evaluation of the call trace for the example above. The actual call trace entry contains the
time step where this call occurred, the expected call trace entry shows the expected time
frame period.
The following table shows the possible evaluation results for the call trace of the example
calls to the functions crossed_50() and crossed_75():

time of call    crossed_50()    crossed_75()
40 ms           ok              ok
50 ms           failed          ok
60 ms           failed          ok
70 ms           failed          ok
80 ms           failed          ok
If you need to check the exact calling sequence, you should set the time frame to zero.
Other functions called in between the expected function calls are ignored. On the other
hand, the time frame provides you with a powerful way to describe expected behavior of
the component without knowing details about the exact implementation.
You may check that a function is not called within a given time interval. The example
below checks that the function crossed_75() is not called within 100 ms after the stimulation
of the component, by setting the expected call count to zero.
The crossed icon shows the special value of the expected call count, indicating a check
that the function shall not be called.
Because called external functions need to be replaced by stub functions, you can check
the parameter values like during unit testing, depending on the type of stub function you
choose.
For more information refer to section 6.6.4.8 Defining stubs for functions.
After implementing and editing the scenarios within SCE, execute the scenarios:
Þ Select the desired scenario test cases and execute the test using the Execute
Test button within the toolbar.
With TESSY you can easily back up modules into a directory and check them into a version
control system. Modules can also be restored from that directory, which facilitates checking
out modules from the version control system onto another computer and restoring the test
database.
You can back up individual modules, folders or whole test collections. The backups will be
stored as TMB files. Restoring the files is possible either into the original folder or
into another location.
6.12.1 Backup
Þ In the menu bar select “File” > “Database Backup” > “Save. . . ”.
The Save Database dialog will be opened with your module already selected
(see figure 6.206).
Þ Decide which modules you want to save by either selecting them separately or
pressing the button “Select All”.
Þ Decide, if you want to save the coverage settings, test report options or dialog
settings from the Windows > Preferences menu.
Þ If you have linked your test cases with any requirement documents, you can
choose to save the referenced requirement documents as well. In this case the
requirements will be saved within the TMB file.
Þ Click “OK”.
Backup
The “Backup Folder” displays the backup directory of the current project. We
recommend using this directory for any backup and restore operations.
For each module there will be a file named after the path to the module, starting from
the test collection. The file name is escaped using URL encoding, which replaces for
instance the space character with “%20”. The preferences are stored within separate files
in the “preferences” subdirectory.
6.12.2 Restore
The box “Modules” now shows the hierarchy of modules that can be restored
from the given TMB files within the backup directory (see figure 6.208).
Make sure you ticked the boxes with the requirements to import
them!
Þ Click “OK”.
You can also restore TMB backup files into a location other than the original one: if you
select a folder for which there are no corresponding TMB backup files, you can restore any
of the available TMB files as children of this folder. The original test collections and folders
of the TMB files will be restored as subfolders of the current folder instead.
We recommend saving backups of all test-relevant files into a version control system on
a regular basis. At the latest when test development is completed, the whole test setup
should be saved as a backup.
Þ Save the following files and contents of directories into your version control
system:
PDBX file
Contents of the config directory
Contents of the backup directory
The directory “work” contains only temporary files created during development and
execution of the tests. You can delete this entire directory to save disk space
after the testing work is completed.
The directory “persist” contains the current databases of the test project. This directory
and its subdirectories will be restored when restoring TMB backup files. The valuable
contents of this directory are saved into the TMB files created during the backup
process.
When you restore the whole project onto another computer, the directory persist will be
restored from the TMB backup files.
TESSY provides a command line interface which allows writing batch script files that
control various operations within a TESSY project. The command line operations are
available by invoking an executable program called “tessycmd.exe”.
The program can be called either from a DOS command line shell or from a DOS batch
script or other script formats that support calling DOS executables.
Before invoking any tessycmd.exe commands you need to start TESSY. The tessycmd.exe
will connect to a running instance of TESSY in order to execute any commands. You
can run TESSY either in GUI mode with a graphical user interface (when started normally
using TESSY.exe) or in headless mode without a GUI (when started using tessyd.exe).
For information about the usage of TESSY together with continuous integration
servers like Jenkins refer to the application note “Continuous Integration with
Jenkins” (“Help” > “Documentation”).
At the end of your script you should shut down TESSY using the same tessyd.exe application
with the parameter “shutdown”. The calling sequence for running batch tests is as
follows:
tessycmd <commands>
tessyd.exe shutdown
When running TESSY in headless mode, the console output will be written into a
file “console.log” within the directory:
%USERPROFILE%\.tessy_40_workspace\.metadata
The executable that provides all command line operations is available within the TESSY in-
stallation directory: Program Files\Razorcat\TESSY_[version]\bin\tessycmd.exe
The available commands provide means to create, select and list TESSY objects, i.e. a
project, test collection, folder, module, test object. After invoking any create commands,
the respective new TESSY object will be selected. You can invoke further commands to
manipulate any previously created or selected TESSY objects.
• Connect to TESSY.
Important: If you are not connected, invoking any commands will fail. Multiple
connects will also cause an error.
The current state (connection and selection of TESSY objects) of the tessycmd.exe execu-
table is managed by the currently running TESSY application. If you restart TESSY, the
state of tessycmd.exe will be reset to the initial state, i.e. disconnected.
6.13.4 Commands
command                              operation
tessyd -f <name of pdbx file>        imports and opens the project referred to by the
                                     given .pdbx file
Table 6.266: Excerpt of the possible commands of the command line interface
To execute tessycmd.exe within any directory, add the “bin” directory of the
TESSY installation to the Windows PATH environment variable.
You will find the following example DOS script within the TESSY installation directory:
Program Files\Razorcat\TESSY_4.0\Examples\CommandLine\cmd_line_example.bat
The script is prepared to import TESSY backup files (TMB files) into the currently open TESSY
project. It will create a new test collection “Examples” and import the existing TMB files into a
newly created folder. After the import, it executes the imported modules:
For compiler/target settings refer to our application notes available in the Help
menu of TESSY (“Help” > “Documentation”)!
• Check this manual and make sure that you have operated correctly.
• Check section 7.2 Solutions for known problems.
• Check our application notes that are available in the Help menu of TESSY (“Help”
> “Documentation”).
• Check our website for commonly asked questions and current issues
http://www.razorcat.com.
If you have further questions or if there is a problem you could not solve with the documentation
described above, please contact our technical support via e-mail: support@razorcat.com
Include in your support request e-mail:
The TESSY Support File (TGZ file) contains information about the test object including
data, compiler, project settings etc. It helps the support team to detect the
cause of your problem.
Þ In TESSY, select the module or test object that is causing the problem.
Þ Click “Help” in the menu bar.
Þ “Support” > “Create Support File”
Þ Tick the box “Preprocessed sources” if possible.
Þ Click “OK”.
Þ Choose a folder and a name for the file and click “Save”.
The TESSY Support File (TGZ file) is created.
Þ Before reproducing the problem, switch to the Console view of the perspective
“Overview”.
Þ In the tool bar click on the icon (Clear Console).
All messages will be deleted.
The additional information can relate to different process steps within TESSY. Enable the logging
of the information you suspect the problem to stem from:
Process Creation: parts of TESSY do not start correctly or TESSY is not able
to start the test system (e.g. debugger).
Makefile Commands: the test application (slave) or the test driver (master) cannot
be created or is created incorrectly.
High level: you want to log the general TESSY activities. Seldom required
to find a problem.
Low level: you want to log debugger-specific activities. Often very useful.
Þ You can save the logging settings by ticking the box “Remember current
settings . . . ”.
Þ Do the actions again that lead to the problem (e.g. opening the module).
Þ Keep the respective element selected that caused the problem (e.g. the test
object in case of errors while executing) when creating the support file.
Error description: TESSY does not start or displays exceptions within all GUI windows
(views).
Possible cause: There might be a problem due to corrupted files needed for the
Eclipse TESSY product startup.
Solution:
Delete the following directories in given order. After every deletion try to start TESSY
again. If it fails, delete the next directory.
Important: This will reset your window layout of the GUI to the
default settings!
SQLException
SQL State: 08001
Error Code: 40000
java.net.ConnectException: Error establishing the
connection to server localhost on port 1527 with message
Connection refused: connect.
Error description: When quitting TESSY, one of the error messages above is
displayed.
Possible cause: Two versions of TESSY were started at the same time.
Solution:
If you want to use both versions of TESSY at the same time, you can change the config
file:
Important: The functions of the command line tool are limited if you use both
TESSY versions at the same time! The tool works with the TESSY version that
was started last.
%APPDATA%\Razorcat\Tessy\[TESSY Version]\config\tessy.conf.
Þ Remove the hash mark in front of the server port setting of the database:
TESSY_DERBYD_PORT=40000
Error description: The license server does not start, or you get an error when
starting it.
Specific occurrence -
or requirement:
Solution:
Þ Start the License Manager manually if it has not been started yet: “Start” > “All
Programs” > “TESSY” > “Floating License Manager”.
Figure 7.3: License key check unsuccessful: license key is incorrect for the host id
In many cases you can already determine the problem with the help of the error message.
In case of the error “No matching hostid for key”, the license key does not match the
host id of your computer:
Þ Configure the correct license key file in the manager: Click on (Configure)
and select the correct license key file. Click “OK”.
Þ If the error still appears, contact our support (see Contacting the TESSY
support) to get a new license key file.
Setting a variable declared with the “const” modifier may result in undefined behavior
and lead to error messages. In those cases, set the passing direction of the variable to
“IRRELEVANT” in the TIE. After that, the test should pass without any restriction.
Important: Please note that constant variables are normally read-only
and cannot be assigned.
1. Undefine the const modifier in the properties view (for individual modules)
The modifier "const" needs to be removed in order to write to such variables. You can
remove this modifier without changing the source file using a special define that replaces
the "const" keyword with nothing.
Þ In the Test Project view click on the module you want to test.
Þ To add a define that replaces the "const" keyword with an empty content click
on (see figure 7.5) in the Properties view. The New Define popup window
opens.
Þ Enter a define with the name "const" and an empty value as shown below (see
figure 7.6).
Assignments to read-only variables are now possible in the chosen module. When this
define is in place, all variables with the "const" modifier will appear as if the "const" has
not been used (i.e. the variables are not "const" any more and can be changed during the
test execution).
Þ In the TEE right click in “Makefile Template” > click “Open with Editor”.
Þ In the respective file, e.g. “ts_make_socket.tpl” when using the GCC
compiler, add “-Dconst=” to the line “S_COMP_OPTIONS” (see figure 7.7).
After saving the Makefile Template (e.g. “ts_make_socket.tpl”), the “const” keyword is
generally replaced with empty content.
Batch Testing A testing procedure in which multiple test objects are executed automatically
one after another without further user interaction.
Branch Coverage Usually abbreviated “C1”. Roughly speaking, branch coverage reveals
whether all branches were executed. For example, an if-instruction has two branches, the
then-branch and the else-branch.
C1 Test During a C1 test, each branch of the test object is instrumented with a
counter to monitor how often each branch of the program is executed.
Code Coverage A test object is considered to consist of items like branches, conditions,
etc. Code coverage measures how many of the items were exercised during the
tests. This number is related to the total number of items and is usually expressed in
percent. TESSY features C1 coverage (branch coverage) and C2 coverage (MC/DC:
Modified Condition / Decision Coverage).
Component Testing is the test of interacting test objects, i.e. interacting functions in
the sense of C. These functions can be a (single) calling hierarchy of functions, but
we will consider this mainly as unit testing. We consider as a component mainly a
set of functions that interact e.g. on common data and do not necessarily call each
other. Component testing then is testing of such a set of functions. The units do
not need to be in a calling hierarchy; they may only interact on data, like push() and
pop() of the abstract data type “stack”. A component according to this specification
may also be called a “module”, and its testing “module testing” respectively.
Debugger A computer program that is used to test and debug other programs (the “tar-
get” program). The code to be examined might alternatively be running on an
instruction set simulator (ISS), a technique that allows great power in its ability to
halt when specific conditions are encountered but which will typically be somewhat
slower than executing the code directly on the appropriate (or the same) processor.
Some debuggers offer two modes of operation - full or partial simulation, to limit
this impact.
Enums A type of the C language specification that allows defining a list of aliases
representing integer numbers.
Expected Values Values expected to be calculated by the test object. The result values
are checked against the expected values after the test run.
Hysteresis Dependence of a system not just on its current environment but also on its
past. This dependence arises because the system can be in more than one internal
state.
Interface Data Assign editor (IDA) If the interface elements of the test object have
changed, you can assign the new interface elements to the old. Your test data will
be assigned automatically.
Input Values Function parameters, global and external variables which have effect on the
behavior of the function.
Interface Description Information about the passing direction and type of interface ele-
ments (parameter, global variables and extern variables). The interface description
is determined automatically by TESSY and is made visible / changeable in the TIE.
Integration Testing consists of a sequence of calls and can be considered either as unit
testing for a calling hierarchy of functions or as component testing for a set of interacting
functions not necessarily calling each other. Component testing is integration
testing of the functions in the component.
Modified Condition / Decision Coverage (MC/DC) MC/DC coverage takes the structure
of a decision into account. Each decision is made up of conditions, which are
combined by logical operators (and, or, not). Roughly speaking, to get 100 percent
MC/DC for a decision, each condition in the decision requires a pair of test cases
that differ only in the value of this condition and lead to different outcomes of the decision.
Module A TESSY module primarily comprises the test object (in C, a function in the
sense of C), the source files, compiler settings, interface description and test data.
You can pool modules in projects.
Output Values The same as expected values in the TESSY context. Both terms are
used equivalently within this manual. The output (respectively expected) values
are evaluated against the actual result values after the test run.
Requirement Documented need of what a test should perform and important input for the
verification process. Requirements show what elements and functions are necessary
for the test.
Requirement, Functional Describes the features, specific behavior, business rules and
general functionality that the proposed system must support.
Requirement, Non-Functional Specifies criteria that can be used to judge the operation
of the test.
Stub Function Piece of code used to stand in for some other programming functionality.
A stub may simulate the behavior of existing code (such as a procedure on a remote
machine) or be a temporary substitute for yet-to-be-developed code.
System Testing Test of the application (software or software and hardware) as a whole.
Test Data Editor (TDE) With the TDE you can enter the input values and expected
values for the test run.
TESSY Support File Contains information about test objects including data, compiler,
project settings etc. It helps the support to detect the cause of a problem. In section
Contacting the TESSY support it is explained how to create a TESSY Support File.
Test Case Element that encapsulates the abstract test definition, e.g. the specification
and description of a test, and the concrete test data managed within test steps.
Test Definition Describes a test to be performed on the test system in textual format.
A test definition abstractly describes the inputs and the expected outcome of a test
and refers to a list of requirements which shall be validated with this test.
Test Driver C source files generated by TESSY for the test execution. These files are
compiled and linked in order to build an application that prepares the input data,
calls the test object and stores the actual result data.
Test Environment Information about the test object, the compiler used, the target de-
bugger or emulator and more settings.
Test Run One execution of a test object with the given test cases. The result of a test
run is stored within an XML result file that may be further processed by external
tools.
Test Suite A collection of test objects with test scenarios and/or test cases that were
created to fulfill a certain test objective.
Test Interface Editor (TIE) With the TIE you can view all interface elements and review
or set the passing direction and/or other information of the interface elements.
Unit A single function, i.e. a test object of a C program; the smallest
reasonable test object of a C program.
Usercode In the usercode you can enter C code that is executed before or after test
cases/test steps during the execution of a test object.
Workspace The location on the local disk where the TESSY application reads and writes
data; the place for configuration and temporary report data. Project data can be
saved separately.
5.13 Three test cases were added in the Test Items view . . . . . . . . . . . . . . 78
5.14 Data is entered, test step turns yellow and test case is ready to run. . . . . . 80
5.15 Entering data for test object is_value_in_range . . . . . . . . . . . . . . . . 81
5.16 The test cases are ready to test . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.17 TDE after test run is_value_in_range . . . . . . . . . . . . . . . . . . . . . 82
5.18 Test results of is_value_in_range . . . . . . . . . . . . . . . . . . . . . . . . 83
5.19 Selecting Branch and MC/DC Coverage for test run . . . . . . . . . . . . . . 84
5.20 Execute Test dialog while running the test . . . . . . . . . . . . . . . . . . . 84
5.21 The Coverage Viewer displays the coverage of is_value_in_range . . . . . . . 85
5.22 Branch coverage is_value_in_range . . . . . . . . . . . . . . . . . . . . . . 86
5.23 Decision coverage is_value_in_range . . . . . . . . . . . . . . . . . . . . . . 87
5.24 Code section of the if branch of the first decision . . . . . . . . . . . . . . . 88
5.25 Code section of the second decision . . . . . . . . . . . . . . . . . . . . . . 89
5.26 Creating the folder for reports . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.27 Content of the test report is_value_in_range . . . . . . . . . . . . . . . . . 91
5.28 Importing a requirement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.29 Import dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.30 The new requirement document . . . . . . . . . . . . . . . . . . . . . . . . 93
5.31 Changing the alias of the new requirement document . . . . . . . . . . . . . 94
5.32 Comment for the initial revision of the commit . . . . . . . . . . . . . . . . . 94
5.33 Linking test cases with requirements . . . . . . . . . . . . . . . . . . . . . . 95
5.34 Test Definition view within TDE with linked requirement . . . . . . . . . . . 96
5.35 Editing the settings of a Planning Coverage Report . . . . . . . . . . . . . . 97
5.36 Dialog of the settings for the Planning Coverage Report . . . . . . . . . . . . 97
5.37 Planning coverage report of the IVIR requirement document . . . . . . . . . 98
5.38 Generating a Test Details Report . . . . . . . . . . . . . . . . . . . . . . . . 99
5.39 Part of the generated test report of is_value_in_range . . . . . . . . . . . . 100
5.40 Creating an Execution Coverage Report . . . . . . . . . . . . . . . . . . . . 100
5.41 Coverage Report of is_value_in_range . . . . . . . . . . . . . . . . . . . . . 101
5.42 Overview perspective after test run (with requirements) . . . . . . . . . . . . 103
5.43 Use the context menu to edit a source . . . . . . . . . . . . . . . . . . . . . 103
5.44 Editing the C-source file is_val_in_range.c . . . . . . . . . . . . . . . . . . 104
5.45 Changed C-source file of is_value_in_range . . . . . . . . . . . . . . . . . . 104
5.46 Adding a “delete” and “new” object . . . . . . . . . . . . . . . . . . . . . . 105
5.47 Changed and new test objects of is_value_in_range . . . . . . . . . . . . . . 105
5.48 Remove the code for test object “deleted”. . . . . . . . . . . . . . . . . . . . 106
5.49 Changed and new test objects of is_value_in_range . . . . . . . . . . . . . . 106
5.50 Changed, deleted and new test object of is_value_in_range . . . . . . . . . 108
0.1 Where to find - matters of the several parts of the TESSY manual . . . . . . xxi
0.2 Font characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii