TESSY UserManual 43
Windows is a registered trademark of Microsoft. TESSY and CTE are registered trademarks of Razorcat Development GmbH.
All other registered or unregistered trademarks referenced herein are the property of their respective owners, and no trademark rights to them are claimed.
Liability exclusion
Razorcat Development GmbH assumes no liability for damage caused by improper installation, improper use of the software, or non-observance of the handling instructions described in this manual.
Thanks
Various contents are based on application notes and publications on TESSY written by Frank Büchner, Hitex Development Tools GmbH. We would like to thank Frank for his valuable contribution and commitment in supporting TESSY and spotlighting its functionalities and features.
7 Troubleshooting
7.1 Contacting the TESSY support
7.2 Enhanced error handling
7.2.1 Problems Log dialog
7.2.2 Problems view
7.2.3 Opening external problem logs using the Help menu
7.3 Solutions for common problems
7.3.1 TESSY does not start or gives errors when starting
7.3.2 TESSY gives errors when quitted
7.3.3 License server does not start or gives errors
7.3.4 Working with constant variables
7.3.5 Dealing with too long project paths
Appendix
A Abbreviations
B Glossary
Index
About TESSY
The test system TESSY was developed by the Research and Technology Group of Daimler.
The former developers of the method and tool at Daimler were:
Klaus Grimm
Matthias Grochtmann
Roman Pitschinetz
Joachim Wegener
TESSY has been well proven in practice at Daimler and has been applied successfully ever since. TESSY has been commercially available since spring 2000 and is further developed by Razorcat Development GmbH.
TESSY offers an integrated graphical user interface that guides you comfortably through the unit test. There are special tools for every testing activity as well as for all organizational and management tasks.
Dynamic testing is indispensable when testing a software system. Today, up to 80% of the development time and costs go into unit and integration testing. Automating the testing process is therefore essential to minimize the time and costs of developing high-quality products. The test system TESSY automates the whole test cycle; unit testing of C/C++ programs is supported in all test phases. The system also takes care of the complete test organization as well as test management, including requirements coverage measurement and traceability.
The TESSY User Manual provides detailed information about the Installation and registration
of TESSY, Theory: Basic knowledge about testing, Tutorial: General handling and Reference
book: Working with TESSY. (Please study the list under Subject matter for more details about
the different chapters in this manual.)
There is also a chapter Tutorial: Practical exercises containing five basic examples of possible ways to work with TESSY. We strongly recommend working through these practical exercises, as they are also a perfect quickstart to TESSY!
Subscribe to our e-mail list by sending an e-mail to support@razorcat.com if you want to be informed about new versions of the TESSY manual.
Refer as well to our detailed application notes regarding compiler/target settings and other specific topics, which are available in the Help menu of TESSY (“Help” > “Documentation”).
Find some videos about TESSY features as well as support videos on our website
https://www.razorcat.com/en/tessy-videos.html.
Subject matter
The structure of the manual guides you through working with TESSY, from getting started to the specific activities possible. In order:
Section: Matter

Preface: Describes New features in TESSY 4.0, New features in TESSY 4.1, New features in TESSY 4.2 and New features in TESSY 4.3; also contains the Safety Manual.

1 Installation and registration: Lists all technical requirements to work with TESSY and describes how to install the software.

2 Migrating from TESSY 3.x to 4.x: Lists the changed and new functions and handling within the new version. This will help you when switching from TESSY version 3.x to the new TESSY 4.x.

3 Theory: Basic knowledge: Contains a brief introduction about unit testing with TESSY and the classification tree method (CTM).

4 Tutorial: General handling: Explains the workflow of Creating databases and working with the file system. Check this section carefully to know how to handle your project data! The TESSY interface and basic handling are explained in the subsequent sections Understanding the graphical user interface and Using the context menu and shortcuts.

5 Tutorial: Practical exercises: In this chapter you will get to know TESSY with the help of exercises that are prepared to guide you easily through most of the TESSY functions.

6 Reference book: Working with TESSY: This chapter explains in detail the unit test activities possible with TESSY. You will notice that the headlines of the sections follow the actions taken during a test. TESSY provides different editors and windows (“perspectives” and “views”) for the different configurations and steps taken during and after a test. You will find the name of the perspective or view as well as the description of the step within the headline, e.g. 6.8 CTE: Designing the test cases. Therefore, if you need help at some point, ask either “How do I handle …?” or “Where am I?” and follow the headlines.

Table 0.1: Where to find - subject matter of the several parts of the TESSY manual
Helpers
• The Index at the very end of this manual helps you find topics by keyword.
• Various information is clearly represented within tables, e.g. icons and indicators (symbols of the interface) and their meanings. For fast access to all tables, consult the List of Tables in the appendix of this manual.
• Figures are used to illustrate described information. You may as well check the List of Figures in the appendix to find those figures.
• Cross references as well as the table of contents are active links (blue colored), which makes it easy to switch to the referenced chapter or section.
The sidearrow in the page margin shows where to find information and references.
To help you to work with this manual, different font characters and signs are used to mark
specific information:
General information:
Warning: There might be some damages to your data if you do not operate cor-
rectly! Please follow instructions carefully.
A light bulb provides hints, references and additional information on handling TESSY for better usability.
Safety Manual
TESSY can be used for testing of safety-relevant software. Therefore, the core workflow of TESSY as well as the release and test process of the TESSY product has been certified according to ISO 26262-08:2018 and IEC 61508-3:2010. (The second edition of ISO 26262 was included in the certification during the re-certification of TESSY 4.3 by TÜV SÜD Rail GmbH.) In the course of the re-certification of TESSY 4.1 by TÜV SÜD Rail GmbH, the certification was extended to also cover EN 50128 and IEC 62304. Our quality management system ensures proper handling of all development processes for the TESSY product and constantly improves all procedures concerning quality and safety.
The figure above shows the core workflow of TESSY that is fully automated and subject to
tool qualification. All other tool capabilities like editing or environment and interface settings
are additional features out of scope of the tool qualification. The core workflow of TESSY
has been certified according to ISO 26262:2011 and IEC 61508:2010 as well as EN 50128
and IEC 62304. Starting from editing of test data, the core workflow covers test execution,
evaluation of test results and report generation. Additionally, the coverage measurements
have been verified according to our certified safety plan. Please note that the Classification Tree Editor (CTE), which covers test preparation activities, is not part of the certified core workflow of TESSY.
Safety-relevant problems arising in released TESSY versions will be reported (once they are detected) and investigated closely in order to fix them as fast as possible. If you work with TESSY in a safety-related environment, please register for our safety customer e-mail list: You will be informed about current and newly arising “known problems” as well as workarounds.
The “Tool Qualification Pack” (TQP) is an additionally purchasable set of documents and tests for TESSY, provided as a baseline for the certification process in order to qualify TESSY as a software verification tool according to DO-178B/C. Please contact us via support@razorcat.com.
Additionally, TESSY has been qualified by the German certification authority TÜV SÜD Rail
GmbH as a testing tool for usage in safety-related software development according to ISO
26262 and IEC 61508. TESSY was also evaluated against IEC 62304 (medical technology)
and EN 50128 (railway technology). EN 50128:2011 is an application standard derived from
IEC 61508. TESSY was classified as a T2 offline tool in accordance with EN 50128:2011. The TÜV certificate and the certification report are available at http://www.razorcat.com.
The TQPack contains tests for ANSI-C compliant source code using the GNU GCC compiler
that is part of the TESSY installation. Using an embedded compiler/debugger for a specific
microcontroller requires adaptation of the TQPack for this specific target environment. This
can be provided as an engineering service by Razorcat.
When executing tests using coverage measurements, it is recommended to execute all tests once with and once without coverage instrumentation. This can easily be achieved using the additional execution type “Run without instrumentation” for the test execution. TESSY uses a copy of the original source file when creating the test application. This copy of the source file will be instrumented for coverage measurements. Usually both test runs yield the same result, indicating that the instrumentation did not change the functional behavior of the test objects.
Please note that the source code will be instrumented even if no coverage measurement has been selected in the following cases:
Some extra code will be added at the end of the copied source file in the following cases:
Please keep this behavior in mind when preparing and executing tests with TESSY.
When running tests on a specific target platform, adaptations of compiler options and target
debugger settings may be needed within the respective target environment. The verification
of the TESSY core workflow covers tests conducted on a Windows host system using the GNU
GCC compiler. In order to verify the transmission of test data and expected results to and
from the target device, there are tests available that may be executed using the adapted target
environment. These tests check the communication layers of the test driver application.
For details on how to run these tests refer to the application note “048 Using Test
Driver Communication Tests.pdf” within the TESSY installation directory.
It is recommended to run these tests with your specific compiler/target environment after
initial project setup or after any changes of the environment settings.
The command line execution mode of TESSY is designed for usage on continuous integration platforms such as Jenkins. When running in CLI mode, TESSY therefore automatically reuses existing tests on interface changes and tries to execute as many tests as possible with newer versions of the source code being tested.
As a result, the tests executed in CLI mode may run with test data that does not match the source code being tested (e.g. with uninitialized new variables), which could hide existing or newly introduced errors within that source code. It is recommended to regularly check that the existing tests still match the interface of the software being tested.
Operating limits
TESSY is designed for usage as a unit testing tool in order to verify the functional correctness of the function under test. The following restrictions and prerequisites for TESSY apply:
• The source code to be tested shall compile without errors and warnings using the compiler of the respective microcontroller target. TESSY may fail to analyze the interface of the module to be tested if there are syntactical errors within the source code.
• TESSY does not check any runtime behavior or timing constraints of the function
under test.
• The test execution on the target system highly depends on the correct configuration of the target device itself, the correct compiler/linker settings within the TESSY environment, and other target device related settings within TESSY (if applicable). Any predefined setup of the TESSY tool for the supported devices requires manual review by the user to ensure proper operation of the unit test execution.
• The usage of compiler-specific keywords and compiler command line settings may require additional tests for tool qualification. Correct operation of the TESSY toolset with respect to the Qualification Test Suite (QTS) test results is only provided for ANSI-compliant C code.
Since TESSY 4.x, the test driver code is generated and appended at the end of (a copy of) each source file. The following restrictions apply:
• All types used within usercode must be available within the source file of the respective test object.
For backward compatibility, you can disable the usage of the new CLANG parser and test driver generation by setting the attribute “Enable CLANG” to “false” within the project configuration using the environment editor TEE.
Note the chapter Migrating from TESSY 3.x to 4.x and check as well the Tutorial: Practical exercises to learn about TESSY's new features!
C++
TESSY now integrates the CLANG parser for analysis of C++ source files and provides a new
test driver generator to support testing of all C++ features including templates and derived
classes. The interface and constructors of each C++ class are available for testing and the
TDE provides convenient editing of test data. Powerful reuse of test data, by assigning the new C++ interface elements to the old ones, allows easy regression testing.
Warning: If you open an existing module created with TESSY 3.2 or earlier, you
will be warned that this module will be converted to the new interface database
format. Please refer to Analyzing modules for more information about the available
options.
Please note that Component testing of C++ modules is not yet supported.
Testing different but related configurations of software is now supported by the variant management of modules and test cases: Starting from a base module with test cases for all variants, you can derive sub modules for each individual variant configuration. Such variant modules can overwrite or delete the base test cases or provide additional test cases specific to the given variant. Changes to the base module and tests will be propagated to all sub modules automatically.
Refer to Creating variant modules and Test cases and steps inherited from a variant module
for details about creating and editing test cases within variant modules.
The test execution (building the test driver and execution on the target) can now run in parallel
depending on the number of available cores of the host PC. Also the restore of TMB files now
runs in parallel. These optimizations greatly reduce the time required for automated test runs,
especially on continuous integration servers (e.g. Jenkins).
Support for test-driven development has been added: You can now add test objects to modules without having a source file in place. For such test objects you can also add variables for values being used for calculations by the test object according to the specification. In this way, you can prepare your tests before any code is available. Later, when the first code is implemented, all tests can be assigned to the real implementation modules using the normal IDA assign operation.
Refer to Quickstart 5: Test driven development (TDD) for an example on how to use this
feature.
All the following model elements of TESSY will get a UUID assigned to uniquely identify them
even after save and restore via TMB files on different computers:
Existing projects will be updated automatically when opened using TESSY 4.0.
Enhanced auto-reuse in command line execution mode provides more stable regression test
results in case of slightly changed interfaces as follows:
All those interface changes will be ignored when opening modules in command line mode.
Tests that cannot be executed for some reason (e.g. which would fail due to null pointer
access) can be excluded from the test execution. Such tests are displayed faded and they
are automatically skipped when executing all tests of a test object.
In TESSY 4.1, calculation of the McCabe metric (cyclomatic complexity, CC) has been added. (More information in subsection 6.2.2.7 Static code analysis and quality metrics.)
The complexity value for each test object will be summarized on module, folder and test
collection level using either the sum of all values, the average value or the maximum value.
As a derived measure based on the complexity, the TC/C ratio relates the number of defined test cases to the number of test cases necessary to reach 100% branch coverage. A value greater than 1 indicates that at least this minimum number of test cases has been defined.
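As an illustration (the function below is invented for this sketch, not taken from the manual), consider a small C function with two decisions, giving a cyclomatic complexity of CC = 2 + 1 = 3:

```c
#include <assert.h>

/* Two decisions (two if statements), so cyclomatic complexity CC = 3. */
int clamp(int value, int lo, int hi) {
    if (value < lo)   /* decision 1 */
        return lo;
    if (value > hi)   /* decision 2 */
        return hi;
    return value;
}
```

Reaching 100% branch coverage here requires three test cases (a value below, inside and above the range), so defining exactly three test cases yields a TC/C ratio of 1.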
Another new measure for the quality of test cases is the result significance (RS). It reveals weak test cases by verifying that each test case has at least some expected results, checks the call trace, or uses evaluation macros.
Fault injection
The new feature for automated fault injection on source code level overcomes problems during test implementation caused by programming paradigms that are widely used within safety-critical software engineering:
• Defensive programming
• Read-after-write
• Endless loops
In the past, the necessary code coverage of 100% could not be reached with normal test cases, so that in practice additional testing code or code replacements using compiler macros had to be implemented.
Now TESSY provides automated fault injection without affecting the original source code. The fault injection is placed into the flow graph of the tested function and is active only for the one or several test cases it is assigned to. Such fault injection test cases will be marked and documented within the test report.
Fault injections are created based on unreached branches of the function flow graph. They
can be applied without any change to the source code and they will be effective at the desired
location even after source code changes when doing regression testing.
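A sketch of why fault injection is needed (the function below is a hypothetical example of the defensive programming paradigm mentioned above): no valid input reaches the default branch, so ordinary test cases cannot achieve 100% branch coverage; a fault injection assigned to one test case can force execution into that branch without touching the source.

```c
#include <assert.h>

typedef enum { STATE_IDLE = 0, STATE_RUN = 1 } state_t;

/* Hypothetical test object: the default branch is defensive programming.
   No valid state value reaches it, so normal test cases cannot cover it. */
int next_output(state_t s) {
    switch (s) {
    case STATE_IDLE: return 0;
    case STATE_RUN:  return 1;
    default:         return -1;  /* defensive, normally unreachable branch */
    }
}
```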
Script perspective
The new script perspective allows editing tests within an ASCII editor using a dedicated test
scripting language. (More information in chapter 6.10 Script Editor: Textual editing of test
cases.)
All test data can be converted from the script format to the TESSY internal format and vice versa. The ASCII format can also be used as a complementary backup format in addition to the existing TMB file format.
The script editor provides syntax highlighting, auto completion, formatting, validation and an
Outline view.
The new implementation of the CTE is a full-featured Eclipse-based editor integrated into TESSY which enhances the design of test cases and the assignment of test data to tree nodes.
Legacy CTE trees will be updated and converted to the new CTEX file format automatically
when they are opened for editing.
(Please go to chapter 6.8 CTE: Designing the test cases for more information about the new
CTE. Information about the Classification Tree Method in general can be found in chapter 3.2
The Classification Tree Method (CTM).)
The new test scripting language provides the means to store the test contents for each test object as an ASCII file alongside the binary module archive (TMB file). (More information in sections 6.10.4 Script states et seq.)
This allows easy versioning of tests and comparison of arbitrary versions using a version
control system. For reviews of tests, changes can easily be found using standard ASCII diff
tools.
The TDE now allows entering arithmetic expressions as test data values. The resulting value
of an expression will be calculated and updated on code changes. The following operators are
supported: Addition, subtraction, multiplication, division, shift, binary or/and. The operands
can be numbers, defines and enum constants.
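For illustration, the listed operator set corresponds to ordinary C constant expressions over numbers, defines and enum constants; the names below (MAX_LEN, MODE_FAST) are invented for this sketch and the exact expression syntax accepted by the TDE may differ:

```c
#include <assert.h>

#define MAX_LEN 16          /* a define used as an operand */
enum { MODE_FAST = 2 };     /* an enum constant used as an operand */

/* The kinds of values such expressions evaluate to: */
int sum   = MAX_LEN + 1;    /* addition with a define       -> 17  */
int prod  = MODE_FAST * 4;  /* multiplication with an enum  -> 8   */
int shift = 1 << 3;         /* shift                        -> 8   */
int mask  = 0xF0 | 0x0F;    /* binary or                    -> 255 */
```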
In order to enable hardware I/O stimulation and measurement during unit testing, TESSY
provides a hardware adapter interface allowing control of external measurement hardware.
This hardware device has to implement a configuration interface as well as reading and writing
methods for hardware signal data. (More information in chapter 6.6 THAI: TESSY Hardware
Adapter Interface.)
During module analysis, TESSY reads the configuration of the hardware device in order to determine the available interface (i.e. the available I/O signals). This list of signals (including passing directions) will appear within the interface of the TESSY module (for each test object).
The signals may be edited within the Test Data Editor (TDE) as any other normal test object
variable.
CTE extensions
In TESSY 4.2, a generator for classification trees has been added that generates a new tree based on the interface of a test object. A classification will be generated for each input variable, with possible values as classes according to the equivalence partitioning method.
More information is provided in subsection 6.8.6.6 Automated tree generation based on function interface.
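As a sketch of the equivalence partitioning method itself (the function and its valid range are invented for illustration): for an input with a valid range of 0..200, three equivalence classes result, and one representative value per class is sufficient.

```c
#include <assert.h>

/* Hypothetical test object with valid input range 0..200. */
int is_valid_speed(int speed) {
    return speed >= 0 && speed <= 200;
}

/* Equivalence classes for the input "speed":
     class 1: below range  (representative: -1)
     class 2: within range (representative: 100)
     class 3: above range  (representative: 201) */
```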
The new project-wide data dictionary provides means to assign application domain related
names to implementation variables that can be used throughout all CTE test specifications.
Data dictionary entries will be synchronized on interface changes and related CTE classifications will be updated automatically.
The component test feature has been extended and now also supports the testing of C++
source code. The necessary objects for calling the work tasks and stimulated methods will be
created as synthetic variables within the test interface. The respective objects are selected
within the Properties view of the scenario editor (SCE) perspective.
The new call pair coverage measurement (CPC) supports measuring whether all call locations of functions or methods within the test object have been exercised at least once. This fulfills the requirements of ISO 26262 as an alternative coverage method for integration testing instead of applying the function coverage (FC) method.
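A minimal sketch of what a call pair is (example code invented for illustration): the test object below calls helper() at two locations, so it contains two call pairs. Call pair coverage requires test cases exercising both call locations, which function coverage alone would not distinguish.

```c
#include <assert.h>

static int helper(int x) { return 2 * x; }

/* Test object with two call locations of helper():
   call pair 1 is exercised only when flag is set,
   call pair 2 only when it is not. */
int process(int flag, int x) {
    if (flag)
        return helper(x + 1);  /* call pair 1 */
    return helper(x);          /* call pair 2 */
}
```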
The new testing effort estimation is based on a customizable formula that uses the available metrics provided by TESSY.
For more information go to subsection 6.2.2.8 Testing effort estimation and tracking.
Additional metrics have now been added that can also be used within the time estimation
formula:
• Number of statements
• Number of calls
• Maximum depth
When analyzing a module, the estimated time is updated based on the given formula. The
actual time can be tracked within the Test Project view for each test object.
Figure 0.11: Estimated and actual time within the Test Project view
Warning and error level thresholds as well as the formula itself can be defined within the
preferences.
A new task element provides a means to record external tests and reviews and link them to requirements. This allows full verification coverage of requirements that are not testable with a normal unit or integration test.
More information can be found in subsection 6.2.2.4 Creating tests and reviews.
Each task has a passed/failed test result, and PDF or image files can be attached as documentation of the review process. The result of a task counts as one test for all linked requirements, which provides full coverage also by external tests or reviews.
Figure 0.13: Task elements can be linked to requirements using the link matrix
TESSY 4.2 offers a completely reworked error handling to provide enhanced error messages and logging capabilities for command line execution. A new error dialog shows the full exception chain, the context (e.g. the affected test object) and provides easy access to the error log file.
TESSY 4.3 presents a completely reworked Environment Editor (TEE) perspective that provides editing of the project configuration stored within the project configuration file. The perspective consists of the following views:
• The All Environments view containing all available system configurations supported by TESSY.
• The Project Environments view containing the environments that are selected for the current project and stored within the configuration file.
• The Attributes view showing the attribute settings for one or several selected environments within the Project Environments view.
All views have filters to easily find desired elements. The Attributes view shows the list of
attributes within groups or as plain list. The groups are defined within the system configuration
file which is part of the TESSY installation.
Attribute settings can be compared and assigned between different project environments to
facilitate merging of related or different compiler/target combinations.
More information is provided in chapter 6.5 TEE: Configuring the test environment.
The new mutation test feature in TESSY 4.3 automatically checks the error detection capability of existing test cases. This unique function improves the review of test methods and test quality, as required by the standards for functional safety (IEC 61508, IEC 62304, ISO 26262 and EN 50128), and significantly reduces the manual review effort. Test quality can thus be evaluated easily and automatically.
The mutation test minimally mutates the C/C++ source code under test at error-sensitive locations, thereby injecting typical programming errors. If the unit and integration tests detect the error, these tests are considered useful. Non-detection indicates weak test cases, and the mutation test gives hints where they can be optimized to achieve a better testing quality.
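As a hedged illustration of the principle (the mutation operators TESSY actually applies are not specified here): a typical mutant changes a relational operator at a boundary. Only a test case using the boundary value 0 detects (kills) this mutant; a suite without such a case would let the mutant survive, revealing a weak spot.

```c
#include <assert.h>

/* original function under test */
int in_range(int x)        { return x >= 0 && x <= 100; }

/* a typical mutant: ">=" mutated to ">" at an error-sensitive location */
int in_range_mutant(int x) { return x >  0 && x <= 100; }
```

A test case with x == 0 yields different results for the original and the mutant, so it kills the mutant; a case with x == 50 does not distinguish them.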
The benefit of the new mutation test feature is that it automatically checks the quality of existing tests. Auditors can use it to subject all tests to quality control in the shortest possible time and to verify compliance with the standards for test creation. Especially in the development of safety-critical systems, a high verifiable quality of functional and non-functional tests is crucial for the safety of embedded applications and their functional safety certification.
The Mutation view lists all mutated code locations and shows the result (i.e. whether any test has detected the mutation). It is possible to exclude mutations in order to leave out mutations that cannot be found (e.g. equivalent mutants). For component testing, the coverage information is used to determine all code locations that are reached and thus should be mutated. This significantly reduces the number of mutations, because only code locations exercised by the tests are taken into account.
This feature checks the independence of test cases and ensures that all expected outcomes
have explicitly been calculated.
In order to check whether variables with passing direction OUT have really been written during execution of a test case, two different test data patterns can automatically be applied to initialize such variables (or components of structs/unions). This works without the need to change the passing direction.
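The idea behind the two patterns can be sketched as follows (example invented for illustration): a field the test object forgot to write still carries the initialization pattern after execution. Since a single pattern could coincide with a legitimately written value, a second, different pattern is applied in a separate run; only a field that retains both patterns is reliably unwritten.

```c
#include <assert.h>
#include <string.h>

typedef struct { int value; int err; } result_t;

/* Hypothetical test object: forgets to write the OUT field "err". */
void compute(int in, result_t *out) {
    out->value = 2 * in;
    /* bug: out->err is never written */
}

/* Returns 1 if "err" still carries the pattern in both runs,
   i.e. it was not written by compute().
   (Assumes two's complement: an int of 0xFF bytes reads as -1.) */
int err_unwritten(void) {
    result_t a, b;
    memset(&a, 0x00, sizeof a);   /* pattern 1: all bits 0 */
    compute(3, &a);
    memset(&b, 0xFF, sizeof b);   /* pattern 2: all bits 1 */
    compute(3, &b);
    return a.err == 0 && b.err == -1;
}
```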
The new Mutation testing as well as test data initialization patterns in TESSY 4.3 can be
activated as additional test execution types within the normal test execution dialog.
• “Run without instrumentation” executes and evaluates only those test cases and evaluation checks that can be applied independently of any source code instrumentation (e.g. coverage measurement, testing of static local variables, call trace as well as fault injection require code instrumentation). This mode shall ensure that the instrumentation does not change the behavior or hide any errors within the software.
• “Run with test data pattern” initializes all OUT variables with two configurable patterns.
• “Run mutation test” executes the test cases with mutated code.
All additional executions are conducted after a successful completion of the normal test execution (either with or without coverage measurement), and all of them must yield the same results as the normal test execution.
If you are using Windows 10, please remember to associate PDF files
with your third party PDF viewer.
• To run TESSY 4.x you need at least a 1.5 GHz CPU and 4 GB RAM.
Important: TESSY 4.1 is 64-bit only! Please make sure the computer you want to use with TESSY 4.1 is running a 64-bit version of Windows.
Older TESSY versions can be installed on Windows PCs with 32-bit or 64-bit.
1.2 Setup
TESSY allows you to have multiple TESSY installations with different versions on the same
computer. You do not have to uninstall older versions.
1.3 Installation
Download TESSY
► Read the license agreement carefully. Check the box to accept and click “Next”.
► Now select the setup type: “Complete” (default) is recommended. Click “Next”.
► Select the TESSY testarea folder (“Folder for temporary files:”; default “C:\tessy”) and click “Next”.
► Select the program folder (default “TESSY 4.x”) and decide whether the installation on the system is for “all users” or for a certain user. Click “Next”.
► Start the installation by clicking “Next”. The installation will take a few moments.
1.4 Registration
► Start TESSY by clicking “Start” in Windows > “TESSY 4.x” > “TESSY 4.x” (see fig. 1.7).
If a valid license is found, TESSY will start. If there is no valid license, the License Manager
will start with a License Key Request popup window (see figure 1.9).
License key request
The License Key Request popup window will appear (see figure 1.9).
► Click on “Online Request” and fill out the form for a license key request.
You will receive a license key within a license key file via e-mail. The license key file is a plain text file with the ending .txt.
Important: The license key file is not generated and sent out automatically; therefore it can take up to a workday until you receive it!
• Node-locked license
• Floating license
The node-locked license is a single user license issued for a given host ID. It’s not possible to
share the license with other users within your network. A node-locked license operates only
on the particular computer for which the license is issued.
The floating license is a server license issued for a given host ID for a dedicated server within
your network. It’s possible to share the license with other users within your network.
The Floating License Server (FLS) is running on a central network server and manages the
licenses that are in use. Thus TESSY can be used on any computer within the network.
The number of users who can use the software simultaneously is determined by how many
licenses you have purchased.
The License Manager (FLM) is started locally on a computer and displays the state of the
FLS (e.g. how many licenses are currently checked out).
If you want to run TESSY with a time limited demo license, just follow the instructions
in Registering a node-locked license on one computer. The License Manager (see
figure 1.14) will keep you informed about the validity of your license key.
Warning: The registry entries for TESSY’s Floating License Server (FLS) are gen-
erated during installation and must not be altered manually! Otherwise your host
ID and therefore the license key might become invalid.
During the installation the FLS needs an operational Ethernet network interface. If a proper Ethernet network interface is missing, the FLS cannot be used as a floating license server, but only as a node-locked license server.
The tool “hostid.exe”, which can be found in the FLS installation folder, outputs the
current status if called with option “-f”.
If the host ID was destroyed for some reason, you can reinstall the license server. The installer will repair the corresponding registry entries and generate a new host ID.
Please keep in mind that the last operational host ID is saved in the registry as well. Therefore you should not delete the registry entries manually before reinstalling the license server.
After startup the license server will generate a special key which you can send to repair@razorcat.com in order to receive a new license file.
When you have received the license key file (*.txt file),
Þ open the License Manager by clicking “Start” in Windows > “Razorcat Floating
License Server 8.x” > “Floating License Manager”.
Þ In the opening popup window click on “Done” (see figure 1.9).
The Configure window for the License Server will open (see figure 1.11).
Optionally you can run the license server as a service. In this case autostart will be set for the computer and all its users.
Important: You need administrator privileges to run the license server as a service.
The floating license is a server license issued for a given host ID for a dedicated server within
your network. It’s possible to share this kind of license with other users within the respective
network.
Important: Make sure that the license you want to use for the following process really is a floating license, not a node-locked license.
To run a central license server within your network, please take the following steps:
Þ Login as administrator.
Þ Install the Floating License Server (FLS) on your network server.
You can download the latest standalone version of the Razorcat FLS from
https://www.razorcat.com/de/downloads-tessy.html.
Þ Start the License Manager by clicking “Start” in Windows > “Razorcat Floating
License Server 8.x” > “Floating License Manager”.
Þ Click on “Configure” in the toolbar.
The following dialog window will pop up. Please choose the following options within the dialog to run the license server as a service (see fig. 1.12).
Þ Under “License Key File” choose the license key file (*.txt) you have received.
The license key file will be copied into the bin directory of the license server installation. Please check that the license file has been copied after completing these steps.
Þ Click on “OK”.
The license server will start automatically and the License Manager (see figure
1.14) will inform you about the configuration changes you have just made.
Important: Make sure that the license key file can be found in the
bin directory of the license server. If not, it needs to be copied to this
place.
To be able to work with TESSY on various computers within one network you have to select
the floating license server on every single computer you want to use TESSY.
Þ Start the License Manager by clicking “Start” in Windows > “Razorcat Floating
License Server 8.x” > “Floating License Manager”.
Þ Then click “License” > “Server”.
The following dialog window will pop up.
Þ Under “Address” insert the license server name or IP address within your network.
Þ Click “OK”.
When updating TESSY, it is possible that TESSY also requires an update of the Razorcat Floating License Server (FLS) on your network server and an updated license key file.
You can download the latest standalone version of the Razorcat FLS (for server installations) from https://www.razorcat.com/de/downloads-tessy.html.
A newer version of the license server can be installed in parallel with any existing FLS installation. Before starting the updated version, the previously running license server needs to be stopped and deactivated.
The Floating License Manager (FLM) is used to control and to configure the Floating License
Server. In general there is no need to use the FLM, because the configuration of a local
license server automatically takes place during the installation of the license file. In some
cases it is helpful to use the FLM to change the default settings to suit your needs.
The FLM is in any case necessary to configure a central license server (see Registering a
floating license for network usage) as well as in case of problems.
Þ Start the License Manager by clicking “Start” in Windows > “Razorcat Floating License Server 8.x” > “Floating License Manager”.
Figure 1.14: License key check successful: this license key is correct
For more information about the handling and status of licenses please read the fol-
lowing sections: 1.6 Using a license without connection to the license server (FLS),
1.6.1 Checking-out the license for use on your local computer, and 1.6.2 Using a
license on a computer with no connection to the license server.
1.5 Uninstallation
Important: Uninstalling TESSY deletes neither the project root nor your project data or configuration file. Nevertheless, please make sure that your data is saved.
The “Razorcat Floating License Server” (FLS) and the “Razorcat Shared” installation files will remain on the computer. If you want to delete those as well, you can use the Windows “Apps & features” function (see figure 1.15).
Important: For using your TESSY license temporarily without connection to the
license server, you need a “Floating License”.
You can temporarily check out your TESSY floating license on your computer and work independently of the connection to the Floating License Server (FLS).
This is useful when there is temporarily no connection to the license server available (e.g.
when traveling, see subsection 1.6.1 Checking-out the license for use on your local com-
puter).
It is also possible to check out a license for a computer that will never be able to reach the license server directly (e.g. a stand-alone computer or a computer within an isolated network, see subsection 1.6.2 Using a license on a computer with no connection to the license server).
Þ Click “Start” in Windows > “Razorcat Floating License Server 8.x” > “Floating License Manager”.
Þ In the License Manager click on “License” > “Info”.
Þ Next to “State” the number of days available for check-outs will be displayed (see figure 1.16).
Figure 1.16: The license info shows the possible number of days for checking out the license
Licenses in use
The License Manager also provides information about the licenses that are in use on your
computer or were transferred part-time to another computer (click on > “Info”).
joe@company.com:0
checked out for 2 day(s) since Wed Jun 06 17:21:07 2018
(license checked out statically, for a selected number of days)
joe@company.com:50874
currently checked out since Wed Jun 06 17:21:39 2018
(license checked out dynamically, as long as TESSY is in use)
Use this license check-out option if you want to make use of the license independently of the
license server connection (e.g. when traveling).
Þ In the menu bar click on “License” > “Check Out…” (see figure 1.17).
Þ Choose the number of days. The Registration Information will be filled out automatically (see figure 1.18).
Þ Click “OK” and save the file.
You can now use this license file on your local computer. To register the license
refer to section 1.4.2 Registering the license.
To be able to use the TESSY floating license on another computer with no license server
connection (e.g. a stand-alone computer or a computer within an isolated network) the pro-
cedure described in section 1.6 Using a license without connection to the license server (FLS)
needs to be modified.
Þ Open the license manager on your computer with NO license server connection
(see 1.4.3 The License Manager (FLM)).
Þ In the menu bar click “License” > “Request…” .
Þ Copy the first 12 characters in “Registration Information” (see figure 1.9).
Þ Transmit these 12 characters to your local computer WITH license server connection.
Þ Insert the transmitted 12 characters into “Registration Information” (see figure 1.18)
when checking-out on your local computer WITH license server connection (see
section 1.6.1 Checking-out the license for use on your local computer).
Þ Choose the number of days, click “OK”, and save the license file.
Þ Make this file available on the other computer with NO license server connection.
Þ You can now register this license file on the other computer with NO license server
connection (see section 1.4.2 Registering the license).
Figure 1.19: Transmitting the license file to a computer with no FLS connection
In the new version of TESSY you will find some new functions as mentioned in section New
features in TESSY 4.0 and New features in TESSY 4.1.
You will have to convert your projects to use them with the new TESSY 4.x version. When
you open a project, TESSY will ask you if you want to convert your project. By clicking “Yes”
TESSY will convert the project automatically.
Warning: Once you have converted your project, it cannot be used by TESSY 3.x anymore! If you want to keep using the project with TESSY 3.x, make a copy of the project first.
This chapter offers a brief introduction about unit testing with TESSY and the classification
tree method. It provides basic knowledge for organizing and executing a unit test in general
and in particular with TESSY. The chapter about the classification tree method helps you to
understand the logical system and to use the CTE.
International standards like IEC 61508 require module tests. According to part 3 of IEC
61508, the module test shall show that the module under test performs its intended func-
tion, and does not perform unintended functions. The results of the module testing shall be
documented.
IEC 61508 classifies systems according to their safety criticality. There are four safety in-
tegrity levels (SIL), where 1 is the lowest level and 4 the highest, i.e. systems at level 4 are
considered to be the most critical to safety. Even for applications of low criticality (i.e. at SIL
1), a module test is already “highly recommended”. The tables contained in the annexes of
IEC 61508, Part 3 specify the techniques that should be used, e.g. for module testing the
technique “functional and black box testing” is highly recommended at SIL 1 already. Other
techniques, such as dynamic analysis and testing are recommended at SIL 1 and highly rec-
ommended at SIL 2 and higher.
Part 4 of IEC 61508 defines a (software) module as a construction that consists of proce-
dures and/or data declarations, and that can interact with other such modules. If we consider
embedded software which is written in the C programming language, we can take a C-level
function as a module. To prevent a mix-up between C-level functions and C source modules,
we will refer to the C-level functions as units from now on.
Other standards, such as the British Def Stan 00-55, ISO 15504, or DO-178B, also require module testing (where the nomenclature ranges from “module” to “unit” to “component”). However, all standards have more or less the same requirements for that kind of test: the tests have to be planned in advance, test data has to be specified, the tests have to be conducted, and the results have to be evaluated and documented.
During unit testing of C programs, a single C-level function is tested rigorously, and is tested in isolation from the rest of the application. Rigorous means that the test cases are specially made for the unit in question, and that they also comprise input data that may be unexpected by the unit under test. Isolated means that the test result does not depend on the behavior of the other units in the application. Isolation from the rest of the application can be achieved by directly calling the unit under test and replacing the calls to other units with stub functions. Unit testing tests at the interface of the unit and does not consider the internal structure of the unit; it is therefore considered black-box testing.
The interface of the unit consists of the input variables to the unit (i.e. variables read by the
unit) together with the output variables (i.e. variables written by the unit). A variable can both
be an input and an output (e.g. a variable that is incremented by the unit), and the return
value of the unit - if present - is always an output. The structure of a test case follows from
the structure of the interface.
Unit testing is conducted by executing the unit under test with certain data for the input vari-
ables. The actual results are compared to those predicted, which determines if a test case
has passed or failed.
Unit testing (of C-level functions, as described) is well suited to fulfill the requirements of
module testing for IEC 61508, because unit testing is
• a black-box, because the internals of the unit are not taken into account, and
• Finding errors early: Unit testing can be conducted as soon as the unit to be
tested compiles. Therefore, errors inside the unit can be detected very early.
• Saving money: It is general knowledge that errors which are detected late are
more expensive to correct than errors that are detected early. Hence, unit testing
can save money.
• Reducing complexity: Instead of trying to create test cases that test the whole set of interacting units, the test cases are specific to the unit under test. Test cases can easily comprise input data that is unexpected by the unit under test, or even random input test data, which is rather hard to achieve if the unit under test is called by a fully-functioning unit of the application. If a test fails, the cause of the failure can be easily identified, because it must stem from the unit under test, and not from a unit further down the calling hierarchy.
• Giving confidence: After the unit tests, the application is made up of single, fully tested units. A test for the whole application will be more likely to pass, and if some tests fail, the reason will probably stem from the interaction of the units (and not from an error inside a unit). The search for the failure can concentrate on that, and need not doubt the internals of the units.
Unit testing verifies that certain input data generates the expected output data. Therefore, units that do data processing in the widest sense, e.g. generation of data, analysis of data, sorting, making complex decisions, or difficult calculations, are best suited for unit testing. To find such units, the application of metrics (e.g. the cyclomatic complexity according to McCabe) may be appropriate.
Other criteria for selecting units to test may be how critical the functionality is to the unit’s
operation, or how often a unit is used in the application.
The interaction of the units is not tested during the unit test. This includes the semantics of the parameters passed between units (e.g. the physical unit of the values) and the timing relationships between units (e.g. does a unit fulfill its task fast enough to let a calling unit also fulfill its tasks at the required speed?). In addition, the interrupt behavior of the application is not in the scope of unit testing. Questions like “Does my interrupt really occur every 10 ms?” or “Which interrupt prolonged my unit unacceptably?” are not addressed by unit testing, because unit testing explicitly aims at testing the functional behavior of the unit isolated from environmental effects such as interrupts.
Regression testing is the repetition of previously passed tests after bug fixes or improvements have been implemented in the software. Regression testing proves that a change in the software did not result in any unexpected behavior, and it is a key to software quality. Obviously, the practice of regression testing requires the automation of the tests, because the effort to repeat the tests manually is too high. Even for non-repetitive unit tests, proper tool support will save you lots of time, but tool support is indispensable for the repetition of the unit tests.
The dilemma: It is commonly accepted that a software developer is badly suited to test his own software, especially if the completeness of the implementation or the compliance of the implementation with the specification is an issue (blindness against one's own faults). If the developer has forgotten to implement a certain functionality, it is likely he will also forget a test that would reveal the missing functionality. If the developer has misinterpreted the specification, it is likely that his tests will pass in spite of the wrong functionality.
On the other hand, experience has shown that a tester who has to test code not written by himself must put a lot of effort into understanding the function's interface. The tester must find out the meaning of the variables, and which values to use to conduct certain tests. E.g., if the test specification requires the test of something “green”, which variable (or variables) represents the color, and which value of the variable represents green? The prediction of the expected results poses similar problems.
If the developer does not test himself, this gives rise to additional effort, because a failed test has to be passed back to the developer, who has to reproduce the failure and correct the problem, and then normally a concluding external regression test has to take place. Furthermore, additional effort arises from the fact that the developer will not hand out his software to the QA department without having done at least some tests. This duplicated test effort could be saved if the developer immediately started testing by using the externally predefined test cases.
The way out: A way out of that dilemma could be that a tester who has not written the code specifies the test cases according to the functional specification of the unit, including the expected results. He can use abstract data for this (e.g. color = green). The set of test cases is handed over to the developer of the software. For him, it should be no problem to set the input variables to the required values (e.g. the appropriate RGB value for green). If a test fails, the developer can immediately correct the problem and re-run all tests that have passed so far (regression testing). Testing is then seen as an additional step during the implementation of software, comparable to the compiling step, where the compiler finds all syntax errors and the developer corrects them interactively, verifying his changes by subsequent compiler runs.
However, standards require the organizational separation of development and test, due to the initially mentioned blindness against one's own faults. Possibly, it could be sufficient to only separate the specification of the test cases from the development, and to assume that the execution of predefined test cases does not suffer from the above-mentioned blindness.
For embedded software it is essential that the unchanged source code with all the non-ANSI keywords and non-ANSI peculiarities is used for testing. For instance, some cross compilers for embedded systems allow bit fields that are smaller than the integer size, e.g. 8-bit wide bit fields in a 16-bit application. This is not portable ANSI C, but justifiable by the perfect adaptation to the embedded system. Naturally, the unit test results are worthless if this non-standard size cannot be maintained during the tests. This requires specialized tools.
Furthermore, it is also essential that the concluding tests execute at least on the actual hardware, i.e. the embedded microcontroller. This is a challenge, but there are ways to attenuate it. Using a cross compiler for the microcontroller in question is a prerequisite, preferably the exact version that will also be used for the user application.
Unit test tools can follow two technical approaches towards unit test: The test application
approach uses a special application for conducting the unit tests. This is the usual approach.
The original binary test uses the unchanged user application for testing.
The usual method for unit test tools to conduct unit tests is to generate a test driver (also called a test harness) and compile the test driver together with the source code of the unit under test. Together they form the test application. The test driver includes startup code for the embedded microcontroller, the main() function entry, and a call to the unit under test. If required, the test driver also contains code for stub functions and the like. For each unit to test, a separate test application is created. This test application is used to conduct the unit tests. For that, the test application is loaded into an execution environment capable of executing it. This execution environment is normally a debugger connected to an (instruction set) simulator, an in-circuit emulator (stand-alone or connected to a target system), a JTAG or BDM debugger, or the like. After test data is transferred to the execution environment (the test data may already be included in the test application), the tests are conducted and the results are evaluated.
To execute the test application on the actual hardware, the test application must not only be
compiled using a cross compiler for the microcontroller in question, but also the test applica-
tion must fit into the memory present on the actual hardware. Also, the startup code of the
test application must take into account peculiarities of the actual hardware, e.g. the enabling
of chip selects and the like. Making the test application fit into memory can be simplified
by using an in-circuit emulator, which provides emulation memory, and serves as a kind of
generalized hardware platform for the microcontroller in question.
When the actual hardware has to be used and memory on this hardware is very limited, the test application must be minimized to fit into this memory. This is especially challenging for single-chip applications, where only the internal memory of the microcontroller is available. If test data is included in the test application (and memory is limited), a single test application can only include a few test cases, which in turn means several test applications for the test of one unit, which is cumbersome. An approach that avoids this keeps the test data separated from the test application, which not only allows for a minimized test application, but also allows you to change the test data without having to regenerate the test application.
Another approach is to use the unchanged user application for unit testing. This resembles
the manual test that is usually done by a developer after the application is completed. The
complete application is loaded into the execution environment, and the application is executed
until the unit to be tested is eventually reached. Then the input variables are set to the required
values, and the test is conducted.
The advantage of the Original Binary Test approach is that the unit under test is tested exactly in its final memory location. There is no extra effort (or hassle) for compiling and linking a test application, because the user application is used, which had to be compiled and linked anyway. Because the user application must fit into the memory anyway, problems regarding the size of the application can be neglected. Even applications that already reside in the ROM of the hardware can be tested. Even if the cross compiler used to compile the user application is no longer at hand, tests are still feasible.
However, this Original Binary Test approach has some disadvantages compared to using a
test application:
• There is no control over the test execution. It depends on the user application when the unit under test is reached. It may be the case that the unit under test is never reached, or only after some special external event has happened, e.g. the push of a button on the actual hardware and an interrupt resulting from this.
• During the Original Binary Test, stub functions cannot be used. This is clear, because the application is already linked using the actual functions that are called by the unit under test. A unit is always tested using the other units of the application. Therefore, the unit under test is not isolated from the rest of the application, and errors of called units may show up during the test of the unit under test.
• It is not possible to use arbitrary test data for the unit test. For instance, if the unit under test gets its test data via a pointer to a memory area, the amount of test data must fit into this memory area, which was allocated by the user application.
Even though the Original Binary Test is easy to use, and although it possibly could be the only means to do any unit testing at all, it has strong disadvantages regarding what is essential for proper unit testing; one could even insist that it is not a unit test in the strictest sense.
3.1.5 Conclusion
Besides being required by standards, unit testing reduces the complexity of testing, finds
errors early, saves money, and gives confidence for the test of the whole application. If used
in the right way, unit testing can reduce development/test time and therefore reduce the time-
to-market. To conduct regression tests, test automation is indispensable. This requires tool
support.
3.2.1 General
Testing is a compulsory step in the software development process. The planning of such
testing often raises the same questions:
Anyone who has been confronted with such issues will be glad to know that the CTM offers
a systematic procedure to create test case specifications based on a problem definition.
The CTM is applied by a human being. Therefore, the outcome of the method depends on the experience, reflections, and appraisals of the user of the CTM. Most probably, two different users will come up with different sets of test case specifications for the same functional problem. Both sets could be considered correct, because there is no absolute correctness. Nevertheless, it should be clear that there are sets of test cases that are definitively wrong or incomplete. Because of the human user, errors cannot be avoided. One remedy is the systematic approach inherent in the method. It guides the user and stimulates his creativity. The user shall specify test cases with a high probability of detecting a fault in the test object. Such test cases are called error-sensitive test cases. On the other hand, the user shall avoid specifying too many superfluous test cases, i.e. test cases that do not increase test intensity or test relevance. Such test cases are called “redundant” test cases. It is advantageous if the user is familiar with the field of application the method is applied in.
The CTM is a general method: It can not only be applied to module/unit testing of embedded software, but to software testing in general, and also to functional testing of problems that are not software related. The prerequisite for applying the method is to have a functional specification of the behavior of the test object available. The CTM incorporates several well-known approaches for test case specification, e.g. equivalence partitioning and boundary value analysis.
The CTM stems from the former software research laboratory of Daimler in Berlin, Ger-
many.
The first step is to describe the expected behavior of the test object, e.g. “If the button is
pushed, the light will go on; if the button is released, the light will go off”. Data processing
software normally solves functional problems, since input data is processed according to an
algorithm (the function) to become output data (the solution).
Analyze the functional specification. This means you think about this specification with the objective of figuring out the test-relevant aspects of the specification. An aspect is considered relevant if the user expects that aspect to influence the behavior of the test object during the test. In other words, an aspect is considered relevant if the user wants to use different values for this aspect during testing. To draw the tree, these aspects are worked on separately. This reduces the complexity of the original problem considerably, which is one of the advantages of the CTM.
Consider systems that measure distances in a range of some meters, e.g. the distance to a wall in a room. Those systems usually send out signals and measure the time until they receive the reflected signal. Those systems can be based on two different physical effects: One can use sonar to determine the distance, whereas the other can use radar.
The question is now: Is the temperature of the air in the room a test-relevant aspect for the test of these measurement systems? The answer is yes for one system and no for the other:
The speed of sound in air (sonar) depends on the temperature of the air. Therefore, to get exact results, the sonar system takes this temperature into account during the calculation of the distance. To test if this works correctly, you have to do some tests at different temperatures. Therefore, the temperature is a test-relevant aspect for the sonar system.
On the other hand, we all know that the speed of a radar signal, which travels at the speed of light, is independent of the temperature of the air it travels in (it does not even need air to travel). Therefore, the temperature of the air is not a test-relevant aspect for the testing of the radar system. It would be superfluous to do testing at different temperatures.
This example shows that careful thinking is needed to figure out (all) test-relevant aspects. It would lead to poor testing if someone simply took the test cases for the radar system and applied them to the sonar system without adding some temperature-related test cases. Additionally, this example illustrates that it is advantageous to have some familiarity with the problem field at hand when designing test cases.
After all test-relevant aspects are determined, the values that each aspect may take are considered. The values are divided into classes according to the equivalence partitioning method: Values are assigned to the same class if they are considered equivalent for the test. Equivalent for the test means that if one value out of a certain class causes a test case to fail and hence reveals an error, every other value out of this class will also cause the same test to fail and will reveal the same error.
In other words: It is not relevant for testing which value out of a class is used, because all values of a class are considered equivalent. Therefore, you may take an arbitrary value out of a class for testing, even the same value for all tests, without decreasing the value of the tests. However, the prerequisite for this is that the equivalence partitioning was done correctly, which is the responsibility of the (human) user of the CTM.
Please note:
• Equivalent for the test does not necessarily mean that the result of the test (e.g.
a calculated value) is the same for all values in a class.
An ice warning indication in the dashboard of a car shall be tested. This ice warning indication
depends on the temperature reported by a temperature sensor at the outside of the car, which
can report temperatures from -60°C to +80°C. At temperatures above 3°C the ice warning shall
be off, at lower temperatures it shall be on.
It is obvious that the temperature is the only test-relevant aspect. To keep the testing effort reasonable, we do not want to have a test case for every possible temperature value. Therefore, all possible temperature values need to be classified according to the equivalence partitioning method.
It is best practice to find out whether invalid values are possible. In our case a short circuit or
an interruption of the cable could result in an invalid value. Therefore, we should first divide
the temperatures into valid and invalid values. The invalid values can relate to temperatures
that are too high (higher than 80°C) and to ones that are too low (lower than -60°C). It is
tempting to form two classes out of the valid temperatures: the first class shall contain all
values that result in the ice warning display being on (from -60°C to 3°C), and the other class
shall contain all values that result in the ice warning display being off (from 3°C to 80°C).
The equivalence partitioning in the figure above leads to at least four test cases, because we
need to take a value out of each class for the tests.
For the ice warning example, this classification of the valid values is not detailed enough,
because according to the equivalence partitioning method, it would be sufficient to use a
single, arbitrary value out of a class for all the tests. This could be, for instance, the value
2°C out of the class of temperatures for which the ice warning display is on. In consequence,
no test with a sub-zero temperature would check whether the ice warning display is on. To
avoid this, you could divide this class further according to the sign of the temperature:
Using the CTM, the result of repeating the equivalence partitioning for all test-relevant
aspects is depicted in the CT. The root represents the functional problem; below it, the
test-relevant aspects (classifications) are drawn as rectangular nodes. Classes are depicted
as ellipses.
The idea behind using boundary values is that values at the borders of a range of values
are better suited to form error-sensitive test cases than values in the middle. The idea behind
boundary value analysis is contrary to equivalence partitioning: one method treats a set of
values as equivalent, while the other prefers special values within such a set. Despite
the fact that the idea behind boundary value analysis is exactly the opposite of equivalence
partitioning, both approaches can be expressed in the CTM.
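For the ice warning example, the boundary values that follow from the class borders can be collected explicitly. The following list is an illustrative assumption derived from the borders named in the text (sensor range -60°C to +80°C, warning threshold 3°C, sign boundary at 0°C):

```c
/* Candidate boundary test values for the ice warning example,
   derived from the class borders discussed in the text. */
static const int boundary_test_values[] = {
    -61, /* invalid: below the sensor range           */
    -60, /* lowest valid temperature                  */
     -1, /* highest value below the sign boundary     */
      0, /* sign boundary within the "on" classes     */
      3, /* threshold between warning on and off      */
      4, /* just above the threshold                  */
     80, /* highest valid temperature                 */
     81, /* invalid: above the sensor range           */
};
```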
f. Testing a hysteresis
The current problem specification of the ice warning example does not mention hysteresis. It
may be tempting to extend the current problem specification so that fast changes in the state
of the ice warning display shall be avoided. For instance, the ice warning display shall be
switched off only after the temperature has risen to more than 4°C. This could be realized by a
hysteresis function. The necessary test cases for such a hysteresis function can be specified
with the CTM.
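Such a hysteresis could be realized roughly as follows. This is a sketch under the stated assumption (on at 3°C or below, off only above 4°C), with invented names:

```c
#include <stdbool.h>

/* Illustrative hysteresis sketch (names and thresholds assumed):
   the warning switches on at 3 °C or below, but switches off only
   above 4 °C, so the display does not flicker around the threshold. */
static bool warning_state = false;

bool ice_warning_with_hysteresis(int temperature_celsius)
{
    if (temperature_celsius <= 3)
        warning_state = true;   /* cold enough: switch on */
    else if (temperature_celsius > 4)
        warning_state = false;  /* clearly warm: switch off */
    /* between 3 °C and 4 °C the previous state is kept (hysteresis) */
    return warning_state;
}
```

Note that test cases for such a function must specify sequences of inputs, because the expected output depends on the previous state; this is what makes hysteresis test cases interesting.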
Test cases are specified in the so-called combination table below the CT. The leaf classes of
the CT form the head of the combination table. A line in the combination table depicts a test
case. The test case is specified by selecting leaf classes, from which values for the test case
shall be used. This is done by the user of the method, by setting markers in the line of the
respective test cases in the combination table.
Figure 3.4: Result of the CTM: tree (above) with combination table (below)
It may be tempting to combine every class with every other class during the specification
of the test cases. Besides the fact that not every combination might be possible for logical
reasons, it is not the intention of the CTM to do so; if it were, a tool could do it automatically.
This would lead to many test cases, with the disadvantages of a loss of overview and too
much effort for executing the test cases.
The objective of the CTM is to find a minimal, non-redundant but sufficient set of test cases
by trying to cover several aspects in a single test case whenever possible. Similar to the
drawing of the tree, it depends on the appraisal and experience of the user of the method
how many and which test cases are specified.
Obviously the size of the tree influences the number of test cases needed:
A tree with more leaf classes naturally results in more test cases than a tree with fewer leaf
classes. The minimum number of test cases needed for a given tree is called the minimum
criterion. It can be calculated from the consideration that each leaf class should be marked in
at least one test case, and that some leaf classes cannot be combined in a single test case,
because the classes exclude each other.
Similarly, a maximum criterion can be calculated, which gives the maximum number of test
cases for a given CT. A rule of thumb states that the number of leaf classes of the tree gives
the order of magnitude for the number of test cases required for a reasonable coverage of the
given tree.
Problem definition:
A start value and a length define a range of values. Determine if a given value is within the
defined range or not. Only integer numbers are to be considered.
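A straightforward C implementation of this problem could look like the sketch below. The function name follows the example's name used in this manual; the exact range semantics (whether the end point start + length is itself inside) is an assumption, since the problem definition leaves it open:

```c
#include <stdbool.h>

/* Sketch of the functional problem: is value within the range defined
   by range_start and range_len? Here the range is assumed to be
   [range_start, range_start + range_len], with both ends inside.
   Note that this naive version inherits the specification's open
   questions about negative lengths and integer overflow. */
bool is_value_in_range(int value, int range_start, int range_len)
{
    return value >= range_start && value <= range_start + range_len;
}
```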
It is obvious that complete testing is practically impossible, because we get 65536 * 65536 *
65536 = 281,474,976,710,656 test cases, even if we assume only 16-bit integers. If we
assumed 32-bit integers … well, we had better not.
The start of the range and the length can be regarded as test relevant aspects. This is
convenient since, according to the problem definition, a range of values is defined by a start
value and a length. It reflects the intention to use different values for the start and the length
during testing.
We should have some test cases that result in inside, and other test cases that result in
outside. We call the corresponding aspect position, because the position of the value under
test with respect to the range determines the result. So the three test-relevant aspects to be
used for classifications are start value, length and position, and they thus form the basis of
the CT:
Now classes are formed for the base classifications according to the equivalence partitioning
method. Usually, the problem specification gives us hints on how to form the classes. For
example, if the problem specification stated: “If the start value is greater than 20, the length
value doubles”, then we should form a class for start values greater than 20 and a class for
start values smaller than or equal to 20.
Unfortunately, the problem specification at hand is too simple to give us similar hints. How-
ever, since the start value can take on all integer numbers, it would be reasonable to form
a class for positive values, a class for negative values, and another class for the value zero.
It would also be reasonable to form just two classes, e.g. one class for positive start values
including zero and the other class for negative start values. This depends on whether one
puts emphasis on having zero as the value for the start of the range in a test case or not.
Because of the systematics inherent in the CTM, and because range_length is an integer
just like range_start, it is consistent to use the same classes for range_length as for
range_start. This results in the following tree:
To specify a first range (to be used in the first test case), we have to insert a line in the
combination table and to set markers on that line:
Figure 3.9: A first specification for the range in the combination table
Two markers are set on the line for the first specification. One marker selects the class
positive for the start of the range. The other marker selects the class positive for the length
of the range. A range with a start value of, say, 5 and a length of 2 would conform to this
specification. This first specification was named trivial.
We can insert a second line in the combination table and specify a much more interesting
test case:
Figure 3.10: A second specification for the range in the combination table
For the second specification, again two markers are set. They specify that a negative value
shall be used both for the start and for the length of the range. Hence a range with a start
value of -5 and a length of -2 would conform to the second specification. But this value pair
raises some questions: Shall the value -6 lie inside the range? Or shall the value -4 lie inside
the range? Or shall no value at all lie inside the range if the length of the range is negative?
Each opinion has its supporters, and it is hard to decide what is to be considered correct.
Actually, at this point it is beyond our competence to decide what is correct. We have found
a problem in the specification!
A test case using a negative length would probably not have been used if the test case spec-
ification had been done spontaneously and non-systematically. But a negative length is
completely legal for the functional problem specification given above. If you consider that
the problem specification at hand was a very simple one, you may imagine how likely it is to
overlook a problem in a more comprehensive and complicated problem specification.
In case we are not satisfied with the fact that a fixed single positive value, e.g. 5, may serve as
value for the start of the range in all test cases, we can sub-divide the class positive according
to a suitable classification. In our example, we classify according to the size. The idea behind
this is to have a class containing only a single value, in our case the highest positive value
existing in the given integer range. We use this value because it is an extreme value, and as
we know, using extreme values (or boundary values) in test cases is well-suited to produce
error-sensitive (or interesting) test cases.
In the figure above, the positive values for the start of the range are subdivided according to
their size.
This results in the two classes normal positive and maximal positive. The class maximal pos-
itive holds the highest possible positive value (i.e. MAX_INT), and the class normal positive
holds all other positive values. This satisfies mathematical completeness.
Remark 1: Another possibility to classify the positive start values would have been, for in-
stance, to classify them into odd and even values. This would have been completely legal. It
would probably also have been sensible for, e.g., a problem of number theory, but it is not
target-oriented for the problem at hand.
Remark 2: Please note that for the moment we do not know, and need not know, the size
(in bits) of the integers used in the problem at hand. We simply specify “the highest positive
value in the given integer range”. This keeps our test case specification abstract! For
example, our test case specification is appropriate for any integer size. As soon as we
assume, e.g., 16-bit integers, and therefore parameterize our test case by specifying 32767
as the value in the class maximal positive, we lose this abstraction. If we then port the
parameterized test case to a 32-bit integer system, the test case loses its meaning. This is
not the case if we port the abstract test case specification.
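The abstraction argued for in Remark 2 can even be preserved at the code level: instead of parameterizing the test with a hard-coded 32767, the compiler's own limit macro can be used. A small sketch (the helper name is invented for illustration):

```c
#include <limits.h>

/* Illustrative helper: the class "maximal positive" holds the highest
   representable int value. Using INT_MAX from <limits.h> instead of a
   literal 32767 keeps the test parameterization valid for any integer
   size of the target (32767 on 16-bit int, 2147483647 on 32-bit). */
int maximal_positive_start(void)
{
    return INT_MAX;
}
```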
With the CT extended according to figure 3.11 (The CT for is_value_in_range, 4th step), we
can insert an additional line in the combination table and specify again an interesting range
for a third test case:
The third range specification in the figure above combines the highest positive number for
the start value of the range with a positive length, i.e. the range exceeds the given integer
range.
The situation with the third range specification is similar to the situation depicted in the fig-
ure above. It raises some questions: Will the situation be handled sensibly and gracefully
by the test object? Or will it crash due to the overflow? Will the negative values on the
left-hand side be considered to lie inside the range or not? And what is correct with respect
to the last question? The problem specification above does not give an answer to the latter
question; again we have found a weak point in the problem specification.
To sum up, designing test cases according to the CTM has revealed two problems of the
problem specification and has led to interesting test cases so far.
In the figure above, one possible completed CT is depicted. Classifications are depicted by
rectangles, classes by ellipses. The “range” node is a composition with two classifications as
child elements.
• Analogous to the class maximal positive for the start value of the range, a class
maximal negative is introduced. The idea behind this class is to combine the max-
imal negative start value with a negative length of the range, which shall provoke
an underflow or negative wraparound. This idea comes from the systematics of
the CTM: if a positive wraparound is seen as an interesting test case, a negative
wraparound should also be exercised.
• An example for a composition is given by range. A composition may be used for a
relation “consists of”. In our case, the range consists of a start value and a length.
• The final tree still features the three initial classes positive, zero, and negative for
the length of the range. It is important to note that the tree reveals at a glance that
nothing like a maximal positive length or similar is considered to be useful for the
testing problem at hand.
• It is obvious that a position can be either inside or outside the range, hence this
classification suggests itself. Furthermore, it is obvious that there are two different
areas outside the range: below the range and above the range. This is reflected
in the classification position outside. (If the tree lacked such a classification, it
might well be considered incorrect.)
• The class inside of the classification position could well be a leaf class of the
classification tree. However, in the CT in the figure above, this class is subdivided
further into the sub-classes range_start, opposite_border, and inlying. This is done
to force the use of boundary values in the test cases. If a test case specification
selects the class range_start, the value to be checked for being inside the range
shall take the value of the start of the range, that is, the lowest value considered
to be inside the range, a boundary value. The class opposite_border is intended
to create an analogous test case specification, but using the highest value that is
considered to be inside the range. The classes range_start and opposite_border
both contain only a single value. All other values inside the range are collected
in the class inlying; this class exists mainly because of the requirement for
completeness of equivalence partitioning. A similar approach to using boundary
values is visible in the classes at border for positions outside the range.
The test case specification above lists 14 test cases. Please note that these are specified by
the user and depend on his or her judgment. Based on the CT, some values can be determined
that provide clues to the number of test cases required.
The first value is the number of test cases that results if each leaf class is included at least
once in a test case specification. This number is known as the minimum criterion. In our
example, the largest number of leaf classes, namely seven, belongs to the base classification
position. Seven is thus the value of the minimum criterion. The maximum criterion is the
number of test cases that results when all permitted combinations of leaf classes are
considered.
In our example, the maximum criterion amounts to 105 (i.e. 5 * 3 * 7). The maximum criterion
takes into account that it is not possible to select, e.g., a negative length and a positive length
for the same test case specification, because this is impossible by the construction of the
tree. The maximum criterion does not take into account that it is not possible to select, e.g.,
a zero length and inlying: this combination is ruled out not by the construction of the tree,
but by the semantics of the functional problem.
A reasonable number of test case specifications obviously lies somewhere between the min-
imum and the maximum criterion. As a rule of thumb, the total number of leaf classes gives
an estimate of the number of test cases required to get sufficient test coverage. In our test
case specification, the CT has 15 leaf classes, which fits well with the 14 test cases.
From the test case specification in the figure above, you can deduce how the functional
problem specification was extended with respect to the questions raised in the sections “A
second range specification” and “Another interesting test case specification”:
• “If the length of the range is negative, are there values that can be inside the
range?” The answer is “yes”, because in test case specification no. 5 and no. 6
a negative length shall be used and the position of the value shall be inside the
range.
• “If the length of the range exceeds the given integer range, shall negative values
be inside the range?” Test case specification no. 12 clarifies that this should not
be the case.
The leaf class inlying is selected for only one test case specification (no. 1). This reflects the
fact that this class exists only because of the requirement for mathematical completeness
of equivalence partitioning, and not because the inlying values are considered to produce
error-sensitive test cases.
Here, an alternative test case specification for the functional problem at hand is depicted:
What are the differences to the more elaborate test case specification in the section above?
• The start value of the range is not mentioned in the CT. This means the start
value is not considered to be a test-relevant aspect by the user of the CTM. In
consequence, any arbitrary value can be used as the start value in the four test
cases. This value can be fixed for all test cases, but does not have to be.
• The problem of a negative length is completely neglected. For the problem speci-
fication from section Problem, which specifies the length to be an integer and
hence allows the length to be negative, this is a serious flaw.
But the point is not which test case specification is better. The main point is:
Test case specification according to the CTM visualizes testing ideas!
This chapter explains how to create databases for your tests and how to work with the
different files and the graphical user interface of TESSY, and provides some information
about useful shortcuts to work more efficiently.
The following table provides a quick overview of TESSY’s file system and databases:
tessy.pdbx (file) Project file for the location of a TESSY project. The project can
be opened via double click. Contains the basic settings of a
project and can be renamed.
Project root Specifies the root directory of your project, so that all paths (e.g.
sources, includes, etc.) can be related to this root. Every project
has its own project root. The project root as an absolute
path is intentionally not saved within the project file (tessy.pdbx)
which allows you to transfer projects to other computers. The
location of the project root will be detected automatically by
TESSY when opening a project. This is done by matching the
current absolute location of the PDBX file with the relative path
entry of the database location stored within the PDBX file.
Source root Optional directory to specify source and include paths to this
source root independently from the project root (e.g. if you want
source files to reside in another directory outside the project
root). The source root as an absolute path is intentionally not
saved in the project file (tessy.pdbx), only its existence is
indicated. Therefore the source root needs to be selected when
opening a project on a different computer. When opening such a
project using the command line, the source root needs to be
provided as command line argument. For more information
about the CLI mode please refer to section 6.17 Command line
interface.
persist (folder) Contains the databases for the project, one for requirements and
test collections, the other one for test data.
work (folder) Contains all temporary files generated during the test process.
This entire directory can be deleted without losing any data of
the TESSY project.
Þ Start TESSY by selecting “All Programs” > “TESSY 4.x” > “TESSY 4.x”.
Loading TESSY will take a few seconds.
The Select Project dialog will open. At the top you can see the path of your workspace
(see figure 4.1).
Creating a new
project
Þ Optional: Extend the “Advanced Settings” by clicking on the plus (see figure 4.4).
Configuration File: Enter the path to a specific configuration file or leave the field blank to
use the default configuration. TESSY will create a new configuration file containing only
the GNU/GCC compiler. Refer to section 6.5.6 Configuration files about how to customize
this configuration file.
Project Location: You can choose a different location if you would like to place the test
project files in another sub directory of the project root.
Backup Location: This directory will be used to store all backup and settings files of your
project, which are vital in order to restore the project on another computer. Refer to section
6.16 Backup, restore, version control for information about files that are relevant for version
control. It is recommended to use the default location, but you can also choose a different
location, preferably within the project root. It will be used as the default for the backup
and restore dialog.
By default the project root contains your development project, i.e. the source
files, and one sub folder “tessy” that contains all TESSY related files.
Additionally you can specify the source root to locate source and header files outside
the project root.
TESSY will use paths relative to those root paths for all files, e.g. references to source
and config files. This ensures that you can move whole projects to new locations.
Please keep in mind that the source root location will always be remembered locally on
each computer and the given absolute path will not be stored into the TESSY project
file (tessy.pdbx). If you transfer a project with an indicated source root to another
computer, you need to provide the source root (e.g. as command line argument when
running in CLI mode). When opening such a project with the GUI, TESSY will remind
you and ask for the source root location.
Þ Click “OK”.
Now TESSY automatically creates a sub folder “tessy” within the project root di-
rectory. This folder contains (within sub folders) the configuration file and the per-
sistence databases. This will take a few seconds. Afterwards TESSY will restart
(if another project was open before) and open the newly created project automat-
ically.
In the Select Project dialog you can create, import, clone, edit and remove or delete your
project by selecting the project and clicking on the icon in the tool bar:
Removes a project from the workspace. If you want to delete all data including
project and database location, tick the box “Delete project contents on disk”.
With a right click you can open the context menu for further options:
With a right click on a project in the project list you can mark a project as “Template Project”
(see figure 4.5).
At any time you can remove the template-project mark. The project will then be a
normal project and you can open it as usual.
You can move your whole project directory and then import the project again:
Þ Either double-click on the tessy.pdbx file or use the Import Project button.
TESSY will ask you whether the project was moved or copied (see figure 4.6).
Þ Answer this question correctly and click “OK”.
If the project was copied, e.g. you want to create a new project as a copy of the
original one, a new project identifier needs to be assigned to distinguish the new
project from the original one. TESSY will do this automatically.
In the “Select Project” dialog all projects are listed with name and local path.
It is possible to handle projects with equal names. The cases below explain in which way
TESSY will replace projects within the project list if they have identical names:
• Project ’Alpha’ exists in location ’xy’, and you create a new project ’Alpha’ in
another location: You will get a warning: “Project with identical name will be
removed from the project list.” The new project appears in the project list; the old
project will be removed from the list (but not deleted!). If you want to open the
old project again,
Þ click on “Import” and open the old project. In that case the newer project will
again be removed from the list.
• Project ’Alpha’ exists in location ’xy’, and you try to create a new project ’Alpha’
in the same location: You will get an error; it is not possible to create the project,
because two projects cannot share the same location.
Þ Change the location of the new project.
• Project ’Alpha’ exists in location ’xy’, and you open an existing project ’Alpha’
(either with a click on “Import” or via the command line interface) in another
location ’Beta’: You will get a note: “Information: Project ’Alpha’ replaced within
the project list.” The second project appears in the project list; the first project
will be removed from the list (but not deleted!). If you want to open the other
project again,
Þ click on “Import” and open the other project.
Important: This section is only recommended for advanced users who have al-
ready worked with TEE. For basic handling we recommend continuing with section
4.2 Understanding the graphical user interface and the following sections, and then
returning to this section.
TESSY will create a specific configuration file for each project database. This way you can
share the environment settings with other members of your development team. The config-
uration file is stored within your project root together with other project related files. Such a
configuration file contains only the compiler/target environments you want to use. All other
environment configurations are not visible for the user as long as this file is assigned to a
given project database.
To customize the configuration file within the Test Environment Editor (TEE)
TESSY will recognize if an update of the database is necessary (see figure 4.8).
When you want to open the project, you will be asked to update the database (see figure
4.9). Note that after the update you cannot open the project in previous versions of
TESSY!
Þ Click “OK”.
When TESSY starts the first time, the graphical user interface (GUI) will open within the
Overview perspective.
Please check the terminology shown in the figure “TESSY interface” and the explanations
beneath. This terminology will be used throughout this manual.
The menu bar provides actions such as “File”, “Window” etc. Within the “Help” menu you
find the online help of TESSY. Many of these actions may also be available within the context
menu of individual views, if the actions are applicable to the items within the view.
In the global tool bar of the TESSY interface you can select a project, save changes etc. By
hovering over an icon with the cursor, a tooltip will tell you the purpose of each icon. There
may also be individual tool bars within the views. Here you find the tools for operating the
view, e.g. to start the test execution .
Save all changes in any views or editors by clicking the save icon .
TESSY contains several perspectives to present information based on different tasks in the
test workflow (“Requirement Management”, “Overview”, “TIE” etc.). Each perspective offers
several views.
In the perspective bar (containing the perspective names) you can switch between the differ-
ent perspectives. The perspectives, from left to right, follow the actions taken while preparing,
running and evaluating a test.
Every perspective name has several right-click menu options (the context menu).
By clicking on the left symbol you can open other perspectives (see figure 4.11):
4.2.4 Views
Notice that the views appear in different combinations with other views; e.g. the Test Results
view within the Overview perspective is combined with the Test Items view, but within the TDE
perspective it is combined with the Test Project view. The reason for the different combinations
is to give you a fast overview of and comparison between various pieces of information within
each project step.
You can change the appearance of views to suit your own needs and open views of one
perspective within another perspective: Adding views to
a perspective
Þ Activate (open) the perspective where you want to add a view.
Þ Click “Window” > “Show View”.
A window displaying all views will open (see figure 4.13).
Þ Select the view you want to open.
Þ Click “OK”.
The selected view is added to the active perspective.
Change the position of views with drag and drop: Changing view
position
Þ Click on a name of a view and move it where you like: You can move views to
another location within the same group of views or into another group of views or
even outside the current window.
Þ Right-click on the perspective switch and choose “Reset” to switch back to the
original positions of all views of the respective perspective (see figure 4.14).
Figure 4.14: Move the views separately. To switch back, use “Reset”.
Figure 4.15: To switch back all positions of views and perspectives use “Reset Workbench”.
You can maximize and minimize views for better visibility. Maximize and
minimize views
To maximize a view,
Þ use the button within the right corner (see figure 4.16) or double-click on the tab
of the view.
The view will be maximized and other views of the perspective will be minimized, displayed
by the view symbol on the left and the right (see figure 4.17).
Figure 4.17: Maximized view with minimized views on the right and the restore-button on
the left
There are navigation views that present hierarchically structured data. Selections on such
tree items may cause other views or the editor pane to change the information being displayed.
All views are context sensitive: if you select an item within one view, another view
might display different information. If something is not displayed correctly, make sure
you have selected the desired item.
The status bar provides status information about the application and its current state, e.g.
the directory of the project root and the configuration file.
Most contents, tabs etc. have options that are displayed in the context menu, which is
opened with a right click. It shows main operations such as “Copy”, “Paste”, “Delete” etc.
The context menu is context sensitive and changes as different items are selected (see figure
4.18).
4.3.2 Shortcuts
TESSY can be operated with several keyboard shortcuts. Hovering over the icons of a view’s
toolbar shows a tooltip explaining each icon’s function and also provides the shortcut, if
available.
TESSY provides a complete list of shortcuts. To open it, click “Help” in the menu bar and
then “Key Assist…” in the pull-down menu.
Important: To use shortcuts, make sure that the current view is active (i.e. has
focus). Otherwise shortcuts will not work.
Warning: You cannot undo the deletion of data. Before deleting, make sure the
database is really not needed anymore.
Copy: Ctrl + C
Cursor positioning right: Tab, alternatively Ctrl + right arrow key (only within TDE).
Moves the cursor to the next input section on the right side of the line.
Cursor positioning left: Shift + Tab, alternatively Ctrl + left arrow key (only within TDE).
Moves the cursor to the input section on the left side of the line.
Paste: Ctrl + V
Rename: F2
Save: Ctrl + S
Shortcuts for certain views differ. More precise descriptions can be found in the view-related
sections within chapter 6 Reference book: Working with TESSY.
This chapter will show you, on the basis of prepared exercises, how to work with TESSY:
In this exercise we will get to know the basic functionality of testing with TESSY. We will
work with the example “is_value_in_range”, which will give you a fast introduction and an
overview as well as the most important terms.
A unit test in TESSY is divided into the following central test activities: Central test
activities
• Determining test cases.
• Entering test data and expected values.
• Executing the test.
• Evaluating and documenting the test.
Usually you would import your requirements first. To keep this exercise understandable
for beginners, we will first work through a simple project, then import some basic require-
ments and restart the test!
We will now follow a simple source code example to show how to exercise those activities
with TESSY.
Example “is_value_in_range”
To understand TESSY´s file system and databases, consult section 4.1 Creating
databases and working with the file system.
Þ Start TESSY by selecting “All Programs” > “TESSY 4.x” > “TESSY 4.x”.
Þ If the “Open Project” dialog opens, click on (Create Project).
If another project is opened within TESSY, click “File” > “Select Project” > “New
Project” and then click on .
The Project Configuration dialog opens (see figure 5.2).
The project “Example1” is opened within the Overview perspective. You can create different
folders within a test collection, each containing modules with various test objects. To keep it
simple, we will now create one test collection with one folder.
Þ In the Test Project view click on the icon (New Test Collection) in the tool bar
of the view.
Þ Enter Is_value_in_range and press Enter.
Þ Click on (New Folder), enter ExampleFolder and press Enter.
Þ Click on (New Module), enter ExampleModule and press Enter.
The module relates to one or many source files which are to be tested.
Figure 5.4: Test collection “Is_value_in_range” with an example folder and module
Rename or delete a module or a folder by using the context menu (right-click > “re-
name” or “delete”) or the key F2.
Usually at this point you will have to specify the target environment, that is to determine the
compiler, the target and the microcontroller. You will do that in the “Test Environment Editor”
which we will get to know later.
Please note in the Properties view below, on the tab “General”, that the GNU GCC compiler is
already selected for this module (see figure 5.5); this is sufficient for our example.
Now we will add the source file to the module. The source file contains the C function to be
tested:
5.1.3 Adding the test object and analyzing the C-source file
We will use the example C-source file “is_val_in_range.c” which is stored under
“C:\Program Files\Razorcat\TESSY_4.x\Examples\IsValueInRange”.
Copy the C-source file, paste it into the project root and add it to the module:
It is useful to keep all sources, includes etc. relative to the project root. This
gives you a better overview of all sources, includes etc.
Þ Select the source file “is_val_in_range.c” from the folder where you just pasted
the source.
Þ Click “Open”. The C-source file will be added.
Þ In the Test Project view above click on (Analyze Module) to start the module
analysis (see figure 5.8).
TESSY now analyzes the C-source file; this will take a few seconds.
TESSY will also analyze the C-source file if you just click on the white arrow after
adding the C-source file.
After successful processing, all functions defined in the C-source file are displayed as
children of the module within the Test Project view (see figure 5.9).
Figure 5.9: The function of the C-source is displayed as child of the module.
Our sample C-source file contains only one function, our test object “is_value_in_range”.
The term “test object” refers to a function within the module that we intend to
test.
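For orientation, the function under test might look roughly like the following. This is a minimal sketch derived from the interface and results shown in this exercise (parameter order, field names and the enumeration type are assumptions); the file shipped with TESSY may differ in details:

```c
/* Sketch of a possible is_value_in_range() implementation.
 * Derived from the interface shown in this exercise; the shipped
 * example file may differ in details. */
typedef enum { no, yes } range_result;

struct range {
    long range_start;
    long range_len;
};

range_result is_value_in_range(long v1, struct range r1)
{
    range_result ret = no;

    if (v1 >= r1.range_start) {                     /* first decision  */
        if (v1 <= r1.range_start + r1.range_len)    /* second decision */
            ret = yes;
    }
    return ret;
}
```

With this sketch, a range starting at 20 with length 8 covers 20 to 28, so the value 22 yields “yes” — which is why the second test case of this exercise, deliberately expecting “no”, will fail later on.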
Now we can edit the interface information for every test object and determine which values
are input and which are output variables. Input values are all interface elements that
are read by the test object; output values are written by the test object.
Upon opening the module, TESSY will try to set the default passing directions (input or output)
automatically. You can change these default interface settings to your needs.
In our sample the passing directions are already defined; you do not have to take
any action.
Þ In the Interface view, open the “Parameter” section to see the input and output
values that are already defined in our example (see figure 5.11).
Usually now you would design the test cases, either manually or within the Classification Tree
Editor (CTE), based on specifications of your test object.
Since the CTE is a subject of its own, we will not make use of the CTE in this example, but
simply enter some ad-hoc test data manually.
To learn about the CTE refer to section 6.8 CTE: Designing the test cases or follow
the Quickstart 2: The Classification Tree Editor (CTE).
Now we will add three test cases, each with one test step, within the Test Items view:
Figure 5.13: Three test cases were added in the Test Items view
• The first number is the number of the test case, the number in brackets shows the
quantity of the test steps included.
• Test case numbers are counted continuously: if you delete test cases, new test
cases will get a new number and existing test cases will not be renumbered.
• If you cannot click on “New Test Case” or “New Test Step” because the icons are
inactive, you might be in the wrong selection: Select the test object within the
Test Project view, then select the Test Items view.
Þ Switch to the perspective “TDE - Test Data Editor”. The TDE will also open with a
double click on a test case or a test step.
In the Test Data view you can see the test cases and steps in tabular form.
Þ Under “Inputs” click on the arrow to open “struct range r1”.
Þ For test case 1 (1.1) enter 3 for “range_start”.
Þ Enter 2 for “range_len”.
Þ Enter 4 for “v1”.
Þ Click on (Save) to save your inputs.
After saving, the symbol of the test object in the Test Project view as well as the
symbol of the test case in the Test Items view turns yellow to indicate that the test
case is ready to run (see figure 5.14).
Þ Under “Outputs” click on the arrow ahead “Return”.
Þ Enter “yes” for the return value.
Figure 5.14: Data is entered, test step turns yellow and test case is ready to run.
Please notice the changes of the test object icon to indicate different conditions:
Þ Now enter data for the other two test cases as shown in table 5.1.

                Test case 1.1   Test case 2.1   Test case 3.1
range_start:         3               20              0
range_length:        2                8              5
v1:                  4               22              6
• Test case 1.1: The range starts at 3 and has a length of 2. Therefore, the range
ends at 5 and the given value 4 is supposed to be inside of the range (yes).
• Test case 2.1: The range starts at 20 and has a length of 8. Therefore, the range
ends at 28 and the given value 22 is supposed to be inside of the range. Because
we want to force an incorrect output, we state this to be not inside of the range
(no).
• Test case 3.1: The range starts at 0 and has a length of 5. Therefore, the range
ends at 5 and the given value 6 is supposed NOT to be inside of the range (no).
The test step icons in the Test Items view will now turn yellow (see figure 5.16). This
indicates that we are now ready to run the test.
Þ Click on (Start Test Execution) in the tool bar of the Test Project view.
A progress dialog will be shown while TESSY generates, compiles and links the
test driver and runs the test. This will take a few seconds.
After the test run, the test case icons (within the TDE) should appear as shown in figure 5.17:
• Within the Test Data view the second test step is marked with a red cross and the
expected result “no” is marked red to indicate that the result did not match the
expected result (the actual result is “yes”).
• Within the Test Project view the test collection, folder, module and test object are
marked with a red cross to indicate that not all results matched the expected
results.
• The Test Items view indicates with a red cross that test case 2 did not match the
expected result.
You can see the results of every test step within the Test Results view.
To analyze the source code coverage of the test, repeat the test run with coverage
instrumentation (branch and MC/DC coverage in this example):
Þ In the tool bar of the Test Project view click on the arrow next to the Execute Test
icon and select “Edit Test Execution Settings…” .
Þ In the following dialog tick the boxes “Run” (default) and “Create New Test Run”.
Þ Choose the instrumentation “Test Object” and untick the box “Use preselected
coverage”.
Þ Tick the boxes for “Branch Coverage (C1)” and “Modified Condition / Decision
Coverage (MC/DC)” (see figure 5.19).
Þ Click on “Execute”.
Figure 5.19: Selecting Branch and MC/DC Coverage for test run
A progress dialog will be shown while TESSY generates, compiles and links the test driver
and runs the test. This will take a few seconds.
The CV shows the results of the coverage measurement of a previously executed test.
You can select a default coverage measurement as well for your whole project or any
specific module or test object. Refer to section 6.2.3.4 Coverage tab.
The Flow Chart view displays the code structure and the respective coverage in graphical
form. Within each flow chart, you will see the decisions and branches of the function being
displayed. Green and red colors indicate whether a decision has been fully covered or a
branch has been reached.
The Branch C1 Coverage view (see figure 5.22) displays the branch coverage for each indi-
vidual test case and test step as well as the total coverage for all test cases and test steps.
The MC/DC-Coverage view displays the coverage of the currently selected decision within
the Flow Chart view (see figure 5.23). If no decision is selected (as initially when starting the
CV), the MC/DC Coverage view is empty.
The current example is_value_in_range has only simple decisions, for which MC/DC is the
same as branch coverage.
5.1.10.4 Analyzing
• three test cases were executed (each with one test step).
• the if branch on the left of the first decision was not reached and is therefore
marked red.
• the first decision was not fully covered, so it is marked red.
• the second decision was fully covered and is therefore marked green.
• the if branch on the left of the second decision was reached twice, the else
branch on the right was reached once.
Þ In the Test Project view of the Overview perspective click on the arrow next to the
Generate Report icon and select “Edit Test Details Report Settings…” .
Þ In the dialog you can select the Report Options you need.
Þ You can also change the Report Output Directory by clicking on “…” and create a
new folder by clicking “Make New Folder” in the additionally opened window (see
figure 5.26).
The selection of the new folder will only be effective for this report. Any
further report generations will again be based on the settings within the
Test Report Options in the Preferences.
Figure 5.26: Selecting a folder or creating a new folder for Test Details Reports
Important: Generating and opening reports requires a third-party PDF
viewer. If you get the error message “No matching program found for the file…”, no
such viewer was found.
This means either that no suitable PDF viewer is installed on your computer, or
that you need to associate PDF files with your third-party PDF software in
Windows 10.
After the source of the error is found and fixed, generate the report again.
If you use a Version Control System (VCS) providing keyword expansion to embed
version control information in your source files, TESSY will display such expanded
keywords within the test report.
The following keywords are available: $Revision$ (Revision number), $Author$ (User
who checked in the revision) and $Date$ (Date and time stamp for the revision).
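As an illustration, the unexpanded keywords are placed in a comment in the source file and the VCS expands them in place on check-in. The expanded values below (revision 17, user jsmith, date) are made-up examples:

```c
/* Before check-in:    After keyword expansion (example values):
 * $Revision$          $Revision: 17 $
 * $Author$            $Author: jsmith $
 * $Date$              $Date: 2023-05-04 14:12:05 +0200 $
 */
```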
5.1.11.1 Change the default Test Report Options for your project
It is possible to permanently change the default report settings for your project, e.g. output
directories, file names and logo image. This applies to all reports, not only the
Test Details Report.
You may also change the default Razorcat logo within the reports to your own com-
pany logo with your logo image file (PNG, JPG or GIF).
To set the Output Directory for the Test Details Report for your project permanently:
Þ In the menu bar select “Window” > “Preferences” > “Test Report Options”.
Þ Click on “Browse…” in the line “Test details Report” to select the desired directory
or change the path manually.
For more information about creating various reports please refer to section 6.2.2.19
Creating reports.
We will now import some very basic requirements and repeat some steps of this exercise.
This way you get to know the requirement management features and you can consolidate
the workflows you have just learned.
In the following Import dialog you can import various file formats. In our example we select
the file we just copied into our project:
Þ Click on “…” and select the file “Is Value In Range Requirements.txt” from your
project.
Þ Leave the File Content Type and the Target Document as they are and click “OK” (see
figure 5.30).
The newly imported requirement document will be displayed in the RQMT Explorer view (see
figure 5.31).
Þ Right-click the document and select “Properties” from the context menu.
Þ Change the alias to “IVIR” and click “OK” (see figure 5.32).
The document alias will be used for reporting, in order to have an abbreviation of
the document name when building the requirement identifier, e.g. IVIR-[1.0] in our
example.
Before linking any tests to a requirement, the respective requirements document needs to
be checked in as an initial revision:
Þ Select the document and click on (Commit Changes) in the global tool bar.
Þ Enter “Initial revision” as commit comment and click “OK” (see figure 5.33).
An initial revision of the requirement document will be created.
TESSY manages different versions of a requirements document. You can track any
changes either from importing updated versions or from any modifications that you
did directly within TESSY.
Use the toggle buttons on the right to link modules, test objects or test cases to
requirements:
Þ Link the first test case with the first requirement.
Þ Link the second test case with the second requirement (see figure 5.34).
The third requirement is not linked.
• Test case 1.1: range start 3, length 2, given value 4, supposed to be inside of the
range (yes)
• Test case 2.1: range start 20, length 8, given value 22. Because we wanted to
force an incorrect output, we stated this to be not inside of the range (no)
Since we did not link any requirement to the third test case, the “Linked Require-
ments” will be empty when selecting the third test case.
Figure 5.35: Test Definition view within TDE with linked requirement
At this stage we can already generate a report showing the planned test cases for our require-
ments:
Þ Switch to the Test Project view of the Overview perspective and click on the arrow
next to the Generate Report icon .
Þ Select “Edit Planning Coverage Report Settings…” (see figure 5.36).
Þ A dialog for the settings for the Planning Coverage Report will open.
Figure 5.37: Dialog of the settings for the Planning Coverage Report
Þ Select an output directory for the report (default: C:\TESSY\report).
Þ Select the first four Report Options (default).
Þ Select the Requirement.
Þ Do NOT select any Test Means.
Þ Click on “Generate”.
A planning coverage report will be created.
The report shows the available requirements and the linked test cases. It provides an overview
of the planned tests, i.e. whether all requirements are covered by at least one test case.
Since we have links to two of our requirements, the resulting requirement coverage should
be as shown above.
Notice the usage of the requirement document name and alias within the report! It
is important to select an appropriate alias in order to get useful report outputs.
We have planned test cases for the first two requirements, whereas the third requirement
is not yet linked with any test case, because there are no tests available to validate this
requirement.
We will now execute our tests again to see the results of the test cases with respect to the
linked requirements within the execution coverage report.
Þ Switch to the Overview perspective and execute our test object is_value_in_range
again: Click on the Execute Test icon .
Þ Generate a test details report to review the results on test object level: Click on
the arrow next to the Generate Report icon (see figure 5.39).
Now we will generate an execution coverage report showing the test case results with
respect to our requirements:
Þ In the global tool bar click on the arrow next to the Generate Report icon
and select “Generate Execution Coverage Report” (see figure 5.41).
TESSY creates the coverage report showing the available requirements and the
results of the linked test cases. It provides an overview of the current test
status, e.g. whether tests for any requirement have failed.
Since one of our test cases passed while the other one failed, the result-
ing requirement coverage should be as in figure 5.42.
The first requirement has one test case linked which was successfully executed, the second
requirement has also one test case linked, but this one failed. The third requirement has still
no test case assigned.
One or more tests planned:  At least one test is linked to this requirement, but none of
                            them have been executed.
Some tests failed:          Some of the tests linked to this requirement have been
                            executed and there were failed results.
Some tests passed:          Some of the tests linked to this requirement have been
                            executed and all of them yield passed results.
All tests passed:           All tests linked to this requirement have been executed and
                            all yield passed results.
If the interface of the test object changes, TESSY will indicate the changes with specific test
readiness states. With the Interface Data Assigner (IDA) you can assign the elements of a
changed (new) interface to the elements of the old one.
In this section we will change the interface of the test object by editing the C-source and
exercise a reuse operation within the IDA.
Important: Make sure to keep the original C-source file “is_val_in_range.c” and
edit a copy. Do not change the original file in folder “C:\Program Files\Razor-
cat\TESSY_4.x\examples\IsValueInRange”!
The target of this section is to show you the three different test readiness states
“changed”, “deleted” and “new”.
Therefore we will first change a test object and add two new test objects called
“deleted” and “new”. In a second step we will remove the “deleted” object so it appears as
deleted. The names are chosen to illustrate the test readiness states.
Þ Select the module and “Edit Source” from the context menu (see figure 5.44).
Þ Now add a “delete” object and a “new” object as shown in figure 5.47
Changing the
C-source
Þ Save the changes with “File” > “Save” and close the file.
Þ Click on to analyze the module.
In the Test Project view you can now see three test objects with different test readi-
ness states (see figure 5.48):
The test object is_value_in_range has changed. You see the test object, but
there is no operation possible. You have to start a reuse operation.
The test objects “deleted” and “new” are newly available since the last interface
analysis. You have to add test cases, test steps and enter data for a test.
Deleted test objects that did not contain any test cases and test steps are not dis-
played anymore because they are considered as not important. If you want to display
a deleted test object, you have to add at least one test case and one test step!
Before deleting the test object “deleted”, we will have to add some test cases with test steps:
Þ Switch to the Test Item view and add a test case and a test step.
Þ Switch to the Overview perspective and to the Test Project view.
Þ Select the module and “Edit Source” from the context menu.
Þ Remove the test object “deleted” as shown in figure 5.49.
Þ Save the changes with “File” > “Save” and close the file.
Þ Click on to analyze the module.
In the Test Project view you can now see three test objects with three different test
readiness states (see figure 5.50):
The test object is_value_in_range is still displayed as changed since there was
no reuse operation yet.
The test object “deleted” has been removed. You still see the object, but there
is no operation possible.
The test object “new” is not shown anymore as “newly available”, because the
last interface analysis already detected the object as new.
Important: Note that removed and changed test objects require a reuse operation
before you can further operate on them!
Warning: If you do not assign the interface object, you will lose the test data
entered for parameter v1 and the global variable v1 will have no values after the
reuse operation!
• On the right side within the IDA perspective you see the Compare view with the
test object is_value_in_range.
• Within the Compare view you can see the old interface of our test object
is_value_in_range and the new one. The red exclamation mark within the new in-
terface indicates the need to assign this interface object before starting the reuse.
• The title of the view shows the old name versus the newly assigned name. In our
case the names are the same since only the interface did change.
Þ Assign the interface object “long v1” either by using the context menu or just drag
and drop from the left side (see figure 5.52).
Þ Commit the assignments by clicking on (Commit) in the menu bar of the Com-
pare view.
The data of all test cases and test steps will be copied from the old interface to
the current test object interface.
The test object changes to yellow to indicate that all test cases are ready to be
executed again.
• Removed and changed test objects require a reuse operation before you can fur-
ther operate on them.
• Those test objects that remained unchanged will automatically be reused, i.e.
they will be ready to use without further activities required.
• Removed test objects will only be displayed as “removed”, if they did contain any
test cases and test steps.
To understand the handling and create a simple classification tree, we consider some aspects
from the Quickstart 1: Unit test exercise is_value_in_range.
This manual provides general information about the Classification Tree Method
(CTM) in the chapter ’Basic knowledge’ as well as a detailed description of the CTE
in section 6.8 CTE: Designing the test cases within ’Working with TESSY’.
Switch to the CTE perspective to get to the automatically generated tree of the quickstart
example “is_value_in_range”.
Figure 5.53: Automatically generated tree with the root “is_value_in_range” in the CTE
perspective
This tree within the CTE perspective (see figure 5.53) is generated based on the relevant
interface elements.
The tree elements of the automatically generated tree follow a basic structure. First of all, it
is categorized into “Input” and “Output” (see figure 5.54).
On the next level the interface elements are further subdivided into parameters, globals etc.
With the following levels the composite types, i.e. structures, unions or pointers, are broken
down into atomic types such as integers, floating point numbers, enumerations, etc.
More information about the automatically generated tree can be found in subsection
6.8.6.6 Automated tree generation based on function interface.
Figure 5.55: Child elements of an atomic type on the inputs side of the subtree
Some interface elements are marked as attached interface elements with a small
TIE icon (see figure 6.192) as they had been automatically attached.
For more information about attaching interface elements to a CTE node go to sub-
section 6.8.7.4 Attach selected interface element to CTE node.
The basic idea of the Classification Tree Method is to provide a systematic approach to
creating test case definitions based on the functional specification of the function or system
to be tested. The Classification Tree Editor (CTE) included in TESSY assists you in creating
test cases with low redundancy and high error sensitivity.
For more information about working with classes as well as with classifications and
test cases please refer to subsection 6.8.6.4 Creating classifications, classes and
test cases.
After preparing a test in the TIE, well-designed test case specifications need to be created.
A test case is formed through the combination of classes from different classifications. For
each test case exactly one class of each classification is considered.
The combined classes must be logically compatible, otherwise the test case is not executable.
You should choose and combine as many test cases as needed to cover all aspects that
should be tested.
Within the CTE tree area it is possible to move the classifications and other elements with
drag and drop: Just left click the element, hold the mouse button and move it to the desired
place. You may also select a group of elements and move them the same way.
The tree layout will be arranged automatically by clicking in the tool bar.
You may use the CTE to create or edit test cases manually. Those kinds of changes are
also possible within the TDE, but in both perspectives values are always entered in the Test
Data view.
It is possible to assign test data using the tree nodes of the classification tree. For each tree
node, values can be assigned to variables (see figure 5.57).
Please note that some of the operations and overviews are only available within the TDE
perspective.
More information about entering test data within the CTE perspective can be found
in subsection 6.8.7.1 Assigning test data to the CTE.
TESSY will generate the test data every time the CTE document is saved. Do so and select
the classes in the tree to examine the generated values.
Always keep in mind that attaching arbitrary test data to nodes can lead to very
confusing situations. So please be cautious and stick to conventions made by your
team or company or those proposed by Razorcat.
• When selecting a tree item, you will see the test data entered for this item within the Test
Data view.
• All CTE tree items with any assigned test data will be shown with a yellow square .
• When selecting an interface element within the Test Data view, the respective CTE tree
items with test data assigned for this interface element will be shown with a blue square
.
The CTE method offers a graphical representation of the recursive partitioning of classifi-
cations and classes in the shape of a classification tree. Classifications are drawn as named
rectangles with their respective classes arranged below. To specify the test cases, the classifica-
tion tree is used as the head of a combination table. In this Test Table the classes which are
to be combined can be marked.
A test case is formed through the combination of classes from different classifications. For
each test case exactly one class of each classification is considered. In this example all
classes get automatically generated test data.
Thus it is necessary to select one class from every classification in the input subtree to
compose an executable test case, i.e. a test case with a complete set of input values.
Important: For test objects with dynamic data types, a complete set of test data
is not generated automatically; the pointer’s dynamic object must be initialized and set
by the user. Afterwards TESSY will also generate values in the dynamic object.
More information about classifications, classes and test cases are provided in sub-
section Creating classifications, classes and test cases.
Important: For the sake of simplicity of the CTE Test Table functions within this
quickstart exercise, please delete all *min* and *max* elements from the automati-
cally generated tree. You should also modify the output side of the tree.
In general, the automatically generated tree always needs a review and adjustments
from a tester. To demonstrate this procedure we delete classes and add more in this
example.
Þ Double-click on the red marked class in figure 5.58 or press F2 after selecting the
figure.
This will allow you to edit the class name.
Þ Enter “yes” and press “Enter” on your keyboard.
Þ Then right-click on “Return” to open the respective context menu.
Þ Choose “Add Class” to create a new class.
Þ Name it “no” (as explained above).
Important: The class names are mapped to values of the attached enumer-
ation type. Thus it is necessary to use *exactly* the values “yes” and “no”.
Comments after a pair of forward slashes “//” are allowed.
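The reason is that the class names must correspond to the enumerators of the return type. A minimal sketch of such a type (the type name itself is an assumption; the actual name in the example source may differ):

```c
/* The CTE class names "yes" and "no" must match these enumerators
 * exactly, because TESSY maps class names to enumeration values.
 * The type name range_result is illustrative. */
typedef enum {
    no,   /* value not in range */
    yes   /* value in range     */
} range_result;
```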
Test cases are defined by setting marks in the Test Table:
Þ Create more marks within the combination table for the other test cases to
completely cover all classes of the classifications “range_start” and “range_length” with
your test cases.
For example, the CTE could look like figure 5.63.
Figure 5.63: Completed table with all test cases for example “is_value_in_range”
Important: As you can see there are various possibilities of combining the classes
within the test cases.
In this case it was decided to choose all possible combinations for the classifications.
In real testing you would need to select the most interesting combinations only in
order to get a reasonable number of test cases.
Þ Select the test cases one after another and review the test data resulting from your
mark settings being displayed within the Test Data view (see figure 5.64).
In figure 5.64 the test data for test case 7 is displayed within the Test
Data view. The test data is read-only because it is defined by the marks
set within the combination table.
Figure 5.64: Test data is displayed when selecting a test case in the combination table
You will see the test cases updated with the test data values entered within the CTE perspec-
tive.
• Test items with data stemming from the CTE perspective are marked with special
status indicators: (test case) and (test step).
• The indicators will appear light gray when there is no data entered, dark gray when
there is some data entered and yellow when the entered data is complete.
• Data stemming from the CTE is read-only. If you want to change it, switch
back to the CTE perspective and do your changes there.
To understand the handling in this chapter we assume you have worked with TESSY
in unit testing. If not, please proceed with the Quickstart 1: Unit test exercise
is_value_in_range first and come back to this chapter afterwards.
If you have already worked with former versions of TESSY, this chapter may help you
to learn the differences in handling compared with TESSY 3.x!
The calls to the component functions stimulate the component. As in testing of a sin-
gle function, a test case for a component also comprises input and output data. A
component may have internal functions that cannot be called from outside the
component, but only from functions inside the component.
Relevant for the result of a component test is the sequence of the calls from within
the component to (callable) functions in other components. This is with respect to the
number of the calls, the order of the calls, and the parameters passed by the calls to
other components.
Obviously, the functionality of the functions in a component and the interfaces between
them are tested by component testing, at least for the most part. Hence, component
testing can well be considered as integration testing for the functions in the component.
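The idea of checking the number, order and parameters of calls into other components can be sketched as follows. This is a simplified illustration of the recording principle, not TESSY’s actual stub mechanism; all names are made up:

```c
#include <string.h>

/* Record each call to a stubbed external function into a log, so a
 * test can later check the number, order and parameters of the calls. */
#define MAX_CALLS 16

static char call_log[MAX_CALLS][32];
static int  call_count;

static void record_call(const char *name)
{
    if (call_count < MAX_CALLS)
        strncpy(call_log[call_count], name, sizeof call_log[0] - 1);
    call_count++;
}

/* Stub standing in for a function located in another component. */
void set_light(int on)
{
    record_call(on ? "set_light(1)" : "set_light(0)");
}
```

A test driver would then call the component's functions and afterwards compare `call_log` against the expected call sequence.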
You already know the basic functionality of a unit test. We will now follow a simple source
code example to show how to exercise component testing with TESSY.
Example interior_light
The interior light of a car shall be controlled by two inputs: The door and the
ignition of the car.
The functional specification comprises three simple requirements:
• If the ignition is switched on, the interior light shall go off immediately.
The specification above is obviously not complete. In particular, the initial state of door, ignition,
and light is not given. It is not specified what shall happen, e.g., if the ignition is switched off
after it was switched on, etc. But this simple specification is sufficient to demonstrate temporal
component testing with TESSY.
• Door = open
• Ignition = off
• Light = off
Þ Create a test project “interior_light” (if you need any help, consult section Quick-
start 1: Unit test exercise is_value_in_range).
Þ Copy the C-source file “interior_light.c” which is stored under
“C:\Program Files\Razorcat\TESSY_4.x\Examples”.
Þ Paste it into your project root and add it to the module.
Þ Open the module.
The C-source code will now be analyzed; afterwards the Test Project view displays
the functions of the source (see figure 5.68).
Þ In the Properties view switch the Kind of Test to “Component” (see figure 5.69).
Now the only test object displayed has the default name “Scenarios” (see figure 5.70).
This is different to unit testing with TESSY, where the names of the possible test objects in
interior_light.c (i.e. the functions) would be listed instead.
You can open the C-source file by right-clicking the module and choosing “Edit Source”
from the context menu. The C++ perspective will open and the file will be displayed (see
figure 5.71).
This implementation features a heartbeat function, the tick() function. The implementation
assumes that tick() is called every 10 ms. Based on this assumption, the value
“500” is calculated for the timer.
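The arithmetic behind the value “500” can be sketched as follows. This is an illustrative reconstruction, not the shipped interior_light.c; the macro names and the 5-second light-off delay (taken from the scenario later in this section) are assumptions.

```c
/* Illustrative sketch only, not the shipped interior_light.c.
 * The scenario later in this section expects the light to go off
 * 5 seconds after the door closes; with tick() called every 10 ms
 * this delay corresponds to 5000 ms / 10 ms = 500 tick periods. */
#define TICK_PERIOD_MS  10                                  /* assumed heartbeat period */
#define LIGHT_DELAY_MS  5000                                /* assumed light-off delay  */
#define TIMER_TICKS     (LIGHT_DELAY_MS / TICK_PERIOD_MS)   /* = 500 */

static int timer = 0;

void tick(void)          /* the heartbeat / work task of the component */
{
    if (timer > 0 && --timer == 0) {
        /* the component would call LightOff() here, 500 ticks
         * after the timer was started */
    }
}
```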
Heartbeat Function
To be able to test the temporal behavior of a component in a simulated environment, a sim-
ulated time base needs to be available. This means that a certain function inside the component
is called at known equidistant times (e.g. every 10 ms). The calls to this function represent
the “heartbeat” of the component. They provide a time reference for testing the temporal
behavior of the component.
The heartbeat function is usually called “handler function” or “work task” or simply “tick”.
In the section “External Functions” of the interface, the two functions LightOff() and
LightOn() are listed. These two functions are used (called) by the component “Interior
Light”, but these two functions are not implemented in interior_light.c.
The component “Interior Light” expects these two functions to be provided by another
component of the application.
However, we want to test the component “Interior Light” without that other component,
i.e. isolated from the rest of the application. Therefore, we direct TESSY to provide
replacements, i.e. stub functions for these two functions.
Þ Rightclick the function “LightOff()” and choose “Create Stub” (see figure 5.75).
In section “Component Functions”, all functions of the component “Interior-Light” are listed.
The passing direction of the variable timer is “Irrelevant”, because it is not used by init(). Since
we are only interested in the variables “sensor_door”, “sensor_ignition”, and “state_door” as
input, we manually set the passing direction for these variables to “In”:
Þ Click in a cell and choose “In” for “sensor_door”, “sensor_ignition”, and “state_door”
(see figure 5.76).
You can add a description to the test cases (see figure 5.78):
The view External Function Calls displays the two functions LightOn() and LightOff() that the
component “Interior-Light” expects in another component. TESSY provides stub functions
for these two functions during component testing.
Please note that you can rename the test cases:
Þ Rightclick a test case and choose “Rename” from the context menu.
The new name will be displayed in the center of the SCE perspective (see figure
5.82).
Figure 5.82: The names of the test cases are displayed in tabs of the view Work Task
As we know, the implementation of the component Interior Light assumes that the function
tick() is called every 10 ms, i.e. the function tick() is the work task or handler task or heartbeat
of the component. To enable temporal component testing, TESSY must know about
this, i.e. we must specify tick() as the work task for the component manually.
Since we want to test the temporal behavior of the component, we need to establish a time
base:
Þ Within the Scenario view in the center of the SCE perspective click on (Insert
Time Steps).
Þ Click three more times to insert a total of 4 time steps.
The scenario consists of 4 calls to the work task, in our case tick(). The calls occur at 0 ms,
10 ms, 20 ms, 30 ms simulated time.
To stimulate the component besides the calls to tick(), you can simply drag and drop a function
to the scenario.
Þ Drag the init() from the Component Function view to the Scenario view (see figure
5.84).
Now we actually stimulate the component: we drag the component function set_sensor_door()
to 30 ms simulated time.
Now you have to specify a value for the parameter of set_sensor_door() in the properties of
set_sensor_door():
Þ Open the Properties view and click in the cell under “value”.
Þ Enter “closed”.
The value will be displayed in the parameter of the time step function (see figure
5.86).
In the scenario above, after the fourth call to tick(), TESSY calls the component function
set_sensor_door() with the parameter value “closed”. This should cause the component to
react by calling LightOn(). We know from the implementation of Interior Light that this call
will happen one tick later, i.e. after TESSY has called tick() a fifth time, at 40 ms simulated
time.
Þ Specify the expected result by dragging the function LightOn() from the External
Function Calls tab to 30 ms simulated time (see figure 5.87).
The scenario above expects the call to LightOn() to happen at 30 ms simulated time, i.e. prior
to the fifth call to tick(). This is indicated by the time frame “30 - 30 ms”. We know that the call
will occur one tick later, and we want to treat this behavior as correct:
Notice that the icon of the test object has changed to yellow , indicating that the
scenario is executable (the purple sign indicates a comment or description).
Besides the call to LightOn(), we also expect a call to LightOff(). Since the ignition will not be
operated in this scenario, we expect the call to LightOff() to occur 5 seconds after the call to
set_sensor_door() at the latest:
Þ Drag LightOff() to 30 ms and extend the time frame by 5000 ms (see figure 5.89).
Þ Save the data.
Figure 5.89: Setting the call “LightOff” and extending the time frame
In the scenario above, we have specified that we expect the call to LightOff() after 5
seconds at the latest, i.e. we would take a call to LightOff() after, say, 4 seconds as a
correct result.
If we wanted to accept only a call to LightOff() at exactly 5030 ms
of simulated time as correct, we could use the context menu’s Insert Time Step
At… command to create this point in time and assign the expected call to LightOff() to
5030 ms of simulated time, of course with the property “Time Frame” set to 0 and not
to 5000.
Þ To specify the expected behavior, drag and drop the function LightOff() from the
External Function Calls tab to 40 ms simulated time.
The second scenario is now complete (see figure 5.91).
The expected result of this action is the interior light going off immediately, i.e. after
the next call to the work task tick(). This is specified in the scenario by dragging and
dropping the function LightOff() to 40 ms simulated time and by increasing the time
frame by 10 ms.
This is performed very similarly to the first scenario.
After the scenarios have been executed, the color of the scenario icons changes to green.
This indicates an overall “passed” verdict (see figure 5.92).
The testing of C++ code requires the same setup of the TESSY modules as normal C code.
To understand the overall handling and create a simple classification tree we consider some
aspects from the Quickstart 1: Unit test exercise is_value_in_range.
Important: Please note that for the testing of C++ code the utilized standard
libraries need to include the methods “new” and “delete”. This is essential as other-
wise TESSY will not be able to build the test driver successfully.
Follow the steps below to import a C++ project with sample test cases:
TESSY will show an example project within the project list. Such an example
project will be copied completely to a user-defined location when being opened
(see figure 5.94).
The project root will be displayed within the bottom line of TESSY.
Þ Click “OK”.
Figure 5.96: The project root is displayed within the bottom line of TESSY.
Please review the test cases defined for some selected test objects. You will find more ex-
planation and comments on the specifics of C++ testing within the application note “Using
C++”.
The test-driven development support in TESSY allows you to prepare tests before any
source code is available. If you have a specification of what the individual software functions
should do, you can manually create (synthetic) test objects and their (synthetic) interface
variables. Afterwards you can create test cases and fill them with data as for normal test
objects. Once the first version of the software is available, you assign the test objects
to their respective implementation functions and can start running the tests against the
implementation. To create synthetic test objects for test-driven development:
Þ Create a module without assigning a source file. Only modules in this initial state
allow creating synthetic test objects.
Þ Add new test objects by clicking on the icon (New Test Object). Rename the
newly created test objects.
Each synthetic test object has an initially empty interface. To add variables:
Þ Click on the icon (New Variable) to create input and output variables. You can
create variables of all basic types to build the necessary interface. Afterwards you
can create test cases.
Þ Change to the CTE perspective.
The test cases will get a yellow icon if they are completely filled with test data. You cannot
execute the tests yet because no source file to be tested is available. But you can create
a test details report showing the specification of all test cases:
When a first version of the source file is available, add this file to your module. Now you can
analyze the module, and you will see the functions contained within the source file as well as
the test objects that you prepared for test-driven development. It does not matter if the implemented
test objects have other names; you can assign test objects within the next step.
Important: When assigning synthetic test objects to the real implementation, the
synthetic variables will not appear within the new interface because the purpose of
this assignment is to assign the values of all synthetic variables to their respective
implemented variables.
In the example shown within figure 5.105 you can see that the implementation uses a struct
to hold all parameters whereas the synthetic test object has all values within scalar variables.
Such differences can be resolved using the IDA assignment as it would be done for normal
interface changes.
The test object “calc_add” is now ready for execution, all prepared test cases and data are
available.
This chapter provides detailed information of the test activities possible with TESSY. The
headlines of the sections follow the actions taken during a test and refer to the corresponding
perspectives and views, e.g. “CTE: Designing the test cases”.
The subsections describe the views of each perspective, displaying the icons and status
indicators used and giving quick answers to your questions of “What can I do within this view?”
and “How do I do …?”.
So if you need help at some point, ask “Where am I?”. You should find the answer easily within
this chapter. If you have questions about the workflow, consult chapter 5 Tutorial: Practical
exercises.
Some views are displayed within various perspectives. Because views are context
sensitive, not every operation is possible within every perspective. In this case the
manual will refer to the respective section and perspective, where all operations of
the view are possible.
The menu bar provides global operations such as project handling commands, editing com-
mands, window settings, TESSY preferences and the help contents.
“Select Project” Opens the dialog “Select Project”. If you select another
project, TESSY closes the current project, restarts and opens
the selected project.
“New Project” Opens the dialog “Create Project”. Refer to section 4.1.1
Creating a project database.
“Import Project” Opens the Windows Explorer. Choose a project and click
“Open”.
“Close Project” Closes the current project. TESSY will restart and show the
dialog “Select Project”.
“Edit Environment” Opens the TEE, the Test Environment Editor. Refer to chapter
6.5 TEE: Configuring the test environment.
Here you will find common actions such as “Delete”, “Undo”, “Redo” etc. You can also use the
context menu. Refer to section 4.3 Using the context menu and shortcuts.
“Show Console” Opens the Console view on the lower right of the graphical
user interface.
“Show Problems View” Opens the Problems view on the lower right of the graphical
interface.
“Reset Workbench” With a click you reset the positions of all perspectives and
views to the default setting.
Within section “Preferences” of the Window menu you find many options for setting basic
functions to your needs:
“Coverage Settings” In this section you can setup an instrumentation for coverage
measurement that will be the default for all of your projects
(see figure 6.4).
“Static Analysis” TESSY supports the static analysis tools “Cppcheck” and
“PC-Lint”.
To enable static analysis,
“Test Execution Settings” Within this section you can choose if selections and settings
should be remembered, e.g. if you tick under “Remember test
instrumentation settings” the option “Globally for all test
objects”, the last used coverage selection will be used; see
section 6.2.2.14 Instrumentation settings.
“Test Interface Settings” Within this section you can personalize TESSY’s behaviour
within the Interface view, such as
• sorting order,
“Test Project Settings” Within this section you can personalize TESSY’s behavior
within the Test Project view, such as
• sorting order,
The following preferences will be stored within backup files when saving the whole project database
as described within Backup, restore, version control:
• Coverage settings
• Dialog settings
• Metrics settings
• Test execution settings
• Test report options
You can configure the static analyzer tools “Cppcheck” and “PC-Lint” to be executed when analyzing
modules.
The static analysis settings are used for calling the respective static analyzer tool. You need to enter
the path to the binary and change the command line options to your needs.
For a selection of the most applicable safety standards, there are pre-defined coverage settings avail-
able according to the recommendations given within those standards. You can choose a coverage
setting for the appropriate standard and level as default for all modules of your project. When running
tests with coverage instrumentation, the respective settings will be applied automatically.
You can as well define your own coverage settings if the standard you are using is not available within
the list.
You can enable the calculation of the CC average and maximum values and decide to show them
within the Test Project view. The two measures Result Significance (RS) and Test Case To Complexity
Ratio (TC/C) can be activated to be applied for the result calculation of test objects. If you tick the
Apply check box, a failed value of the respective measure will lead to a failed test execution result of
the test object.
The coverage totals are enabled by default in order to provide better progress information when de-
veloping and executing tests. The coverage totals will be propagated up to the test collection so that
you have the total number of branches or conditions on test collection level. As soon as any test
objects have been executed, the reached branches or conditions of those executed test objects will
be propagated upwards but all other test objects with missing tests or coverage data will still be taken
into account.
If you don’t need the metrics measurements or do not want to apply the coverage totals, you can
disable all those metrics to avoid unnecessary calculations.
The interface dictionary is used to collect additional information about global variables. It automatically
collects the global variables of all modules of a project. Whenever a module is analyzed the interface
dictionary will be updated with the variables contained within the module interface.
You can specify a description, a color and a range of valid values for each variable. This information
can later be used for automatic generation and update of classification trees.
Using the “Verify…” button allows you to check the edited list of variables against the available variables
within all module interfaces of the project. If any variables do not exist in any module interface,
such variables will be decorated with a warning icon.
If you delete variables, they will be added again when a module containing those variables is ana-
lyzed.
The interface dictionary can be exported and imported as XML file (e.g. for editing or initial creation)
and it can be saved and restored during the database backup and restore operation.
“Create Support File…” Creates a support file. Refer to section 7.1 Contacting the TESSY
support.
“Start Shell” Starts a bash shell that can be used to try out the command line
execution of TESSY. The PATH variable is already set to the bin
directory of the currently running TESSY installation, so that you
can run tessycmd immediately. Refer to section 6.17 Command
line interface.
“Open Log File” Opens the external problems log. Refer to section 7.2.3 Opening
external problem logs using the Help menu.
“Open Workspace Problems Log” Opens an information dialog with a list of problems. Refer to
section 7.2.3 Opening external problem logs using the Help menu.
“Open Problems Log…” Opens the Windows file chooser to open a problems log. Refer to
section 7.2.3 Opening external problem logs using the Help menu.
Important: If you have not created a project yet, do so as described in the chapter
“Basic handling” in section 4.1.1 Creating a project database!
Test Project view upper left To organize the project: Create test collections, modules
and test objects; execute the test, create reports and
have a fast overview on your project.
Properties view lower left To edit all properties, e.g. adding sources or including
paths to your modules.
Requirement Coverage view lower left To select and link the requirements that you manage
within the Requirement management perspective.
Test Items view upper right To create test cases and test steps manually.
Evaluation Macros view upper right To view evaluation macro results if the usercode of the
test object contains any.
Console view lower right To display messages of sub processes invoked during
test execution, e.g. compiler calls.
Test Project
view
Exports files.
Synchronizes a module.
Generates various test reports (Ctrl + R). The test details report for a
test object will be generated by default.
Inserts a new folder (Shift + Ins), optional for organizing your test project.
Inserts a new module (Ins). Modules will contain the test objects
available within the C-source files to be tested, i.e. C functions.
Icon Meaning
Variant module that was created with the “Create Variant” option.
Indicator Status
The test object has been removed. You still see the object, but there is
no operation possible.
Only displayed when the test object contained any test cases before the
removal.
The test execution has been aborted for this test object.
Indicator Status
Test results and coverage of the test run are both OK.
The coverage achieved the minimum coverage, but the minimum cov-
erage was less than 100%.
You need at least one test collection to organize your test, and within it at least one module and
one test object. Folders and further test collections are optional and just serve to
organize your test project.
Þ Click on the icon (New Test Collection) on the tool bar of the view.
Þ Enter a name and click “OK”.
Þ Click on (New Folder), enter a name and click “OK”.
Þ Click on (New Module), enter a name and click “OK”.
Modules need to be created for each of the source files that shall be tested with either
a unit or integration test. After the module analysis, the module lists the testable
functions of its source files as test objects.
Tasks
The task element provides means to protocol reviews or external tests and link them to re-
quirements. This allows full verification coverage of requirements that are not testable with a
normal unit or integration test. A task can be created within a test collection or folder.
You can edit the task name, choose the type (review or test) and write down the desired
actions to be performed. Further types of tasks can be defined within the tasks preference
page.
Tasks also have a result that shall be set after the task actions have been completed. Choose
the execution date in order to track the completion of the task actions.
You can also attach files in PDF, ASCII text or image formats as a documentation of the
review process. The contents of those files will be appended to the test details report of a
task element (e.g. scanned check lists or filled PDF forms).
Executed tasks are displayed with their result within the Test Project view.
Figure 6.12: Executed task “Checklist” (Passed) in the Test Project view
When linking tasks to requirements, each task result counts as one test result, comparable to
the test case results of normal unit testing.
For more information about the handling of the Link Matrix please refer to section 6.4.7.
By default test runs are overwritten by every new execution of the test. This behavior can be
changed within the Preferences:
Þ Click “OK”.
Historical test
runs
Indicator Status
Test run completed and closed, saved as historical, will not be overwrit-
ten.
TESSY now analyzes the C-source file; this will take a few seconds.
TESSY will also analyze the C-source file if you just click on the white arrow next
to the module after adding the C-source file.
After successful processing, all functions which were defined in the C-source file are displayed
as children of the module within the Test Project view (see figure 6.16).
Figure 6.16: The function of the C-source is displayed as child of the module
When using the CLANG parser (which is the default since TESSY 4.0), opening older modules created
with TESSY 3.2 and before will show the following warning:
Figure 6.17: Warning when opening older modules with CLANG enabled
• If you want to convert the module and start working with the CLANG mode, select “OK”
which will do all necessary conversion. Refer to the restrictions that apply: Operating limits.
Please create a backup of your project before converting the modules.
• If you have a legacy project and just want to execute tests, select “Cancel” and change the
attribute “Enable CLANG” to false within your project configuration using the TEE Adjusting
enabled configurations. If you did this successfully, there will be no more warning when
opening the module.
When working with the CLANG mode there are several parser options which can all be set in the
TEE:
Enable Create Default Constructors: If set to true, the TESSY parser creates a default
constructor if it is missing.
Enable Create Function Stubs: If set to true, external functions that are called are by
default marked to create stub code unless they are listed in attribute “Function Stub
Exclude List”.
Enable Create Method Stubs: If set to true, undefined called methods are marked to
create stub code unless they are listed in attribute “Method Stub Exclude List”.
Enable Define Variables: If set to true, external variables that are used are marked to be
defined unless they are listed in attribute “Variable Exclude List”.
Enable Exceptions: If set to true, TESSY enables exceptions for the test object and
generates a try-catch block around it. Also the TIE displays an artificial global variable
called throws exception which can be set to OUT in order to test an exception thrown
by the test object. By default the attribute is set to true.
Function Stub Exclude List: The comma-separated list of functions excluded from
automatic stub creation.
Method Stub Exclude List: The comma-separated list of methods excluded from
automatic stub creation.
Variable Exclude List: The comma-separated list of variables excluded from being
automatically defined.
General information about editing the environment can be looked up in the section TEE:
Configuring the test environment.
Important: Changes of the parser options made in TEE will become effective
when analyzing a module. Some of the options only apply when initially analyzing
modules. In this case it is necessary to reset a module before starting the analysis
to see the effects of the latest changes.
For more information about the parser options and more attributes available within
the environment editor TEE please refer to the application note “Environment Settings
(TEE)” in TESSY (“Help” > “Documentation”).
If decisions within the code have more than one atomic condition (e.g. “if (A && B)”), the
cyclomatic complexity will be incremented by one for each additional atomic condition within
such decisions.
Each occurrence of the following will also increase the CC by one:
• if ...
• for ...
• while ...
• do/while ...
• case ...: (only if directly followed by a code block)
• catch ...
• &&
• ||
• ?
An empty function or method has a complexity of one, while all preprocessor directives are
ignored.
The metric increases linearly with the number of binary decisions within a program; however,
e.g. calculations are not taken into account. McCabe indicates 10 as the highest acceptable
cyclomatic complexity measure, which means that values higher than 10 suggest a software
review.
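To make the counting rules above concrete, here is a small hypothetical function (not taken from TESSY) annotated with the CC increments:

```c
/* Hypothetical example for counting cyclomatic complexity (CC).
 * Start at 1 for the function body, then add 1 per listed construct. */
int classify(int a, int b)
{                                 /* CC = 1 (empty-function baseline) */
    if (a > 0 && b > 0) {         /* +1 for "if", +1 for "&&"  -> 3   */
        return 1;
    }
    for (int i = 0; i < b; i++) { /* +1 for "for"              -> 4   */
        a--;
    }
    return (a < 0) ? -1 : 0;      /* +1 for "?"                -> 5   */
}
```

With CC = 5, at least five test cases would be needed to cover all linearly independent paths of this function.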
Important: Values can only be calculated for environments working with CLANG;
therefore “Enable CLANG” needs to be set to “true” in the TEE. (For more information
about the TEE see chapter 6.5 TEE: Configuring the test environment.) Other en-
vironments are not included in the calculation of metrics.
TESSY determines the cyclomatic complexity value for each test object on module, folder
and test collection level using the sum of all values (displayed as CC - Total Cyclomatic
Complexity). Optionally you can also display the average value (CC Avg) or the maximum
value (CC Max).
Within the preferences it is possible to set two threshold values as limits which will be high-
lighted in yellow (warning) or red (error). Values below the warning limit will be marked in
green.
Also important is the relation between the number of test cases and the complexity. For this
purpose TESSY provides the test case to complexity (TC/C) ratio. This measure indicates if
there are enough test cases available to have at least one test case for each linearly inde-
pendent path. As a result you will most probably reach 100% branch coverage for a TC/C
ratio greater than 1.
• A value smaller than 1 indicates that not enough test cases have been created
to pass through all linearly independent paths. (The value appears highlighted in
red.)
• A value greater or equal to 1 indicates that at least a minimum number of test
cases has been defined. (The value appears highlighted in green.)
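As a worked illustration with hypothetical numbers: a test object with a cyclomatic complexity of 4 and 5 defined test cases yields

```latex
\mathrm{TC/C} = \frac{\text{number of test cases}}{\mathrm{CC}} = \frac{5}{4} = 1.25 \ge 1
```

so the value would be highlighted in green; with only 3 test cases the ratio would be 3/4 = 0.75 and highlighted in red.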
Important: The TC/C ratio is only a hint for the necessary number of test cases to
achieve 100% branch coverage. Depending on the code to be tested, fewer or even
many more test cases may be necessary for a full functional test. Also, this
measure does not take test steps into account because test steps are only seen as
helper steps to prepare a test object for the actual test.
The result significance (RS) reveals weak test cases. This measure is available after the test
execution and it verifies that each test case applies at least one of the following checks:
If none of the above checks are made, the RS measure of the respective test case will be
marked as failed, otherwise it will be marked as passed (after running a test).
TESSY provides a testing effort estimation based on a customizable formula. This formula
can be edited within the metrics preference page as well as warning and error level thresholds.
The actual time spent for testing a test object can be tracked within a separate column of the
Test Project view.
Both testing effort columns need to be enabled within the metrics preference page to become
visible within the Test Project view (see figure 6.19).
The metrics preferences can be found in the Window menu of the TESSY Menu Bar.
More information about the basic settings of TESSY is provided in section 6.1 Menu
Bar Entries: Setting up the basics.
Figure 6.19: Select “Estimated Time” and “Actual Time” in the Preferences
The time estimation is an important topic for project management. In order to perform a
realistic time and cost estimation for testing, the following basic conditions should be consid-
ered:
The default formula prepared in TESSY is only a proposal and should therefore be reviewed
and adapted for each project. The formula for the estimated time can be edited according
to your needs. There are several predefined tokens available that represent values of the
available metrics provided by TESSY.
If the testing effort columns are enabled within the preferences, the Test Project view will show
the calculated estimation time for each test object after analysis of the module. Whenever
the module will be analyzed again, the estimated time will be updated based on the defined
formula.
The actual time can be edited within the “AT” column for each test object using the inline editor.
Values entered are interpreted as minutes but you can also explicitly specify the unit, e.g. 30m
for thirty minutes or 1h for one hour. Also combined values are possible, e.g. 1h20m .
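The accepted value formats can be illustrated with a small conversion sketch. This is not TESSY's actual parser, just a hedged illustration of how “30m”, “1h” and “1h20m” map to minutes (a bare number counting as minutes, as described above).

```c
#include <stdlib.h>

/* Illustration only: converts effort strings such as "30", "30m",
 * "1h" or "1h20m" to minutes. Returns -1 on malformed input. */
long effort_to_minutes(const char *s)
{
    long total = 0;
    while (*s != '\0') {
        char *end;
        long v = strtol(s, &end, 10);
        if (end == s || v < 0)
            return -1;                        /* no digits where expected */
        if (*end == 'h')       { total += v * 60; end++; }
        else if (*end == 'm')  { total += v;      end++; }
        else if (*end == '\0') { total += v; }    /* bare number = minutes */
        else                   return -1;         /* unknown unit          */
        s = end;
    }
    return total;
}
```

Combined values such as “2h05m” would also be accepted by this sketch.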
Both testing effort values are accumulated for modules, folders and test collections. The time
being displayed will be rounded to avoid overly long values exceeding the column width. All
testing effort values will be listed within the overview report as part of the metrics table.
Module testing of software variants often requires very similar tests that only differ in small
parts. Therefore the comfortable reuse and adaptation of existing tests reduces the testing
effort for each variant of the software.
TESSY provides a variant management for test modules so that basic tests within a base
test module can be inherited, altered or removed and additional tests can be added by sub
modules covering the test of each software variant.
A base module serves as parent for all variant sub modules. It contains the information that
will be shared with all variants. Any module can be used as a base module (i.e. can be the
parent of sub modules).
Once you have created a module with some test cases, you can create a variant:
Þ Choose the parent module and click on “OK” (see figure 6.22).
Þ The variant will be displayed within the Test Project view with the icon .
Figure 6.23: Test Project view with a module and a variant module.
If the parent module has changed, TESSY will mark the children with an exclamation mark.
The mouse-over states that the module needs to be synchronized with its parent (see figure
6.24).
Figure 6.24: The variant module needs to be synchronized with the parent.
Þ Right-click the module and select “Synchronize Module” from the context menu.
The Synchronize Module dialog is displayed. The parent and all child modules will
be shown and can be synchronized in one step.
For the first synchronization or if a module's interface has changed, the IDA will be opened
to assign the parent module's interface to the child module's interface. You will be asked if the
IDA perspective shall be opened:
The assignments made within IDA will be saved for future synchronizations of the child mod-
ule(s). When invoking the “Synchronize Module” operation again (e.g. after changes to the
parent module), the last used assignments will be applied without showing the “Synchronize
Module” dialog.
If you want to force a new assignment of the parent module interface to the child module
interface, you can analyze the child module which will reset the interface assignments. A
subsequent module synchronization will then show the IDA again.
The following indicators display the status of the inherited test cases and test steps within the
Test Items view:
The small triangle indicates that the test case or test step is inherited.
The filled triangle indicates that the inherited data of the test case and
test step was overwritten.
The test case or test step was added for this variant test object.
Important: Deleted test cases/steps are only faded out within the child module.
They can be made available again using “Restore Deleted” from the context menu.
Figure 6.28: Test cases and test steps inherited by a variant module
6.2.2.10 Notes
Added Notes will appear in the Notes view on the lower right of the Overview perspective
and can be edited or deleted there.
Right-click the respective note to open the context menu and choose the desired action.
You can also choose to create a Notes Report.
Notes will appear in the Test Details Report as well as in the Test Overview Report
without any test execution. Notes will not appear in Planning Coverage Reports and
Execution Coverage Reports.
After entering test data for a particular test object you are ready to execute the test. During
this process, TESSY will perform the following steps:
• Generate the test driver based on the interface information and user code pro-
vided.
• Link the test driver to the test object to create an executable file.
• Run the test with the selected coverage instrumentation.
The test driver is necessary to call the function under test and will be generated
automatically. The test driver and the function under test form a complete
(embedded) test application, including its startup code, built with the
appropriate compiler for the particular embedded microcontroller architecture.
If the function under test uses external variables that are not defined, the test
driver generated by TESSY can define those variables.
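As a minimal sketch of this situation in plain C (the function and variable names are hypothetical and not taken from TESSY), the tested source only references the external variable, and the definition that the generated test driver would normally contribute is written out by hand here for illustration:

```c
#include <assert.h>

/* Hypothetical function under test: it references an external
   variable that is declared but not defined in the tested sources. */
extern int sensor_value;

int is_sensor_hot(void)
{
    return sensor_value > 100;
}

/* In a real test run the generated test driver would supply this
   definition automatically; here it is written by hand to show
   what the driver contributes. */
int sensor_value;
```

Because the driver owns the definition, the test can set the variable to any value before calling the function under test.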
Once the test driver has been compiled, it can be run as often as required. You
can select a subset of your test cases and run the test again by just selecting
the run option. Changes to test data and expected results might require build-
ing a new test driver. TESSY will check this automatically and generate a new
driver.
Stub functions
If the function under test itself calls other functions (subroutines), TESSY can
provide replacement functions (stubs) for the missing subroutines with the test
driver. TESSY features two types of stub functions:
• Stub functions for which you may provide the C source code for the bodies
of the stub functions.
• Stub functions for which TESSY is able to check if the expected value
for a parameter is passed into the stub function and for which TESSY is
able to provide any value specified by the user as return value of the stub
function.
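A sketch of the second kind of stub, using hypothetical names (read_adc and voltage_ok are illustrative, not from the manual): the stub records the parameter passed in, so it can be checked against an expected value, and returns a value specified by the tester:

```c
#include <assert.h>

/* Hypothetical function under test: it calls a subroutine
   read_adc() that is missing from the tested sources. */
extern int read_adc(int channel);

int voltage_ok(int channel)
{
    return read_adc(channel) >= 3000;
}

/* Hand-written sketch of a stub body: it captures the passed
   parameter for checking and returns a tester-specified value. */
int stub_last_channel;
int stub_return_value;

int read_adc(int channel)
{
    stub_last_channel = channel;  /* capture for later checking */
    return stub_return_value;     /* value chosen by the tester */
}
```

In TESSY the capturing and the return value handling are provided by the tool; the sketch only shows the principle.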
Action Meaning
“Abort On Missing Stub Code” Building the test driver application will be aborted with
an error if there are non-void stub functions without any
code provided to return a value. You can uncheck this
action to ignore this error if you are sure that the return
values of your stub functions are not used. (For more
information please refer to Defining stubs for functions.)
Option Meaning
“Create New Test Run” A new history in the module will be generated.
“Test Cases Separately” The download and execution process of the test driver
will be started separately for each test case. This
provides an initial state of memory (and variables) for
each test case and is useful if the test cases shall be
executed independently. The disadvantage of this
approach is an increased execution time. (Due to
start/stop of debugger and download of test driver.) It is
recommended to set this option for dedicated test
objects only.
Besides the normal test execution, additional test execution types can be selected when
running tests. The purpose of these executions is an automated quality analysis of tests and
the test driver application itself.
The execution options can be selected within the test execution dialog.
• “Run without instrumentation” builds the test driver application without any instru-
mentation of the original source code.
Not only coverage instrumentation will be omitted but also instrumentation for call
trace, static local variables or fault injection. Because e.g. the call trace cannot be
evaluated without the instrumentation, there will be no evaluation of the call trace
for the test run with this option set. Also fault injection test cases will be skipped.
For test objects that only have fault injection test cases, it may even happen
that no test is executed at all.
The results of all executable test cases will be checked to verify that they yield
the same results as the normal test execution.
• “Run with test data pattern” executes the test object twice.
It initializes all variables with passing direction OUT with the pattern given within
attribute “Test Data Pattern” for the first run and with the alternate pattern given
within attribute “Test Data Alternate Pattern” for the second run.
Figure 6.34: Test Data Alternate Pattern and Test Data Pattern in the Properties view
Both test executions must yield the same result as the normal test execution.
• “Run mutation test” executes a mutation analysis on the given test cases as de-
scribed within section 6.15 Mutation testing.
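The effect of the test data pattern runs described above can be sketched in plain C (a hypothetical example, not TESSY-generated code): an output element the test object forgets to write keeps its pattern value, so the two runs with different patterns disagree and expose the bug:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical buggy function under test: it is supposed to clear
   the whole 4-byte output buffer but misses the last element. */
void clear_buffer(unsigned char out[4])
{
    for (int i = 0; i < 3; i++)  /* bug: should run to i < 4 */
        out[i] = 0;
}

/* Sketch of one test-data-pattern run: initialize the OUT variable
   with a pattern before calling the test object, then inspect the
   actual result. */
unsigned char last_byte_after_run(unsigned char pattern)
{
    unsigned char buf[4];
    memset(buf, pattern, sizeof buf);  /* pattern initialization */
    clear_buffer(buf);
    return buf[3];  /* the element the bug leaves untouched */
}
```

With a correct clear_buffer, both pattern runs would return 0 and thus yield the same result as the normal test execution.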
The prerequisite for all additional execution types is the successful completion of the normal
test execution. All tests must yield passed results to be able to run those execution types,
because the passed result is the reference against which the additional execution types are
checked.
Important: It may happen that existing successfully executed test cases are run
against updated source code within a batch test. If such a batch test is run with
additional execution types selected, those execution types will be skipped if the
normal test execution ends with failed test results.
Within the TESSY GUI you will always see the following results after running tests with addi-
tional execution types:
• The results of the normal test execution if there were any failed results. (Additional
execution types will have been skipped in this case.)
• The results of the normal test execution if all test runs were successful. (Normal
test runs, without instrumentation and with test data patterns.)
• The results of the last failed test run without instrumentation or with test data pat-
tern, whichever occurred first. This allows you to examine the results of the respective
execution type in order to find the reason for the failure. Debugging any failed
execution type is also possible.
The Test Items view will display which additional execution type caused the failure with a
tooltip on the different results of each test item.
Figure 6.35: Test Items view showing additional execution type failure
Also the TDE will display the actual results of the last failed additional execution type, e.g.
the output variable still has the value of the test data pattern used for initialization in the case
below:
Within the Test Execution Settings dialog you can select the various possible coverage
instrumentations for this test run (see figure 6.32):
Þ Select from the pull-down menu if the coverage shall be used for the test object or
the test object and the called functions.
Þ Untick the box “Use preselected Coverage”.
Þ Select the coverage instrumentation (more than one possible).
The coverage instrumentation is now used for this test run, even if you have se-
lected a different coverage instrumentation as default for your project (see section
6.1.2.1 Window > Preferences menu) or for the module or test object within the
Properties view (see section 6.2.3.4 Coverage tab).
If you tick the box “Use preselected Coverage”, coverage selection will be applied
according to the following rules:
• If a coverage selection is set in the Properties view (see section 6.2.3.4 Coverage
tab), that selection will be used.
• If no coverage selection is set in the Properties view, but in the Test Execution
Settings of the Window > Preferences menu the option “Remember instrumen-
tation settings” is set, the last used selection will be used.
For more information about the coverage measurements refer to the application note
“Coverage Measurement” in TESSY (“Help” > “Documentation”).
The debugging option within the test execution dialog also reflects the additional execution
types. By default, the last failed execution type is selected for debugging. The respective
test driver application is still available in this case so that the error can immediately be de-
bugged:
It is also possible to select the other execution types or the normal test execution. This
requires the respective test driver application to be built before debugging:
Important: After debugging any additional execution type, there will be no test
result for the respective test object. It requires another normal test execution (e.g.
without breakpoint) to see a test result again.
When debugging a test object with normal execution, a test result will be available
afterwards, just as without debugging. This is useful e.g. for manually executed tests that
require external hardware setup during the test execution. Such tests can be executed in
debug mode and they will yield a test result at the end.
Viewing test results
After a test run, the Test Project view gives an overview of the coverage, if selected:
• The actual results will be compared with the expected values according to the
evaluation mode. The result will be either failed or passed.
• The last step of test execution is the generation of an XML result file. This file
contains all test data and the actual results. It will be used for reporting.
The results of every coverage measurement can be reviewed in the CV (Coverage Viewer) as
soon as the test was carried out. For details refer to section 6.11 CV: Analyzing the coverage.
• A green tick will indicate that all actual values comply with the expected values with
respect to the evaluation modes and the coverage reached at least the minimum
coverage.
• A red cross will indicate that either some actual values yield failed results or the
coverage did not reach the minimum coverage.
• If the interface has changed, the test object will indicate changes with test readi-
ness states (see Status indicators).
• The time of the test run is stated within the Test Project view:
Important: The results of the coverage measurement are also part of the test
result for a test object, e.g. if all outputs yield the expected result but the coverage
was less than the minimum coverage, the test result will be failed.
Warning: Using the option “Reset Module” from the context menu will delete the
module with all test results!
In the Test Project view test objects that are not relevant for the project can be hidden by
setting a test object filter:
Þ To modify an existing test object filter click on the arrow next to the button and
select “Select Test Object Filter”.
Þ Select the test objects you wish to hide in the dialog that is shown.
Þ Once a filter has been set, you can toggle it on and off by clicking on .
Important: The filter can only be applied to test objects without any test cases.
Accordingly, the filter will automatically be removed for test objects that have at least
one test case.
Figure 6.42: A filter has been set but is currently disabled (filtered test objects appear
faded).
Figure 6.43: The filter is enabled; the affected test objects are hidden.
Important: The filter setting will be saved for each filtered test object within the
test database. When saving and restoring modules as TMB files, these filter settings
will also be persisted and restored.
The search filter helps to find and select elements by their name.
Typing search terms into the search field will result in an updated Test Project view.
After a short delay, only matching elements and their ancestors and descendants are
displayed. Elements will automatically be expanded so that matches are visible, with the
exception of modules that need to be analyzed.
Important: All reports are generated as PDF files by default. To be able to open
TESSY report files and enable the generation of test reports you need to install a
third-party PDF viewer like Adobe Reader 7.0 or higher, Sumatra PDF, Foxit etc.
Other available report formats are HTML and Word. Because of potential layout
issues the usage of those formats is discouraged; task attachments, for example,
will not appear in HTML. Therefore HTML as well as Word are provided as
complementary helper formats only and without further support.
You can also create pure XML reports for further processing with your own tools.
All reports are created as PDF documents based on XML data files. These XML data
files can also be used for generating reports or further processing if desired.
Þ Click in the Test Project view (i.e. within the Overview perspective) on the arrow
next to the Generate Report icon .
Þ Select the report you wish to create (see figure 6.46).
TESSY creates the report within the new folder. This will take a few seconds.
When finished, TESSY will open the file automatically.
Important: The first time you create a report, the “Edit Settings” dialog will be
opened automatically. These settings are memorized and used for the following
reports.
Comments added or modified in the Properties view will only appear in the Test De-
tails Report after the test was executed. Notes added to test modules, test objects
and test cases via context menu or modified in the Notes view on the lower right
of the Overview perspective will appear in the Test Details Report as well as in the
Test Overview Report without any test execution. (Test cases can be found in the
Test Items view of the Overview perspective.) Generally, comments will not appear
in the Test Overview Report; comments and notes will not appear in Planning
Coverage Reports and Execution Coverage Reports.
To understand the usage of notes within TESSY see section 6.2.2.10 Notes.
You can as well change basic settings, e.g. output directories, filenames and the
logo on the reports. Refer to section 6.1.2.1 Window > Preferences menu.
Figure 6.47: Test Details Report Settings dialog with default and optional settings
Figure 6.48: Test Overview Settings dialog with default and optional settings
Important: The option “Merge Details Reports Into One Document” is of special
significance as all the other report options simply change the outline of the Test
Overview Report or add or hide certain contents.
Merging the reports is only possible for PDF format and when both the Test
Overview Report and the Test Details Reports are generated within the same batch
operation. Therefore this option will be hidden when it is not applicable.
Figure 6.49: Planning Coverage Settings dialog with default and optional settings
Figure 6.50: Execution Coverage Settings dialog with default and optional settings
Batch Test
TESSY provides a batch test feature with various operations for test execution and reporting.
You can define which operations shall be performed and which settings shall be used. This
setup can be saved into batch script (TBS) files for test automation using the command line
interface of TESSY (see chapter 6.17 Command line interface).
Þ Under “Test Objects” choose the project or modules or test objects for the batch
test. Click “Select All” to select all at once (see figure 6.53).
Þ Switch to the settings by either marking the operation on the left side or using the
tabs on the upper right side (see figure 6.54). The optional settings for this operation
will then be shown on the right side of the window.
You can create a TBS file for command line execution by saving the batch test settings:
Þ In the batch operation settings window click on (Save batch file as…).
Þ Choose the type of the file and click “Save” (see figure 6.55).
The Test Project view provides the import and export of test data and module backup files
(*.TMB):
When importing test data for individual test objects there are the following options (see figure
6.56):
• “Update passing directions”: If you tick the box, the passing directions of all inter-
face variables will be set according to the passing directions specified within the
import file. All other interface variables will be set to IRRELEVANT. The test object
will be ready to execute when using this option because all variables with passing
directions IN, OUT or INOUT will be filled with values.
• “Overwrite/append test cases”: Either delete existing test cases before importing
or append any imported test cases at the end of the test case list.
When exporting data there are the following options (see figure 6.57):
Properties view
The Properties view is divided into several tabs on the left and provides various settings which
are explained in the following:
The General tab (see figure 6.58) is used to determine the test environment. The following
options are available:
Option Function
Test Directory The path has been specified during database creation and is not
adjustable here.
Figure 6.59: The Compiler pane in the Sources tab of the Properties view
Adding the C-source file
In the upper pane of the Sources tab the source files to be tested are added. All exported
functions will be displayed if the module is opened. Some additional compiler options can be
specified on module level by selecting the module entry; other options can be specified for
each source file in the list.
Þ Select a source file and “Remove File” from the context menu.
Þ Select a source file and “Replace File” from the context menu.
Þ From the next dialog, select another source file.
The lower Compiler pane of the Sources tab displays information about the item selected
from the upper Sources pane. Some of the displayed options (e.g. Includes) in the lower
Compiler pane can be specified in the Test Environment Editor and will be inherited from
there.
Module options apply to all source files unless otherwise specified on file level. File
options apply to one selected source file and override options that are specified
on module level.
Includes Add an include path of the headers which are included within the
source file.
Options Specify additional directives for your target compiler as needed.
Note that macros for the preprocessor and include paths have to be
specified within the Defines tab and the Includes tab, respectively.
All compiler options added here will be used for the compilation of the
source file when building the test driver application.
Settings Depending on the selected item in the Sources pane the following
features can be enabled (box is checked) or disabled (box is
unchecked) in the Compiler pane:
Table 6.18: Optional functions of the Sources tab of the Properties view
Figure 6.60: The Setting tab of the Properties view with module selected
Any linker options like object files or libraries can be added here. You can use predefined
variables like $(PROJECTROOT) or $(SOURCEROOT) as described in section Creating
databases and working with the file system. It is recommended to add such linker options
using the environment editor TEE: Configuring the test environment.
While you have probably chosen an instrumentation for coverage measurements as default
for your whole project as described within section 6.1.2.1 Window > Preferences menu,
within the Properties view you can enable a different coverage measurement on folder or
test collection level or for a single module:
Þ In the Test Project view select your module, for which you want to choose the
coverage measurement.
Þ In the Properties view select your coverage selection in the pull-down menu (see
figure 6.63).
The pre-selected coverage instrumentations according to the selected safety standard level
will be displayed.
TESSY supports the following instrumentations:
You can choose a different instrumentation for each test run. The options will be shown within
the Execute Test dialog (refer to section 6.2.2.11 Executing tests).
To analyze the coverage refer to chapter 6.11 CV: Analyzing the coverage. For more
information about coverage measurements and usage of coverage analysis refer to the
application note “Coverage Measurement” in TESSY (“Help” > “Documentation”).
The Attributes tab specifies settings required by the compiler or the target environment of the
module. Most attributes are preset and inherited from the Test Environment Editor (TEE).
Insert attributes
You can change the default values or add new attributes in the Attributes pane:
Changes are carried out only locally and do not influence other modules.
You can remove user-defined attributes. You cannot remove default attributes; you can only
reset the value to its default state if it was changed before.
Þ Click on .
Those tabs provide editable textboxes to be used for specifications, descriptions and com-
ments by the tester.
If you want to add Notes, you have to use the context menu in the Test Project view.
More information about notes can be found in section 6.2.2.10 Notes.
Within the Requirements Coverage view you can link the requirements with your test cases
or tasks. We will describe this view in section 6.4 Requirement management > 6.4.16 Re-
quirements Coverage view.
Test Items view
In the Test Items view you get an overview of your test cases and test steps, and you can
also create test cases and test steps manually without using the Classification Tree Editor
(CTE, see section 6.8). This is useful for simple test objects with a few test cases that can
be documented in a few words.
Test case passed: The actual results did match the expected results.
Test case failed: The actual result of at least one test step did not match
the expected results.
Test Case Generator: This test case generates test steps automatically,
i.e. if you enter a range. It does not contain any data yet.
Test Case Generator with data: This test case has automatically gener-
ated test steps.
Test step passed: The actual result did match the expected results.
Test step failed: The actual result did not match the expected results.
The test case has been created by the CTE and therefore can be
changed only within CTE. The test case does not contain any data.
The test case has been created by the CTE and therefore can be
changed only within CTE. The test case does contain some data.
The test case has been created by the CTE and therefore can be
changed only within CTE. At least one test step is ready to be executed.
The test step has been created by the CTE and therefore can be
changed only within CTE. It does not contain any data.
The test step has been created by the CTE and therefore can be
changed only within CTE. It does contain some data.
The test step has been created by the CTE and therefore can be
changed only within CTE. It is ready to be executed.
• Test case numbers will be counted continuously: If you delete test cases, new test cases
will get a new number and existing test cases will not be renumbered.
• If you cannot click on “New Test Case” or “New Test Step” because the icons are inactive,
you might have the wrong selection: Select the test object within the Test Project view,
then select the Test Items view.
• If you double-click a test case, the TDE will be opened to enter test data. Make sure to
adjust or review the passing directions first in the TIE.
Every test step contains a complete set of test data. For instance, the mechanism of test
steps can be used to achieve an initialization of the test object before executing the test step
that checks the actual test condition of the current test case.
You can generate test steps automatically, e.g. with ranges of input values:
A new test case will be created. The star symbol indicates that this test case is generated
and you cannot add any test steps, because these will be generated automatically (see figure
6.71).
To fill the data and generate the test steps, you will use the Test Data view within the TDE
perspective:
After generating one or more test steps, the icon of the test case within the Test Items view
will change to yellow, as will the icons of the test steps (see figure 6.72).
The test steps are read-only! You can change the type of the test case and test steps to
“normal”. That way you can edit the test steps as usual.
To change the status to normal,
Þ right-click the test case and select “Change Test Case Type to Normal” (see figure
6.73).
Changing test case to type normal
The test case and test steps are changed to type “normal” but a status will indicate that
they were originally generated (see figure 6.74).
Figure 6.74: The test case and test steps originally being generated
You can reverse the action with a right-click, choosing “Change Test Case Type to Generator”
from the context menu.
If test cases and test steps were assigned within CTE, the icons of test cases and test steps
within the Test Items view are displayed with a CTE symbol to indicate that you can change
those test cases only within CTE. The following icons indicate CTE created test cases and
test steps in the Test Items view:
Table 6.22: Status indicators for test cases and test steps created in the CTE
If test cases and test steps were inherited from a base module as described in chapter 6.2.2.9
Creating variant modules, you can add, delete and overwrite the test steps and data.
The following icons indicate the different test case and test step statuses in the Test Items
view:
Table 6.23: Various status indicators for test cases and test steps in the Test Items view
Important: Deleted test cases/steps are only faded out within the child module.
They can be made available again using “Restore Deleted” from the context menu.
Information about how to assign data in general and particularly to variants using the
CTE is provided in subsection 6.8.7.1 Assigning test data to the CTE.
After deleting test cases or test steps, you can renumber the existing test cases and steps:
After a test run the Test Results view will display the coverage measurement results and the
results of expected outputs, evaluation macros and call traces if applicable.
Important: The view is context sensitive: If the Test Results view is empty, make
sure a test run is selected within the Test Project view!
This view lists the detailed results of the evaluation macros if the usercode of the test ob-
ject contains any evaluation macros, see 6.9.11.3 Using evaluation macros. The results are
displayed wherever they occur within the usercode, e.g. within stub functions or test step epi-
logs. You can select the filter items on the left side to show only the evaluation macro results
for e.g. the first test step. The list of results on the right will be filtered accordingly.
The Console view displays messages of sub processes invoked during the compilation and
execution process of the test driver application. It provides a quick overview of any error
messages.
Saves to file.
6.2.8.2 Handling
You can enable the Console view to be shown whenever an error occurs during C-source
analysis or test driver compilation:
Since the view refers to changes of requirements, this issue is discussed in section 6.4.8
Suspicious Elements view.
This view displays information about possible errors that appear e.g. in the process of test
execution. It is divided into four columns: Message, Location, File and Line. The first
column gives you a detailed error message with all necessary information. The other three
contain all available information about where the error is located.
The Variants view supports the variant management in TESSY: You can create a variant tree
according to the software variant structure you are going to test. These testing variants are
useful for tagging TESSY modules to certain software variants which facilitates filtering and
creation of variant TESSY modules.
Tip: You do not need to create a variant tree in order to create variant modules.
Any module can be a parent module of another. The variant tree just helps to keep
the module variant tree in sync with the actual inheritance structure of the software
variants being tested.
Let’s assume you have the following structure of base tests that shall be cloned as variant
modules in order to test all software variants:
To create variant modules for each of the base modules do the following:
Þ Create a new test collection for each variant and choose (New Variant Mod-
ules…) from the context menu.
Þ Within the “Create Variant Modules” dialog select all base modules that shall be
cloned as variant modules.
Þ You can filter the modules being displayed by selecting the desired variant. Only
potential parent modules according to the variant hierarchy will be displayed.
The new variant modules will be created within the selected test collection (including the
folder hierarchy if the option “Take over folder hierarchy” was checked). The properties of a
variant module show the assigned variant and the parent module, which can be edited using
the edit button (see figure 6.86).
Figure 6.86: Properties view variants tab for editing the parent module
Important: All test data of a variant module will be deleted if you select another
parent module.
Important: The C/C++ perspective is not displayed by default! Open the per-
spective as described below!
Within the C/C++ perspective you can edit your C-source file.
Þ Within the Test Project view right-click the desired test object or module.
Þ Select “Edit Source” (see figure 6.88).
Project Explorer view left To view the includes and the functions of the C-source file.
Console view lower middle Same view as within the Overview perspective.
Properties view lower middle Same view as within the Overview perspective.
Important: Most parts of this view are standard Eclipse functions! Please refer to
the Eclipse documentation: http://help.eclipse.org/
Editor view
Important: The Editor view is not a normal view in the Eclipse sense; therefore you
cannot move the view like other views of the perspectives!
Þ Open the C/C++ perspective with a right click on the desired test object or module
within the Test Project view.
Þ Select “Edit Source” (see figure 6.90).
The view is context sensitive: If you choose a function within the Outline view, the
function will be highlighted within the Editor view!
Within this view you can browse easily between the includes and have an overview of all
functions of the C-source file.
The view is context sensitive: If you choose a function within the Project Explorer view, the
function will be highlighted within the Editor view.
Outline view
The Outline view displays all functions of the C-source.
The view is context sensitive: If you choose a function within the Outline view, the function
will be highlighted within the Editor view (see figure 6.92).
Hides fields.
Properties view
The Properties view displays all the properties which you organized within the Overview
perspective (see section 6.2.3 Properties view). Most operations are possible.
To change a source, switch to the Properties view within the Overview perspective.
The Console view displays messages of sub processes invoked during the compilation and
execution process of the test driver application. It provides a quick overview about any er-
ror messages. Same view as within the Overview perspective, see section 6.2.8 Console
view.
The basis for all testing activities should be a precise functional specification of the sys-
tem under test. All testing activities should be driven by requirements described within the
functional specification, and each change of the requirements needs to be tracked in order
to adjust the tests if necessary. That is the reason why TESSY incorporates a requirement
management solution that provides a basic environment for requirements engineering with
the following features:
There is a plugin available for the integration of Polarion. For more information please
refer to the application note “Polarion Export” in TESSY (“Help” > “Documentation”).
You will use different views and perspectives for your requirement management:
1. To create and import requirements, track changes and version your requirements,
use the Requirement Management perspective (see section 6.4.1 Structure of the
Requirement Management perspective).
2. To link requirements with test cases, use the Link Matrix view or the Requirements
Coverage view of the Overview perspective (see section 6.4.16 Requirements
Coverage view).
Requirements List view   upper center   To view imported requirements as a list for a
selected document or folder.
Requirement Editor view   upper center   To organize the requirements, e.g. adding
information such as text or images; opens only after double-clicking a requirement in
the RQMT Explorer view.
Test Means view   lower center   To list the available test means, e.g. unit test and
component test.
Validation Matrix view / VxV Matrix view   upper center   To assign requirements to
test means; only visible when there is a validation matrix and after double-clicking on it.
Link Matrix view   lower center   To link requirements with modules, test objects,
test cases and other requirements.
Suspicious Elements view   lower center   To get a quick overview of all suspicious
(modified) elements.
History view   right   To display the version history of the selected requirement or
document.
Related Elements view   right   To display linked elements for a selected requirement
and compare these versions.
RQMT Explorer view
The RQMT Explorer view displays an overall view of all requirements of a requirement
document. If you double-click a requirement, the requirement editor will open to display all
information of the specific requirement (see figure 6.95).
Adds filter.
Please note that a right click on an element usually opens a context menu. It
contains the same buttons as shown in table 6.29. Depending on the circumstances
there might be more options available.
The RQMT Explorer view also offers the option to organize the document structure. You
can adapt it to your needs and thereby gain a better overview of the sometimes numerous
elements. This includes the option to add chapters and text elements to documents as
well as text elements to chapters (see figure 6.96).
To create new elements of all kinds in the document structure, use the RQMT Explorer tool bar
(see table 6.29) or the context menu after a right click on the respective element. By default,
new elements are placed at the end of the document or chapter column and new documents
appear on document level. It is possible to drag chapters, text elements and requirements
into the desired position, even into other documents.
Figure 6.96: Example for the document structure within the RQMT Explorer view
In the following you find a brief overview of importing requirements. For more
detailed information please refer to the application note “Importing Exporting Re-
quirements” in TESSY (“Help” > “Documentation”).
Þ Right-click a document or right-click within the blank RQMT Explorer view and
select “Import” from the context menu (see figure 6.97). When no document is
selected, the import will create a new document.
Þ Select the File Content Type. For the possible types see table 6.31.
TESSY will pre-select the content type according to the contents of the
currently selected file. If the file cannot be imported, this field will be
empty. You can select a content type that you think the file contains
and start importing. TESSY will then show the errors within the file.
*.txt   Simple ASCII format where each line is recognized as a requirement. This is the
most basic format and allows importing all sorts of text as requirements.
*.csv, *.tsv Comma or tab separated ASCII text with a headline indicating the column
names. This format allows specifying requirement id, version and all other
available requirement properties.
*.xml   TESSY-specific XML format which allows specifying the chapter structure of
a requirement document. All available requirement properties may be specified
within this format. It is the recommended exchange format when importing
requirements from other requirement management systems.
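As an illustration of the *.csv format described above, a minimal sketch follows. The
column names used here (id, version, text) are assumptions for this example only; the
actual column names correspond to the available requirement properties and are
documented in the application note “Importing Exporting Requirements”.

```csv
id,version,text
REQ-1,1.0,"The system shall initialize all outputs to zero."
REQ-2,1.0,"The input value shall be limited to the range 0 to 100."
```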
The newly imported requirement document will be displayed in the RQMT Explorer view (see
figure 6.99).
Newly imported requirement
The asterisk (*) indicates that the requirement is new and not committed yet. A mouseover
shows a tooltip (see figure 6.100).
Figure 6.100: The asterisk and a mouseover show the status “new”.
You can commit all changes or changes of selected elements (see figure 6.101).
If requirements have been changed, every commit creates a new requirement ver-
sion. The reason for this is traceability: only with these different requirement versions
can changes in the requirements be traced.
You can also discard the changes you made by clicking on (Discard Changes)
in the global tool bar. This will restore the last checked in status. With a click on the
little arrow next to the icon you can set whether you want to discard all changes or
changes of selected elements only.
You can rename a requirement document and assign an alias, which is useful for reporting,
because the abbreviation of the document name is used when building the requirement
identifier. The identifier will be: [document alias]:[id]-[version]. To rename or assign an alias:
Þ Right-click the document and select “Properties” from the context menu.
Þ Change the name or choose an alias (in this project “Example1” it is “IVIR”) and
click “OK”.
The new alias “IVIR” will be used within the Requirements List view and the document pre-
view (see figure 6.105). (The document preview will only be visible after double-clicking the
respective requirement.)
Information about editing requirements can be found in section 6.4.4.3 Editing re-
quirements.
The Requirement Editor will also open when double-clicking on a requirement in the
Requirements List view.
Within the Requirement Editor the requirements are displayed with text, figures (if available),
versions and IDs (see figure 6.108).
Requirements List view
Every requirement has an explicit ID and a version number. TESSY provides the following
two mechanisms for assigning requirement version numbers:
Using external version numbers
When using external version numbers, the following checks of the imported data will be
performed during import:
• If any requirement content is changed but the version number is not changed,
TESSY will change the minor version number (e.g. from 1.0 to 1.1).
• If the version number was changed but no requirement content was changed, a
warning will be reported.
• If the new version number is less than the highest existing version number for a
requirement, a warning will be reported.
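The three checks above can be sketched as follows. This is an illustrative model only,
not TESSY code: the function and parameter names are invented for this example, and
versions are simplified to (major, minor) tuples.

```python
def check_imported_requirement(prev, new, content_changed, highest):
    """Model of the documented import checks (illustrative names only).

    prev/new/highest are (major, minor) version tuples; content_changed
    tells whether the requirement content differs from the stored version.
    Returns the effective version and a list of warnings.
    """
    warnings = []
    version = new
    # Content changed but version number unchanged: bump the minor number.
    if content_changed and new == prev:
        version = (prev[0], prev[1] + 1)  # e.g. 1.0 -> 1.1
    # Version number changed although the content is unchanged: warn.
    if not content_changed and new != prev:
        warnings.append("version changed but content unchanged")
    # New version lower than the highest existing version: warn.
    if new < highest:
        warnings.append("version lower than highest existing version")
    return version, warnings
```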
To edit a requirement:
Þ A new version of the requirement will be created and you need to decide whether to
increment the major or the minor version number within the check-in dialog (see
figure 6.102).
If you made only minor changes or want to commit a draft update of a requirement,
you can decide to increment only the minor version. In all other cases it is rec-
ommended to increment the major version.
After committing, the ID of the requirement will be updated to display the new ver-
sion (see figure 6.110).
Figure 6.110: The first requirement has the version number 2.0
The VxV matrix supports the assignment of requirements to the test means used for validation
of the requirement. This helps filter out those requirements that are to be tested with unit
and component testing. The assignments within the VxV matrix will be used for requirement
filtering for reporting.
Requirements will be tested using different test means, e.g. unit test, system test or review.
The default test means used within TESSY are for unit and component testing. You can filter
your requirements by test means for later reporting purposes.
Removes the selected test mean. Only test means that are not used can be deleted. Del
Link Matrix view
Within the Link Matrix view you can link tasks, modules, test objects and test cases with
requirements. It shows the link relationship of the elements currently contained within the
matrix.
In the example above (see figure 6.113) the requirement [IVIR:1-1.0] is linked with three test
cases and one task, i.e. in total there are four tests linked to that requirement.
Transposes the matrix, i.e. changes the rows and columns. Ctrl + T
Þ Drag & drop requirements, modules, test objects or test cases into the matrix.
The elements will be shown within one of the rows in the first column if they are
dropped there. If they are dropped in one of the right columns, they will appear on
top of the respective rows of the matrix.
Þ Click on (Remove All Elements) in the tool bar to remove all currently displayed
elements.
Þ Click on (Remove Selected Element) in the tool bar to remove the currently
selected element within a row of the matrix (if any element is selected).
This will only remove the elements from the matrix view, no changes
will be made to the elements themselves. They are not deleted in the
process and set links remain unchanged.
Important: Test cases cannot be added to the Link Matrix view in the Require-
ment Management perspective. To do so you have to switch to the Overview perspective
(see figure 6.115). Test cases can also be added to the Link Matrix in the TDE
perspective or the SCE perspective.
Figure 6.115: Adding Test Cases to the Link Matrix view in the Overview perspective
• The Link Matrix view will also be visible within the TDE perspective and the SCE perspec-
tive if elements have already been added.
• The current contents of the Link Matrix are remembered when restarting TESSY but the
matrix itself is not persisted in any way. You can add or remove elements and this will not
cause any changes to the elements.
• The search button “Add All Elements Linked to Elements in Rows” allows finding and
adding the elements that are linked to the elements currently displayed within the rows of
the matrix.
• Setting links or changing elements will cause dependent elements to become suspicious.
Please refer to section 6.4.8 Suspicious Elements view for details.
If requirements have changed, the links within the Link Matrix view will be declared suspicious
with an exclamation mark (see figure 6.116).
Þ A double click on a link within the matrix will delete the link and another double
click will create the link again.
Suspicious Elements view
The Suspicious Elements view allows finding out the reason why an element is suspicious.
In this case the version number has changed and a short description has been added.
During the testing process tasks, modules, test objects and test cases will be linked
to requirements indicating that the respective requirements are tested by the linked
elements.
Whenever a requirement changes because of modifications or because a new ver-
sion has been checked in, the linked elements will become suspicious and need to
be reviewed. The suspicious status will be indicated by an exclamation mark icon
decorator, i.e. for a suspicious test object.
“Set elements semantic equal” should only be used in situations where the change of
a requirement does not change its meaning such as spelling corrections, formatting
etc. In all other cases the link should be updated.
When you have linked the test object and some test cases, any changes to the linked require-
ments will cause the linked elements to become suspicious. Please switch to the Overview
perspective to be able to see that.
Figure 6.118: Suspicious test object and test cases in the Overview perspective
Determine the related modified requirements that cause the status of a test object to be
suspicious within the Suspicious Elements view:
Þ Select the suspicious test object within the Test Project view. (Again you have to
do that within the Overview perspective.)
The Suspicious Elements view will display the changed requirements (see figure
6.119).
As you can see in figure 6.119 above, the requirement text of the requirement “[IVIR:3-
2.2]:Zero” has been edited. Therefore it has the addition “MODIFIED”.
If you select a test case in the Overview perspective, the Suspicious Elements view will also
show the changed requirement(s) (see figure 6.120).
Figure 6.120: Selecting the suspicious test case shows the modified requirement(s)
Within the Differences view you can determine the exact differences:
You need to determine if the change of the requirement affects the linked test cases and
adapt the test data if necessary.
If no changes to the test cases are required, update the link to acknowledge the requirement
change. To do so, click on (Update Link). The suspicious icon will then disappear for the
respective test case.
For more information about the Differences view go to section 6.4.12 Differences view
/ Reviewing changes.
You can also update requirement links in the Link Matrix view.
Attached Files view
The Attached Files view allows adding arbitrary files to the selected requirement. You can
add additional documents with detailed information about the requirement. The files will be
stored within the TESSY database.
Attributes view
The Attributes view allows adding arbitrary attributes for the selected requirement or require-
ment document.
Important: New attributes should be created for the requirement document. They
will then be inherited by each requirement of the document and can be overwritten
on requirement level.
There are three predefined attributes named “Content Type”, “Enable Evaluation” and “En-
able Suspicious Calculation” on document level that control the behavior of the requirement
evaluation and suspicious calculation for elements linked to requirements.
Figure 6.124: Editing the requirement settings within the Attributes view
To edit an attribute:
Þ Right-click the desired attribute and select “Edit” from the pull-down menu.
Þ Change the value, name or description of the attribute if possible or add a descrip-
tion.
Please note that it is not always possible to edit all of these properties. Usually
it is possible to edit the value, but name and description can only be edited where the
attribute was originally created. The type and version of a requirement cannot be edited.
For example, you can edit the “Content Type” attribute of a requirement document in the
attribute settings. This is necessary to enable the HTML Document View and the HTML
editing: the “Content Type” of the document needs to be “HTML” instead of “PLAIN”.
Þ Select the desired requirement document within the RQMT Explorer view.
Þ Right-click the attribute “Content Type” within the Attributes view and select “Edit”.
Þ Change the value to “HTML” and click “OK” (see figure 6.125).
For more information about editing requirements in the HTML editor go to section
6.4.15.2 Editing the requirement as HTML version.
Differences view
The Differences view will be displayed within the lower pane. It provides a direct com-
parison of the respective requirement versions printed as text (see figure 6.127).
Reviewing changes
Each requirement has a version history showing all of its changes.
To review the changes between any two versions of the history or between a historic version
and the current version:
Þ Select either two versions within the History view to compare these versions, or
select only one version within the view if you want to compare it against the current
version.
Þ Click on (Compare) in the tool bar.
In this view you can see the links of requirements to other requirements, e.g. when creating
refined requirements based on a given requirements document.
After selecting a requirement in RQMT Explorer this view presents all linked elements of
the respective requirement. It shows all sub requirements or the linked main requirements
divided into Incoming Links and Outgoing Links.
Figure 6.130: Related Elements view with Incoming and Outgoing Links
As the Problems view also appears in the Overview perspective, please refer to subsection
6.2.10 Problems view.
Figure 6.132: Newly opened Document Preview within the TIE perspective
Toggles to the HTML inline editor (only available if the “Content Type” of the
document is “HTML”).
Back to parent.
Back home.
After you created or imported requirements, you can edit them as HTML version:
Figure 6.133: HTML editing within the inline editor (WYSIWYG and plain HTML)
Important: By default you will find the Requirements Coverage view within the
Overview perspective!
Requirements Coverage view within the Overview perspective
Within the Requirements Coverage view you will link the test cases with the requirements.
You will also have an overview of the requirements coverage. This is the reason why you
will find this view within the Overview perspective.
6.4.16.1 Icons of the view tool bar
Refreshes the view in the Planning and Execution tab. With a click on the
little arrow next to the icon you can set on which selection you want to auto
refresh. You can also disable the auto refresh function (see figure 6.135).
Filters requirements in the Planning tab (component test, unit test, require-
ments without assigned test means).
The current status of the links between modules, test objects, test cases and requirements
reflects the current state of your requirements coverage. This coverage can be examined on
arbitrary levels of your test project.
You can also create a report that shows the currently achieved planning coverage in the Test
Project view.
Task.
After execution of any tests, the test results are stored within test runs. The test result of a
test run covers the requirements that were linked to modules, test objects or test cases at
the time the test was executed. Therefore, the actual execution coverage result may differ
from the planning coverage result. The execution coverage view is read-only, because this
just displays the results. Any changes to requirement links need to be carried out within the
planning coverage view.
You can create a report that shows the currently achieved execution coverage.
Total number of test cases with achieved test results for linked require-
ments.
Passed test cases with achieved test results for linked requirements.
Failed test cases with achieved test results for linked requirements.
Linking requirements with test cases
The idea behind linking requirements to modules and test objects is based on the following
process:
• First the complete list of requirements is gathered.
• Then each applicable requirement is assigned to modules that implement func-
tionality referenced by the requirement.
• For a further breakdown of the assignment, individual test objects are linked to the
requirements. This especially makes sense if the module has a large number of
linked requirements.
• Finally there is a small subset of all available requirements that must be verified.
To be taken into consideration, the requirement linking for a given test object must
be broken down further to test case level.
Important: Please note that only linked requirements of test cases will be ana-
lyzed. Unlinked requirements on test case level will not be taken into consideration.
For this process TESSY provides the Requirement Coverage view within the Overview per-
spective. It is divided into two tabs:
• The Planning tab (see section 6.4.16.2 Planning tab) is the editor for all require-
ment links to modules, test objects and test cases.
• The Execution tab (see section 6.4.16.3 Execution tab) provides a quick overview
of the achieved test results for linked requirements.
Important: When selecting objects on upper levels of the test project, the calcu-
lation of the test planning/execution links can take a moment.
The content that is displayed in the Planning tab or the Execution tab of the Requirement
Coverage view depends on the current selection in the Test Project view of the Overview
perspective. If the selected element is not yet linked with any requirement, all available
requirements will be displayed; otherwise only the linked requirements will be displayed.
If you want to display the requirements on test case level, you need to select the respective
test case in the Test Items view of the Overview perspective.
You can choose to display all available requirements by clicking on “Always show unlinked
requirements” in the Requirement Coverage view. Once chosen, this remains active for other
selections as well.
The environment editor perspective provides editing of the project configuration which is
stored within the project configuration file.
The TEE perspective
To execute a test, you need to create and configure a new module. The necessary settings,
besides the source files that you want to test, are the following:
• Include paths and defines for the source files
• The compiler for a microcontroller target and the debugger, i.e. the desired test envi-
ronment
• Compiler and linker options
• Debugger settings
• Other optional module settings, e.g. ASAP conversion files
This can be done within the Test Environment Editor, the TEE.
For a complete list of all the available attributes and possible values refer to the
application note “Environment Settings (TEE)”.
With the installation of TESSY, the configurations for all supported compiler and tar-
get environments (including necessary settings and files) were copied to the TESSY
installation directory. You need to enable the compiler and targets that you want to use
and add them to your configuration file as described in the following sections.
Their default settings may need to be adapted to your needs, e.g. the installation path of
the compiler or target debugger is one of the settings that normally need to be changed
to your local values. Settings which have already been used with a previous version of
TESSY were also taken over during installation.
The TEE configuration management allows you to create variants of compiler and
target settings and assign them to a module. We recommend saving your settings
in a specific configuration file, which is the default when creating a new project
database (see section 6.5.6 Configuration files). This allows easy sharing of specific
environment configurations between developers of the same development team.
As a result you have all your basic settings at one central place, i.e. include
paths, additional compiler options, etc. Once configured, you can start testing
immediately using the required configuration for all your modules.
Þ In the menu bar click on “File” > “Edit Environment…” (see figure 6.137).
The TEE will start with the custom configuration file assigned to the respective
project database.
Project Environments view   left   Contains the environments that are selected for the
current project and stored within the configuration file.
Attributes view   right   Shows the attribute settings for one or several selected en-
vironments within the Project Environments view.
The Attributes view shows the list of attributes within groups or as plain list. The groups are
defined within the system configuration file which is part of the TESSY installation.
When you have created your project database with the default settings, you will already
have a configuration file assigned to the project database. The name of this file will be
displayed within the lower left side in the status bar of TESSY (see figure 6.136).
This configuration file will be edited when opening TEE.
Shows groups.
The All Environments view shows all available system environments (i.e. compiler/target
combinations) in a flat list. The environments used as project environments can be seen on
top of the list. Such environments are decorated with an activation icon.
Environments can be dragged into the Project Environments view in order to make them
usable for your TESSY project. Alternatively it is possible to use the context menu to add an
environment to the project.
When selecting one of the listed environments the Attributes view will show the following
special attributes only:
Important: If those attributes are set, they will be stored on the local computer
and not within the configuration file of the project. Errors shown for such attributes
can be ignored if they are correctly set for the respective project environment.
The Project Environments view shows all environments that are selected for this project. It
is marked dirty when any change has been made to the configuration.
Additionally any problems with attribute values are indicated by error and warning markers.
You can add environments by their UUID if you have selected a suitable environment from
the compiler/targets matrix at:
https://www.razorcat.com/en/tessy-supported-compiler-debugger.html.
Below the actual link you will also find links to further information about the usage
and setup of environments.
You may also use the “Add Environment…” button in the menu bar of the Project
Environment view to open the input box.
Figure 6.142: Attributes list within the Attributes view of the TEE
Attributes in the Attributes view are shown within groups by default. You can toggle the “Show
Groups” button to see the flat list of attributes.
Important: The “Enable Expert Mode” filter button shows or hides advanced
attributes. The expert mode is off by default.
Item added as Windows environment variable for all processes, i.e. the
make call or the slave call, spawned using this test environment.
Error.
Different fonts as indicators
TEE will display the attributes in different fonts to indicate the following situations:
Normal letters   Represent factory settings or default settings from the section
“General” that have been inherited.
When selecting two or more environments within the Project Environments view, the Attributes
view adds individual columns for each environment and highlights any differences. The first
selected environment will be shown in the first column within the Attributes view, the second
in the second column, and so on.
Within the Attributes view the individual attribute values can be copied from the first to the
second column and vice versa. For this purpose the context menu provides “Copy to Left”
and “Copy to Right”.
If more than two environments are selected, only “Copy to Right” is available. This will copy
the respective attribute value from the first environment column to all other columns.
A system default configuration file contains the settings for all supported compiler and
target environments and has been installed with TESSY into the installation directory.
The configuration file assigned to the project database contains all settings that are
changed compared to the system default configuration. The contents of this file are
displayed within the project environment view.
Configuration files of the respective views will be stored in the following default folders:
• %APPDATA%\Razorcat\TESSY\4.x.y\config\configuration.system.xml
Contains all your changes made in the “All Environments” view (i.e. compiler and
target path settings only) and will be stored within the user profile.
Normally you need to change some settings for your specific environment. Some of the
settings will be checked for validity. The TEE checks all enabled configurations and displays
error and warning signs as soon as an error has been found, e.g. if the “Compiler Install Path”
must be corrected.
If you do as explained above, computer specific path settings are kept out of the configu-
ration file which you will probably share with other testers on different computers. On the
other hand your customizations made are saved in the configuration file. So this part of your
customizations will automatically be available to other testers as well.
The TEE preserves all default settings. You can revert to the default values by right-
clicking the attribute to open the context menu. There you click “Remove/Reset”.
If you want to change an attribute value only, you can double-click the
respective attribute and enter the desired value.
Þ Check the desired specific attribute flags. This depends on the type used.
For description see table below.
Þ Click “OK”.
Flag Description
Inheritable This flag controls the inheritance of the attribute: The attribute will be
available in all (child) section nodes. Some basic attributes are defined
at the main nodes, e.g. compiler. Each supported compiler will inherit
these basic attributes.
Validate This flag may be important for directory or file types. The attribute value
will be validated by TEE, e.g. whether the path or file is available. An
error sign will indicate that the file or directory could not be found.
Read Only This flag makes it impossible to change a default value by using the
attribute pane of the module properties dialog.
As List Using this flag, the attribute value will be handled as list of values
(comma separated). The values may be edited using a special list dia-
log. This is useful for file or directory types.
Hex Format This flag is useful in combination with the number type. TEE will convert
all inputs (e.g. a decimal value) to a hex value, e.g. 1 > 0x01.
Visible This flag makes the attribute visible in the attribute pane of the module
properties dialog (and within the test report).
Not Empty Checks whether the value is not empty. An error sign will indicate that
the attribute does not have a value.
Environment This flag is useful during test execution and during the make process:
Variable TESSY will create an environment variable within the process space of
the process that will be used for test execution (e.g. running the slave
process) and for make (e.g. building the test driver).
Add to PATH This flag is useful for attributes of type directory. Like described above
Variable for the flag “Environment Variable”, the respective directory value will be
added to the PATH variable of the process space used for test execution
and make.
Makefile Variable   Adds this variable to the generated makefile for compilation/linking of
the test driver application. You can use this variable within the makefile
for include paths or other settings required during the make process. A
variable named “My Include Path” will be added to the generated make-
file as MY_INCLUDE_PATH with the respective value.
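As a sketch of how such a generated variable might be used inside the makefile: the
variable name MY_INCLUDE_PATH follows the example above, while the CFLAGS line is
an assumption about typical makefile usage, not the actual generated content.

```makefile
# MY_INCLUDE_PATH is generated by TESSY from the attribute "My Include Path".
# Illustrative usage only; the surrounding makefile is generated by TESSY.
CFLAGS += -I$(MY_INCLUDE_PATH)
```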
Hardware stimulation and measurement
In order to enable hardware I/O stimulation and measurement during unit testing, TESSY pro-
vides a hardware adapter interface allowing control of external measurement hardware. This
hardware device implements a configuration interface as well as reading and writing meth-
ods for hardware signal data. The following figure shows the architecture of the system within
the TESSY unit testing framework, using a Raspberry Pi based HIL system as an example
implementation.
Figure 6.144: Integration of a hardware adapter (e.g. GAMMA) into the TESSY unit test
execution
During module analysis: TESSY reads the configuration of the hardware device in order to
determine the available interface (i.e. the available I/O signals). This list of signals (including
passing directions) will appear within the interface of the TESSY module (for each test object).
The signals may be edited within the Test Data Editor (TDE) as any other normal test object
variable.
During test execution: The input signals will be stimulated from the input values provided
within the TDE and the actual measurements will be saved to the respective output signals.
The synchronization between stimulation/measurement and the unit test execution will be controlled by the debugger running the test object code: callback functions that stimulate/measure the signal values will be executed before and after calling the test object. Timing measurement may also be supported by the hardware adapter device; it is carried out using dedicated pins of the hardware.
The configuration file for the hardware adapter includes all necessary configuration data for
TESSY as well as the configuration data necessary for the hardware device (in XML format).
The configuration file will be specified as the “THAI Configuration File” attribute within the Test Environment Editor (TEE).
TESSY extracts the available hardware signals from the following XML data structure. The tags and attributes mandatory for TESSY are shown in bold.
There may be additional tags and attributes for the hardware device configuration; these will be skipped by TESSY. Only the “signals” tag and the corresponding “signal” tags will be read, and TESSY will create the available unit test interface from these entries.
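As an illustration, a minimal configuration sketch could look as follows. Only the “signals”/“signal” tag structure is taken from this manual; the surrounding tag, the attribute names and the values are invented for illustration and must be taken from the documentation of your hardware device:

```xml
<!-- Hypothetical sketch: only <signals> and <signal> are read by TESSY;
     all other tags and attributes here are invented examples. -->
<configuration>
  <signals>
    <signal name="ADC_IN_0" direction="in" />
    <signal name="PWM_OUT_1" direction="out" />
  </signals>
  <!-- Additional device-specific configuration is skipped by TESSY. -->
  <device type="example" />
</configuration>
```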
For more details about THAI including a sequence diagram for hardware device control and information about the interface DLL please refer to the application note “TESSY Hardware Adapter Interface” in TESSY (“Help” > “Documentation”)
The THAI functionality can be enabled using the “Enable THAI” TEE attribute. It is recom-
mended to enable THAI for individual modules instead of enabling it for all modules of a whole
project.
The configuration of the THAI-related attributes can be done within the TEE, but the THAI functionality should be enabled for individual modules using the module properties as shown below.
More information about enabling the THAI functionality globally for all modules of a project can be found in the application note “TESSY Hardware Adapter Interface” in TESSY (“Help” > “Documentation”)
Enable THAI
Þ Create a new module within TESSY and go to the Properties view. The “Enable THAI” toggle button will be available within the Features section.
• If you select the “Enable THAI” check box, TESSY will add the required THAI attributes within the Attributes tab.
• If you switch to “Attributes” within the Properties view, you will see the required THAI attributes.
• You need to enter the THAI Configuration File attribute and select a suitable con-
figuration file for your hardware device.
• The THAI Timer File attribute is filled with a default value. You will need to change this attribute if you are using the timing measurement feature.
• The THAI DLL attribute references your implementation DLL of the THAI interface.
• The Log Level attributes are optional.
• The THAI Timer File attribute is optional and can be left empty.
Attribute Description
THAI Log Level Value from 0 (no logging) to 2 (full logging). The actual log content depends on the implementation of the DLL.
Available hardware signals within the interface
When THAI is enabled for a specific module, you will see the available hardware signals as inputs and/or outputs according to the contents of the THAI configuration file.
You can change the passing directions to IRRELEVANT in the Test Interface Editor (TIE) if
certain signals are not necessary for the given module. (More information about the TIE is
provided in chapter 6.7 TIE: Preparing the test interface.)
Assign concrete values for each test step
The hardware signals defined within the THAI configuration file will appear within the Test Data Editor (TDE) as normal inputs and/or outputs. You need to assign values for each test step as with normal variables of the test object interface. (More information about the TDE is provided in chapter 6.9 TDE: Entering test data.)
Important: You must not use special values like *none* for the hardware signals
defined within the THAI configuration file in the TDE. For each signal input and
output a concrete value is required.
When evaluating hardware signals, it is recommended to specify ranges or values with deviations (e.g. 20 +/- 1%) due to possible signal measurement deviations caused by the hardware.
Within the TIE you determine which values are input and which ones are output variables.
Input values are all interface elements that have to be set before execution of a test object.
Output values are compared with the expected values after test execution.
After configuring the test environment of a module and opening the module, the analysis of the respective source files starts. The functions found within the source files will be available as test objects, and TESSY will try to assign useful default passing directions automatically.
You need to specify missing information that TESSY could not determine automatically, e.g. array dimensions or values of enumeration constants. This can happen due to the usage of the “sizeof” operator when declaring arrays or enumeration constants.
Test Project view upper left Same view as within the Overview perspective.
Properties view lower left Same view as within the Overview perspective.
Interface view upper right Displays all interface elements of the test object and provides the edit fields to enter passing directions of variables as well as additional information.
Plot Definitions view right To create and configure plots (same view as within the TDE perspective).
The Test Project view displays your test project which you organized within the Overview perspective (see 6.2.2 Test Project view).

Passing directions of a variable
The Properties view is context sensitive: You can view the passing direction of a variable (e.g. IN, OUT, IRRELEVANT) if you select the variable within the Interface view. Then the Properties view will display the passing direction and the type information (see figure 6.151).
Icon Meaning
External functions
Local functions
External variables
Global Variables
Parameter
Return
Unused
Indicator Status
6.7.4.4 Handling
You can browse through the interface items of the currently selected test object. An arrow in
front indicates further levels (see figure 6.153).
Figure 6.153: White arrow indicating further levels, black arrow when expanded
Interface elements
The variables are either read within the function (IN), written within the function (OUT), both read and written (INOUT), to be altered by usercode (EXTERN), or they are simply not used within the function (IRRELEVANT).
The TIE classifies all recognized interface elements of the test object into the following sec-
tions:
External Functions All functions which are not defined within the source file(s) of the
module. These functions are called from the test object.
Local Functions All functions defined within the source file(s). These functions
are called from the test object.
External Variables Externally declared variables which are not defined within the source file(s).
Global Variables Global variables and module local static variables which are
defined within the source file(s).
Unused Contains all sections and the related interface elements which
are not used in the current test object.
The passing direction reflects the kind of usage for each variable while testing the test
object. You can specify how TESSY treats a value for an interface variable either to
provide the value before test execution (IN) or to keep the value for evaluation and
reporting after test execution (OUT).
You have to specify one of the following passing directions for each interface element:
• provide an input value for that interface element, because the element is only read
by the test object (IN),
• evaluate and report the results of that interface element, because the element is
only written by the test object (OUT),
• both provide a value and evaluate the result, because the interface element is
both read and written by the test object (INOUT),
• provide a value within the UCE (Usercode Editor) of TESSY (EXTERN). With this setting, the interface element is visible in the scope of the user code and may be set using C code,
• not use the interface element at all (IRRELEVANT). In this case, you will not see
this variable for all further testing activities.
The following table shows the possible passing directions of the different types of interface elements:

                   IN  OUT  INOUT  EXTERN  IRRELEVANT
External variable  x   x    x      x       x
Global variable    x   x    x      x       x
Parameter          x        x              x
Return                 x                   x
During processing when opening the module, TESSY analyzes the passing directions au-
tomatically and stores its findings in the interface database. This information is available in
the TIE as default values of the passing directions. TESSY analyzes the usage of individual
interface elements by the test object.
Warning: Although TESSY usually recognizes all interface settings correctly, open the TIE for every test object and make sure that the values are set correctly and match your needs!
Depending on that usage, the following passing directions will be set as default:
Read only IN
Write only OUT
Read and write INOUT
Not used IRRELEVANT
In case the passing directions or any other interface information could not be determined, the respective fields in the TIE will be marked “UNKNOWN” or “?”. If TESSY could not calculate the size of an array dimension or an enum value (indicated with a question mark), you have to set them manually.
TESSY analyzes the usage of individual interface elements by the test object. Change the passing direction of an interface element to suit your needs.
Reset the passing direction for all interface elements of one section:
Þ Select the respective section and click “Reset to Default Passing” from the context
menu (see figure 6.154).
Reset the passing direction for a single interface element:
Þ Select the respective interface element and click “Reset to Default Passing” from the context menu.
Important: If you change the data format, all newly entered values within the Test
Data view of the TDE will be formatted into the new format. Existing data will not be
formatted!
Pointers and complex data types are treated slightly differently from normal data types.
Pointers
The passing direction of the pointer and the target can be set independently, but they are
checked or corrected by TIE to ensure valid combinations.
Complex data types as “Structure” and “Union” have a dependency between their passing
direction of the overall structure/union and the passing directions of their components.
To avoid invalid combinations the TIE checks the setting of passing directions for these data
types in the following manner:
• When the passing direction of one component is set, the TIE determines the resulting passing direction for the overall structure/union and sets it automatically.
• When the passing direction for the overall structure is set, all components are
automatically set to the same passing direction.
Arrays
The passing direction of the data type “Array” will be set for the entire array: only one passing direction will be defined for the whole array and all its elements. If the array is made up of structured array elements (e.g. structures), it is possible to define different passing directions for the components of these structures.
Arrays as parameters will be shown as pointers within the interface. They can be initialized with NULL or point to a dynamic object, a synthetic variable or a global variable (see figure 6.156).
The TIE displays all functions used by the test object either in section External Functions or Local Functions and it provides an interface to define stubs for these functions that will be executed instead of the original function. The TIE distinguishes two different kinds of stub functions:
You can define stubs globally for all test objects of the module or create a stub independently of the global module setting.
• Advanced stub variables cannot be created for arrays and pointer to arrays.
• Pointers that are components of structs or unions will always be handled as IR-
RELEVANT.
• Multiple calls to advanced stubs will generally use the same input values and the
results of the last call will be taken as output values.
• If different values shall be used for multiple calls to advanced stubs, vector values
need to be utilized.
For more information about entering values (defines, enums, arithmetic expressions,
input values, vector values) please refer to subsection Entering values.
Warning: Due to the possibility of unforeseen side effects, please refrain from
stubbing standard or system functions of your chosen compiler.
For example: A stub of “memcpy()” in a GCC configuration may provoke an access violation error, and stubbing “__disable_irq” within Keil ARM will fail the build process entirely with an error message that the IDE installation may be damaged.
You can create stubs either for external or local functions which will be executed instead of
the original functions.
• Create stubs for all functions at once for all test objects of the module (global
setting).
• Create stubs for a single function for all test objects of the module (global setting).
• Create stubs for the current test object.
• Use global stub settings.
Advanced stub functions are an enhancement of normal stub functions: they allow you to provide values for parameters and return values of stub functions like for normal variables.
TESSY checks whether the stub is called by the test object with the specified parameter values; otherwise the test fails. You can also provide a return value for further processing by the test object. This reveals whether the test object handles the return values of the stub function correctly.
You can create advanced stubs either for external or local functions.
• Create advanced stubs for all functions at once for all test objects of the module
(global setting).
• Create advanced stubs for the current test object.
• Use global stub settings.
For test execution the information on data types of the test object interface has to be com-
plete. The dimensions for arrays, the values of the enumeration constants for enumerations,
and the dimensions for bitfields have to be defined. If these values have been automati-
cally recognized by TESSY while opening the module, the respective text field will show the
calculated value for every data type. In this case, it is not possible to change these values.
If a value for an interface element has not been recognized automatically, the respective text
field will be empty or contain the value -1. In case of arrays TIE will also use question marks
to indicate this issue, i.e. array[?]. In all those cases you have to add values manually.
Warning: Wrong array dimensions or wrong values for enumeration constants can
cause the test object to crash during test execution! TIE cannot check for plausibility
of used values!
You can create new (synthetic) variables for usage within your test cases based on all basic
C/C++ types as well as based on all types available within your source files.
Important: Restrictions apply for synthetic variables using types that are defined
within the source file only (and not within a header file): Such synthetic variables
cannot be used within stub code for local functions or within fault injection code.
If synthetic variables with source file defined types shall be used (e.g. within prolog/epilog code) and either stubbing of local functions or fault injection is active, you may need to set the module attribute “Synthetic Declarations In Source” to false in order to prevent a compilation error.
Important: Creating enum variables is only possible for enum types with either a
tag name or which were defined using a typedef.
The new variable will be shown within the TIE view with the default passing direction “INOUT”.
Adjust the passing direction to your needs.
Important: Variables created for one test object are global and therefore available
for all test objects. This of course means that deleting a (synthetic) variable has
global consequences too. The variable will be deleted in every test object where it
is in use.
Synthetic variables in other test objects are by default listed in the ”Unused” section
of the interface. They can be moved to “Used Variables” manually if necessary.
TESSY provides an alias name mechanism to mirror the usage of #define to access variables
during the whole testing cycle (e.g. access to individual bits of common bitfield structures).
You will see your variables within TESSY named exactly as the defines you are using in your
code to access these variables.
Þ Change the value of the TEE attribute “Use Alias Names” to “true” (Refer to chapter
6.5.7.1).
Instead of the real variable name “door_light_c.b.b0” that you are not using within your code,
you will now see the virtual name “door_light_left_b” given through the define within the
TESSY interface and within the test reports.
External variables are listed in the interface of the TIE and can be handled like any other
variable. You can set e.g. the passing direction, the data format or add descriptions.
External variables are by default defined. This is indicated by an icon.
Þ Right-click the respective variable to open the context menu (see figure 6.162).
Þ In the context menu click “Don’t define Variable”
Undefining
external
variables
The now undefined external variable appears with a different icon (see figure 6.163).
6.7.4.14 Changing the default settings in the Test Environment Editor (TEE)
If you want to change e.g. the default settings of the external variables or functions for your
project, you should change it within the related configuration files.
Þ Set “Enable Define Variables” to “false” in the Attributes view of the TEE perspective with a double-click (see figure 6.164).
Þ You will be asked to save or discard your settings when leaving the TEE perspec-
tive.
Saved changes will be active with the first opening or after a reset of the module.
The interface of existing modules will not be changed.
Þ Set “Enable Create Function Stubs” to “true” in the TEE Attributes view with a double-click (see figure 6.164).
Þ Save your settings when leaving the TEE perspective.
Changes will be active with the first opening or after a reset of the module. The
interface of existing modules will not be changed.
For more information about the TEE please refer to chapter 6.5 TEE: Configuring the
test environment
You can also find more information about available attributes and their settings in
the application note “Environment Settings (TEE)” in TESSY (“Help” > “Documenta-
tion…”)
The list of unused functions and variables shows all external items that are used by other
test objects of the same module but not by the current test object itself. Because such items
need to be defined in order to link the test driver, you need to review this section to check
whether all external references have been defined or stubbed.
Please note: System functions or intrinsic functions must not be stubbed. For more
information please refer to chapter 6.7.4.9 Defining stubs for functions
Local functions are available in this list to be able to move them to the used functions of a test object. Creating a stub for an unused local function does not usually make sense, because the local function is not called from the test object.
One possible use case could be the usage of such local functions as a value of a function
pointer. Via stub code it is possible to influence the behavior of the function.
Depending on the type of function or variable it is possible to stub the function or not define
the variable. You can also copy the name or move the function or variable to used functions
or variables.
Figure 6.165: List of unused functions and variables in the TIE interface
You can find more information about plot definition in chapter 6.9.14 Plots view.
The Plot Definitions view displays the plots for a selected test object or test run.
Within the view you can create or configure plots for a selected test object. You can also
select whether plots should be used in reports.
A test case plot spans over all values of all test cases of the selected variables.
A test step plot provides one curve for each test case spanning over all values of the
test steps of this test case. This requires at least two test steps for each test case to
define a valid curve.
An array plot creates plots for array type variables. There will be one curve spanning
over the array values for each test step.
To create a plot:
Þ Click on the newly created plot to choose the desired kind of plot (see figure 6.168).
It is possible to drag variables from the TIE or the TDE onto the Plot Definitions view. Also
plots and variables can be dragged and dropped within the Plot Definitions view. (see figure
6.169)
Dragged item              Drop target   Result
Scalar or array variable  Empty area    A new plot containing the variable is created.
from TIE or TDE           Plot          The variable is added to the plot if possible. (1)
Variable from Plot        Empty area    The variable is moved to a new plot. (2)
Definitions view          Another plot  The variable is moved to the other plot. (1, 2)
Plot from Plot            Empty area    A copy of the plot is created. (3)
Definitions view
Table 6.62: Drag and drop handling with the Plots and Plot Definitions view
(1) Restrictions apply: A scalar variable cannot be added to an array plot, and whole arrays
cannot be added to a test case or test step plot (whereas single array elements can be added
to test case or test step plots).
(2) If CTRL is being pressed while dropping the variable, it will be copied instead of moved
to the other plot.
(3) Only applies if CTRL is being pressed while dropping the plot.
Only the plots that are ticked with “Use in Report” will be displayed within the reports (see
figure 6.170).
3.2 The Classification Tree Method (CTM)
After preparing a test in the TIE, you need to create well-designed test case specifications. The Classification Tree Method provides a systematic approach to create test case definitions based on the functional specification of the function or system to be tested. TESSY includes the specialized Classification Tree Editor (CTE) which assists you in creating low-redundancy and error-sensitive test cases.
The basic concept of the Classification Tree Method is to first partition the set of possible
inputs for the test object separately and from different aspects, and then to combine them to
obtain redundancy-free test cases covering the complete input domain.
For further general information about the Classification Tree Method (CTM) please
refer to chapter 3.2 The Classification Tree Method (CTM).
Test Project view upper left Displays your test project. For editing your test project switch to the Overview perspective.
Properties view lower left Displays the properties of tree and Test Table items.
Outline view lower left Displays the structure of the classification tree and the Test Table and allows navigating and selecting items in the structure.
Test Data view right Allows assigning test data to classification tree elements.
The Test Project view displays the test project which has been organized within the Overview perspective (see 6.2.2 Test Project view).

The Properties view displays all the properties which you have organized within the Overview perspective (see 6.2.3 Properties view). Within the CTE perspective this view additionally provides all properties of items used in the classification tree and the Test Table. Most operations are possible.
For changing any module related settings switch to the Properties view within the Overview perspective.
The Outline view displays the structure of the classification tree and the Test Table and allows
navigating and selecting items in the structure.
Classification
Tree editor
Maximize the CTE window within the Classification Tree editor to avoid additional
scroll bars and to always show the whole CTE window contents within the perspec-
tive.
Paste Ctrl + V
Delete Del
Undoes the last move or edit operation within the classification tree pane. Ctrl + Z
Redoes the last move or edit operation within the classification tree pane. Ctrl + Y
Selects all leaves that are children of the current selection. Ctrl + L
Tree area upper left Drawing the classification tree with a root, compositions, classifications and classes.
Test Table lower left Marking classes of the classification tree in order to define test cases, test sequences and test steps. Every test item creates a new line in the Test Table.
Palette right Tool box to create tree items and define parents as well as create and define dependencies.
Creating tree items, defining the parent structure and dependencies
The Palette view on the right side of the CTE contains a tool box to select and create tree items and define the parent structure. Furthermore, the palette provides tool entries to define different types of dependencies for tree items.
Creates a composition.
Creates a classification.
Creates a class.
Creates an OR dependency.
A test case is formed through the combination of classes from different classifications. For each test case exactly one class of each classification is considered. The combined classes must be logically compatible; otherwise the test case is not executable. You should choose and combine as many test cases as needed to cover all aspects that should be tested.
For further general information about the Classification Tree Method (CTM) please
refer to chapter 3.2 The Classification Tree Method (CTM).
You can also press Insert on the keyboard to create a classification in the CTE.
Þ Double-click the new classification or press F2 after selecting the new classifica-
tion to start the inline editor for the tree item.
Within the tree area you can move the classifications and other elements with drag
and drop: Just left click the element, hold the mouse button and move it to the desired
place. You may also select a group of elements and move them the same way. The
tree layout will be arranged automatically by clicking in the tool bar.
You can assign test data to all interface variables for each tree node of the classifica-
tion tree. This speeds up testing because the test data will be assigned automatically
to the test cases via the marked class nodes (refer to section 6.8.7 Test Data view).
Setting marks
Test cases are defined by setting marks in the Test Table:
Þ Click on one of the circles to connect a test case with a class. The empty circle
will turn into a black circle.
Þ Click on to save the classification tree.
If you connect a test case with a class, the respective test data assignments of the
class will be assigned to the test case. If you want to review the resulting test data
assignments for the whole test case, select the test case within the test item list. The
Test Data view will now display the assignments for the test case.
The test data of a test case is displayed read-only because it is defined by the marks
set within the combination table and cannot be changed here.
Figure 6.177: Classification Tree with test data for class “Zero”
• When selecting a tree item, you will see the test data entered for this item within the Test
Data view.
• When selecting any interface element within the Test Data view, all classification tree
elements that contain test data for this interface element will be marked with a blue square
( ).
Figure 6.178: Test cases and test steps created within the CTE in the Test Item view of the
Overview perspective
• Indicators will appear light gray when there are no values entered, dark gray when some
values are entered and yellow when the entering of values is completed.
• Values stemming from the CTE are read-only. If you want to change them, switch back to
the CTE perspective and do your changes there.
• Test cases created in the Test Item view do not appear in the CTE.
This feature can be activated in the “Preferences” > “CTE” > “Tree Generation” > “Update
generated tree based on interface changes”. The CTE document will then be updated with
interface elements not present in the actual tree.
With this feature active and TESSY detecting at least one interface element which is not
attached to any CTE node, TESSY will ask whether it shall merge a new generated tree with
the current tree.
In this dialog click “OK” if you want to continue. If you want to remove subtrees associated
with deleted interface elements, check the mark in the dialog.
Important: The algorithm merges the changes in the current CTE document and
detects already known and handled interface elements. This detection is based on
a two phase approach. First, the algorithm searches whether the interface element
is already associated with a CTE node. If this is not the case, it searches whether
a CTE node for the interface element is already at the expected position in the tree.
Based on information provided by the test object interface as well as the interface dictionary
and the configuration in the TIE it is possible to generate a classification tree. (More informa-
tion about the interface dictionary is provided in subsection 6.1.6 Interface dictionary.)
The tree always contains an “Inputs” and an “Outputs” subtree and, when available, nodes for parameters, globals, the return value, and parameters and return values of called functions.
The interface information defined in the TIE is used to generate subtrees based on the differ-
ent types of the interface items such as enums, arrays, scalars, pointers.
Please refer to the overview in figure 6.181; the blue marked boxes contain descriptions of the generated nodes in the respective positions.
Figure 6.181: An overview on the automated tree generation based on the function interface
Initially there is no saved CTE file; therefore a new classification tree will be generated automatically and instantly opened in the CTE.
The editor of the resulting tree is marked with an asterisk to show unsaved content. It is
possible to modify the content to your specific needs and changes can finally be saved.
This CTEX file attribute only exists when a CTE was saved.
Important: By removing the CTEX file from the attributes list the respective clas-
sification tree will be voided, but associated data will remain on the hard disk.
To be able to create a new tree it is necessary to discard a generated and already saved tree.
Therefore you need to remove the test specification:
The automatic tree generation in TESSY is enabled by default. You can disable it in the
Preferences menu.
Type specific tree generation configurations are also provided. Via “Max number of enum
constants” you can define for how many enum constants tree nodes should be generated.
This is useful for enums with many constants.
Important: Make sure that “Show synthetic variables” is ticked so that synthetic variables will be shown and considered in newly generated trees. This checkbox is checked by default.
It often occurs in classification trees that the names of leaf classes clearly indicate the test data to be attached to them. As a test engineer you are then confronted with the error-prone and tedious job of attaching the right test data to such nodes.
The automated conversion of test data applies to CTE classes like the children below the classification “Variable range_start” in the following image (see figure 6.186). The names of the classes can be associated uniquely with test data for the parameter “range_start”.
To associate test data manually you have to first select each class, in this case “*min*”, “-1”,
“0”, “1”, “5” and “*max*”. You need to find the parameter “range_start” in the Test Data view on
the right and add the corresponding value to each class separately.
Figure 6.186: CTE class node with children associated with test data
TESSY enables you to create this test data automatically; it just needs to be informed which interface element in the Test Data view is associated with the corresponding CTE classification in the tree area. This association is done via “Attach to CTE Node”.
If your initial classification tree is generated by TESSY, the interface will be attached to the
corresponding nodes automatically and TESSY will automatically parse the test data of leaf
classes of this classification from the class’ name.
Please keep in mind: This automated conversion can also be deactivated (see below).
If your tree is created manually, you can associate an interface object manually.
The Test Data value will be derived from the class node’s name. All values and expressions
that can be entered within TDE for the respective interface object can be used as class name
as well.
For more information about entering values and expressions refer to section 6.9.7.4
Entering values.
Important: For performance reasons this derivation is only done when classification trees are saved.
Pointers must have a dynamic object set as their target for TESSY to create test data. For other targets TESSY will assume that the test data is derived from other parts of the classification tree.
For array elements the index is determined from the last line of the names of those nodes which are associated with the array element type, i.e. the last line of such a node must match the regular expression .*("["[0-9]+"]")+ , for example:
array_1[3]
array_2[5][17]
Deriving the index from the name has the advantage that changes of the name will be recognized by TESSY.
Important: Be aware that as of now test data will not be automatically removed.
If you do not want TESSY to derive test data from a class node's name, there are two options:
• Deactivate the automated conversion for your whole TESSY project in the menu bar by unchecking "Window" > "Preferences" > "CTE" > "Tree Generation" > "Convert node names to test data".
• Detach the interface from its classification node in the context menu of the respective node in the CTE by clicking the menu entry "Detach Interface from CTE Node" (see figure 6.188).
Whether you use the CTE or create the test cases manually within the TDE perspective: values are always entered in the Test Data view.
Some of the operations and overviews are only possible within the TDE perspective, so refer to chapter 6.9.7 Test Data view to learn how to use the Test Data view.
Instead of assigning test data directly to all variables of the test object interface for each test case, you can assign them using the tree nodes of the classification tree. For each tree node you can assign values to variables. Child nodes inherit the values from their parent nodes, but you can also overwrite inherited variable values for a child tree node.
When combining leaf classes of the classification tree to test cases, the variable assignments
of the marked tree nodes will be assigned to the respective test case. In this way, you can
assign all test data within the classification tree and get your test cases automatically filled
by setting marks within the combination table.
When selecting a test case within the test item list you will see the resulting test data assignments according to the marks of the test case within the Test Data view.
Assigning test data to variants needs particular attention. You need to be aware of the differences between test data on the parent module and test data of the variant, and where exactly they are located.
TESSY offers two ways of handling test data in variants.
In general it is possible to edit test data directly within the CTE nodes. If you do so, you need to make sure that the node's name does not contradict the test data. Contradictions between node names and test data lead to contradictions in the test specifications and the related test data. This, of course, will also affect your test reports.
Best practice for assigning data in variants:
Important: Classes should be named after concepts or ideas like "Highest Value" or "Highest Value +1" instead of e.g. "10". The easiest way is to simply follow the naming of the TDE expressions, which are "*max*" and "*max-1*".
If you do not want to or for some reason cannot use such abstract class names, it is still
possible to proceed as follows:
Þ Assign all test data which should be equal in all variants in the CTE and organize
your test cases in the CTE as usual.
Þ Leave all interface variables unassigned in the CTE which will vary in the variants.
Þ Save your entries and switch to the TDE perspective.
Þ Add the missing values within the TDE.
Þ You can later create or synchronize variants.
Þ In the TDE perspective you can now enter missing or varied values in the newly
created or synchronized variants.
When assigning test data to tree nodes of the classification tree, the same variable can be
assigned within different locations of the tree and each assignment can have another value
for the variable. The resulting value for such a variable (for a given test case) depends on the
classes being marked for a test case.
When calculating the variable assignments for a test case, CTE collects all marked tree
branches where the variable is assigned. A tree path is defined as the list of tree nodes
up to the root starting at the tree node where the variable is assigned. The tree paths are
sorted by position of their leaf nodes: The sort order is from left to right.
The example below (see figure 6.190) shows different assignments of variable “x” within a
classification tree. The resulting value for “x” is indicated for each test case.
The resulting value for the test cases will be calculated as follows:
1. For the first test case the variable is assigned in class “b” which is a longer path than
the assignment within the root, so the value of class “b” will be taken.
2. For the second test case we have values within class “b” and class “e”. The tree paths
diverge below the root node and the classification “O” is on the right side so that the
value of class “e” will be taken.
3. In the third test case there are values within the root node and within classes “b” and
“c”. Both tree paths of the classes are longer than the root path and the classification
“B” is on the right side so that the value of class “c” will be taken.
4. In the fourth test case we have the tree paths of classes “b” and “g” that diverge at the
root. Because classification “O” is on the right side, the value of class “g” will be taken.
5. In this test case all marked classes refer to the value defined within the root node so
that the value of the root node will be taken.
It is possible to define a connection between the selected interface element in Test Data view
and the CTE node currently selected in the CTE.
Þ Select a tree node element in the tree area (see figure 6.191).
Þ Select the corresponding interface element in the Test Data view on the right.
Þ Right-click the respective interface element in the Test Data view to open the context menu.
Þ Click "Attach to CTE Node" in the context menu.
The connection is visualized by a small TIE icon in the bottom left corner of the respective
CTE node:
• To perform an automated conversion from class node name to test data (see
6.8.6.7 Automated conversion from class node name to test data).
• To ensure a unique descriptive name for CTE nodes attached to variables which have a detailed description in the interface dictionary (see 6.1.6 Interface dictionary).
When modeling a classification tree there are situations where the tree contains classes
which must or must not be marked simultaneously. Dependencies express such relations
and help to force particular combinations of marks in test cases and steps.
A dependency is primarily a logical expression. It can only be defined for classes which are not further specialized, i.e. which do not have any child nodes. Dependencies are defined using the Palette view.
Important: Please note that a dependency only implies an action. If the logical expression is false, no action will be triggered.
Creates an OR dependency.
In the classification tree of "Value in Range" the class "range_length -> negative" implies that the position of the value can only be outside of the interval. This means a test case that marks "range_length -> negative" must also mark "position -> outside".
More information about the TESSY example “Value in Range” is provided in section
5.1 Quickstart 1: Unit test exercise is_value_in_range
Therefore a dependency needs to be defined which expresses this relation in the "Value in Range" example:
Figure 6.193: Dependency defined between “range_length -> negative” and “position ->
outside”
With this, a dependency between "range_length -> negative" and "position -> outside" has been defined. Creating a mark for "range_length -> negative" will therefore also create a mark for "position -> outside", and as long as "range_length -> negative" is marked in the same test case, a mark for "position -> outside" will be present.
According to the result of this logical expression, a "mark action" of the class connected with this dependency is automatically triggered. An action can be selected at any class that has a dependency as input.
There are two actions: "activate mark" and "deactivate mark". In the diagram the selected action is represented by a black circle (activate mark) or a white circle (deactivate mark) between the incoming arrow and the class. The Properties view of "position -> outside" will show a textual representation of the dependency behind the dependency icon.
In the picture below (see figure 6.195) "negative" simply triggers the default action "activate mark". In "Value in Range" you can use the deactivate mark action in the following way:
Figure 6.195: Dependency defined between “range_length -> zero” and “position -> inside”
Now a mark for "position -> inside" will be removed if a mark for "range_length -> zero" is created, and it is not possible to create a mark for "position -> inside" as long as "range_length -> zero" is selected.
Important: Please note that a dependency only implies an action. If the logical expression is false, no action will be triggered. Moreover, it is only possible to set marks if no dependencies exclude the mark, directly or indirectly, by activating another mark.
• A class can only be connected with exactly one dependency. However, multiple dependencies can be combined by the user into one composite dependency with the help of operators.
In "Value in Range" such a situation exists: it is reasonable to mark "position -> outside" if either "range_length -> zero" or "range_length -> negative" is selected.
Now both dependencies are modeled in one composite dependency and in the Properties
view of “position -> outside” the formula “negative” || “zero” is shown.
Test Project view (upper left): Same view as within the Overview perspective.
Test Results view (upper left): Same view as within the Overview perspective.
Test Items view (lower left): Same view as within the Overview perspective.
Properties view (lower left): Same view as within the Overview perspective.
Test Data view (upper right): To enter test data and expected values and, after the test execution, to review passed, failed and undefined values.
Test Definition view (lower center): To display the test case specification, the optional description and linked requirements of the current test case.
Call Trace view (lower center): To evaluate expected calls of functions for each test step of a test object.
Declaration view (lower center): Declaration and definition for usercode, e.g. for freely definable and usable variables and functions.
Stub Functions view (lower center): To display the code for all stub functions.
Usercode Outline view (lower right): To display the usercode that will be executed at a certain point in time during the test execution.
Plot Definitions view (lower right): To create and configure plots (same view as within the TDE perspective).
The Test Project view displays your test project which you organized within the Overview perspective (see section 6.2.2 Test Project view).
The Test Results view displays the coverage measurement results and the results of a test run of expected outputs, evaluation macros and call traces, if applicable. It is the same view as within the Overview perspective (see section 6.2.6 Test Results view).
This view lists the detailed results of the evaluation macros if the usercode of the test object contains any evaluation macros. The results are displayed wherever they occur within the usercode, e.g. within stub functions or test step epilogs. It is the same view as within the Overview perspective (see section 6.2.7 Evaluation Macros view).
Within the Test Items view you get an overview of your test cases and test steps which you organized within the Overview perspective or the CTE (see section 6.8 CTE: Designing the test cases and section 6.2.5 Test Items view).
To create test cases and test steps manually without using the Classification Tree Editor,
switch to the Test Items view within the Overview perspective.
The Properties view displays all the properties which you organized within the Overview perspective (see section 6.2.3 Properties view). Most operations are possible.
For changing a source, switch to the Properties view within the Overview perspective.
The view is context sensitive: you can view the passing direction and all type information of a variable (i.e. the basic type, the size as well as any modifiers and pragmas) if you select the variable within the Test Data view (see figure 6.199).
Whether using the CTE or creating the test cases manually within the TDE perspective, you
will use the Test Data view to enter or review the input values and expected results of all test
cases and test steps.
Important: CTE exported values are read-only within the TDE perspective; the cells cannot be edited. Switch to the CTE perspective to change such values (respectively the underlying document) if necessary.
The following table shows the indicators of status and their meaning which are used by the
Test Data view.
• Test step passed: The actual result matched the expected results.
• Test step failed: The actual result did not match the expected results.
• Test step generated: The test step was generated by the test case generator but has no executable data yet.
• Test step generated with data: The test step was generated by the test case generator and executable test steps were generated.
The Test Data view displays the interface of the test object. On the left side of the Test Data
view you see the following interface elements and icons:
Inputs / Outputs: Input values are all interface elements that are read by the test object. Output values are written by the test object and are the expected results, respectively. Within the TIE you determine which values are Inputs and which are Outputs. TESSY tries to find out the default passing directions (input or output) automatically when analyzing the source files.
Globals: Globals are the global variables referenced by your test object. Global variables can contain global static variables and static local variables that are defined within functions.
Function
Table 6.71: Interface elements and icons of the Test Data view
Every variable will be assigned to one of the interface elements described above, e.g. Parameter, Global etc. Initially, the Dynamics section will always be empty. The columns on the right represent the test steps where the values of the variables are defined.
• Move the mouse pointer over the number of the test step to see the name of the test step
within a tool tip (compare figure 6.201).
• Select all values for a variable by clicking on the variable in the left column.
• If you select the icon “Highlight Undefined Values” in the tool bar, all variables that do not
contain any data are marked in yellow (compare figure 6.201).
Figure 6.201: Test step 1.1 is selected and undefined values are highlighted in yellow.
To choose the test steps you want to see in the Test Data view, you first need to select them in the Test Items view (Ctrl + click). Make sure that "Link with the Test Items View" is enabled in the Test Data view tool bar. After that only the selected test steps will be displayed (see figure 6.202).
Values for interface elements are entered into the cells of the Test Data view. The values will be checked and/or converted when leaving the cell or when changing to any neighboring cell.
The TDE provides undo/redo functionality for all changes within the Test Data view!
By default, all imported or manually entered test data values are checked for syntactical
correctness, truncated to the type size and optionally formatted. The truncation of values
depends on which kind of number format was used:
• Decimal numbers will be checked against the minimum and maximum value of the respective data type. When entering -10 for an unsigned type you will see a value of 0 as test data. If the value is less than the minimum, the minimum value will be used; if it is more than the maximum, the maximum value will be used.
• Hexadecimal and binary numbers will be truncated to the number of bytes available for the respective data type, regardless of whether the type is signed or unsigned. When entering 0xF11 for an 8 bit type you will see a value of 0x11 as test data. Likewise, when entering a binary 0b1100001111 you will see a value of 0b00001111 as test data.
• Missing leading zeros will be filled up for hexadecimal and binary values. If you enter 0x12 for a 16 bit value, you will see a value of 0x0012 as test data.
• Expressions will be validated for syntactical correctness and all values used within the expression (defines, enum constants or numerical values) will be validated as well.
After the truncation of the value to the available data type size, it will be formatted according to
the data format configured within TIE. Suppose you have an 8 bit signed value with data format
“Decimal” and you enter a value of 0xF05: The value will firstly be truncated to 0x05 and then
formatted as decimal number so that you will see 5 as test data value.
Important: If you change the data format within TIE, only newly entered test data will be formatted according to the new format. If you want to change the format of the available test data for a certain variable, you need to use the "Convert to Data Format" menu entry within TDE. Make sure the box "Enable Value Checking and Conversion" is checked within the menu "Window" > "Preferences" > "Test Interface Settings".
Important: When running the test with undefined values, the initial value passed
to the test object depends on the default initialization of the compiler.
Clicking into a cell activates the inline editing mode and you can enter arbitrary values:
You can navigate between the cells with CTRL + cursor left/right.
You can apply the available operations of the context menu to multiple cells depending on
the current selection within the Test Data view:
• If you select a single variable of the interface tree, all values of all test steps for
this variable will be affected.
• If you select a test step column, all variables of this test step will be affected.
• If you select an array, a struct or a union, all components of this element will be
affected.
The current selection is highlighted in blue. You need to select a test step column
before right-clicking for the context menu, because the right click will not select the
test step column.
Figure 6.203: Clicking in the cell shows a combo box with the union components
Important: If you want to select enum constants from anonymous enum declarations (e.g. enum {A=42, B=43, C=44}) that are not used within your source file, you need to set the module attribute "Collect All Enums" to "true" in the TEE.
Þ Click in a cell.
A dropdown menu will open showing the available enum constants for this enum
type (see figure 6.204).
Þ Choose any constant or click into the inline editor field to enter any other suitable
value.
Figure 6.204: Clicking in the cell shows a combo box with the available enum constants
Figure 6.205: Pressing Ctrl + Space opens a list of available defines or enum constants
Þ Enter a valid operator and complete your expression within the inline editor of the
cell.
Input values
Input values are all interface elements that need to have a defined value at the beginning
of the execution of a test object. These values are included in the calculation of the output
values during test execution or will be used as a predicate of a condition.
Externally called functions can be defined as advanced stub functions to provide the return value and the expected parameter values within the Test Data view. If a test object calls an external function multiple times, the same return value would be returned for each invocation and the parameters would also be checked against the same parameter values as specified within TDE. In order to provide different values for each invocation of the advanced stub, you can enter multiple values as a vector written within braces, e.g. 1,2 (see figure 6.207). In this example the return value of the stub will be 1 for the first call and 2 for the second call. You can also specify a vector value for the expected parameter values.
Handling of Arrays
Þ Choose "Show All Array Elements", "Show Defined Array Elements" or "Show Array Elements…" to decide which array elements are shown.
By choosing "Show Array Elements…" you can select the shown array elements individually.
Important: It is not necessary to enter test data for all array elements. Entering
test data to only one array element is enough to make the test executable.
Expected values
Expected values are the calculated results regarding the input values for the test object after
test execution. TESSY will compare both expected and actual values after test execution.
The result will be either failed or passed.
Important: The values are compared with the evaluation mode equal (==). To
change the evaluation mode refer to section Entering evaluation modes.
Þ Right-click the variable and choose “Initialize Test Data…” from the context menu.
The “Initialize Values” dialog opens.
Ignore values: All input and expected values will be set to "*none*".
Initialize all array elements: All array elements will be initialized. Otherwise only visible array elements will be initialized.
The following table shows the initialization values for certain data types:
Integer: 0x00000000 (if a pattern is entered, it fills every byte, i.e. if 0x42 is entered as pattern, all int variables will be initialized with 0x42424242).
Float: 0.0
Array: All array elements are initialized according to their type if the option "Initialize all Array Elements" is used.
Pointers: Pointers are initialized with NULL provided that they do not point to dynamic objects.
It is possible to set the passing direction of variables that are not needed for testing any more
to “IRRELEVANT” in the Test Data Editor. Right-click the variables you want to hide and
choose “Set Passing to IRRELEVANT” in the context menu. You can undo this by choosing
“Restore Passing” in the same menu if necessary.
The chosen variable will still be displayed and marked as “[IRRELEVANT]”. When saving the
test data the passing direction of this variable is updated and the variable will no longer appear
within the Test Data view.
Important: Passing directions set to irrelevant that have been saved can only
be restored in the Test Interface Editor (TIE) (see section 6.7.4.6 Setting passing
directions).
Using the evaluation mode allows to specify how to compare the actual value (calculated
during the test run) with your specified expected value. The evaluation mode together with
the expected value will be used to process the test results after the test run.
The default evaluation mode is equal (==). To enter another evaluation mode:
Þ Click in a cell.
Þ Enter the desired evaluation mode within the inline editor (see figure 6.210).
Figure 6.210: Entering evaluation mode “unequal” within the inline editor
equal (==): Checks the expected value and actual value for equality. This is the default setting.
unequal (!=): Checks the expected value and actual value for inequality.
greater (>): Checks if the actual value is greater than the expected value.
less (<): Checks if the actual value is less than the expected value.
greater or equal (>=): Checks if the actual value is greater than or equal to the expected value.
less or equal (<=): Checks if the actual value is less than or equal to the expected value.
range ([1:10]): Checks if the actual value is within a range, here: range 1 to 10.
deviation (100 +/- 1 or 100 +/- 1%): Checks if the actual value equals the expected value but takes into account a deviation value. The deviation can either be an absolute value or a percentage; for the example the following actual values would yield OK: 99, 100, 101.
Important: Please note that in TESSY “Actual -> Expected” generally does not
work for evaluation mode entries.
By default, values have to be assigned for all variables with passing directions “IN” or “INOUT”.
It can be useful to not overwrite a value calculated in the last test step. In this case you can
use the special value “ *none* ”:
Þ Right-click a value and choose “Ignore Value” within the context menu.
You can generate test cases and steps automatically, i.e. test steps of a range of input values
which you enter in the TDE:
Þ Create a generator test case within the Test Items view as described in section
6.2.5.5 Creating test steps automatically.
Þ Switch back to the Test Data view.
Þ Enter your values and a range of an input value, i.e. [6:9] as in our example (see
figure 6.211).
TESSY can generate the test cases stepwise: Enter a semicolon and the step size
behind the range, e.g. [6:15;3] would give you the values 6, 9, 12 and 15.
You can also enter specific values, e.g. [1,5,8] would be the values 1, 5 and 8.
Combinations are as well possible: [2:8;2,11,15,20:22] would be 2, 4, 6, 8, and 11,
15, 20, 21 and 22.
Figure 6.211: Generator test case 4 has a range value from 6 to 9 for parameter v1
You might need to expand or scroll the Test Data view to see all the test
steps!
Figure 6.212: Four test steps are generated for every value within the range “6 to 9”.
The test steps are read only because they were generated!
You can change the type of the test case and test steps to "normal". That way you can edit the test steps as usual. The test case and test steps are changed to type "normal", but a status within the Test Items view will indicate that they were originally generated (see figure 6.214).
Figure 6.214: The test case and test steps were originally generated.
You can reverse the action with a right click and choose “Change Test Case Type to Generator”
from the context menu.
Inherited modules and their test objects need to be synchronized (see Creating variant modules) to get the inherited test cases and test steps with all inherited values. The Test Data view shows inherited and overwritten values with different colors.
• CTE test cases cannot be edited within the inherited test object. Any changes need to be
done within the parent test object.
• Inherited user code (e.g. prolog/epilog) cannot be overwritten with “empty” user code. It is
recommended to add a comment stating why the inherited usercode has been overwritten
instead.
See figure 6.215 for the color coding of values displayed within Test Data view:
6.9.7.10 Pointers
The context menu offers the following possibilities to assign a value for a pointer:
Set Pointer NULL: The value of the selected pointer will be set to NULL. The text box will be filled with NULL.
Set Pointer Target: You can select another interface element or a component of a structure or union and assign its address to the pointer. The cursor will change when you move the mouse pointer over a variable: if the object type fits the pointer's target type, you can assign the pointer; if the object type does not match the pointer's target type, you cannot assign the pointer. When you click on an element, the variable name of that element will be entered into the field next to the pointer. During test execution, the address of the variable will be assigned to the input field of the pointer.
Create pointer target value: Allows you to create a new object as target object for the pointer. The address of the object will be assigned to the pointer. The type of the created object depends on the target type of the pointer. The new target object will be listed in the dynamic objects section of the TDE.
Array as Target Value: It is also possible to create an array as target value using the Dimension option of the Create Pointer Target dialog:
Þ Tick the check box "As Array of Size" and enter an appropriate size into the input field.
Þ Click "OK".
The name of the new object appears in the input field of the pointer value. TDE will create an array of the pointer's target type. The pointer will point to the first array element. Within the Dynamics section, you will see the newly created target object. You can enter values like for every other interface element.
Variables defined as static local variables within the test object or called functions can also
be tested. Since such variables are not accessible from outside the location where they are
defined, TESSY instruments the source code and adds some code after the variable definition
to get a pointer to the memory location of the variable. All static local variables can only be
accessed after the code containing the variable definition has been executed. You need to
keep this in mind when providing input values or checking results for such variables. The
following restrictions apply for static local variables:
• The first time when the code of a static local variable definition is executed, the
variable will get the initialization value assigned from the source code. It is not
possible to set the initial value from TESSY. You need at least one test step to
initialize the variable by executing the definition code. The next test step can then
supply an input value for the variable.
• The same applies for expected values: If the source code of the variable definition
has not been executed, the result value of the respective variable is not accessible
and will be displayed as *unknown* in this case. This situation may arise when
the variable definition is located within a code block which has not been executed,
e.g. within an if statement block.
Figure 6.216: Test Definition view within TDE with linked requirement
The Test Definition view displays the test case specification, the optional description and linked requirements of the current test case in individual input fields. The test case specification should enable the tester to provide concrete input values and expected results.
The Test Definition view is context sensitive! To display the specifications, definitions and
requirements for a test case:
Þ Select a test case within the Test Items view (see figure 6.216).
Important: The contents are not editable if the test cases have been created and
exported using the CTE!
The Call Trace view displays the called functions for each test step of a test object within the
Expected Calls area. All functions that may be called from the test object are listed within the
Available Functions area.
The Call Trace view allows the evaluation of the called functions.
Within the two areas of the Call Trace view it is possible to manage the expected order of
function calls for all test steps individually making use of the tool bar. It is also possible to
use the blue arrows between the areas pointing left, right, up or down. When the editing is
completed the settings need to be saved.
Copies the actual called function into the Expected Calls area.
Disables the filter and shows all available functions (by default, functions
called by stubbed functions are filtered out).
Within the Declarations/Definitions view you can define your own helper variables that may
then be used within the user code. If you just want to declare a variable that is already
available within linked object files, you do this within the declarations section. If you want
to create a new variable, you need to enter the declaration into the declarations section and
the respective definition into the definitions section. The variable can then be used within the
prolog/epilog and stub function code.
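As an illustration (with hypothetical variable names), the declarations section might declare a small log buffer and the definitions section allocate it, so that prolog/epilog or stub code can record values across calls:

```c
#include <assert.h>

/* Declarations section (hypothetical helper names): makes the
   variables known to the usercode. */
extern unsigned int value_log[10];
extern unsigned int log_index;

/* Definitions section: allocates storage for the new variables. */
unsigned int value_log[10];
unsigned int log_index = 0;

/* The helpers can then be used within prolog/epilog or stub function
   code, e.g. to record values across stub invocations: */
void log_value(unsigned int v)
{
    if (log_index < 10U) {
        value_log[log_index++] = v;
    }
}
```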
Important: TESSY provides the means to add new variables within the TIE per-
spective (see section 6.7 TIE: Preparing the test interface). Such variables can be
used like normal interface variables of the test object, which is much more convenient
than defining them here in the Declarations/Definitions view.
Within the Prolog/Epilog view you can specify usercode that will be executed at a certain point
in time during the test execution. The C part of the usercode will be integrated into the test
driver and executed at the places specified. The following figure outlines the call sequence
of the usercode parts.
The figure shows the interaction of the usercode sections with the assignment of test data
provided within TDE and the result values that are saved into the test database and evaluated
against the expected results.
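The sequence can be sketched in plain C (a conceptual model only; the real driver is generated by TESSY and all names here are made up):

```c
#include <assert.h>

/* Conceptual sketch of the generated driver sequence for one test
   step (all names are made up; the real driver is generated by TESSY). */
static int input;          /* stands in for an IN interface variable  */
static int result;         /* stands in for an OUT interface variable */
static int trace[4];
static int trace_len;

static void prolog(void)       { trace[trace_len++] = 1; input = 3; }   /* usercode */
static int  test_object(int x) { trace[trace_len++] = 2; return x + 1; }
static void save_results(void) { trace[trace_len++] = 3; }  /* to test database */
static void epilog(void)       { trace[trace_len++] = 4; }  /* usercode */

void run_test_step(void)
{
    prolog();                        /* test step prolog                 */
    result = test_object(input);     /* input assignment and object call */
    save_results();                  /* result values saved for checking */
    epilog();                        /* test step epilog                 */
}
```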
During the test object call, the code specified for the stub functions (if any functions are called
from your test object) will be executed depending on the code logic of your test object.
Within the prolog/epilog code you can reference the global variables used by your test ob-
ject that have one of the passing directions IN, OUT, INOUT or EXTERN. The following special
macros are available within the prolog:
Example
Have a look at figure 6.219 Prolog/Epilog view in the beginning of this section. The test
step 1.1 prolog contains the code TS_REPEAT_COUNT=2, and the Repeat Count for this
prolog/epilog section was set to 5.
The whole prolog/test object call/epilog sequence will be repeated five times and the test object
will be called twice in every repetition of this loop. Since there are 5 loops, the test object will
be called 10 times in total.
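The call arithmetic of this example can be checked with a small sketch (plain C, modeling the nested loops of the generated driver; TS_REPEAT_COUNT and the Repeat Count setting appear here as ordinary parameters):

```c
#include <assert.h>

/* Model of the repeat logic: the Repeat Count setting repeats the
   whole prolog/test object call/epilog sequence, and TS_REPEAT_COUNT
   (set in the prolog) repeats the test object call within each
   sequence. Both appear here as plain parameters. */
int total_calls(int repeat_count, int ts_repeat_count)
{
    int calls = 0;
    for (int i = 0; i < repeat_count; i++) {         /* prolog .. epilog */
        for (int j = 0; j < ts_repeat_count; j++) {
            calls++;                                 /* test object call */
        }
    }
    return calls;
}
```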
In some cases it is useful to specify a common prolog/epilog for all test steps or for the test
steps of a certain test case. For this reason, you can enter prolog/epilog on test object or
on test case level. Such a default prolog/epilog will be inherited by the respective child test
steps. In this way you avoid copying the same prolog/epilog multiple times to each test step.
Default prolog/epilog can be overwritten on test case/step level for individual test cases/steps
if desired.
Figure 6.221: TESSY provides default prolog/epilog on test object level to be inherited by
test cases and test steps
For both prolog and epilog, there are up to three tabs available, depending on the selected
object:
Test objects:
Test cases:
Test steps:
Figure 6.222: TESSY allows Prolog/Epilog to be inherited from test case or test object
Important: To edit the default prolog/epilog, select the corresponding test object
or test case and edit the code via the “Default for Test Cases” and “Default for Test
Steps” tabs.
The Prolog/Epilog view provides a popup menu containing variables for convenient editing.
Þ Use the Usercode Outline view to select the test case or test step for which you
want to set the usercode.
Þ Click into the Prolog or Epilog section of the Prolog/Epilog view and enter the
usercode.
Þ Press CTRL + Space or type the first letters and press CTRL + Space.
The popup menu appears (see figure 6.223), showing all available names, or the
list filtered according to the characters you have already typed.
When generating the test driver, the prolog/epilog code is appended to the source file of the
current test object. Therefore, only variables that are declared or defined within the source
file of the current test object may be used within prolog/epilog or stub function code.
Þ Use the Usercode Outline view to navigate and select a test case or test step from
the tree.
Þ Enter the code within the Prolog/Epilog view.
A new node will automatically appear at the corresponding place in the outline
tree (see figure 6.224).
Within the test step epilog or within stub functions, you can evaluate any variable or expres-
sion using the evaluation macros. These predefined macros allow checking an expression
against an expected value. The result is stored within the test report like the evaluation of
normal output variables of the test object.
Evaluation macros can only be used within the following Usercode sections:
A popup menu contains all available interface variables and symbolic constants for conve-
nient editing, as well as the available evaluation macros, e.g. TESSY_EVAL_U8 for unsigned
character values:
Þ Select the evaluation macro for the specific data type of the variable which shall
be evaluated. The only difference between the evaluation macros is the type of the
arguments for the actual and expected value; see the table below for a description
of the available types.
Example: Below is an example showing the template in the second row and the edited
evaluation macro underneath.
Both value arguments given to the evaluation macro may be of any value that fits
the specified eval macro type. By convention, the first (left side) value should be the
actual value that shall be checked and the second (right side) value should be the
expected result. This way you will get the same order of values within the test report
as for normal output values.
TESSY_EVAL_FLOAT float
TESSY_EVAL_DOUBLE double

Operator Meaning
== equal
!= unequal
< less
> greater
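Conceptually, an evaluation macro compares an actual value against an expected value and records the verdict for the report. The exact macro signature is given by the template in the TDE; the following plain-C model (with made-up names) only illustrates the principle for the == mode:

```c
#include <assert.h>

typedef unsigned char u8;

static int eval_passed;    /* verdict of the most recent evaluation */

/* Illustrative model of an evaluation macro for unsigned character
   values (made-up name and signature): the first argument is the
   actual value, the second the expected result, compared with the
   == mode. */
static void eval_u8(u8 actual, u8 expected)
{
    eval_passed = (actual == expected);
    /* the real macro would additionally add an actual/expected entry
       to the current test step in the report */
}
```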
Each invocation of an evaluation macro results in an additional entry within the test report. All
evaluation macros will be added to the list of actual/expected values of the current test step.
The results will be displayed within the Usercode Outline view and the Evaluation Macros
view.
It is possible to format the output of the evaluation macros as binary value, decimal or hex-
adecimal (default setting) by appending one of the following format specifiers at the end of
the evaluation macro name:
The report shown below contains all possible evaluation macro name formats. The format
specifier itself will be omitted within the final evaluation macro name.
The Stub Functions view displays the code for all stub functions. Normally all stub code is
defined on test object level.
In the Stub Functions view you can insert stub code for test steps, test cases, and test objects.
The code fragments will be combined into a single stub function implementation and will be
called in the order as shown in figure 6.229. If you don’t want to execute the parent fragments
for specific test cases or test steps, you need to add a return statement within the respective
stub code fragment.
Important: Stub code must be provided for all non-void stub functions in order
to return a valid value as result of the stub function call. If there are stub functions
without stub code, the test execution will be aborted with an error. If the return value
of a stub function is not used by the test object, you should add at least a comment
here.
Please note the error icon at the stub function name in figure 6.228 indicating that stub code is
missing. You can switch off this check by unchecking the respective test execution preference.
This preference setting will be stored within the preferences backup file as described within
Window > Preferences menu.
Within the stub function code you can reference the parameters passed to the stub function
and also global variables used by your test object. The following special macros are available
within the stub body:
• TS_CALL_COUNT contains the number of calls to this stub function for the cur-
rent test step. This count will restart from one for each test step.
The size of the call count is limited to 8 bit (maximum call count of 255).
For more information about defining the call count size please refer
to the application note “Environment Settings (TEE)” in TESSY (“Help”
> “Documentation”).
Figure 6.231: Stub Functions view with code using TS_CALL_COUNT macro
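A typical use of the call counter is to let a stub return different values on successive calls within one test step. The sketch below models TS_CALL_COUNT with an ordinary static variable (the stub function name is hypothetical):

```c
#include <assert.h>

/* Sketch of stub code that reacts to the call count. TS_CALL_COUNT
   is modeled by an ordinary static variable here; in TESSY the
   counter is maintained by the test driver and restarts from one for
   each test step. The stub function name is hypothetical. */
static int ts_call_count;

int read_sensor_stub(void)
{
    ts_call_count++;
    if (ts_call_count == 1) {
        return 100;    /* first call: deliver a valid reading */
    }
    return -1;         /* subsequent calls: deliver an error value */
}
```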
It is recommended to define test case/step specific stub code instead of using the
macros TS_CURRENT_TESTCASE/TS_CURRENT_TESTSTEP.
Example for the use of test object, test case and test step specific stub code:
If only stub code of e.g. the test step should be executed, you need to set a return at the end
of the inserted code on test step level. If only stub code of the test step and the test case
should be executed, you need to set a return at the end of the inserted code on test case
level. Stub code on test object level will then be skipped.
It is recommended to read the sections Usercode Outline view and Prolog/Epilog view
to fully understand the handling of stub code.
TESSY will automatically generate the code to execute a test including all the stub code you
inserted in the Stub Functions view. Below you can see brief examples of inserted stub code
on test object level, test case level and test step level on the left along with the automatically
generated code resulting from that on the right.
Figure 6.233: Stub code examples on test object, test case and test step level
In the automatically generated test code (see figure 6.234) you can recognize the test exe-
cution direction as shown in figure 6.229.
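The combination of the fragments and the effect of an early return can be sketched as follows (a simplified model, not the code TESSY actually generates; the early return is made selectable via a parameter for illustration):

```c
#include <assert.h>

static int executed[3];   /* records which fragments ran: step, case, object */

/* Simplified model of a generated non-void stub combining the three
   code fragments. The ordering (test step fragment first, then test
   case, then test object fragment) follows figure 6.229; a return at
   the end of the test step fragment skips the parent fragments. */
int stub_helper(int step_returns_early)   /* hypothetical stub function */
{
    executed[0] = 1;              /* test step fragment */
    if (step_returns_early) {
        return 1;                 /* skip test case and test object code */
    }
    executed[1] = 1;              /* test case fragment */
    executed[2] = 1;              /* test object fragment */
    return 0;
}
```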
For more information about the Usercode Outline view and navigating within the test items
see section Usercode Outline view.
The Usercode Outline view displays the usercode and stub function code that will be executed
at a certain point in time during the test execution and that you just defined in the Prolog/Epilog
view or Stub Functions view. Use this view to navigate within the test items when editing
prolog/epilog or stub function code.
The view shows entries for each location where usercode is defined. Click on a test case or
test step to see the inherited stub function code for the selected test item.
The Stub Functions view shows the stub code to be executed for the test step 1.1 that is
currently selected within the Usercode Outline view. Please note the hint within the text field
title indicating that the stub code is inherited from the test object level.
Now there is an inserted stub function code entry selected within the Usercode Outline view.
The entry indicates that the stub code is inserted for test step 2.1 which is also indicated
within the text field title.
The Plot view displays the included test items and chart(s) for a plot selected in the Plot
Definitions view. The number of different charts per plot depends on the plot mode:
• test case plot: one chart for all included test items
• test step plot: one chart per included test case
• array plot: one chart per included test step
For test step and array plots the chart can be selected by navigating the test item tree.
Important: If you use other evaluation modes than equal (e.g. <, <=, >, >=,
!=, [Range]), it is not possible to display the expected values within the plot chart.
Displaying the expected values is only possible when using the evaluation mode ==
(equal). See section 6.9.7.6 Entering evaluation modes.
The test item tree on the left-hand side of the Plot view shows all test items included in the
selected plot, as defined in the Plot Definitions view via the “Set Included Test Items” com-
mand.
This tree is for navigating the different charts of a plot, if there is more than one chart avail-
able.
6.9.14.2 Chart
The chart displays the values of the variables included in the selected plot. The values are
color-coded:
For expected values, dotted blue lines represent the upper and lower bound of expected
values such as 10 ± 5.
Only variables that have “Use in Report” checked in the Plot Definitions view are shown in the
chart. Selecting a variable in the Plot Definitions view will highlight the corresponding value
series in the Plot view.
The Plot Definitions view allows creating and configuring plots from within the TIE and TDE
perspective. For details refer to section 6.7.5 Plot Definitions view within chapter TIE: Prepar-
ing the test interface.
The Script Editor perspective provides textual editors supporting a test scripting language
for editing test cases, test data and usercode. The contents of the script editors show all
information of the internal TESSY data model for test objects, test cases and test steps.
Changes within the test scripts can be saved to the internal data model and vice versa. It is
also possible to merge concurrent changes made within the internal data model and within
the Script Editor.
When working with the Script Editor perspective, a new internally managed script file will be
created for each test object. All test data and usercode can be edited and saved within the
editor. It is necessary to commit script changes to the internal data model when tests shall be
executed. Until this point in time, the editor contents can just be saved to the underlying file.
After committing the changes, the script editor contents and the internal model are in sync.
Test Project view (upper left): Same view as within the Overview perspective.
Test Items view (lower left): Same view as within the Overview perspective.
Script Editor area (upper middle): Displays the internally managed script file and allows
editing and saving it.
Outline view (right): Displays the script structure and allows locating script items
when linked with the Script Editor.
Commits changes (Ctrl + Enter): Saves the current editor contents to
the internal TESSY data model. The script contents must
be valid in order to do this synchronization.
Merges changes (Ctrl + Alt + Enter): Opens a merge dialog showing the
current script and data model contents.
If changes have been made both within the script and the
internal model, it is necessary to merge both contents.
Replaces the editor contents with the contents of the se-
lected file (Alt + F5).
Saves the editor contents into the selected (new) file (Ctrl + Alt + S).
Within the Script Editor perspective, any selection of a test object will open or reveal the
respective Script Editor. The Script Editor provides syntax highlighting and an Outline view.
Figure 6.240: Element in the Outline view with related part in the Script Editor
The Script Editor also provides auto completion, formatting, validation as well as templates
for test cases or other parts of the script using the CTRL+SPACE shortcut. This will show a
menu containing context sensitive auto completions.
Test cases and test steps can be added by duplicating existing test items within the editor
and adjusting the test item numbers. Duplicate UUIDs can be removed; new ones will be
created when committing ( ) the changes to the model.
Alternatively, test items can be added or deleted using the Test Items view.
The script will then be out of date or in conflict and will need to be merged. This can easily
be done using the merge dialog (see section 6.10.7 Merging script contents) if there are no
real conflicting changes at the same locations.
After changes have been made in the Test Data Editor (TDE) and saved ( ) the Script Editor
will also be out of date and needs to be merged (see above).
Changes to the script and the internal model are always tracked. If there are any devia-
tions, the script editor window title will contain a status indicator prefix indicating possible
modification states (see figure 6.243).
The following script states are possible (a tooltip will be displayed when hovering over the
editor title):
> MODIFIED / The script has been changed but the model is unchanged.
< OUT OF DATE / The script is unchanged but the model has been
changed.
! CONFLICT / Both the script and the internal model have been changed
concurrently.
Any conflicts can be resolved when committing the changes or by using the merge operation
as displayed and described in section 6.10.7 Merging script contents.
The Outline view in the Script Editor perspective displays the script structure and allows
locating script items when linked with the Script Editor.
Table 6.83: Tool bar icons of the Outline view in the Script Editor perspective
The editor contents can be saved into the underlying script file while editing or when closing
TESSY. In order to execute a test, the editor contents need to be synchronized with the
internal TESSY data model first.
The synchronizing can be done by using the buttons in the global TESSY tool bar described
in section 6.10.2 Script Editor related Icons of the main tool bar.
When committing script changes in conflict state, the merge dialog will automatically
appear.
If conflicting changes have been made within the internal model and the script file, the Merge
Changes button can be used to merge both contents. This will show a merge dialog with
both the model contents and the current script contents.
This merge dialog shows that the test data of the third test case has been changed within the
script. It is easy to resolve the conflict using the buttons of the Compare view tool bar.
A tooltip will be displayed when hovering over the icons in the tool bar.
After closing the dialog, the Script Editor will contain the results of the merge operation (i.e.
the contents of the “User Script” text pane of the merge dialog).
The script needs to be committed ( ) in order to save the contents to the internal model.
If the merge dialog was opened automatically during a commit operation, the content
of the “User Script” pane will be used as content being saved to the internal model.
Script content can be edited outside of TESSY using the save button and the replace button
in the global TESSY tool bar (see section 6.10.2 Script Editor related Icons of the main tool
bar).
Use to save the editor contents into the selected (new) file and to replace the editor
contents with the contents of the selected file.
All test data and usercode can be edited within the TESSY Script Editor and script content
can be imported from outside TESSY. More information about the handling of the scripting
language can be found throughout this chapter 6.10 Script Editor: Textual editing of test
cases, particularly in subsection 6.10.3 Editing test objects, test cases and test steps.
The following figures work as an example of the TESSY scripting language in the Script
Editor. One by one the figures display an exemplary TESSY test object with test cases and
test steps.
Some of the basic elements that could occur are “specification”, “description” and “comment”
for the test object in general. It is also possible to define specific “declarations” and “defini-
tions” for the test object as well as a “prolog” and an “epilog”.
Prologs also exist on test case and test step level (as default): “default_testcase_prolog” or
“default_teststep_prolog”.
More elements that can be defined on test case and test step level are: “Inputs”, “outputs”,
“calltraces”, “stubfunctions”, “faultinjections” etc.
The test case and the test step script contain quite similar elements, and each additionally
has a number and a name.
The script ends with the “epilog” on test object level. The “epilog” element can also appear
on test case or test step level.
The Coverage Viewer (CV) displays the results of the coverage measurement of a previously
executed test, which is either
• an overall instrumentation for your whole project, which you selected in the pref-
erences menu of TESSY (see section 6.1.2.1 Window > Preferences menu), or
• an instrumentation which you selected within the Properties view for your module
or test object under test (see section 6.2.3.4 Coverage tab), or
• an instrumentation which you selected for your test run (see section 6.2.2.11 Ex-
ecuting tests).
The available information displayed and the sub windows shown within the CV depend on
the coverage options selected during the test run. The CV will be updated with the coverage
information of the currently selected test object whenever you switch to the CV or when you
select another test object within the Test Project view.
For more information about static analysis and quality metrics (Cyclomatic Complexity
(CC), Test Case To Complexity Ratio (TC/C) and Result Significance (RS)) please
refer to subsection 6.2.2.7 Static code analysis and quality metrics.
Test Project view (upper left): Same view as within the Overview perspective.
Called Functions view (middle left): Contains the test object itself and the functions called
from the test object.
Flow Chart view (upper middle): Displays the graphical representation of the control
structure of the currently selected function.
Coverage views (upper/middle right): Display the results for the selected coverage instru-
mentation.
Fault Injections view (lower left): Displays the fault injections. (For more information see
chapter 6.14 Fault injection.)
Call Pair Coverage view (lower left): Displays the call pair measurements (CPC).
Code view (lower right): Displays the source code of the currently selected func-
tion (and highlights selected decisions/branches).
Report views (lower right): Displays the ASCII based coverage summary reports
for the selected instrumentation.
• C0 (Statement Coverage)
• C1 (Branch Coverage)
• DC (Decision Coverage)
• MC/DC (Modified Condition / Decision Coverage)
• MCC (Multiple Condition Coverage)
• EPC (Entry Point Coverage) - only for unit tests
• FC (Function Coverage) - only for component tests
• CPC (Call Pair Coverage)
For more information about coverage measurements and usage of coverage analysis
refer to the application note “Coverage Measurement” in TESSY (“Help” > “Documen-
tation”).
There are no views for the Entry Point Coverage (EPC) and the Function Coverage
(FC)! The results are displayed only within the Test Overview Report (see section
6.2.2.19 Creating reports) or the Test Project view (see figure 6.255).
Figure 6.255: Results of the EPC are displayed within the Test Project view
The following figure 6.256 displays a component test with a Function Coverage instrumenta-
tion result (amongst others).
Please notice:
• If you move the mouse over the result within the Test Project view, the percentage of the
coverage for the respective item will be displayed.
• The Called Functions view displays the coverage result for every function.
The Test Project view displays your test project which you organized within the Overview
perspective (see section 6.2.2 Test Project view).
After a test run you will see columns being added to the Test Project view for each applied cov-
erage measurement. The coverage icons provide an overview of the reached coverage
for each test object as well as cumulated for modules, folders and test collections.
The Called Functions view contains the test object itself and all called functions of the test
object. It displays the achieved coverage of the current test run. By clicking on a function,
you can review the source code within the Code view and see the code structure within the
Flow Chart view.
Þ Click on (Toggle Code Coverage Highlighting) in the tool bar of the Code view.
The statements, branches or conditions of the source code will be marked within
the Code view according to the selected coverage measurement. If the respective
code location has been covered successfully, i.e. 100% coverage has been reached
for this code part, it will be marked in green. Otherwise the code location will be
marked in red, indicating that it has not been fully covered.
Þ Within the Called Functions view move the mouse over the function.
All the coverages will be displayed (see figure 6.258).
The Flow Chart view displays the code structure and the respective coverage in graphical
form.
You might want to learn the functions of the Flow Chart view with an easy example:
Consult section 5.1.10 Analyzing the coverage of the Tutorial: Practical exercises.
Zooms out.
Zooms in.
More information about fault injections is provided in chapter 6.14 Fault injection.
Zoom in or out using the tool bar icons or the entries from the chart menu.
Within each flow chart, you will see the branches and decisions of the function displayed
in green and red colors, which indicates whether the respective decision has been fully
covered or the respective branch has been reached:
Figure 6.260: Source code view on the bottom right of the Coverage View perspective
The following elements are displayed within the flow chart of the CV:
Element Meaning
If-else decision
Switch statement
Element Meaning
Do while loop
Sub flows
Condition flow
The detailed structure of a sub flow will be
shown within a separate flow chart. Such a
Condition view opens after double-clicking the
respective element (see figure 6.261).
Fault injection
Element Meaning
Plus symbol
Black triangle
Sub flow elements (e.g. the “?” ternary operator) are displayed with a special symbol which
can appear in different colors according to the general coloring rules within the Flow Chart
view, green for 100% coverage and red for less than 100% coverage.
Keep in mind that this flow chart symbol may appear in one color or in two different colors,
depending on the coverage of the sub flow: the inner and the outer part of the symbol are
colored individually and have to be interpreted according to the special symbolism of sub
flows.
Element Meaning
Showing that the decision of the condition (inner part of the sym-
bol) as well as the coverage of the sub flow (outer part of the
symbol) is fully covered (100%).
Element Meaning
Showing that neither the decision (inner part of the symbol) nor
the sub flow (outer part of the symbol) are fully covered (less than
100%).
Double-clicking or right-clicking a sub flow element opens the Condition view to show the
coverage details of the respective element (see figure 6.261). Selecting test steps in the
Branch (C1) Coverage view displays the coverage of the selected test steps in the Condition
view.
Figure 6.261: Condition view showing the sub flow coverage for one test case
You can select decisions, branches and code statement elements within the flow chart. The
respective code section will then be highlighted within the source code view. Since not all
connection lines within the flow chart are branches in terms of the C1 branch definition, some
of the connection lines may not be selectable.
If a condition or a code element contains sub flows (e.g. the “?” ternary operator or statements
containing boolean expressions) you can visualize the sub flow with a double click on the
respective element. CV will open a new flow chart showing the sub flow.
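A typical source of a sub flow is a ternary operator inside a statement, for example:

```c
#include <assert.h>

/* The ternary operator inside the assignment forms a decision of its
   own; in the flow chart it appears as a sub flow element that can be
   opened in a separate Condition view. */
int clamp_abs(int x)
{
    int magnitude = (x < 0) ? -x : x;   /* sub flow: ternary decision */
    if (magnitude > 100) {              /* outer decision */
        magnitude = 100;
    }
    return magnitude;
}
```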
You may also want to select elements to create fault injections (see chapter 6.14 Fault injec-
tion).
The CV provides search functionality for decisions and branches that are not fully covered
or not reached by the executed test cases. The decisions and branches are
already marked in red, but the search function can assist in finding all uncovered decisions
or unreached branches.
The fault injection feature provides means to test code parts that are not testable using normal
testing inputs, e.g. endless loops, read-after-write functionality or error cases in defensive
programming. Dedicated testing code can be injected at selected branch locations of the test
object so that decision outcomes can be manipulated.
Fault injections are edited within the Coverage Viewer (CV) based on the flow chart of the
test object. They are displayed as blue circles at the respective branch.
Fault injections that cannot be mapped to the current source code control flow are marked
with an error symbol within the Fault Injections view on the lower pane on the left.
For more information about the handling of fault injections please refer to chapter
6.14 Fault injection.
The Statement (C0) Coverage view displays the statement coverage for each individual test
case and test step as well as the total coverage for all test cases (and test steps). The
coverage information in this view is calculated for the selected function within the Called
Functions view.
If you only selected the C0 coverage instrumentation for test execution, you will see the code
branches marked in red and green within the flow chart; “else” branches, that do not exist
within the code, will be displayed in the Flow Chart view in gray.
Also the loop branches of while, for and do statements that are irrelevant for C0 coverage will
be displayed in gray.
The flow chart shows code branches rather than individual statements; blocks of
statements are shown as one block instead of individual items for each statement.
If you select individual test cases or test steps within the test case list, the respective state-
ments covered by those test steps will be marked within the flow chart, i.e. the code branch
containing these statements will be marked. This allows finding out the execution path of the
selected test step. By selecting multiple test steps, review the resulting cumulated statement
coverage within the flow chart. The total coverage number will also be updated with the C0
statement coverage for the selected test cases/test steps.
The coverage percentage is the ratio of the number of reached statements to the total
number of statements of the currently selected function. This coverage
calculation includes the currently selected test cases and test steps within the test case/test
step list (see figure 6.263). By default, all test cases are selected when opening the CV.
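A small example illustrates the difference between statement (C0) and branch (C1) coverage caused by such implicit else branches:

```c
#include <assert.h>

/* A single test with value > 100 executes every statement (100% C0),
   but the implicit empty else branch of the decision is never taken,
   so branch coverage (C1) stays below 100%. A second test with
   value <= 100 is needed for full C1. */
int saturate(int value)
{
    if (value > 100) {
        value = 100;    /* only statement inside the decision */
    }
    return value;
}
```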
The Branch (C1) Coverage view displays the branch coverage for each individual test case
and test step as well as the total coverage for all test cases (and test steps). The coverage
information in this view is calculated for the selected function within the Called Functions
view.
If you only selected the C1 coverage instrumentation for test execution, you will see only the
C1 branches marked in red and green within the flow chart.
If you select individual test cases or test steps within the test case list, the respective branches
covered by those test steps will be marked within the flow chart. This allows finding out the
execution path of the selected test step. By selecting multiple test steps, you can review the
resulting cumulated branch coverage within the flow chart. The total coverage number will also be
updated with the C1 branch coverage for the selected test cases/test steps.
To understand the Decision Coverage view please refer to the description of the MC/DC
Coverage view below. The only difference is the calculation according to the definition of the
decision coverage.
The MC/DC Coverage view displays the coverage of the currently selected decision within
the Flow Chart view (see figure 6.265). If no decision is selected (as initially when starting
the CV), the MC/DC Coverage view is empty.
When selecting a decision, the respective combination table according to the MC/DC cover-
age definition will be displayed within the MC/DC-Coverage view (see figure 6.266).
The combination table contains all atomic conditions of the decision. The conditions are the
basic atoms of the decision which remain after removing the or, and and not operators from
the decision. TESSY calculates the MC/DC set of true/false combinations of the condition
atoms that fits best to the test steps executed during the test run.
The last table column contains the test step that caused the execution of the decision with
the true/false combination of the respective table row. If one or more of the condition com-
binations were not reached during the test run, the test step column of those rows will be
marked in red.
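A worked example for a decision with two atomic conditions (illustrative, not taken from TESSY):

```c
#include <assert.h>

/* Decision with the two atomic conditions a and b. A minimal MC/DC
   set for (a && b) is:
     a=1, b=1 -> true    (baseline)
     a=0, b=1 -> false   (a independently changes the outcome)
     a=1, b=0 -> false   (b independently changes the outcome)
   so three test steps suffice for 100% MC/DC of this decision. */
int both_enabled(int a, int b)
{
    if (a && b) {
        return 1;
    }
    return 0;
}
```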
Þ Select a decision by clicking on the respective control flow element within the Flow
Chart view.
The code fragment will be marked within the source code view (see figure 6.266).
The decisions are either green or red depending on the degree of coverage. If no coverage
information is available, i.e. when you ran the test without any of DC, MC/DC or MCC instru-
mentation selected, the decisions within the flow chart will appear in gray and the Coverage
view will not be available (N/A).
To understand the MCC Coverage view please refer to the description in section 6.11.10
MC/DC Coverage view. The only difference is the calculation according to the definition of
the MCC coverage.
The call pair coverage measurement (CPC) supports measuring whether all call locations of
functions or methods within the test object have been exercised at least once. It fulfills the
requirements of ISO 26262 as an alternate coverage method for integration testing instead
of applying the function coverage (FC) method.
There are up to five coverage reports available depending on the instrumentation mode se-
lected for test execution. They contain the summarized coverage information of the last test
execution:
• The statement (C0) coverage report contains some meta information (e.g. number
of statements, reached statements, total statement coverage) and the source code
of the test object.
• The branch (C1) coverage report contains some meta information (e.g. number of
branches, reached branches, total branch coverage) and the source code of the
test object.
• The decision coverage (DC) report lists all decisions of the test object code in-
cluding the coverage tables with the respective decision condition combinations.
• The modified condition/decision (MC/DC) coverage report lists all decisions of the
test object code including the coverage tables with the respective MC/DC condi-
tion combinations.
• The multiple condition (MCC) coverage report also lists all decisions of the test
object code including the coverage tables with the respective MCC condition com-
binations.
For coherent testing it is essential to recognize changes to the interface of test objects and to
re-execute previously passed tests to ensure that changes to the source do not cause those
tests to fail. This is commonly summed up with the keyword “regression testing”.
If the interface of a test object changes, TESSY will indicate the changes with specific status
indicators at the test object icon. With the Interface Data Assigner (IDA) you can assign the
elements of a changed (new) interface to the respective elements of the old one and start
a reuse. The reuse operation will copy all existing test data to the newly assigned interface
elements.
To appropriately react to changes, TESSY needs to know the current structure of the
interface. Therefore it determines for each module the available functions and their
interfaces by analyzing the source code. This information is stored in the interface
database so that TESSY knows about any changes and can keep track of test data
assignments made for a whole module or just for individual test objects.
• Test Project view (upper left): Displays your test project. For editing your test project
switch to the Overview perspective.
• Properties view (lower left): Displays the properties of your test project, e.g. sources
of the test object.
• Compare view (right): Displays two interfaces, either of the same test object
(old and new interface) or of different test objects. You
can assign the changes by drag & drop.
The following test object status indicators are relevant when reusing test data.
The test object is newly available since the last interface analysis.
You have to add test cases and test steps and enter data for a test.
The Test Project view displays your test project which you organized within the Overview
perspective.
The Properties view displays all the properties which you organized within the Overview
perspective. Most operations are possible.
For changing a source, switch to the Properties view within the Overview perspective.
The Compare view shows two versions of an interface depending on the TESSY objects
selected for comparison:
• For a single module or test object, it shows the old interface on the left side and
the new interface on the right side.
• When assigning two different modules or test objects, it shows the interface of the
source object on the left side and the interface of the target object on the right
side.
The Compare view will be used for reuse operations of whole modules or individual test
objects as well as when assigning test data from one test object (or module) to another test
object of the same or different module.
Within the Compare view you can see the old interface of your test object and the new one.
The red exclamation mark within the new interface indicates the need to assign this interface
object before starting the reuse.
The title of the view shows the old name versus the newly assigned name.
To assign changes:
Þ Use the context menu or just drag and drop from the left side (see figure 6.270).
The red exclamation mark turns to a green check .
You can assign single functions and commit the assignments for just that function
(the other functions will stay in the state “changed” and can be reused later). Or you
can assign and reuse whole modules (which is convenient when there are only minor
changes within the function interfaces).
To commit assignments:
The data of all test cases and test steps will be copied from the old interface to the current
test object interface. The test object changes to yellow to indicate that all test cases are
ready to be executed again. If there are missing test data within the new interface (e.g. due
to additional variables being used by the test object), the icon will show an intermediate test
object state . In this case you need to add any missing test data within the Test Data Editor.
• Unchanged test objects have been automatically reused when opening a module, i.e.
they will be ready to use without further activities required.
• Removed test objects will only be displayed as “removed” if they contained any test cases
and test steps.
You can use the IDA to assign test cases from one test object to another test object within
the current project. Both test objects can be either from the same or from different modules.
It is also possible to assign the contents of whole modules to other modules.
Important: When assigning test cases to another test object, the target test object
contents will be overwritten completely!
To commit assignments:
The data of all test cases and test steps will be copied from the source test object to the target
test object. The target test object changes to yellow if every variable of the interface could
be assigned from the source test object. Otherwise it will display an intermediate test object
state indicating that only parts of the test data are available.
The component test feature is only used for integration testing. You do not need this
feature for unit testing.
The component test feature within TESSY supports testing of several functions (represent-
ing the software component) that interact with each other as well as with underlying called
functions (of other components). The main difference to unit testing of individual functions is
that testing focuses on the external interface of the component instead of internal variables or
control flow. You should be familiar with the overall usage of TESSY for normal unit testing.
Some features like usercode and stub functions are still available for component testing,
but the respective editors are found at different locations.
The component test feature allows creating calling scenarios of functions provided by a soft-
ware component. Within these scenarios, the internal values of component variables and any
calls to underlying software functions can be checked. TESSY provides the Scenario Editor
(SCE) for this purpose. All scenario-related inputs are available through the SCE. Instead
of having individual test objects and test cases for the component functions, the component
test itself provides a special node called “scenarios” seen as one test object. The test cases
belonging to the scenarios node are the different scenarios for the component.
Within one scenario, you can set global input variables, call component functions, check the
calling sequence of underlying software functions and check global output variables.
The content of each scenario may be divided into the following parts:
The Usercode Editor (UCE) is not available for component testing, because the prolog/epilog
code and definitions/declarations sections can be edited directly within the SCE. C-code
fragments can be added into the scenario control flow, and the code for stub functions can
also be edited directly within the SCE.
The component test management is based on TESSY modules, like a unit test. In contrast
to unit testing you will probably use multiple source files instead of only one file. Other parts
of the testing process stay basically the same:
Þ Create a new module as described in section 6.2.2.4 Creating tests and reviews.
Þ Include all the source files, include paths and defines necessary to analyze the
source code of the component.
Þ Activate “Component” as kind of test (see figure 6.272).
As environment the default GNU GCC compiler is used. This means the component
tests will be executed on the Windows PC, using the microprocessor of the PC as exe-
cution environment. If you use a cross compiler for an embedded microcontroller, you
run the tests either on the actual microcontroller hardware or on a simulation of the
microcontroller in question.
In contrast to normal unit tests, you will only see one special test object called
“Scenarios” (see figure 6.273).
The interface of the component is a summarized interface of all the non-static component
functions:
The External Functions section marked with the icon lists the interface to the underlying
software functions, if any external function is called from the component. These external
functions can be replaced by stub functions like within the normal unit test.
The Component Functions section marked with the icon lists all the component functions,
i.e. the functions visible from outside the component. Local static functions will not be listed
here.
The variables used by this function are not available within the
component test interface of the scenario. These variables are
set to IRRELEVANT.
The time based scenario description within SCE is based on time steps that represent the
cyclic calls to a special handler function of the component. Such a handler function controls
the behavior of a time-sliced component implementation.
The handler function needs to be selected as work task prior to executing any scenarios:
Figure 6.275: Two component functions were set as work task within the Component
Functions view
You can select several component functions as work tasks. This will be useful when
testing several components together which all have a handler function.
The Work Task Configuration view allows more detailed settings for the work tasks.
You can drop component functions directly into this view to configure them as work tasks.
The view provides the following global settings:
• Time Unit (default is “ms”) which is just the display representation to be used within
the GUI and reports.
• Mode Variable Name (not used by default) which optionally provides calling the
work tasks depending on the value of the selected variable. All scalar variables
can be selected here.
For each work task, you can specify the following settings:
• Start Time: Determines the point in time at which this work task shall be called for
the first time in each scenario. The default is 0 ms, which causes the work task to be
called immediately, starting with the first time step.
• Cycle Time: Determines the elapsed amount of time after which the work task
shall be called again. The default is 10 ms, which causes the work task to be
called at every 10 ms time step.
• Mode: If a global Mode Variable Name is selected, you can specify for which value
of this variable the respective work task shall be called. During test execution, this
work task will only be called within its specified start and cycle time if the mode
variable has the specified value.
The order of appearance within the Work Task Configuration view reflects the actual calling
sequence of the work tasks for each time step of the scenario. You can reorder the work tasks
via drag and drop.
Another global setting is the calculated cycle time, which is derived automatically from the
cycle times of the given work tasks.
Within the example in figure 6.277, the resulting global cycle time (i.e. the step width of the
time steps of the scenarios) will be 10 ms, because this is the greatest common divisor of all
the given work task cycle times (i.e. 20 and 50 ms in this example).
Testing a component requires a set of scenarios that stimulate the component and check the
behavior of the component. Such a scenario contains calls to component functions and other
possible actions and expected reactions of the component. A scenario can be seen as a test
case for the component. Therefore, TESSY displays the list of scenarios within the Test Item
view like normal test cases but with a different icon.
There are two possibilities for creating scenarios: either by creating them ad hoc or by de-
veloping them systematically using the classification tree method supported by the CTE within
TESSY.
After synchronizing the CTE test cases, the respective number of scenarios will be available
within TESSY. You can add additional scenarios using the context menu within the scenario
list. To edit a scenario, start the Scenario Editor (SCE). The (empty) scenarios will be displayed
within the SCE, providing the specification and description of the designed scenario test cases.
• The stimulation of the component like any external application would do it. This
includes normal behavior as well as abnormal behavior which should check the
error handling of the component.
• Checking the reaction of the component caused by the scenario stimulation.
We will examine the different possibilities to check expected behavior of the component under
test. There are at least the following methods available:
• Checking return values of component functions called while stimulating the com-
ponent.
• Checking the values of global variables (of the component).
• Checking the calling sequence of underlying external functions of the component.
This would check the interface to any underlying components used by the com-
ponent under test.
• Checking parameters of calls to underlying external functions (implemented as
stub functions).
• Providing return values from calls to underlying external functions (implemented
as stub functions) to the component.
The following sections describe the required settings for the above mentioned check meth-
ods.
Þ Drag and drop the functions from the component functions onto the desired time
step (see figure 6.278).
There are some settings required for the function calls depending on the kind of function:
You can set input values or check output values of any variable at every time step of the
scenario. According to your settings within TIE you have access to all variables available
within the component interface. The test data can be entered within the Test Data view of the
scenario perspective. When you select a time step, the Test Data view provides a column
named like the time step for entering either new test data values or editing existing ones (see
figure 6.279).
The Test Data view provides most of the editing features known from normal unit testing. After
entering any values, the icon of the respective time step will change, indicating the test data
status. The Test Data view shows columns for all time steps that contain test data plus one
column for the currently selected time step.
Time step indicator icons for test data (see also figure 6.280):
• Gray indicator: Some input values are assigned but some are still missing and
need to be provided. Select “*none*” for input values of time steps that you do not
want to assign.
• Yellow indicator: At least all input values are assigned for this time step. The output
values do not need to be assigned to execute a scenario.
Important: All time steps with test data need to have a yellow indicator before
the scenario can be executed!
The icon of the scenario will change to yellow if there are no more time steps with a gray
indicator.
When dragging component functions into the scenario, you need to provide the parameter
values. For scalar values, you can simply add decimal or floating point numbers depending
on the type of variable.
You can also provide a symbolic name of a variable with the corresponding type. This name
will be added into the test application without any checking. If the symbolic name does not
exist, there will be error messages when compiling the test application.
Either provide a value (for scalar return value types) or specify the symbolic name of a variable
to which the return value shall be assigned (in this case, the variable provided should be of
the same type as the return value).
The calling sequence of calls to underlying external functions may be checked on an ab-
stract level within the scenario. It is not the absolute calling sequence that is evaluated, but
the existence of function calls within a given period of time within the scenario. This provides
a robust mechanism for call trace checking that ignores the internal implementation of the
component.
How does it work? You specify the following information within the scenario for each expected
function call:
• The time step where you expect the call at the earliest.
• The number of expected consecutive calls to the function (default is 1).
• Optionally a period of time (the time frame) from that starting point where the invo-
cation of the function call is still successful with respect to the expected behavior
of the component.
Both of these settings are available for each expected call to an external function. The time
frame is zero by default, indicating that the expected function call shall take place within the
same time step. If you specify the time frame as 60 as within the example above, this
indicates that the expected call may take place within time step 20ms, 30ms or any step up
to 80ms and still be successful.
The exact sequence of the calls to those functions will not be examined, any of them may be
called within the given time frame interval. The report shows the result of the evaluation of
the call trace for the example above. The actual call trace entry contains the time step where
this call occurred, the expected call trace entry shows the expected time frame period.
The following table shows the possible evaluation results for the call trace of the example
calls to function crossed_50() and crossed_75().
Actual call time    crossed_50()    crossed_75()
40ms                ok              ok
50ms                failed          ok
60ms                failed          ok
70ms                failed          ok
80ms                failed          ok
If you need to check the exact calling sequence, you should set the time frame to zero. Other
functions called in between the expected function calls are ignored. On the other hand, the
time frame provides you with a powerful way to describe expected behavior of the component
without knowing details about the exact implementation.
You may check that a function is not called within a given time interval. The example below
checks that the function crossed_75() is not called within 100ms after the stimulation of the
component by setting the expected call count to zero.
The crossed icon shows the special value of the expected call count, indicating a check that
the function shall not be called.
Because called external functions need to be replaced by stub functions, you can check
the parameter values like during unit testing, depending on the type of stub function you
choose.
For more information refer to section 6.7.4.9 Defining stubs for functions.
After implementing and editing the scenarios within SCE, execute the scenarios:
Þ Select the desired scenario test cases and execute the test using the Execute Test
button within the tool bar.
The fault injection feature provides means to test code parts that are not testable using normal
testing inputs, e.g. endless loops, read-after-write functionality or error cases in defensive
programming. Dedicated testing code can be injected at selected branch locations of the
test object so that decision outcomes can be manipulated. Such fault injections are valid for
specially marked fault injection test cases only. This ensures proper operation of the normal
test cases without any side effects that may be caused by fault injections.
Fault injections are edited within the Coverage Viewer (CV) (see chapter 6.11 CV: Analyzing
the coverage for more information about the Coverage Viewer) based on the flow chart of the
test object. They are displayed as blue circles at the respective branch. The injected code
will either be injected into the respective branch or directly before the decision of this branch.
This is useful because normally the decision outcome needs to be manipulated in order to
reach a formerly unreachable branch.
Each fault injection is identified by its branch path which represents the decisions that need
to be taken to reach the desired branch. This allows finding the right location of the fault
injection even after source code changes. Fault injections that cannot be mapped to the
current source code control flow are marked with an error symbol within the Fault Injections
view of the CV.
Such unmapped fault injections can be assigned to branches via drag and drop onto the
desired branch.
Table 6.92: Fault injection related tool bar icons in the Flow Chart view of the CV
Fault injection test cases are specially marked in order to distinguish between normal func-
tional test cases and those added for specific testing challenges like unreached branches.
The same circle icon (see subsection 6.14.2.1 Status indicator) decorates fault injection test
cases within the Test Item view in the Overview perspective.
Normal test cases can be changed to fault injection test cases and vice versa using the
context menu of the Test Items view in the Overview perspective.
Test cases for fault injections are only effective if they reach the decision of the branch
where the injected code has been added. Otherwise the injected code would be
useless because it would not be executed.
It is recommended to first run all normal test cases in order to reveal any unreached branches
within the control flow of the test object shown within the CV. Because the CV knows which
test cases have reached certain decisions with unreached branches, it is very easy to select
appropriate fault injection candidate test cases. For this purpose, the Edit Fault Injection
dialog (see figure 6.285) provides a list of test cases reaching the decision of a selected
branch. One or several of these test cases can be copied as fault injection test cases for
such unreached branches.
Fault injections are created and edited within the Coverage Viewer (CV). For more
information about the CV see chapter 6.11 CV: Analyzing the coverage
Þ Select a branch of the flow graph within the CV and click on the “Edit Fault Injection”
button within the Flow Chart view tool bar.
This will open the Edit Fault Injection dialog (see figure 6.285).
Important: Fault injections will only be applied if at least one fault injection test
case is available! Also make sure that your fault injection test cases reach the
respective code location where the fault injection code gets injected.
To edit your fault injection within the dialog you need to know:
• The “Branch” text field is read-only and displays the current line number of the
decision of the selected branch. The branch path identifies the location of the
selected branch within the control flow of the test object.
• As an option you can enter a description for the cause of the fault injection. This
text will appear within the test report.
• Select whether the code shall be injected into the branch or directly before the
decision of the selected branch (by selecting the “Insert code before decision”
toggle button).
• Enter the code to be injected.
• Select the applicable fault injection test cases. Be aware that this is only possible if
the existing test cases have been executed with coverage measurement enabled.
Otherwise you can only select to include all fault injection test items.
Within the include list of the dialog you will find all normal functional test cases that reach the
branch decision, if there is no other existing fault injection test case reaching this decision.
Such fault injection candidates will be listed with a “Copy of” prefix in front of their name. They
will actually be copied when you save all changes made to fault injections.
Figure 6.286: Include list at the bottom of the Fault Injection dialog
• No fault injection test cases available: “Copy of” candidates reaching the branch
decision (or an empty list if none reaches the decision).
• Any fault injection test case reaches the branch decision: only fault injection test
cases reaching the branch decision.
• No fault injection test case reaches the branch decision: “Copy of” candidates
reaching the branch decision (or an empty list if none reaches the decision).
Table 6.94: Possible situations in the include list of the Edit Fault Injection dialog
After creating or editing fault injections the CV becomes dirty and displays an asterisk at
the respective Flow Chart view title. You will be asked to save the changes made when
switching to another test object or another perspective. Any copying of fault injection test
case candidates will be delayed until saving. This preserves the information about reached
branch decisions until the end of the editing operation.
The test details report contains a table with all fault injections for the respective test object
and a list of test cases marked as fault injection test cases.
Mutation testing can be applied to test objects with existing successfully executed test cases.
It is important that all test cases are passed because a mutation can only be detected by one
of the following checks:
• A test case fails (i.e. the actual result does not match the expected outcome)
• A test case causes a timeout (i.e. the mutation caused an endless loop)
Each of those results is fine in the sense of mutation testing: if the test cases are well
designed and cover all test-relevant aspects, they should detect all mutations.
The results of mutation testing can be displayed as mutation score within the Test Project
view when enabled within the Preferences. The column will show the percentage of killed
versus total number of mutations.
Within the Test Items view, each test case will also show its mutation score: Here it does not
matter how many mutants were killed by each test case. A failed result is only displayed if a
test case does not kill any mutant.
6.15.1 Preferences
Within the preferences there is a new section “Mutation Tests” that contains all possible set-
tings for the following operations that can be mutated:
• Logical operations
• Relational operations
Within the matrices in the Preferences you can choose for an operation (represented by a row
in the matrix) the respective mutation operations that shall be performed (contained within the
columns). Different default settings can be chosen and individual settings within the matrix
are also possible (which will result in the “Custom” button being selected automatically).
Important: The default for logical operations is to invert the operation because a
subtle mutation from e.g. “&&” to “&” will most probably not be detected due to the
equivalent results of these operations in the C language.
The mutation score can be activated as additional result column for the Test Project view and
the Test Items view within the “Metrics” section in the Preferences:
The mutation score should be used as a helpful hint on where to enhance the test cases. It
should not be required to always reach a 100% mutation score, because there may be cases
where mutations cannot be detected.
Mutation testing can be activated for each test object that has been executed successfully.
Please note that all tests need to be passed to apply mutation testing.
Select “Run mutation test” as additional test execution type:
This will cause the normal test execution to be run and subsequently all additional test exe-
cution types to be executed as well. If the normal test execution fails, all further execution
types will be aborted.
The results of mutation testing can be examined within the Coverage Viewer perspective. The
Mutation view lists all mutations that were applied to the original source code. The view will
remain empty unless mutation tests have been executed.
When selecting a mutation or decision the respective code will be highlighted within the
source code view. The respective element within the coverage Flow Chart view will also
be selected.
For each decision within the code the Mutations view shows the operations that were mutated.
Each operation is listed with its original code and with the mutated code as child entry. The
“Operation” column contains the original operation and the respective mutated operation.
The “Result” column shows which mutations have been killed by the test cases of this test
object.
Mutations can be detected by test failures (i.e. deviations from expected results, failed call
trace or eval macros), by an access violation or by an execution timeout. All such results yield
a passed mutation result. If the mutation survived all test cases the mutation result is failed.
A tooltip on a failed result icon indicates the kind of failure.
Important: The module attribute ”Execution Timeout“ will not be considered when
running mutation tests. A timeout value will automatically be derived from the actual
execution time of the test object execution preceding the mutation test execution.
This ensures that TESSY will recognize an endless loop caused by the mutated
code.
If certain code mutations cannot be detected by a test case (i.e. due to an equivalent mutant
that behaves the same as the original code), those mutations can be excluded. It is possible
to exclude a single mutation as well as the whole decision from being mutated.
Excluded mutations will not be applied anymore and they will also be excluded from the
mutation score calculation for the test object.
For component testing, all code locations that were not reached by the existing test cases
will automatically be excluded from being mutated. You can show the automatically excluded
mutations by deactivating the filter .
Important: Please note that this feature requires running the normal test with any
kind of coverage instrumentation.
With TESSY you can easily back up modules and tasks into a directory and check them in to
a version control system. Modules and tasks can also be restored from that directory, which
facilitates checking out modules and tasks from the version control system onto another com-
puter and restoring the test database.
You can back up individual tasks, modules, folders or whole test collections. The backups will
be stored as TMB files. The files can be restored either into the original folder or from another
location.
Besides the TMB file, you can store SCRIPT files for each test object containing all tests.
These ASCII based files can be used for diff purposes to review the changes of tests within
the version control history.
Warning: When using the restore function, keep in mind that script files are
only transferred into the location where internal script files are stored. The TESSY
model itself will not be updated. (See chapter 6.10 Script Editor: Textual editing
of test cases for more information.) Therefore, external changes to the generated
backup scripts are not recommended.
If uncommitted user script changes are found during the backup process, a warning
dialog will appear.
Use the import function instead of the database restore in order to apply script
changes made outside of TESSY.
6.16.1 Backup
Þ In the menu bar select “File” > “Database Backup” > “Save…” .
The Save Database dialog will be opened with your module already selected (see
figure 6.295).
Þ Decide which modules you want to save, either by selecting them separately or
by pressing the button “Select All”.
Þ Decide whether you want to save the coverage settings, test report options or dialog
settings from the Window > Preferences menu.
Þ If you have linked your test cases with any requirement documents, you can
choose to save the referenced requirement documents as well. In this case the
requirements will be saved within the TMB file.
Þ If you want to save variants and scripts, please remember to select them (see figure
6.295).
Þ Click “OK”.
The “Backup Folder” displays the backup directory of the current project. We recom-
mend using this directory for any backup and restore operations.
For each module there will be a file named after the path to the module, starting from the test
collection. The file name will be escaped using URL encoding, which for instance replaces
the space character with “%20”. The preferences are stored in separate files within
the “preferences” subdirectory.
6.16.2 Restore
Þ If there are any requirement document backups, the respective requirement doc-
uments will appear within the box “Requirements”.
Þ Click “OK”.
If you want to include external user script changes in your restore, you need
to select “Restore scripts” before starting the restore process. Keep in mind,
however, that only the internally managed user scripts will be updated, not the
TESSY model itself.
More information about the handling of scripts is provided in chapter 6.10 Script
Editor: Textual editing of test cases.
You can also restore TMB backup files into a location other than the original one:
If you select a folder for which there are no corresponding TMB backup files, you can restore
any of the available TMB files as children of this folder. The original test collections and folders
of the TMB files will then be restored as subfolders of the current folder.
We recommend saving backups of all test-relevant files into a version control system on a
regular basis. At the latest when the test development is completed, the whole test setup
should be saved as a backup.
Figure 6.298: Directories and files within the database directory of the TESSY project
The directory work contains only temporary files created during development and
execution of the tests. You can delete this entire directory to save disk space
after the testing work is completed.
The directory persist contains the current databases of the test project. This directory and
its subdirectories will be restored when restoring TMB backup files. The valuable contents
of this directory will be saved into the TMB files created during the backup process.
When you restore the whole project onto another computer, the directory persist will be
restored from the TMB backup files.
Workspace-specific options like “Dialog Settings”, “Script Editor”, “Static Analysis”, “Tasks”,
“Test Execution Settings”, “Test Interface Settings” and “Test Project Settings” are stored in the
preferences.xml.
Project-specific options like “Metrics” and “Test Report Options” are also stored in the
preferences.xml.
TESSY provides a command line interface which allows writing batch script files that control
various operations within a TESSY project. The command line operations are available by
invoking an executable program called “tessycmd.exe”.
The program can be called either from a DOS command line shell or from a DOS batch script
or other script formats that support calling DOS executables.
Before invoking any “tessycmd.exe” commands you need to start TESSY. The “tessycmd.exe”
will connect to a running instance of TESSY in order to execute any commands. You can
run TESSY either in GUI mode with a graphical user interface (when started normally using
“TESSY.exe”) or in headless mode without a GUI (when started using “tessyd.exe”).
For information about the usage of TESSY together with continuous integration
servers like Jenkins refer to the application note “Continuous Integration with Jenk-
ins” (“Help” > “Documentation”).
As a precondition for command line execution you need to have a readily configured project
and some TMB backup files within this project containing your tests. This project should then
be restored from your version control system into any location on a computer controlled by
your continuous integration system. Via “tessycmd” commands you can now restore your test
project from the TMB files and execute the tests.
Transformation of the TESSY result XML files into XUNIT format is described within section
6.17.5 Execution and result evaluation.
Important: Test projects with test data must be created using the GUI. Tests that
have already been developed can then be run from the CLI.
For test automation on continuous integration servers or nightly builds it is required to start
TESSY in headless mode (i.e. without displaying a GUI). TESSY provides a special starter
application for this purpose called “tessyd.exe”. When invoking “tessyd.exe” within your batch
script, it will start TESSY in headless mode and wait until the TESSY application is ready to
receive commands via “tessycmd.exe”.
At the end of your script you should shut down TESSY using the same “tessyd.exe” application
with the parameter “shutdown”. The calling sequence for running batch tests would be as
follows:
tessycmd <commands>
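A minimal batch sketch of the sequence described above (the “-f” start option is taken from table 6.96; `<commands>` stands for any sequence of tessycmd calls):

```
rem Start TESSY in headless mode and open the given project
tessyd -f <name of pdbx file>
rem Invoke any tessycmd commands, e.g. restoring TMB files and executing tests
tessycmd <commands>
rem Shut down the headless TESSY instance at the end of the script
tessyd shutdown
```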
When running TESSY in headless mode, the console output will be written into a file
“console.log” within the directory:
%USERPROFILE%\.tessy_41_workspace\.metadata
For details on how to archive any problems and console outputs for further analysis
refer to subsection 7.2.3.2 Headless operation problems log.
The executable that provides all command line operations is available within the TESSY in-
stallation directory:
C:\Program Files\Razorcat\TESSY_4.x\bin\tessycmd.exe
The available commands provide means to create, select and list TESSY objects, i.e. a
project, test collection, folder, module, test object. After invoking any create commands, the
respective new TESSY object will be selected. You can invoke further commands to ma-
nipulate any previously created or selected TESSY objects. You need to call all commands
according to the following sequence:
• Connect to TESSY.
• Select or create TESSY objects.
• Invoke commands to start operations on the selected TESSY objects.
• Disconnect from TESSY.
Important: If you are not connected, invoking any commands will fail.
The current state (connection and selection of TESSY objects) of the “tessycmd.exe” exe-
cutable is managed by the currently running TESSY application. If you restart TESSY, the
state of “tessycmd.exe” will be reset to the initial state, i.e. disconnected.
6.17.4 Commands
Command Operation
tessyd -f <name of pdbx file> Imports and opens the project referred to by the
given .pdbx file
Table 6.96: Excerpt of the possible commands of the command line interface
To execute “tessycmd.exe” from any directory, add the “bin” directory of the TESSY
installation to the Windows PATH environment variable.
The most common batch operation would be to start TESSY with a readily configured project,
run a TESSY batch operation and evaluate the results.
Please note the following options for “tessycmd” commands that are useful for this operation:
tessycmd exec-test <batch file> [-o <output directory>]
The optional output directory overrides the report output directory given within the batch file
(*.tbs). If provided, the generated reports are copied to that output directory after execution
of the batch file.
In this way you can easily specify the output path for result XML files on the command line:
tessycmd xslt [-xsl <XSL file>] [-o <output file>] <XML file>
With the optional XSL file you can specify the XSL transformation being applied to the TESSY
XML result file created with the previous step. A template for an XUNIT compatible transfor-
mation can be found within the installation directory of TESSY:
C:\Program Files\Razorcat\TESSY_4.x\bin\plugins\com.razorcat.tessy.
reporting.templates\4.x\ci\TESSY_TestDetails_Report_JUnit.xsl
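Combining the two commands, a result-processing step could look like the following sketch (the .tbs batch file name, the output paths and the result XML file name are hypothetical; the XSL template path is abbreviated):

```
rem Execute the tests listed in the TESSY batch file; reports go to C:\results
tessycmd exec-test mytests.tbs -o C:\results
rem Transform a generated TESSY XML result file into XUNIT format
tessycmd xslt -xsl TESSY_TestDetails_Report_JUnit.xsl -o junit_results.xml results.xml
```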
Important: There is no command to create a new project from scratch using the
command line, because the necessary options would be too extensive to be han-
dled usefully on the command line. Please follow the steps described within chapter
4.1 Creating databases and working with the file system and in chapter 6.16 Backup,
restore, version control on how to create an empty project with the required config-
uration and save this project to disk. Such an (empty) project can be copied to any
location on disk and populated with your TMB files using “tessycmd”.
The CLI execution mode of TESSY is designed for use on continuous integration platforms
such as Jenkins. When running in CLI mode, TESSY therefore automatically reuses existing
tests on interface changes and tries to execute as many tests as possible with newer versions
of the source code being tested.
In CLI mode, tests will be auto-reused as far as possible after the following relevant source
code changes:
As a result, the tests executed in CLI mode may run with uninitialized new variables, which
could hide existing or newly introduced errors within the software being tested. Endless loops
may also occur due to such uninitialized variables.
Warning: Such auto-reused tests may have lost significant test data! Applying CLI
auto-reuse on the working copy of your project is therefore discouraged.
For details on how to archive any problems and console outputs for further analysis
refer to subsection 7.2.3.2 Headless operation problems log.
You will find the following example DOS script within the TESSY installation directory:
C:\Program Files\Razorcat\TESSY_4.x\Examples\CommandLine\cmd_line_example.bat
The script is prepared to import TESSY backup files (TMB files) into the currently open TESSY
project. It will create a new test collection “Examples” and import the existing TMB files into a
newly created folder. After the import it executes the imported modules. To run the script:
For compiler/target settings refer to our application notes available in the Help menu
of TESSY (“Help” > “Documentation”)!
• Check this manual and make sure that you have operated correctly.
• Check section 7.3 Solutions for common problems.
• Check our application notes that are available in the Help menu of TESSY (“Help”
> “Documentation”).
• Check our website for commonly asked questions and current issues:
http://www.razorcat.com.
If you have further questions or if there is a problem you could not solve with the documen-
tation described above, please contact our technical support by e-mail: support@razorcat.com
The TESSY Support File (TGZ file) contains information about the test object, including
data, compiler, project settings etc. It helps the support team to detect the cause of your
problem.
Þ In TESSY, select the module or test object that is causing the problem.
Þ Click “Help” in the menu bar.
Þ Select “Support” > “Create Support File”.
Þ Tick the box “Preprocessed sources” if possible.
Þ You can also change the file name and choose a folder if you wish.
Þ Click “OK”.
The TESSY Support File (TGZ file) is created.
Þ Before reproducing the problem, switch to the Console view of the perspective
“Overview”.
Þ In the tool bar click on the icon (Clear Console).
All messages will be deleted.
The additional information can relate to different process steps within TESSY. Enable the
logging of the information you suspect the problem to stem from:
Process Creation: parts of TESSY do not start correctly or TESSY is not able
to start the test system (e.g. debugger).
Makefile Commands: the test application (slave) or the test driver (master) cannot
be created or are created incorrectly.
High level: you want to log the general TESSY activities. Seldom re-
quired to find a problem.
Low level: you want to log debugger-specific activities. Often very use-
ful.
Þ You can save the settings for logging by ticking the box “Remember current set-
tings …” .
Þ Repeat the actions that led to the problem (e.g. opening the module).
Þ Keep the respective element selected that caused the problem (e.g. the test object
in case of errors while executing) when creating the support file.
In general, setting up software tests is a complex task. Problems showing up during the test
execution process can have various causes.
To make errors that appear while executing tests easier to locate, TESSY offers
enhanced error handling with detailed error messages and logging capabilities for
command line execution.
An error dialog shows the full exception chain, the context (e.g. the affected test object) and
provides easy access to the error log file and console messages of the affected operation
causing the error.
Important: TESSY will collect all problems and related console outputs into prob-
lem files within the “%USERPROFILE%\.tessy_42_workspace\.metadata” directory.
The newest file will be kept as “problems.zlog” and up to nine older files will appear
with time stamps in the file name.
If the test execution fails for some reason, TESSY opens an information dialog with more
details about the problems that occurred. This makes it easier to handle such issues.
Figure 7.4: Problems Log dialog with details and context menu for individual log entries
“Open Log File” in the Help Menu (see figure 7.6) will open the log file in a text editor.
A right click on an entry within the Problems Log dialog opens a context menu with several
options:
Copy to Clipboard Copies the text of the selected line and additional
stack trace information if available.
Open Execution Log Shows all console messages of the operation that
caused the error.
The Problems view in the Overview perspective also offers access to information about
execution problems. It displays all errors that occurred during the current TESSY session
(unless “Clear problems view before execution” is selected within the execution preferences).
A right click on any line opens a context menu with several options:
A double click on any line in the Problems view also reopens the Problems Log
dialog.
The item “Open Workspace Problems Log” in the Help menu of the menu bar opens an infor-
mation dialog with a list of problems that have occurred during the current TESSY session.
The other menu item “Open Problems Log …” opens the Windows file chooser so you can
open a log from a remote continuous integration server (e.g. Jenkins) which was produced
while running your current TESSY project. This allows you to analyze any errors or warnings
from the Jenkins job and find the related TESSY objects within your project.
Please follow the steps described below to open the log file:
Þ Run the command “tessyd shutdown --copy-log <directory>” as final part of your
Jenkins build step.
This will shutdown TESSY and copy the “problems.zlog” into the given direc-
tory. The directory must exist and should be located within the current Jenkins
workspace.
Þ Archive the contents of this directory as build artefact.
Þ Download the artefact to your local computer.
(There should be a “problems_<time stamp>.zlog” file.)
Þ Select the “problems_<time stamp>.zlog” file within the file chooser dialog.
The current time stamp will be added to the “problems.zlog” file name when using the “--copy-
log” option. This allows storing problem log files of multiple subsequent headless TESSY
sessions in the same artefact directory.
For more information about the command line interface please refer to section 6.17
Command line interface.
The Problems Log dialog will now display all problems that occurred during the Jenkins job
execution and you can find the related TESSY objects and the related console outputs for
each individual test object execution error or warning.
Error description: TESSY does not start or displays exceptions within all
GUI windows (views).
Solution:
Delete the following directories in the given order. After each deletion, try to start TESSY
again. If it fails, delete the next directory.
Important: Close TESSY completely before deleting any of those directories! The
following example path names reflect a TESSY version 4.0.5. You need to adjust
the path names for the TESSY version you are currently using!
Important: This will reset your window layout of the GUI to the default
settings!
Important: After this you need to re-import all your projects into the
project list! The simplest way to do this is to double-click on
the respective “tessy.pdbx” file.
SQLException
SQL State: 08001
Error Code: 40000
java.net.ConnectException: Error connecting to
server localhost on port 1527 with message
Connection refused: connect.
Error description: When quitting TESSY, one of the error messages above is
displayed.
Possible cause: Two versions of TESSY were started at the same time.
Solution:
If you want to use two different versions of TESSY at the same time, you can change the
config file:
Important: The functions of the command line tool are limited if you use both
TESSY versions at the same time! The tool works with the TESSY version that was
started last.
Error description: The license server does not start, or you get an error
when starting it.
Specific occurrence or requirement: -
Solution:
Þ Start the License Manager manually if it has not started yet: “Start” > “All Programs”
> “TESSY” > “Floating License Manager”.
Þ Click on (Check) to check your license key file.
Þ Check the error message (see figure 7.8).
Figure 7.8: License key check unsuccessful: license key is incorrect for the host id
In many cases the error message already indicates the problem. In case of the error
“No maching hostid for key”, the license key does not match the host id of
your computer:
Þ Configure the correct license key file in the manager: Click on (Configure) and
select the correct license key file. Click “OK”.
Þ Click on to check the license key file again.
Þ If the error still appears, contact our support (see Contacting the TESSY support)
to get a new license key file.
Setting a variable declared with the “const” modifier may result in undefined behavior
and lead to error messages. In those cases, set the passing direction of the variable to
“IRRELEVANT” in the TIE. After that, the test should run without any restriction.
Important: Please note that constant variables are normally read-only
and cannot be assigned.
1. Undefine the const modifier in the Properties view (for individual modules)
The modifier “const” needs to be removed in order to write to such variables. You can remove
this modifier without changing the source file using a special define that replaces the “const”
keyword with nothing.
Þ In the Test Project view click on the module you want to test.
Þ To add a define that replaces the “const” keyword with an empty content click on
(see figure 7.10) in the Properties view. The New Define popup window opens.
Þ Enter a define with the name “const” and an empty value as shown below (see
figure 7.11).
Assignments to read-only variables are now possible in the chosen module. When this define
is in place, all variables with the “const” modifier will appear as if the “const” has not been used
(i.e. the variables are not “const” any more and can be changed during the test execution).
2. Modify the attribute “Compiler Defines” in the TEE (for global use)
All “const” modifiers are now generally replaced with an empty content.
It is a known problem that TESSY produces error messages if project paths are too
long. This occurs due to Windows APIs used by TESSY; some of the external compilers
supported by TESSY show the same behavior. This cannot be influenced by Razorcat.
There are two ways to avoid or deal with potential problems in this respect:
• Store your TESSY project data in a project directory higher up the path, e.g.:
C:\Projects\ProjectXY instead of
C:\Projects\ProjectXY\QualityAssurance\Test\TESSY\RootOfProjectXY.
• Shorten an overly long project path by mapping it to a virtual drive using the
Windows utility “subst”.
For more information about creating a virtual drive refer to the Win-
dows operating system help.
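For example, a long project path (hypothetical, following the pattern shown above) could be mapped to the virtual drive “T:” like this:

```
rem Map the deeply nested project directory to the virtual drive T:
subst T: C:\Projects\ProjectXY\QualityAssurance\Test\TESSY
```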
Þ Open your project in TESSY using the new virtual drive, in this case: “T:\”.
Important: Be aware that a virtual drive created this way only exists for the current
user. Therefore it is necessary that all users working on this project apply the same
subst command on their computers.
Batch Testing A testing procedure in which multiple test objects are executed automatically
one after the other without further user interaction.
Branch Coverage Is usually abbreviated “C1”. Roughly speaking, branch coverage reveals
whether all branches were executed; for example, an if-instruction has two branches, the then-
branch and the else-branch.
C1 Test During a C1 test, each branch of the test object will be instrumented with a counter
to monitor how often a branch of the program is run through.
Classification Tree The objective of the Classification Tree Method is to determine a suffi-
cient but minimal number of test cases. It is a systematic approach to test planning by
test case specification and prioritization.
Code Coverage A test object is considered to consist of items like branches, conditions, etc.
Code coverage measures how many of the items were exercised during the tests. This
number is related to the total number of items and is usually expressed in percent.
TESSY features C1 coverage (branch coverage) and C2 coverage (MC/DC: Modified
Condition/Decision Coverage).
Component Testing is the test of interacting test objects, i.e. interacting functions in the
sense of C. These functions can be a (single) calling hierarchy of functions, but we
will consider this mainly as unit testing. We consider as a component mainly a set of
functions that interact e.g. on common data and do not necessarily call each other.
Component testing then is testing of such a set of functions. The units do not need
to be in a calling hierarchy; they may only interact on data, like push() and pop() of
the abstract data type “stack”. A component according to this specification may also be
called a “module”, and its testing “module testing” respectively.
Debugger A computer program that is used to test and debug other programs (the “target”
program). The code to be examined might alternatively be running on an instruction set
simulator (ISS), a technique that allows great power in its ability to halt when specific
conditions are encountered but which will typically be somewhat slower than executing
the code directly on the appropriate (or the same) processor. Some debuggers offer
two modes of operation, full or partial simulation, to limit this impact.
Enums A type of the C language that allows defining a list of aliases representing
integer numbers.
Expected Values Values expected to be calculated by the test object. The result values are
checked against the expected values after the test run.
Fault Injection Fault injection is a technique to improve the coverage of tests by injecting
faults to test code paths. The TESSY fault injection feature provides means to test code
parts that are not testable using normal testing inputs e.g. endless loops, read-after-
write functionality or error cases in defensive programming.
Flow Chart A flow chart is a special type of diagram used to outline a process, workflow or
algorithm. Various boxes represent the steps, and their order is illustrated by connecting
the boxes with arrows.
Hysteresis Dependence of a system not just on its current environment but also on its past.
This dependence arises because the system can be in more than one internal state.
Interface Data Assign editor (IDA) If the interface elements of the test object have changed,
you can assign the new interface elements to the old ones. Your test data will be assigned
automatically.
Input Values Function parameters, global and external variables which have effect on the
behavior of the function.
Interface Description Information about the passing direction and type of interface ele-
ments (parameter, global variables and external variables). The interface description is
determined automatically by TESSY and is made visible / changeable in the TIE.
Integration Testing consists of a sequence of calls and can be considered either as unit
testing for a calling hierarchy of functions or as a component testing for a set of inter-
acting functions not necessarily calling each other. Component testing is integration
testing of the functions in the component.
Metrics Calculation of the cyclomatic complexity (CC) is a common measure for complexity
control. It measures the complexity of source code on the basis of the control flow graph
and indicates the number of linearly independent paths through the code.
Module A TESSY module primarily comprises the test object (in C, a function) as well as
source files, compiler settings, interface description and test data. You can
pool modules in projects.
Module Testing is a semantic term for testing a collection of cooperating functions (units).
Output Values The same as expected values in the TESSY context. Both terms are used
interchangeably within this manual. The output (respectively expected) values are evalu-
ated against the actual result values after the test run.
Regression Testing Regression testing is the repetitive running of already successfully com-
pleted test cases. The intention of regression testing is to verify that modifications and
enhancements to a test object do not break the already successfully completed tests.
Requirement Documented need of what a test should perform and important input for the
verification process. Requirements show what elements and functions are necessary
for the test.
Requirement, Functional Describes the features, specific behavior, business rules and gen-
eral functionality that the proposed system must support.
Requirement, Non-Functional Specifies criteria that can be used to judge the operation of
the test.
Script Editor The TESSY Script Editor perspective provides textual editors supporting a test
scripting language for editing test cases, test data and usercode. A new internally man-
aged script file will be created for each test object and all test data and usercode can
be edited and saved.
Stub Function Piece of code used to stand in for some other programming functionality.
A stub may simulate the behavior of existing code (such as a procedure on a remote
machine) or be a temporary substitute for yet-to-be-developed code.
System Testing Test of the application (software or software and hardware) as a whole.
Test Data Editor (TDE) With the TDE you can enter the input values and expected values
for the test run.
TESSY Support File Contains information about test objects including data, compiler, project
settings etc. It helps the support to detect the cause of a problem. In section Contacting
the TESSY support it is explained how to create a TESSY Support File.
TESSY Hardware Adapter Interface (THAI) TESSY provides a hardware adapter interface
to enable stimulation and measurement of hardware signals as well as execution time
measurement during the unit test execution.
Test Case Element that encapsulates the abstract test definition, e.g. the specification and
description of a test, and the concrete test data managed within test steps.
Test Definition Describes a test to be performed on the test system in textual format. A test
definition abstractly describes the inputs and the expected outcome of a test and refers
to a list of requirements which shall be validated with this test.
Test Driver C-source files generated by TESSY for the test execution. These files are com-
piled and linked in order to build an application that prepares the input data, calls the test
object and stores the actual result data.
Test Environment Information about the test object, the compiler used, the target debugger
or emulator and more settings.
Test Run One execution of a test object with the given test cases. The result of a test run is
stored within an XML result file that may be further processed by external tools.
Test Suite A collection of test objects with test scenarios and/or test cases that were created
to fulfill a certain test objective.
Test Interface Editor (TIE) With the TIE you can view all interface elements and review or
set the passing direction and/or other information of the interface elements.
Unit A single function, i.e. the smallest reasonable test object of a C program.
Usercode In the usercode you can enter C code, which is executed before or after test
cases/test steps during the execution of a test object.
Workspace The space at local disk where the TESSY application reads and writes data.
Place for configuration and temporary report data. Project data can be saved separately.
5.35 Test Definition view within TDE with linked requirement . . . . . . . . . . . . . . 110
5.36 Editing the settings of a Planning Coverage Report . . . . . . . . . . . . . . . . 110
5.37 Dialog of the settings for the Planning Coverage Report . . . . . . . . . . . . . 111
5.38 Planning coverage report of the IVIR requirement document . . . . . . . . . . . 112
5.39 Generating a Test Details Report . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.40 Part of the generated test report of is_value_in_range . . . . . . . . . . . . . . 114
5.41 Creating an Execution Coverage Report . . . . . . . . . . . . . . . . . . . . . . 114
5.42 Coverage Report of is_value_in_range . . . . . . . . . . . . . . . . . . . . . . . 115
5.43 Overview perspective after test run (with requirements) . . . . . . . . . . . . . . 117
5.44 Use the context menu to edit a source . . . . . . . . . . . . . . . . . . . . . . . 118
5.45 Editing the C-source file is_val_in_range.c . . . . . . . . . . . . . . . . . . . . . 118
5.46 Changed C-source file of is_value_in_range . . . . . . . . . . . . . . . . . . . . 119
5.47 Adding a “delete” and “new” object . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.48 Changed and new test objects of is_value_in_range . . . . . . . . . . . . . . . 120
5.49 Remove the code for test object “deleted”. . . . . . . . . . . . . . . . . . . . . . 121
5.50 Changed and new test objects of is_value_in_range . . . . . . . . . . . . . . . 121
5.51 Changed, deleted and new test object of is_value_in_range . . . . . . . . . . . 123
5.52 Use drag and drop in IDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.53 Automatically generated tree with the root “is_value_in_range” in the CTE per-
spective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.54 Interface elements categorized into ”Inputs“ and ”Outputs“ . . . . . . . . . . . . 126
5.55 Child elements of an atomic type on the inputs side of the subtree . . . . . . . 127
5.56 The outputs subtree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.57 CTE tree area and Test Data view . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.58 Modify class elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.59 Deleting elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.60 Creating test cases in the CTE . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.61 Automatically generated tree with 9 Test Cases . . . . . . . . . . . . . . . . . . 134
5.62 Defining test cases in the combination table of CTE . . . . . . . . . . . . . . . . 135
5.63 Completed table with all test cases for example “is_value_in_range” . . . . . . 135
5.64 Test data is displayed when selecting a test case in the combination table . . . 136
5.65 Test data displayed within TDE . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.67 Example interior_light; ECU = Electronic Control Unit . . . . . . . . . . . . . . . 139
5.68 Test Project view with new project interior_light . . . . . . . . . . . . . . . . . . 140
5.69 Selecting “Component” in the module properties . . . . . . . . . . . . . . . . . 141
5.70 Scenario of a component test . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.71 C-source code interior_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.73 If a heartbeat function exists, timely behavior can be tested. . . . . . . . . . . . 144
5.74 The initial interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6.124 Editing the requirement settings within the Attributes view . . . . . . . . . . . . 301
6.125 Changing the “Content Type” attribute to HTML . . . . . . . . . . . . . . . . . . 302
6.126 History view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.127 Differences view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
6.128 Related Elements view and its interrelations . . . . . . . . . . . . . . . . . . . . 305
6.129 Related Elements view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.130 Related Elements view with Incoming and Outgoing Links . . . . . . . . . . . . 306
6.131 View Document Preview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
6.132 Newly opened Document Preview within the TIE perspective . . . . . . . . . . . 308
6.133 HTML editing within the inline editor (WYSIWYG and plain HTML) . . . . . . . 309
6.134 Requirements Coverage view with no linked requirements . . . . . . . . . . . . 310
6.135 Setting or disabling the options of auto refreshing . . . . . . . . . . . . . . . . . 311
6.136 TEE - The Test Environment Editor perspective . . . . . . . . . . . . . . . . . . 314
6.137 Opening the Test Environment Editor (TEE) . . . . . . . . . . . . . . . . . . . . 316
6.138 The All Environments view in the TEE perspective . . . . . . . . . . . . . . . . 318
6.139 The Project Environments view in the TEE perspective . . . . . . . . . . . . . . 319
6.140 Search result list with additional information . . . . . . . . . . . . . . . . . . . . 319
6.141 Add an environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6.142 Attributes list within the Attributes view of the TEE . . . . . . . . . . . . . . . . . 321
6.143 Comparing environments in the Attributes view . . . . . . . . . . . . . . . . . . 323
6.144 Integration of a hardware adapter (e.g. GAMMA) into the TESSY unit test execution . . 328
6.145 XML data structure for the configuration THAI . . . . . . . . . . . . . . . . . . . 329
6.146 Enable THAI in the Properties view . . . . . . . . . . . . . . . . . . . . . . . . . 330
6.147 Required THAI attributes in the Attributes view . . . . . . . . . . . . . . . . . . 331
6.148 Required THAI attributes in the TIE . . . . . . . . . . . . . . . . . . . . . . . . . 332
6.149 Required THAI attributes in the TDE . . . . . . . . . . . . . . . . . . . . . . . . 333
6.150 Perspective TIE - Test Interface Editor . . . . . . . . . . . . . . . . . . . . . . . 334
6.151 Information of passing direction and type . . . . . . . . . . . . . . . . . . . . . . 336
6.152 Interface view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.153 White arrow indicating further levels, black arrow when expanded . . . . . . . . 338
6.154 Resetting passing directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6.155 Setting the data format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
6.156 Array as pointer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
6.157 Create a stub function within the context menu . . . . . . . . . . . . . . . . . . 346
6.158 Create a new variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
6.159 Example code snippet for alias names . . . . . . . . . . . . . . . . . . . . . . . 350
6.160 Show alias names preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
6.161 Defined external variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
6.162 Undefining an external variable . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
6.201 Test step 1.1 is selected and undefined values are highlighted in yellow. . . . . 402
6.202 Test Data view showing selected test steps. . . . . . . . . . . . . . . . . . . . . 403
6.203 Clicking in the cell shows a combo box with the union components . . . . . . . 406
6.204 Clicking in the cell shows a combo box with the available enum constants . . . 407
6.205 Pressing Ctrl + Space opens a list of available defines or enum constants . . . 408
6.206 Arithmetic expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
6.207 Entering values as vector for an advanced stub . . . . . . . . . . . . . . . . . . 410
6.208 Choosing shown arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
6.209 Passing direction set to irrelevant. . . . . . . . . . . . . . . . . . . . . . . . . . . 413
6.210 Entering evaluation mode “unequal” within the inline editor . . . . . . . . . . . . 414
6.211 Generator test case 4 has a range value from 6 to 9 for parameter v1 . . . . . . 416
6.212 Four test steps are generated, one for each value within the range “6 to 9”. . . . 417
6.213 Selecting “Change Test Case Type to Normal” . . . . . . . . . . . . . . . . . . . 418
6.214 The test case and test steps as originally generated. . . . . . . . . . . . . . . . 419
6.215 Inherited value coloring within Test Data view . . . . . . . . . . . . . . . . . . . 420
6.216 Test Definition view within TDE with linked requirement . . . . . . . . . . . . . . 423
6.217 Call Trace view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
6.218 Declarations/Definitions view . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
6.219 Prolog/Epilog view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
6.220 Call sequence of the usercode parts . . . . . . . . . . . . . . . . . . . . . . . . 426
6.221 TESSY provides a default prolog/epilog on test object level to be inherited by test cases and test steps . . . 428
6.222 TESSY allows the prolog/epilog to be inherited from the test case or test object . . 429
6.223 Prolog/Epilog functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
6.224 Editing C code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
6.225 Call the popup menu by pressing CTRL + space . . . . . . . . . . . . . . . . . 432
6.226 Editing the evaluation macro templates . . . . . . . . . . . . . . . . . . . . . . . 432
6.227 Formatting of evaluation macro values . . . . . . . . . . . . . . . . . . . . . . . 434
6.228 Stub Functions view without contents . . . . . . . . . . . . . . . . . . . . . . . . 435
6.229 Test execution direction using stub code . . . . . . . . . . . . . . . . . . . . . . 435
6.230 TESSY Preferences: Abort on missing stub code . . . . . . . . . . . . . . . . . 436
6.231 Stub Functions view with code using TS_CALL_COUNT macro . . . . . . . . . 437
6.232 Stub Code Levels in the Usercode Outline view . . . . . . . . . . . . . . . . . . 438
6.233 Stub code examples on test object, test case and test step level . . . . . . . . . 439
6.234 Automatically Generated Test Code . . . . . . . . . . . . . . . . . . . . . . . . . 439
6.235 Usercode Outline view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
6.236 Usercode Outline view showing inherited stub code . . . . . . . . . . . . . . . . 441
6.237 Usercode Outline view showing inserted stub code . . . . . . . . . . . . . . . . 441
6.238 Plots view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
0.1 Where to find - matters of the several parts of the TESSY manual . . . . . . . . xix
0.2 Font characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
6.18 Optional functions of the Sources tab of the Properties view . . . . . . . . . . . 240
6.19 Tool bar icons of the Test Items view . . . . . . . . . . . . . . . . . . . . . . . . 247
6.20 Column indicators of the Test Items view . . . . . . . . . . . . . . . . . . . . . . 247
6.21 Status indicators of the Test Items view . . . . . . . . . . . . . . . . . . . . . . . 249
6.22 Status indicators for test cases and test steps created in the CTE . . . . . . . . 254
6.23 Various status indicators for test cases and test steps in the Test Items view . . 254
6.24 Tool bar icons of the Console view . . . . . . . . . . . . . . . . . . . . . . . . . 258
6.25 Icons of the context menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6.26 Structure of the C/C++ perspective . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.27 Tool bar icons of the Outline view . . . . . . . . . . . . . . . . . . . . . . . . . . 270
6.28 Structure of the Requirement Management perspective . . . . . . . . . . . . . 274
6.29 Tool bar icons of the RQMT Explorer view . . . . . . . . . . . . . . . . . . . . . 276
6.30 Status indicators of the RQMT Explorer view . . . . . . . . . . . . . . . . . . . 276
6.31 Possible formats of requirement sources . . . . . . . . . . . . . . . . . . . . . . 279
6.32 Tool bar icons of the Requirements List view . . . . . . . . . . . . . . . . . . . . 284
6.33 Tool bar icon of the Requirements List view . . . . . . . . . . . . . . . . . . . . 284
6.34 Tool bar icons of the VxV Matrix view . . . . . . . . . . . . . . . . . . . . . . . . 288
6.35 Tool bar icons of the Test Means view . . . . . . . . . . . . . . . . . . . . . . . 289
6.36 Tool bar icons of the Link Matrix view . . . . . . . . . . . . . . . . . . . . . . . . 290
6.37 Status indicators of the Suspicious Elements view . . . . . . . . . . . . . . . . 290
6.38 Tool bar icons of the Suspicious Elements view . . . . . . . . . . . . . . . . . . 295
6.39 Tool bar icons of the Attached Files view . . . . . . . . . . . . . . . . . . . . . . 299
6.40 Tool bar icons of the Attributes view . . . . . . . . . . . . . . . . . . . . . . . . . 300
6.41 Tool bar icons of the History view . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.42 Tool bar icons of the Differences view . . . . . . . . . . . . . . . . . . . . . . . . 305
6.43 Tool bar icons of the Document Preview . . . . . . . . . . . . . . . . . . . . . . 308
6.44 Tool bar icons of the Requirements Coverage view . . . . . . . . . . . . . . . . 310
6.45 Indicators of the Planning tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
6.46 Indicator of the Execution tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
6.47 Structure of TEE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
6.48 Tool bar icons of the TEE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.49 Status indicator example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6.50 Attribute fonts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6.51 Contents, functions and storage location of configuration files . . . . . . . . . . 324
6.52 Meanings of flags in the attribute properties . . . . . . . . . . . . . . . . . . . . 327
6.53 THAI attributes and their descriptions . . . . . . . . . . . . . . . . . . . . . . . . 332
6.54 Structure of TIE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.55 Icons of the Interface view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.56 View icons of the Interface view . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
6.96 Excerpt of the possible commands of the command line interface . . . . . . . . 518