ISO/IEC TS 8200:2024(en)
Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems
First edition, 2024-04
Reference number: ISO/IEC TS 8200:2024(en)
© ISO/IEC 2024
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Abbreviations
5 Overview
5.1 Concept of controllability of an AI system
5.2 System state
5.3 System state transition
5.3.1 Target of system state transition
5.3.2 Criteria of system state transition
5.3.3 Process of system state transition
5.3.4 Effects
5.3.5 Side effects
5.4 Closed-loop and open-loop systems
6 Characteristics of AI system controllability
6.1 Control over an AI system
6.2 Process of control
6.3 Control points
6.4 Span of control
6.5 Transfer of control
6.6 Engagement of control
6.7 Disengagement of control
6.8 Uncertainty during control transfer
6.9 Cost of control
6.9.1 Consequences of control
6.9.2 Cost estimation for a control
6.10 Cost of control transfer
6.10.1 Consequences of control transfer
6.10.2 Cost estimation for a control transfer
6.11 Collaborative control
7 Controllability of AI system
7.1 Considerations
7.2 Requirements on controllability of AI systems
7.2.1 General requirements
7.2.2 Requirements on controllability of continuous learning systems
7.3 Controllability levels of AI systems
8 Design and implementation of controllability of AI systems
8.1 Principles
8.2 Inception stage
8.3 Design stage
8.3.1 General
8.3.2 Approach aspect
8.3.3 Architecture aspect
8.3.4 Training data aspect
8.3.5 Risk management aspect
8.3.6 Safety-critical AI system design considerations
8.4 Suggestions for the development stage
9 Verification and validation of AI system controllability
9.1 Verification
Foreword

For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions related to conformity assessment, as well as information about ISO's adherence to the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT), see www.iso.org/iso/foreword.html. In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 42, Artificial intelligence.
Any feedback or questions on this document should be directed to the user's national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.
1 Scope
This document specifies a basic framework with principles, characteristics and approaches for the realization and enhancement of the controllability of automated artificial intelligence (AI) systems.
The following areas are covered:
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content constitutes
requirements of this document. For dated references, only the edition cited applies. For undated references,
the latest edition of the referenced document (including any amendments) applies.
3.7
disengagement of control
control disengagement
process where a controller (3.6) releases a set of control points (3.16)
3.8
engagement of control
control engagement
process where a controller (3.6) takes over a set of control points (3.16)
Note 1 to entry: Besides taking over a set of control points, an engagement of control can also include a confirmation
about the transfer of control to a controller.
3.9
system
arrangement of parts or elements that together exhibit a stated behaviour or meaning that the individual
constituents do not
Note 2 to entry: In practice, the interpretation of its meaning is frequently clarified by the use of an associative noun
(e.g. aircraft system). Alternatively, the word "system” is substituted simply by a context-dependent synonym (e.g.
aircraft), though this potentially obscures a system's principles perspective.
Note 3 to entry: A complete system includes all of the associated equipment, facilities, material, computer programs,
firmware, technical documentation, services, and personnel required for operations and support to the degree
necessary for self-sufficient use in its intended environment.
Note 2 to entry: When the system (3.9) leaves a stable system state and enters an unstable system state, its parameters or observable characteristics change, regardless of whether the next stable state is safe or unsafe.
Note 3 to entry: A system (3.9) can be described as stable, if the system is in a stable state.
3.12
safe state
state (3.10) that does not have or lead to unwanted consequences or loss of control
3.13
unsafe state
state (3.10) that is not a safe state (3.12)
Note 1 to entry: Uncertain states are a subset of unsafe states.
3.14
failure
loss of ability to perform as required
[SOURCE: IEC 60050-192:2015, 192-03-01, modified — notes to entry have been deleted.]
3.15
success
simultaneous achievement by all characteristics of required performance
[SOURCE: ISO 26871:2020, 3.1.62]
3.16
control point
part of the interface of a system (3.9) where controls can be applied
Note 1 to entry: A control point can be a function, physical facility (such as a switch) or a signal receiving subsystem.
3.17
span of control
subset of control points, upon which controls for a specific purpose can be applied
3.18
interface
means of interaction with a component or module
3.19
transfer of control
control transfer
process of the change of the controller (3.6) that performs a control over a system (3.9)
Note 1 to entry: Transfer of control does not entail application of a control, but it is a handover of control points of the
system interface between agents.
Note 2 to entry: Engagement of control and disengagement of control are two fundamental complementary parts of
control transfer.
Note 2 to entry: External effects include all possible effects and side effects of control, e.g. environment change.
3.23
test completion report
test summary report
report that provides a summary of the testing that was performed
[SOURCE: ISO/IEC/IEEE 29119-1:2022, 3.87]
3.24
process
set of interrelated or interacting activities that transform inputs into outputs
[SOURCE: ISO/IEC/IEEE 15288:2023, 3.27]
3.25
function
defined objective or characteristic action of a system (3.9) or component
[SOURCE: ISO/IEC/IEEE 24765:2017, 3.1677.1]
3.26
functionality
capabilities of the various computational, user interface, input, output, data management, and other features
provided by a product
[SOURCE: ISO/IEC/IEEE 24765:2017, 3.1716.1, modified — Note 1 to entry has been removed.]
3.27
functional safety
part of the overall safety relating to the EUC (Equipment Under Control) and the EUC control system that
depends on the correct functioning of the E/E/PE (Electrical/Electronic/Programmable Electronic) safety-
related systems (3.9) and other risk reduction measures
[SOURCE: IEC 61508-4:2010, 3.1.12]
3.28
system state observation
observation
act of measuring or otherwise determining the value of a property or system (3.9) state
[SOURCE: ISO/IEC TR 10032:2003, 2.65, modified — Note 1 to entry has been removed.]
3.30
atomic operation
operation that is guaranteed to be either performed or not performed
3.31
out of control state
unsafe state (3.13) in which the system (3.9) cannot listen for or execute feasible control instructions
Note 1 to entry: The reasons for an out of control state include but are not limited to communication interruption, system defects, resource limitation and security.
4 Abbreviations
AI artificial intelligence
ML machine learning
5 Overview
Controllability is the property of an AI system which allows a controller to intervene in the functioning of the
AI system. The concept of controllability is relevant to the following areas for which International Standards
provide terminology, concepts and approaches for AI systems:
a) AI concepts and terminology: This document inherits the definition of controllability from
ISO/IEC 22989;
b) AI system trustworthiness: ISO/IEC TR 24028 describes controllability as a property of an AI system
that is helpful to establish trust. Controllability as described by ISO/IEC TR 24028 can be achieved by
providing mechanisms by which an operator can take over control from the AI system. ISO/IEC TR 24028
does not provide a definition for controllability. Controllability in this document is used in the same
sense as in ISO/IEC TR 24028. A controller in the context of this document can be a human, which is consistent with the philosophy of ISO/IEC TR 24028. When an AI system is in its operation and monitoring stage, a human can be in the control loop, deciding control logic and providing feedback to the system for further action;
c) AI system quality model: ISO/IEC 25059 describes user controllability as a sub-characteristic of usability.
ISO/IEC 25059 emphasizes the interface of an AI system, which enables the control by a controller, while
the controllability defined in this document is more about the functionalities that allow for control;
d) AI system functional safety: ISO/IEC TR 5469 uses the term control with two different meanings:
1) Control risk: This meaning refers to an iterative process of risk assessment and risk reduction. The
term control belongs to the context of management. This meaning differs from the use of control in
this document;
2) Control equipment: This meaning refers to the control of equipment as well as the needs of control by equipment that has a certain level of automation. This meaning of control in ISO/IEC TR 5469 is consistent with the use of control in this document;
e) AI risk management: ISO/IEC 23894 uses the term control in the context of organization management,
meaning the ability of an organization to influence or restrict certain activities identified to be risk
sources. This meaning is different from the meaning of control or controllability in this document;
Controllability can be important for AI systems whose underlying implementation techniques cannot provide
full explainability or verifiable behaviours. Controllability can enhance the system’s trustworthiness,
including its reliability and functional safety.
Regardless of an AI system's automation level, controllability is important so that an external agent can ensure that the system behaves as expected and can prevent unwanted outcomes.
The design and implementation of controllability of an AI system can be considered and performed in each
stage of the AI system life cycle defined in ISO/IEC 22989:2022, Clause 6.
Controllability is a technical prerequisite of human oversight of an AI system, so that the human-machine
interface can be technically feasible and enabled. The design and implementation of controllability should
be considered and practiced by stakeholders of an AI system that can impact users, the environment and
societies.
Controllability of an AI system can be achieved if the following two conditions are met:
— The system can represent its system states (e.g. internal parameters or observable characteristics) to a
controller such that the controller can control the system.
— The system can accept and execute the control instructions from a controller, which causes system state
transitions.
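As an informal illustration of these two conditions (not part of the framework itself), the following minimal Python sketch exposes a system state to a controller and executes control instructions that cause state transitions; all names (ControllableSystem, get_state, execute) are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class ControllableSystem:
        """Toy system satisfying the two controllability conditions above."""
        state: str = "idle"
        parameters: dict = field(default_factory=dict)

        def get_state(self) -> dict:
            # Condition 1: represent system states to a controller.
            return {"state": self.state, "parameters": dict(self.parameters)}

        def execute(self, instruction: str, **kwargs) -> str:
            # Condition 2: accept and execute control instructions,
            # causing a system state transition.
            if instruction == "start":
                self.state = "running"
            elif instruction == "stop":
                self.state = "idle"
            elif instruction == "set":
                self.parameters.update(kwargs)
            else:
                raise ValueError(f"unsupported control instruction: {instruction}")
            return self.state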
In a system, interacting elements can exchange data and cooperate with each other. These interactions can
lead to different sets of values for the system's internal parameters and consequently can result in different
observable characteristics.
A system can have several different states. The different states of a system can indicate a mapping from the
continuous parameter space to a discrete state space. When designing the different states of a system, at
least the following recommendations apply:
— The duration of a state is sufficient so that tests and specific operations against the state can be made.
— A state is observable by qualified stakeholders via technical means such as system logging, debugging and breakpoints.
— Entry into a state is possible via a set of defined operations on the system.
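These recommendations imply a mapping from a continuous parameter space to a discrete state space. A minimal sketch, assuming hypothetical parameters (battery level, workload) and state names, is:

    def discretize_state(battery_level: float, workload: float) -> str:
        """Map continuous internal parameters to one of a few named states."""
        if battery_level < 0.1:
            return "low_power"    # entered via a defined operation: battery drains below 10 %
        if workload > 0.9:
            return "overloaded"   # entered via a defined operation: workload rises above 90 %
        return "nominal"

    # The state is observable by technical means, e.g. by logging the result.
    print(discretize_state(battery_level=0.05, workload=0.2))  # -> low_power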
The states of an AI system can be identified during the design and development stage in the AI system
life cycle as described in ISO/IEC 22989:2022, Figure 3. The identification of the states of an AI system is
important for the implementation of controllability and can therefore affect the trustworthiness of the AI
system. According to the results of the design and development stages, the states of an AI system can be
organized into the following three categories:
The system state transition target is a finite subset of the system's possible states which are acceptable
by stakeholders according to a set of requirements. The system state transition target should be identified
during design and development and the transitions to a target state should be subject to verification and
validation during system testing.
The implementation and enhancement of controllability of an AI system depends on the ability of an AI
system to reach a specified target state. The following attributes of the intended target state should be
identified by the designers, developers, managers, users and any other stakeholders of the AI system:
— Completeness of the states of an AI system can be checked. States that go unnoticed or are hard to identify entirely can exist during the design and development stage. This is particularly the case when an AI system is implemented by certain approaches, such as deep learning. As a deep learning model's output universe cannot be entirely determined in advance, unidentified states can always exist;
— Stability of the states of an AI system should meet the requirements for control and state observation. This attribute is important for systems which are designed to be controlled by humans, as human-in-the-loop mechanisms are applied to prevent hazards.
Target states should be reachable under certain circumstances via actions. Actions can include:
— automated state transition by the system itself, if defined conditions are met;
— a sufficient condition that by itself causes the transition to take place as long as the condition is met;
— a necessary condition that is required to be met for the state transition to take place.
The satisfaction of a necessary condition does not by itself guarantee that the transition takes place.
b) Adaptation: An AI system state transition can change the environment in which that system works or the objects on which it operates. As a consequence, such environments and objects can react to the AI system. These reactions can lead to an unstable adaptation period in which the system adjusts internal parameters to enter an intended state. Not every system state transition process contains an adaptation subprocess.
EXAMPLE 2 An AI-based vehicle system automatically transits its state from low speed to high speed. As the vehicle speeds up, the resistance (from ground, air, etc.) and running stability can change. To cope with this, parameters in subsystems (such as the electronic stability program) can be adjusted. Once the target state (high speed) is reached, the adjustment approaches applied in the adaptation subprocess can be stopped.
5.3.4 Effects
The effects of an AI system state transition can include the current state of the system or an additional set of
actions needed to be taken by the system or its controller. There can be two types of effects:
a) For a successful state transition: When a system successfully transits to the expected state, the system can function as specified and is prevented from entering a hazardous state.
b) For an unsuccessful state transition: When a system fails to transit to the expected state, it can be guided to revert to the original state by configured operations or parameters (e.g. system reset). The system can then retry the requested state transition or stay in the original state. For this, extra time, operations, power and other resources can be necessary.
In a closed-loop system, the output is fed back to the input of the system, where control is determined by
the combination of system input and feedback (e.g. the control to an air conditioner is subject to both the
current and target temperatures). In an open-loop system, the output is not fed back to the input of the
system. Control is subject to the instructions issued by a controller rather than the output of the system (e.g. a TV only accepts and responds to a control signal rather than to the results of previous controls).
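The two examples above can be sketched as follows (illustrative only; thresholds and names are hypothetical):

    def closed_loop_control(current_temp: float, target_temp: float) -> str:
        # Closed loop: the output (current temperature) is fed back and
        # combined with the input (target temperature) to determine control.
        if current_temp > target_temp + 0.5:
            return "cool"
        if current_temp < target_temp - 0.5:
            return "heat"
        return "hold"

    def open_loop_control(instruction: str) -> str:
        # Open loop: control depends only on the controller's instruction,
        # not on the results of previous controls.
        return f"executing {instruction}"

    print(closed_loop_control(26.0, 22.0))  # -> cool (air conditioner example)
    print(open_loop_control("channel_up"))  # -> executing channel_up (TV example)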
System state observation measures the appropriate system parameter values or the system's appearance. It can be achieved via system outputs, or via observation based on analysis of system parameters without output. This document does not treat closed-loop and open-loop systems separately and does not impose specific settings on the approaches via which the system states are observed.
Control over an AI system can help to conduct the intended business logic and to prevent the system from
causing harm to stakeholders. At least the following two ways exist to realize the controllability of an AI system:
— Use the facilities designed and implemented for the purpose of control;
— Take advantage of the functional operations (they are not specifically designed and implemented for
control but can be used for the purpose of control).
Control over an AI system is effective if at least the following are satisfied:
— Control is conducted when the system can be controlled for a specific purpose with acceptable side
effects.
— Control is conducted via a correct span of control based on control points provided by the system.
[Figure 1 — Control and state observation over AI functionalities, e.g. start/stop an AI functionality, set parameters to an AI functionality, return parameters]
NOTE 1 The span of control represented in the diagram is an example. Each specific control can correspond to its
intended span of control that is configured, selected and used.
NOTE 2 See ISO/IEC 19505-1[2] for details on the notations in this diagram. The human body notation in Figure 1 does not mean that the controller is necessarily a human.
1) Computing resources can include computing devices, memory, storage, data transmission facilities and any other hardware modules that improve computing and data exchange. Status and parameters of computing resources can be set and observed for the purpose of control. Physical facilities can include hardware and associated software used for the formation or functioning of the AI system (e.g. joysticks or gear shafts). Devices in a component can provide control and state observation.
2) AI functionalities abstract those processes used for prediction, recommendation and classification. Parameters and status of an AI functionality can be set and observed for the purpose of control.
3) Business logic implementations are the executable programs that form workflows. Each workflow can invoke AI functionalities as building blocks. Implementations in this layer can include control facilities that make sense to business logic.
4) The system interface can contain a subset of declared functionalities for receiving control instructions, providing parameter values, returning signals and showing observable characteristics. This subset is the control point of the system. For a specific control, a span of control can be configured, selected and used.
c) Dependencies can exist between control functionalities provided by different layers.
AI systems with a finite set of system states can be modelled based on a finite state machine (FSM). Applying control methods based on an FSM is possible when the control transfer between different controllers is represented through the transfer function S, which is defined by a 3-tuple.
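The elements of the 3-tuple are not reproduced above. Purely as an illustrative sketch, assuming a common FSM reading in which the 3-tuple comprises a set of states Q, a set of control-transfer events C and a transition mapping delta: Q × C → Q, the transfer function S can be modelled as:

    # Hypothetical states describing which agent holds control.
    Q = {"ai_in_control", "transfer_pending", "human_in_control"}
    # Hypothetical control-transfer events.
    C = {"request_transfer", "confirm_disengagement", "release"}

    delta = {
        ("ai_in_control", "request_transfer"): "transfer_pending",
        ("transfer_pending", "confirm_disengagement"): "human_in_control",
        ("human_in_control", "release"): "ai_in_control",
    }

    def S(state: str, event: str) -> str:
        """Apply one control-transfer event; undefined pairs leave the state unchanged."""
        return delta.get((state, event), state)

    state = "ai_in_control"
    for event in ("request_transfer", "confirm_disengagement"):
        state = S(state, event)
    print(state)  # -> human_in_control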
A control point of an AI system can include but is not limited to the following:
— A function. When a system is controlled programmatically, functions implementing control logic should be designed. For this, local invocations or remote procedure calls can be considered.
— A physical facility. When a system is equipped with physical mechanisms for control, such as a steering
wheel on an assisted-driving vehicle, safety and usability factors that can affect the effectiveness and
efficiency of control should be considered.
— A signal input-output system. When a system is controlled remotely, a signal input-output subsystem can
be applied. In addition to considering the medium (e.g. air or water), distance and noise, the subsystem
should also consider expectations for control timeliness and sequencing.
Depending on the design, control points of a system can make use of the following:
— specifically designed and implemented facilities that are exclusively used for control;
— facilities that are parts of a system’s functions but can be re-used for control, such as the checkpoint and
the pause functions designed for debugging but useful for control in certain cases.
When necessary, the invocations of control points can be secured by authentication and authorization
mechanisms. For this, certification, encryption mechanisms and even control-specific channels can be
applied.
EXAMPLE An AI-based automated metal processing product line can be controlled via a digital control subsystem as well as a set of physical facilities on the production line. An AI system is used for the analysis of photographs capturing key information about the processed metal (e.g. the position and the pose of a part being processed). The controls can
include starting, stopping and pausing of subprocesses, selecting and changing of chucks, heating, cooling, lathing
and milling of materials, changing of bit tools, etc. The controls of the system can be configured in advance and issued
in real time via the digital control subsystem. Physical facilities can also be used if manual and physical controls are
needed. To use the digital control subsystem and to enter the physical control area can require that the identification
information of human controllers be checked.
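A minimal sketch of such authentication and authorization around a control point, with a hypothetical permission table and token check, is:

    # Hypothetical mapping: controller identity -> controls it may issue.
    AUTHORIZED = {"operator-7": {"pause", "stop"}}

    def invoke_control_point(controller_id: str, token_valid: bool, control: str) -> str:
        if not token_valid:                                       # authentication
            raise PermissionError("authentication failed")
        if control not in AUTHORIZED.get(controller_id, set()):  # authorization
            raise PermissionError(f"{controller_id} may not issue '{control}'")
        return f"{control} applied"

    print(invoke_control_point("operator-7", token_valid=True, control="pause"))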
— that the system can accept and conduct the control instructions from a specific controller. If not, the
controller cannot fully perform the intended control. An incomplete span of control can lead to a control
transfer from the system to the controller;
— that the controller can handle or operate all the control points of an intended control. If not, either an
uncertainty handling mechanism should be prepared or the plan for this control should be cancelled due
to the lack of feasibility.
When interacting with the control points in a span of control, rules can exist about the sequence for using
the control points.
The transfer of control is a prerequisite when an external controller decides to intervene in the functioning
of an AI system in order to prevent unwanted outcomes. A control transfer process enables the controller
to obtain the control from any agent that is controlling the AI system. For this, a preparation process for
control transfer should be considered. Important subprocesses during a preparation include checking the
span of control, preparing for engagement of control, initializing uncertainty handling strategy as well as
estimating the cost of control and control transfer. The preparation process for the transfer is shown in
Figure 3 and described as follows:
a) A control transfer preparation process is conducted based on the requirements of the control. It includes
a sequence of subprocesses:
1) The controller checks the span of control that is necessary for the intended control.
2) If the controller does not hold all the control points for the required span of control, the controller
generates an additional control transfer request before its engagement of the control. The additional
request declares the controller's intent about the upcoming operation on a subset of control points
and is sent to the AI system. Upon receiving this request, the AI system replies to the controller with a confirmation and starts to prepare for its disengagement of control. The AI system disengages its control only if the controller holds the authority for the requested control.
3) When the controller already holds all the needed control points of a span, the actions in 2) are
skipped.
4) A request of control transfer can fail, if uncertainties (see 6.8) appear during the communication
between the controller and the AI system. A failed request can trigger a redetermination of
requirements of the control which can cause the controller to adopt a different strategy for control.
5) If a control transfer request is successful then the preparation for the engagement of the control
is carried out. The controller derives a plan containing a sequence of actions (e.g. move to correct
position for control) that should be taken in order to be ready for the actual operation.
6) The cost of control as well as the possible control transfer are estimated. During this subprocess,
the controller gathers estimates of time, space, energy and material consumptions, as well as the
An important prerequisite for the controller to control an AI system is to engage a specific control. Engagement of control means carrying out a sequence of actions to control the AI system. In addition, a set of criteria should be met when performing a specific action or a sequence of actions. Useful actions include but are not limited to:
— move to a required position;
— wear or set up required equipment;
— order or precedence restriction on the engagement of control when multiple control points are involved;
— authority restriction on the engagement of control when security requirements exist on the obtainment
of control points;
— Set up a recovery point during the "initialize uncertainty handling strategy" activity in Figure 3, such
that failures during the engagement of control process can be handled and the system configuration and
data being processed can be recovered (see "handle control execution exception" activity in Figure 1).
In an ML-based AI system, a recovery point can be a set of data including a checkpoint mirror of the ML
model, runtime configurations, etc.
— Arrange a plan to handle possible damage to the environment. This is important for those AI systems working with physical objects and whose control or control transfer can influence environments (e.g. an ML-based material processing system).
— Implement a mechanism to ensure the atomicity of a control engagement, such that the control engagement process is guaranteed to either successfully occur or entirely not occur. As a result, no engagement of control over a partial span can occur.
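As a sketch of such an atomic engagement mechanism (illustrative only; acquire and release are hypothetical callbacks standing in for taking over and handing back a control point), either every control point in the span is engaged or none is:

    class EngagementError(Exception):
        """Raised when an engagement of control cannot complete atomically."""

    def engage(span: list, acquire, release) -> None:
        """Acquire all control points in the span, or none (all-or-nothing)."""
        acquired = []
        try:
            for point in span:
                acquire(point)          # take over one control point
                acquired.append(point)
        except Exception as exc:
            for done in reversed(acquired):
                release(done)           # roll back: no partial-span engagement
            raise EngagementError(f"engagement aborted at {point}") from exc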
The disengagement of control is a process opposite to the engagement of control. Disengagement of control
means the AI system is about to release and transfer control to the controller. The core task of this process
is to take a sequence of actions and then satisfy a set of criteria. Useful actions include but are not limited to
the following:
— leave a position;
The criteria of the engagement of control process can be selected and used in the context of control
disengagement, but with a different meaning for each:
— order restriction on the disengagement of control when multiple control points are relinquished;
— authority restriction on the disengagement of control when security requirements exist on the
relinquishment of control points;
— controller’s resources (e.g. idle time intervals) that can be used for the control transfer.
A transfer of control can fail if either the controller or the AI system is not well prepared or is affected by unpredicted external factors. Uncertainty should be handled when a failure happens, particularly in the cases that can lead to loss of assets, performance or any other results and risks unacceptable to both the controller and the AI system. Types of uncertainty include but are not limited to:
— communication failure;
— Specify and implement redo and undo procedures for atomic operations. This can involve the recovery of environments changed by the control transfer; in such a situation, the undo or redo of an atomic operation can be influenced.
The cost of control for the controller, the AI system, other entities and the environment should be estimated
and checked, including the following:
a) whether the magnitude of resources required by a control exceeds the limits of the system. Trade-offs
between the cost of control and the system's quality requirements based on ISO/IEC 25059 should be
considered;
b) whether the magnitude of resources required by a control affects the system's functioning currently or in the future;
c) whether the possible changes to the environment or entities that the system works with affect the system's functioning according to business requirements;
d) cognitive, physiological and physical capabilities of human controllers (e.g. reaction speeds for drivers of a vehicle).
Once estimated, the cost of control should be provided to the controller or intended stakeholders of an AI system, who determine the acceptability of the cost (see Figure 3) and take further actions regarding control of the AI system.
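A minimal sketch of such a cost-acceptability check, folding the checks a) to d) above into a single feasibility test with hypothetical cost and limit fields, is:

    from dataclasses import dataclass

    @dataclass
    class Cost:
        time_s: float
        energy_j: float

    @dataclass
    class Limits:
        time_s: float
        energy_j: float

    def cost_acceptable(cost: Cost, limits: Limits,
                        affects_functioning: bool,
                        human_capabilities_ok: bool) -> bool:
        # a) resources required must not exceed the system limits
        within_limits = (cost.time_s <= limits.time_s
                         and cost.energy_j <= limits.energy_j)
        # b), c) the control must not impair the system's functioning;
        # d) human controllers must be capable of performing the control
        return within_limits and not affects_functioning and human_capabilities_ok

    # The result is provided to the controller, who decides whether to proceed.
    print(cost_acceptable(Cost(2.0, 50.0), Limits(5.0, 100.0),
                          affects_functioning=False, human_capabilities_ok=True))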
Estimating the cost of a control transfer is useful for determining the feasibility of an intended control
transfer. The following consequences can exist when a control transfer takes place from an AI system to a
controller:
a) Out of control state: When a transfer of control happens, the AI system releases a specific set of control points to the controller. It is possible that the controller is not capable of managing the control points due to the possibly large number of control points or the complexity of the engagement process. As long as there is at least one control point that cannot be managed by the controller, control of the AI system can be lost.
b) Resource consumption: Several kinds of resources, including time duration, signal transmission bandwidth, storage, energy, etc., can be consumed by a control transfer process.
The following should be checked when estimating the cost of a control transfer:
a) whether the control transfer makes use of resources that are required by the system's functioning;
b) whether the control transfer makes use of an amount of resources exceeding the system limits;
c) whether the control transfer can lead to an out of control state.
In an AI system, more than one component can exist that can listen for and execute control instructions.
Based on system design, there can also be multiple controllers. Each controller can issue control instructions
to one or more components. Controllers or controllable components collaborate for achieving a goal.
Collaborative control can be involved in the following cases:
a) Multi-controllable components and one controller: An AI system contains multiple components (e.g. an AI-based multi-agent system) that can each listen for and execute control instructions from the controller.
c) Multi-controllable components and multi-controllers: An AI system (e.g. a group of robots controlled by multiple human controllers) contains multiple components and each component can listen for control instructions from multiple controllers.
For each of these cases, controllability characteristics are described in Table 1.
7 Controllability of AI system
4) Learning policy determination: For AI systems that can select the knowledge to learn from or
determine the approach for learning (e.g. continuous learning), the subprocesses for such decisions
should be controllable. This can be crucial for AI systems whose underlying learning policy can affect the AI system's behaviours towards human beings.
The provider of an AI system shall provide users with descriptions and documentation of the AI system's
controllability features.
For semantic computing-based continuous learning systems, the following should be controllable:
a) selection of the ontologies to be built as well as the priorities of new knowledge to be merged during a
knowledge fusion process;
b) selection of the ontologies on which knowledge computing is performed.
c) Sequentially controllable: An AI system, in any state, can respond to control and state observation instructions. The system cannot reach a required state through the execution of a single control, but it is able to reach any required state through a sequence of state observations and controls. Consumed resources can be outside acceptable limits.
d) Loosely controllable: An AI system, in any state, can respond to control and state observation instructions. The system cannot reach the required state through the execution of a single control, and it cannot guarantee that it can reach a required state via a sequence of controls and state observations. Consumed resources can be outside acceptable limits.
e) If an AI system is not controllable, all of the following apply at the same time:
1) There is no state identified or defined.
2) Only part of the parameters or appearances of the AI system are observable.
3) There are no instructions implemented for state transitions.
4) The system does not provide any instruction that can be used to make the system reach a
required state.
NOTE 1 The not controllable level is applicable to those systems or scenarios where controllability is not required.
NOTE 2 Terminating the functionality of an AI system is a basic requirement that can be designed and implemented for purposes other than control. This feature is not required by the levels of controllability.
NOTE 3 System states can be observed via the approaches in 6.1 a).
Stakeholders should consider the following principles during the crucial design and development stages of the AI system life cycle:
a) Derive controllability features based on not only the explicitly specified requirements, but also those
implicit necessities indicated by scenarios where the AI system can cause unwanted outcomes without
adequate control. The following specific types of requirements can be considered:
1) Adapted requirements are not explicitly stated but can be adapted from the environment through
learning.
2) Delegated requirements can come from another AI system, in a system-of-system structure.
Requirements delegated from another subsystem or super-system can be another type.
b) Plan controllability features depending on the AI system’s functionality, but implement them
independently from the AI system’s functionality design and development as follows:
1) Controllability features are required during the AI system’s execution. Implementation and use of
controllability can be subject to what AI system functionality is performed.
2) To improve the effectiveness and efficiency of controllability, design and development should not
depend on the AI system's functionality implementation.
c) Control is more efficient if state observation and control can be implemented separately:
1) State observation and control make use of separate communication channels.
During the inception stage of an AI system, controllability functionalities should be considered, including:
a) Determine the objectives of each controllability functionality of an AI system, including but not limited to the following:
1) problems this controllability functionality solves;
2) customer needs or business opportunities that the controllability functionality addresses;
3) metrics of success.
b) Identify the requirements for each controllability functionality (control or state observation), including:
1) For each interaction between a controller and an AI system, the following should be analysed and
recorded:
i) causal relationship between a controller's instruction and the behaviour or appearance the
system should exhibit;
ii) the system state and the control actions that can be applied to the system when it is in that state;
iii) after a control, the state in which the system is.
2) Based on the result of 8.2 b) 1), determine the requirements on control functionalities.
3) Based on the result of 8.2 b) 1), determine the requirements on system state observation
functionalities.
4) A requirement can contain functional and non-functional concerned aspects.
5) Each aspect can contain specific measures (see 9.1.3 and 9.1.4) and values that a tested AI system is
supposed to meet.
c) Identify the controllability functionalities useful in typical scenarios in which the system is supposed
to be used. This should be done in particular to prevent or stop an AI system from causing harm. The
range of controllability functionalities identified by this work is more extensive than the identification of requirements (see b)), which merely meets the system specification. The following should be
performed by stakeholders:
1) Controllability scenario identification discovers the scenarios where control or state observation
functionalities are needed. For each scenario, determine the following:
i) the expected system outputs or behaviours if controllability functionalities are executed
normally;
In the inception stage defined in ISO/IEC 22989:2022, 6.2, the term cost refers to funding. This differs from the term cost as used in this document, which refers to the resources that controls and control transfers consume. The funding-related cost of controllability functionalities should be forecast for the AI system over its entire life cycle.
For safety-critical AI systems, requirements should be identified before the system (any software or
hardware) design is undertaken, as it is usually not possible to retrofit safety design features.
Identify constraints on the AI system's socio-technical (human, procedural and technical) components and their interactions, and implement socio-technical controls to ensure they are not violated (see, for example, the systems-theoretic accident model and process (STAMP) and systems-theoretic process analysis (STPA) in Reference [1]).
8.3.1 General
The design stage of an AI system provides details for the system fulfilling requirements and targets,
according to the outcomes of the inception stage. As described in ISO/IEC 22989:2022, 6.2.3, the design of an AI system can involve various aspects, including approach, architecture, training data and risk management.
The development of an AI system's controllability corresponds to the processes realizing control and state observation functionalities, including but not limited to programming, documenting, testing and bug fixing. The target of developing an AI system's controllability is to realize the required functionality effectively and without introducing any decline or variation in performance. For this, the following suggestions should be considered:
a) Separate the ownership and use of computing resources (e.g. memory, communication bandwidth and
processor) between controllability and system functionalities. It is important to provide adequate
computing resources for control and state observations when controllability is expected to be executed
immediately in time-deterministic cases.
b) Provide proper priorities to the execution of controllability instructions. In an IT system, computing tasks are scheduled by fundamental software (e.g. an operating system) through a unified component. An equal distribution of priorities over controllability and other tasks can bring a risk of late execution of controls or state observations. This is important for those AI systems where controllability is expected to be executed immediately in time-deterministic cases.
d) The verified controllability functionalities should be listed with their expectations and actual results.
The verification process of an AI system's controllability should be documented. Annex A describes a form that can be used to document the verification process. It can be used in a test completion report.
In an AI system, controllability functionalities can exist that are intended to ensure functional safety (e.g.
the system response duration on controls of a real-time AI system, or the power consumption restriction
of certain controls of a power supply restricted AI system). Functional testing of safety controllability
functionalities should use 9.1.3 b).
2) Perform controls and state observations with correct and incorrect parameters to determine
whether the system can respond and behave correctly.
3) Use a test environment by taking advantage of the parameters in 9.1.1 b) 2), and by introducing additional influences that the system can encounter in a specific scenario (e.g. for an automated driving system, turbulence, pedestrians or obstacles on a road for testing the controllability functionalities in a "turn right" scenario).
4) Use test data and system configurations that cover not only the data and settings required by 9.1.1
b) 3), but also data and settings based on real use. For system configurations, user customized and
even wrong settings should be considered.
5) Use different system input as well as the control or state observation input as specified in 9.1.1 b) 4).
6) Evaluate system outputs that are meaningful to the scenario as well as outputs as specified in 9.1.1 b) 5).
c) Check an AI system’s outputs and behaviours, given both correct and incorrect operations for the
identified scenarios. The actual output of the system should be compared with expectations for the
identified scenarios.
d) The validated controllability functionalities should be documented for each identified scenario along
with the scenario expectations and actual outputs.
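As an informal illustration tied to the Annex A example (a cleaning robot commanded to go forward 10 cm with less than 5 % distance error), a validation check can compare expectations with actual outputs; issue_control and measure_distance_cm are hypothetical test-harness hooks:

    def validate_go_forward(issue_control, measure_distance_cm) -> bool:
        """Compare the actual travelled distance with the scenario expectation."""
        start = measure_distance_cm()
        issue_control("go_forward", distance_cm=10)
        moved = measure_distance_cm() - start
        error = abs(moved - 10.0) / 10.0
        return error < 0.05   # expectation from Table A.1: distance error < 5 %

    # A run that moves the robot 9.8 cm yields a 2 % error -> Qualified.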
The validation process of an AI system’s controllability should be documented. Annex B describes a form
that can be used to document the validation process.
AI systems have been used in many domains, and not all of them have considered, planned or implemented controllability functionalities sufficiently for the intended use. For AI systems that have been operating for some time, retrospective validation can be applied for planning and implementing controllability. Triggers for
retrospective validation can include but are not limited to:
a) hazards that happened during the system’s functioning;
b) risk assessment of the system indicates potential risks on safety or ethics;
c) legislations related to the development and use of the system have been changed;
d) operating rules or workflows of the system have been changed.
Table A.1 is an example of the documentation for the verification output of an AI system's controllability. Table A.1 includes the following columns:
a) Requirement is the description of a controllability functionality an AI system can provide.
b) Aspects can be functional or non-functional (e.g. performance efficiency) meaningful aspects that
stakeholders are interested in.
c) Controllability functionality is the description of a controllability functionality tested [see 8.2 b)].
d) Type can be Control or State observation.
e) Test environment is the collection of environmental parameters [see 9.1.1 b) 2)] concerned by the
controllability functionality.
f) Input and Output are the descriptions of the input [see 9.1.1 b) 4)] and output [see 9.1.1 b) 5)] for a tested
controllability functionality.
g) Measures and values are the pairs of measures and values that a tested AI system is supposed to meet.
h) Qualified is the judgement about whether the tested controllability functionality meets the requirement.
It can be Qualified or Not qualified.
Table A.1 — Example documentation of verification output

Row 1:
— Requirement: A cleaning robot should go forward by 10 cm (distance error < 5 %) within …
— Aspects: Functional
— Controllability functionality: Control to go forward 10 cm in no more than …
— Type: Control
— Test environment: Robot is fully charged and is put on an area to be cleaned. Extra distance and time measuring equipment …
— Input: Issue "go forward" control via …
— Output: Robot moves forward by 9.8 cm in …
— Measures and values: whether or not the robot goes forward by 10 cm with less than 5 % distance error
— Qualified: Qualified

Row 2 (same requirement):
— Aspects: Non-functional (Efficiency)
— Input: control is prepared and issued via a panel connected to the robot
— Type: Control
— Measures and values: Time duration: the time consumed for its going forward
— Qualified: Qualified
Annex B
(informative)
Table B.1 is an example of the documentation for the validation output of an AI system's controllability. Table B.1 includes the following columns:
a) Scenarios column contains the descriptions of an AI system’s actions and events in which controllability
functionalities are applied. Expectations of a scenario should also be recorded.
b) Input and Output columns record the information or actions of an AI system that communicates or
interacts with its external environment.
c) Aspects can be Functionality, Efficiency, Reliability or any other meaningful aspect of controllability,
which can influence the system’s behaviours in a scenario.
d) Controllability functionality is the description of a controllability functionality used in a scenario [see 9.2.1 a)].
e) Type can be Control or State observation.
f) Results of controllability functionality is the output or the system behaviour after the execution of a
controllability functionality.
g) Test environment is the collection of environmental parameters and influences [see 9.2.1 b) 3)] that can
affect the controllability functionality.
h) Measures and values are the pairs of measures and values that a tested AI system is supposed to meet in
a scenario with controllability functionalities executed.
i) Qualified is the judgement about whether the system's behaviour under control can permit a scenario to
proceed as expected.
Table B.1 — Example documentation of validation output

— Scenario: A cleaning robot is instructed to move toward a descending staircase while it is in a "non-cleanable area detected" state; the control is expected to be refused in order to prevent the robot from moving onto the descending stairs, dropping down and being damaged.
— Input: control to move toward a descending staircase
— Output: feedback to the control panel
— Type: Control
— Test environment: an extra cushion is prepared at the foot of the stairs
— Results of controllability functionality: the robot returns a refusal signal and stops moving
— Qualified: Qualified
Bibliography

[1] Leveson, Nancy G., Engineering a Safer World: Systems Thinking Applied to Safety, The MIT Press, 2016
[2] ISO/IEC 19505-1, Information technology — Object Management Group Unified Modeling Language (OMG UML) — Part 1: Infrastructure
[3] ISO/IEC/IEEE 24765:2017, Systems and software engineering — Vocabulary
[4] ISO 21717:2018, Intelligent transport systems — Partially Automated In-Lane Driving Systems (PADS) — Performance requirements and test procedures
[5] ISO 22166-1:2021, Robotics — Modularity for service robots — Part 1: General requirements
[6] ISO 26871:2020, Space systems — Explosive systems and devices
[7] ISO/IEC 25059, Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality model for AI systems
[8] ISO/IEC 5338, Information technology — Artificial intelligence — AI system life cycle processes
[9] ISO/IEC 11411:1995, Information technology — Representation for human communication of state transition of software