
TECHNICAL REPORT
ISO/IEC TR 27563
First edition
2023-05

Security and privacy in artificial intelligence use cases — Best practices
Sécurité et respect de la vie privée dans les cas d’usage de l’intelligence artificielle — Bonnes pratiques

Reference number
ISO/IEC TR 27563:2023(E)

© ISO/IEC 2023

COPYRIGHT PROTECTED DOCUMENT


© ISO/IEC 2023
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland


Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Abbreviated terms
5 Analysis of security and privacy
5.1 General
5.2 Application domains in ISO/IEC TR 24030:2021 use cases
5.3 Security in ISO/IEC TR 24030:2021 use cases
5.4 Privacy in ISO/IEC TR 24030:2021 use cases
6 Templates for analysis
7 Supporting information
7.1 Describe ecosystem
7.2 Provide assessment of systems of interest
7.3 Identify security and privacy concerns
7.4 Identify security and privacy risks
7.5 Identify security and privacy controls
7.6 Identify security and privacy assurance concerns
7.7 Identify security and privacy plan requirements
Annex A (informative) Additional use cases
Bibliography


Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical
activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international
organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the
work.
The procedures used to develop this document and those intended for its further maintenance
are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria
needed for the different types of document should be noted. This document was drafted in
accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives or
www.iec.ch/members_experts/refdocs).
ISO and IEC draw attention to the possibility that the implementation of this document may involve the
use of (a) patent(s). ISO and IEC take no position concerning the evidence, validity or applicability of
any claimed patent rights in respect thereof. As of the date of publication of this document, ISO and IEC
had not received notice of (a) patent(s) which may be required to implement this document. However,
implementers are cautioned that this may not represent the latest information, which may be obtained
from the patent database available at www.iso.org/patents and https://patents.iec.ch. ISO and IEC shall
not be held responsible for identifying any or all such patent rights.
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and
expressions related to conformity assessment, as well as information about ISO's adherence to
the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT) see
www.iso.org/iso/foreword.html. In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 27, Information security, cybersecurity and privacy protection.
Any feedback or questions on this document should be directed to the user’s national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.


Introduction
Artificial intelligence (AI) and machine learning (ML) are increasingly being adopted by the digital
industry, using algorithms to make decisions that can negatively impact the privacy of individuals
and, in some cases, even cause harm to some of them, unless adequate safeguards are deployed. Such
safeguards to protect privacy often depend on a variety of factors, including the specific type of
processing, the sensitivity of the data used, and the potential harm likely to be caused.
This concern has been expressed by:
— Practitioners, who identified 23 principles for AI at the 2017 Asilomar conference[1] covering
research, ethics and values, as well as longer term issues.
— Standard developers, as evidenced by the report on ethically aligned design published by the IEEE
Global Initiative on Ethics of Autonomous and Intelligent Systems[2].
— Policy makers, as exemplified by the appointment by the European Commission of a high-level
expert group on artificial intelligence and the subsequent publication of an assessment list[3].
This document provides an analysis of security and privacy of use cases provided in ISO/IEC TR 24030,
which should be used in parallel. A number of additional use cases are provided in Annex A.
This document also uses concepts from ISO/IEC TR 24028, which addresses trustworthiness in AI
systems, including approaches to establish trust (e.g. transparency, explainability, controllability), and
to achieve trustworthiness properties (e.g. resiliency, reliability, accuracy, safety, security, or privacy).


Security and privacy in artificial intelligence use cases — Best practices

1 Scope
This document outlines best practices on assessing security and privacy in artificial intelligence use
cases, covering in particular those published in ISO/IEC TR 24030.
The following aspects are addressed:
— an overall assessment of security and privacy on the AI system of interest;
— security and privacy concerns;
— security and privacy risks;
— security and privacy controls;
— security and privacy assurance; and
— security and privacy plans.
Security and privacy are treated separately as the analysis of security and the analysis of privacy can
differ.

2 Normative references
There are no normative references in this document.

3 Terms and definitions


For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1
personally identifiable information
PII
information that (a) can be used to establish a link between the information and the natural person to
whom such information relates, or (b) is or can be directly or indirectly linked to a natural person
Note 1 to entry: The “natural person” in the definition is the PII principal (3.3). To determine whether a PII
principal is identifiable, account should be taken of all the means which can reasonably be used by the privacy
stakeholder holding the data, or by any other party, to establish the link between the set of PII and the natural
person.

[SOURCE: ISO/IEC 29100:2011/Amd.1:2018, 2.9]


3.2
PII controller
privacy stakeholder (or privacy stakeholders) that determines the purposes and means for processing
personally identifiable information (PII) (3.1) other than natural persons who use data for personal
purposes
Note 1 to entry: A PII controller sometimes instructs others [e.g. PII processors (3.4)] to process PII on its behalf
while the responsibility for the processing remains with the PII controller.

[SOURCE: ISO/IEC 29100:2011, 2.10]


3.3
PII principal
natural person to whom the personally identifiable information (PII) (3.1) relates
Note 1 to entry: Depending on the jurisdiction and the particular data protection and privacy legislation, the
synonym “data subject” can also be used instead of the term “PII principal”.

[SOURCE: ISO/IEC 29100:2011, 2.11]


3.4
PII processor
privacy stakeholder that processes personally identifiable information (PII) (3.1) on behalf of and in
accordance with the instructions of a PII controller (3.2)
[SOURCE: ISO/IEC 29100:2011, 2.12]

4 Abbreviated terms

CCTV closed-circuit television

GDPR General Data Protection Regulation

HCI human-computer interaction

LINDDUN linkability, identifiability, non-repudiation, detectability, disclosure of information, unawareness, non-compliance

NIST National Institute of Standards and Technology

OEM original equipment manufacturer

PIA privacy impact assessment

PII personally identifiable information

PoC proof of concept

SDG sustainable development goals

STRIDE spoofing identity, tampering, repudiation, information disclosure, denial of service, elevation of privilege

UC use case

V2X vehicle-to-everything


5 Analysis of security and privacy

5.1 General
This document includes a security and privacy analysis of ISO/IEC TR 24030:2021 use cases. Two
electronic attachments were used:
— the first is the material used by ISO/IEC TR 24030:2021, available at: https://standards.iso.org/iso-iec/tr/24030/ed-1/en/Use+cases-v05_electronic_attachment_022021.pdf;
— the second is the material used by this document, available at: https://standards.iso.org/iso-iec/tr/27563/ed-1/en/Security-privacy-24030-ed-1-AI-use-cases.pdf.
Annex A provides a list of new use cases.

5.2 Application domains in ISO/IEC TR 24030:2021 use cases


ISO/IEC TR 24030:2021 describes 132 use cases, belonging to 22 application domains as shown in
Figure 1.
NOTE 1 134 use cases are listed in this document, as use case 96 from ISO/IEC TR 24030 has been categorized
into 3 application domains.

NOTE 2 The number of use cases per domain (e.g. 1 energy use case compared to 29 healthcare use cases) is not
an indication of the potential deployment of AI capabilities in a domain.

NOTE 3 The assignment of a use case to a domain depends on the viewpoint of experts. For instance, use case
132 (Device control using both cloud AI and embedded AI) is classified as manufacturing instead of home.

Figure 1 — Distribution of use cases by application domains

5.3 Security in ISO/IEC TR 24030:2021 use cases


Figure 2 summarizes the security analysis of ISO/IEC TR 24030 use cases in the second electronic
attachment. It shows for each application domain:
— the number of use cases for which security concerns can be negligible;
— the number of use cases for which security concerns can be limited;
— the number of use cases for which security concerns can be significant; and


— the number of use cases for which security concerns can be maximum.
NOTE 1 The assessment is based on the most critical systems of interest. For instance, use case 1 (Explainable
artificial intelligence for genomic medicine) involves two systems of interest, the genomic sequence processing
system for which security concerns can be maximum, and the genomic training system for which security
concerns can be significant. The resulting assessment is that security concerns can be maximum.

NOTE 2 The assessment result of each domain is not an indication of the potential security concern of AI in a
domain.

Figure 2 — Security analysis in AI use cases

5.4 Privacy in ISO/IEC TR 24030:2021 use cases


Figure 3 summarizes the privacy analysis of ISO/IEC TR 24030 use cases listed in the attachment. It
shows for each application domain:
— the number of use cases for which privacy concerns can be negligible;
— the number of use cases for which privacy concerns can be limited;
— the number of use cases for which privacy concerns can be significant;
— the number of use cases for which privacy concerns can be maximum.
NOTE 1 The assessment is based on the most critical systems of interest. For instance, use case 1 (Explainable
artificial intelligence for genomic medicine) involves two systems of interest, the genomic sequence processing
system for which privacy concerns can be maximum, and the genomic training system for which privacy
concerns can be negligible. The resulting assessment is that privacy concerns can be maximum.

NOTE 2 The assessment result of each domain is not an indication of the potential privacy concern of AI in a
domain.


Figure 3 — Privacy analysis in use cases
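As illustrated by the notes in 5.3 and 5.4, the assessment of a use case is driven by its most critical system of interest. The following minimal sketch shows that aggregation rule; it assumes Python, and the per-system data below is only an illustrative mirror of the example in NOTE 1 of 5.3, not part of the electronic attachments.

# Concern levels ordered from least to most critical, as used in this document.
LEVELS = ["negligible", "limited", "significant", "maximum"]

def overall_concern(per_system_levels):
    # The use case inherits the most critical level among its systems of interest.
    return max(per_system_levels, key=LEVELS.index)

# Hypothetical per-system assessment mirroring the example in NOTE 1 of 5.3:
security = {"genomic sequence processing system": "maximum",
            "genomic training system": "significant"}
print(overall_concern(security.values()))  # -> maximum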

6 Templates for analysis


The template used to collect material is shown in Table 1. It includes three types of table cells:
— title cell (e.g. use case name);
— instruction cell (e.g. describe the ecosystem);
— example cell (e.g. System of interest: < use case system of interest > ).
Example cells can include text in angle brackets, e.g. < asset A >, which is intended to be replaced by
text specific to the use case.
NOTE 1 The proposed texts in example cells use vocabularies and concepts which are aligned with existing
security and privacy references (See [7][8][9][10][13][16][17][18][19][24][15][14]).

NOTE 2 A use case can involve several systems of interest.


Table 1 — Template for collecting material

ID: < identification as provided by ISO/IEC TR 24030 >
Use case name: < use case name as provided by ISO/IEC TR 24030 >

Ecosystem (describes the ecosystem: identifies the systems of interest, the stakeholders, and the stakeholders’ assets that are impacted by AI):
Systems of interest:
— < use case system of interest >
Stakeholders:
— < stakeholder A >
Stakeholder assets that are impacted by AI:
— < asset A >

Assessment of system of interest (assessment on security and privacy concerns):
System of interest: < use case system of interest >
— Security and privacy concerns on < use case system of interest > are < negligible, limited, significant, maximum >

Security and privacy concerns (highlights security and privacy concerns that are impacted by AI):
— Protection goals to consider for < asset A > asset are < confidentiality, integrity, availability, unlinkability, transparency, intervenability[8] >
— The following privacy principles to consider for a < use case system of interest > integrating a < asset A > asset: < e.g. consent and choice, use retention and disclosure limitation[9] >
— The following framework concepts to consider for a < use case system of interest > integrating a < asset A > asset: < e.g. Identify, Protect, Identify-P, Govern-P[21][15] >

Security and privacy risks (identifies security and privacy risks that are impacted by AI):
— Privacy risks related to < asset A > asset (e.g. re-identification of … while performing AI training and reasoning operations)
— Security risks related to < asset A > asset (e.g. alteration of learning data with wrong information, security of training operation, security of reasoning operation, …)

Security and privacy controls (identifies security and privacy controls that are impacted by AI):
— Security and privacy controls from < reference (see [22][23][24][17][7]) > to be considered for < use case system of interest >

Security and privacy assurance (identifies security and privacy assurance aspects that are impacted by AI):
— Organization operating the < use case system of interest > integrating < asset A > asset to ensure that it can be audited[19][20]. This includes organisational and technical evidence.

Security and privacy plan (identifies security and privacy plan aspects that are impacted by AI):
— Organization operating the < use case system of interest > integrating < asset A > asset to establish a security and privacy plan[16] that will be validated and reviewed periodically for continual improvement.
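Where the collected material is to be handled programmatically, the template of Table 1 can be mirrored by a simple record structure. The sketch below is only an illustration, assuming Python and hypothetical field names; it is not part of the template itself.

# Hypothetical, minimal mirror of the Table 1 template for one use case entry.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCaseAnalysis:
    id: str                               # identification as provided by ISO/IEC TR 24030
    name: str                             # use case name as provided by ISO/IEC TR 24030
    systems_of_interest: List[str] = field(default_factory=list)
    stakeholders: List[str] = field(default_factory=list)
    assets_impacted_by_ai: List[str] = field(default_factory=list)
    concern_level: str = "negligible"     # negligible, limited, significant or maximum
    concerns: List[str] = field(default_factory=list)   # protection goals, privacy principles, framework concepts
    risks: List[str] = field(default_factory=list)
    controls: List[str] = field(default_factory=list)
    assurance: List[str] = field(default_factory=list)
    plan: List[str] = field(default_factory=list)

entry = UseCaseAnalysis(id="UC-1", name="Explainable artificial intelligence for genomic medicine")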

7 Supporting information

7.1 Describe ecosystem


The types of stakeholders and systems of interest that can be considered are shown in Table 2.


Table 2 — Points of attention on ecosystem

Type of stakeholders:
— Supplier (including solution providers and technology providers)
— Entity that does not process PII at all
— PII controller
— PII processor
— PII principals
— Third parties
Type of system of interest:
— AI system of interest (e.g. a reasoning engine)
— System of interest that includes an asset to protect and uses an AI subsystem

7.2 Provide assessment of systems of interest


The qualifiers that can be used are “can be negligible”, “can be limited”, “can be significant”, “can be
maximum”.
NOTE It is possible that concerns on security and privacy are not the same.

7.3 Identify security and privacy concerns


For each system of interest, the points of attention are shown in Table 3, Table 4, Table 5, and Table 6.
NOTE 1 Table 3 is based on ISO/IEC TR 27550.

NOTE 2 Table 4 is based on ISO/IEC 29100.

NOTE 3 Table 5 is based on ISO/IEC TS 27110 and the NIST privacy framework[15].

Table 3 — Points of attention on protection goals

Security protection goals:
— Confidentiality: property that information is not made available or disclosed to unauthorized individuals, entities, or processes
— Integrity: property of accuracy and completeness
— Availability: property of being accessible and usable upon demand by an authorized entity
Privacy protection goals:
— Unlinkability: property that a PII principal can make multiple uses of resources or services without others being able to link these uses together
— Transparency: property that all privacy-relevant data processing, including the legal, technical and organizational settings, can be understood and reconstructed
— Intervenability: property that PII principals, PII controllers, PII processors and supervisory authorities can intervene in all privacy-relevant data processing

Table 4 — Points of attention on privacy principles

— Consent and choice: provisions which are made to provide PII principals with the opportunity to choose how their PII is handled and to allow a PII principal to withdraw consent easily and free of charge
— Purpose legitimacy and specification: communicating the purpose and awareness that it is expected to comply with applicable law and rely on a permissible legal basis
— Collection limitation: limiting the collection of PII to that which is within the bounds of applicable law and strictly necessary for the specified purpose(s)
— Data minimization: minimize the PII which is processed and the number of privacy stakeholders and people to whom PII is disclosed or who have access to it
— Use, retention and disclosure limitation: limiting the use, retention and disclosure (including transfer) of PII to that which is necessary in order to fulfil specific, explicit and legitimate purposes
— Accuracy and quality: ensuring that the PII processed is accurate, complete, up-to-date (unless there is a legitimate basis for keeping outdated data), adequate and relevant for the purpose of use
— Openness, transparency and notice: providing PII principals with clear and easily accessible information about the PII controller’s policies, procedures and practices with respect to the processing of PII
— Individual participation and access: giving PII principals the ability to access and review their PII, provided their identity is first authenticated with an appropriate level of assurance and such access is not prohibited by applicable law
— Accountability: documenting and communicating as appropriate all privacy-related policies, procedures and practices; assigning to a specified individual within the organization (who can in turn delegate to others in the organization as appropriate) the task of implementing the privacy-related policies, procedures and practices
— Information security: protecting PII under its authority with appropriate controls at the operational, functional and strategic level to ensure the integrity, confidentiality and availability of the PII, and to protect it against risks such as unauthorized access, destruction, use, modification, disclosure or loss throughout the whole of its life cycle
— Privacy compliance: verifying and demonstrating that the processing meets data protection and privacy safeguarding requirements by periodically conducting audits using internal auditors or trusted third-party auditors

Table 5 — Points of attention on activities

Security:
— Identify: ecosystems of stakeholders and threat environment
— Protect: safeguards
— Detect: discover cybersecurity events
— Respond: response to cybersecurity events
— Recover: restoration and communication after a cybersecurity event
Privacy:
— Identify-P: organizational understanding to manage privacy risk for individuals arising from data processing
— Govern-P: governance controls for privacy
— Control-P: develop and implement appropriate activities to enable organizations or individuals to manage data with sufficient granularity to manage privacy risks
— Communicate-P: communication capabilities so that organizations and individuals have an understanding on how data are processed
— Protect-P: data protection safeguards

Table 6 lists points of attention on integration of security and privacy in an ecosystem.


NOTE 4 Table 6 is based on Annex B of ISO/IEC TS 27110.


Table 6 — Points of attention on integration

Reference architectures
— Example of activities: specify how the cybersecurity framework activities fit with the reference architecture used in the business environment and its ecosystem of internal and external stakeholders
— Example of input: interview with domain architecture experts; reference architecture documents
— Example of output: work product specifying the correspondence between the cybersecurity framework and the ecosystem reference architecture

Roles and stakeholders
— Example of activities: specify the mapping between roles and stakeholders in the domain ecosystem and the cybersecurity framework activities
— Example of input: interview with domain experts; list of domain use cases describing roles and stakeholders
— Example of output: work product specifying the correspondence between the cybersecurity framework and the roles and stakeholders in the domain ecosystem

Security and privacy practices
— Example of activities: specify the relationship with the security and privacy practices in the domain ecosystem
— Example of input: interview with domain security and privacy experts; reference security and privacy documents
— Example of output: work product specifying the correspondence between the cybersecurity framework and security and privacy practices in the domain ecosystem

System life cycle processes
— Example of activities: identify how the system life cycle processes integrate the cybersecurity framework
— Example of input: interview with system life cycle experts; reference system life cycle documents
— Example of output: work product specifying the correspondence between the cybersecurity framework and the system life cycle processes of the domain ecosystem

Table 7 lists points of attention on AI specific security and privacy vulnerabilities.


NOTE 5 Table 7 is based on ISO/IEC TR 24028.

Table 7 — Points of attention on AI trustworthiness vulnerabilities

AI-specific security threats:
— Data poisoning: influencing training data to manipulate the results of a predictive model
— Adversarial attacks: provide perturbed input data to a valid model
— Model stealing: send to targeted model a high number of prediction queries and use the response received (the prediction) to train another model
— Hardware-focused threats to confidentiality and integrity: affect confidentiality of data; affect integrity of data and computation
AI-specific privacy threats:
— Upon data acquisition: not following principle of PII minimization; compromising data storage
— Upon data pre-processing and modelling: using AI to infer PII from data; using AI to re-identify information using multiple data sources
— Upon model query: using model for non-authorized purpose (e.g. social service screening, credit card scoring)
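The data poisoning threat in Table 7 can be illustrated with a deliberately small example. The synthetic one-dimensional data and the toy nearest-centroid "model" below are assumptions made only for brevity; the point is that injecting mislabelled records into the training set is enough to change the model's behaviour on clean test data.

import random

random.seed(0)

def sample(label, n):
    centre = 0.0 if label == 0 else 2.0    # class 0 around 0.0, class 1 around 2.0
    return [(random.gauss(centre, 1.0), label) for _ in range(n)]

def train(data):
    # One centroid per class: the mean of the training values with that label.
    return {lbl: sum(x for x, y in data if y == lbl) / sum(1 for _, y in data if y == lbl)
            for lbl in (0, 1)}

def accuracy(centroids, data):
    predict = lambda x: min(centroids, key=lambda lbl: abs(x - centroids[lbl]))
    return sum(predict(x) == y for x, y in data) / len(data)

train_data = sample(0, 500) + sample(1, 500)
test_data = sample(0, 200) + sample(1, 200)

# The attacker injects far-away values deliberately mislabelled as class 0.
poisoned = train_data + [(6.0, 0)] * 300

print("accuracy, clean training set   :", accuracy(train(train_data), test_data))
print("accuracy, poisoned training set:", accuracy(train(poisoned), test_data))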

7.4 Identify security and privacy risks


For each system of interest, security and privacy risks can be identified, and the resulting consequences
determined. See ISO/IEC 27005 for security and ISO/IEC 29134 for privacy.
The upper part of Figure 4 shows the relationships between security and privacy risks.


SOURCE NIST[15], reproduced with the permission of the authors.

Figure 4 — Security and privacy risks, and related functions

Table 8 shows examples of categories of threats that can be used; Table 9 shows examples of categories of issues related to privacy consequences.
NOTE The categories of threats in Table 8 are based on the STRIDE and LINDDUN taxonomies.

Table 8 — Points of attention on threats

Security threat (STRIDE taxonomy):
— Spoofing: the identity of the users is established (or anonymous entities are accepted)
— Tampering: data and system resources are only changed in appropriate ways by appropriate people
— Repudiation: users cannot perform an action and later deny performing it
— Information disclosure: data are only available to the users intended to access it
— Denial of service: systems are ready upon request and perform acceptably
— Elevation of privilege: users are explicitly allowed or denied access to resources
Privacy threat (LINDDUN taxonomy):
— Linkability: establishing the link between two or more actions, identities, and pieces of information
— Identifiability: establishing the link between an identity and an action or a piece of information
— Non-repudiation: inability to deny having performed an action that other parties can neither confirm nor contradict
— Detectability: detecting the PII principal’s activities
— Disclosure of information: disclosing the data content or controlled release of data content
— Unawareness: PII principals being unaware of what PII about them is being processed; unawareness by PII controllers of life cycle weaknesses that can exist or develop due to greater awareness of the content of the training model or other ML techniques
— Non-compliance: the PII controller fails to inform the data subject about the system’s privacy policy, or does not allow the PII principal to specify consents in compliance with legislation

The categories of issues related to privacy consequences listed in Table 9 can be used.

Table 9 — Points of attention on issues related to privacy consequences

— Discrimination: unfair, discriminatory or biased outcome that would largely affect the PII principals in any given situation through the processed data about them
— Unsolicited tracking: automatically identifying and eventually tracking PII principals and their activities without their consent and/or knowledge
— Negligence: failure of PII processors and PII controllers to act with prudence in protecting the information, even while knowing the risks represented by the processing
— Lack of transparency: inability to inform or be transparent to PII principals regarding how their PII is processed or handled and for which purpose
— Lack of proportionality: the amount of PII collected by the system is not proportional to its processing purpose
— Loss of anonymity: integration of numerous systems and databases which can affect the anonymity of PII principals
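The linkability and identifiability threats in Table 8, and the loss of anonymity consequence in Table 9, can be illustrated by a toy linkage example. The data sets and field names below are entirely hypothetical; the point is only that records stripped of direct identifiers can still be re-identified by joining two sources on shared quasi-identifiers.

# "De-identified" data set: no name, but quasi-identifiers remain.
health_records = [
    {"zip": "1010", "birth_year": 1980, "diagnosis": "C61"},
    {"zip": "2020", "birth_year": 1975, "diagnosis": "E11"},
]
# Public data set with direct identifiers and the same quasi-identifiers.
voter_roll = [
    {"name": "A. Person", "zip": "1010", "birth_year": 1980},
    {"name": "B. Person", "zip": "3030", "birth_year": 1990},
]

for record in health_records:
    matches = [v["name"] for v in voter_roll
               if (v["zip"], v["birth_year"]) == (record["zip"], record["birth_year"])]
    if len(matches) == 1:   # a unique quasi-identifier combination re-identifies the person
        print(matches[0], "is linked to diagnosis", record["diagnosis"])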

7.5 Identify security and privacy controls


For each system of interest, security and privacy controls can be identified.
Table 5, based on ISO/IEC TS 27110 and the NIST privacy framework[15], can be used to guide the
identification. The lower part of Figure 4 shows examples of functions that can be used to identify
controls.
Table 10 lists control categories as proposed by ISO/IEC 27001, ISO/IEC 27701 and ISO/IEC 29151 for
information security. Table 11 lists control categories as proposed by ISO/IEC 27002.
NOTE Table 10 is based on ISO/IEC 27001:2013, Annex A.


Table 10 — Control categories for information security

— Information security policies: management direction
— Organization of information security: internal organization; mobile devices and teleworking
— Human resource security: prior to employment; during employment; termination and change of employment
— Asset management: responsibility for assets; information classification; media
— Access control: business requirements for access control; user access management; user responsibilities; system and application access control
— Cryptography: cryptographic controls
— Physical and environmental security: secure areas; equipment
— Operation security: operational procedures and responsibilities; protection from malware; backup; logging and monitoring; control of operational software; technical vulnerability management; information systems audit considerations
— Communication security: network security management; information transfer
— System acquisition, development and maintenance: security requirements of information system; security in development and support processes; test data
— Supplier relationships: information security in supplier relationships; supplier service delivery management
— Information security incident management: management of information security incidents and improvements
— Information security aspects of business continuity management: information security continuity; redundancies
— Compliance: compliance with legal and contractual requirements; information security reviews


Table 11 — Control categories for information security based on ISO/IEC 27002

Organizational controls:
— Policies for information security
People controls:
— Screening
— Terms and conditions of employment
— Information security awareness, education and training
— Disciplinary process
— Responsibilities after termination or change of employment
— Confidentiality or non-disclosure agreements
— Remote working
— Information security event reporting
Physical controls:
— Physical security perimeters
— Physical entry
— Securing offices, rooms and facilities
— Physical security monitoring
— Protecting against physical and environmental threats
— Working in secure areas
— Clear desk and clear screen
— Equipment siting and protection
— Security of assets off-premises
— Storage media
— Supporting utilities
— Cabling security
— Equipment maintenance
— Secure disposal or re-use of equipment
Technological controls:
— User end point devices
— Privileged access rights
— Information access restriction
— Access to source code
— Secure authentication
— Capacity management
— Protection against malware
— Management of technical vulnerabilities
— Configuration management
— Information deletion
— Data masking
— Data leakage prevention
— Information backup
— Redundancy of information processing facilities
— Logging
— Monitoring activities
— Clock synchronization
— Use of privileged utility programs
— Installation of software on operational systems
— Networks security
— Security of network services
— Segregation of networks
— Web filtering
— Use of cryptography
— Secure development life cycle
— Application security requirements
— Secure system architecture and engineering principles
— Secure coding
— Security testing in development and acceptance
— Outsourced development
— Separation of development, test and production environments
— Change management
— Test information
— Protection of information systems during audit testing

Table 12 lists control categories as proposed by ISO/IEC 27701 for PII controllers.

Table 12 — Additional supporting information for PII controllers (for information systems)

Conditions for collection and processing:
— Identify and document purpose
— Identify lawful basis
— Determine when and how consent is to be obtained
— Obtain and record consent
— Privacy impact assessment
— Joint PII controller
— Records related to processing PII
Obligations to PII principals:
— Determining and fulfilling obligations to PII principals
— Determining information for PII principals
— Providing information to PII principals
— Providing mechanism to modify or withdraw consent
— Providing mechanism to object to PII processing
— Access, correction and/or erasure
— PII controllers’ obligation to inform third parties
— Handling requests
— Automated decision making
Privacy by design and privacy by default:
— Limit collection
— Limit processing
— Accuracy and quality
— PII minimization objectives
— PII de-identification and deletion at the end of processing
— Temporary files
— Retention
— Disposal
— PII transmission controls
PII sharing, transfer and disclosure:
— Identify basis for PII transfer between jurisdictions
— Countries and international organizations to which PII can be transferred
— Records of transfer of PII
— Records of PII disclosure to third parties

Table 13 below lists control categories as proposed by ISO/IEC 27701 for PII processors.

Table 13 — Additional supporting information for PII processors (for information systems)

Conditions for collection and processing:
— Customer agreement
— Organization’s purposes
— Marketing and advertising use
— Infringing instruction
— Customer obligations
— Records related to processing PII
Obligations to PII principals:
— Obligations to PII principals
Privacy by design and privacy by default:
— Temporary files
— Return, transfer or disposal of PII
— PII transmission controls
PII sharing, transfer and disclosure:
— Basis for PII transfer between jurisdictions
— Countries and international organizations to which PII can be transferred
— Records of PII disclosure to third parties
— Notification of PII disclosure requests
— Legally binding PII disclosures
— Disclosure of subcontractors used to process PII
— Engagement of a subcontractor to process PII
— Change of subcontractor to process PII
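Once threats have been identified (see 7.4), the control categories in Table 10 to Table 13 can be shortlisted per system of interest. The mapping below is not defined by this document; it is a hypothetical illustration, assuming Python, of how such a shortlist can be derived.

# Hypothetical mapping from threat categories (7.4) to example control categories (7.5).
CANDIDATE_CONTROLS = {
    "spoofing": ["Secure authentication", "Privileged access rights"],
    "information disclosure": ["Use of cryptography", "Data leakage prevention",
                               "Information access restriction"],
    "denial of service": ["Capacity management",
                          "Redundancy of information processing facilities"],
    "identifiability": ["PII minimization objectives",
                        "PII de-identification and deletion at the end of processing"],
    "non-compliance": ["Identify and document purpose", "Privacy impact assessment"],
}

def shortlist(identified_threats):
    # Collect, without duplicates, the candidate controls for the identified threats.
    return sorted({control
                   for threat in identified_threats
                   for control in CANDIDATE_CONTROLS.get(threat, [])})

print(shortlist(["spoofing", "identifiability"]))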

7.6 Identify security and privacy assurance concerns


For each system of interest, security and privacy assurance points of attention can be identified.
Examples are shown in Table 14.


Table 14 — Points of attention on assurance

Evidence for security and privacy assurance:
Assurance focuses on verifying that requirements concerning security and privacy for an AI system are met. Evidence is defined for each requirement.
EXAMPLE 1 A design report explains how explainability is done.
EXAMPLE 2 The AI system has an HCI for explainability.
EXAMPLE 3 A privacy impact assessment report is provided.
Organizational and technical evidence:
Organizational evidence
EXAMPLE 4 A periodic review of risks is made.
Technical evidence
EXAMPLE 5 Demonstrating that a specific de-identification mechanism is used.
Assurance approach and metrics for assurance:
Audits can focus on system assurance or on process assurance.
EXAMPLE 6 A system assurance can be the security and privacy certification of a machine learning (ML) capability.
EXAMPLE 7 A process assurance can be the audit that an AI system life cycle process is at a given integrity level.
NOTE Ecosystem assurance can depend on the underlying governance approach.
Competence and ecosystem for assurance:
To be effective, assurance is based on the requirements.
EXAMPLE 8 ISO/IEC 27001 is supported by ISO/IEC 27006.
EXAMPLE 9 ISO/IEC 27701 and ISO/IEC 27002 are supported by ISO/IEC TS 27006-2.

7.7 Identify security and privacy plan requirements


For each system of interest, points of attention on the security and privacy plan can be identified. Examples
are shown in Table 15 and Table 16.
NOTE Table 15 is based on ISO/IEC TS 27570.

Table 15 — Points of attention on security and privacy ecosystem plan

Governance process:
The governance process focuses on the establishment of security and privacy policies, and the continuous monitoring of their proper implementation in the ecosystem. These activities are carried out by the governing bodies of the ecosystem, as well as by the organizations in the ecosystem which implement the security and privacy policies.

Data management process:
The data management process focuses on the management of security and privacy in the creating, capturing, collecting, transforming, publishing, accessing, transferring and archiving of data within an ecosystem. These activities are carried out by the governing bodies of an ecosystem, as well as by the organizations in the ecosystem.

Risk management process:
The risk management process deals with the analysis and the treatment of security and privacy risks in an ecosystem. These activities are carried out by the governing bodies of the ecosystem, as well as by the organizations in the ecosystem.

Engineering process:
The engineering process is a set of activities related to the life cycle of a service in an ecosystem. These activities are carried out by the governing bodies of the ecosystem, as well as by the organizations in the ecosystem concerned with the delivery, and the use, of the ecosystem service. It elaborates conceptual principles such as privacy by design and privacy by default and other important design goals in applicable jurisdictions. It also considers the requirements specified in ISO/IEC TR 27550.

Citizen engagement process:
The citizen engagement process focuses on consultation with citizens on security and privacy rules and policies at governance level, and on support for the enforcement of these rules and policies concerning the security and privacy of an ecosystem service.

Table 16 — Points of attention on security and privacy plan

Continuous determination of roles:
Specific responsibilities are associated with certain stakeholders (e.g. PII controllers, PII processors). It is important to have a continuous assessment of whether a stakeholder is changing its role. For instance, it is possible that an operator of an AI system deployed it with the understanding that no PII is collected, but further operations can lead to a status where the AI system is collecting PII.
Examples of factors that can lead to this situation:
— Governance capabilities (the AI system dynamically decides to collect some type of data);
— Re-identified data (some data that is initially categorized as non-PII is now PII);
— Error in data sharing agreements.

Organizational measures in the ecosystem:
Virtually all use cases of ISO/IEC TR 24030 are part of an ecosystem. Organizational measures are implemented when stakeholders are expected to synchronise their actions. For instance, when data sets include privacy leaks, all the stakeholders using the data sets can be informed and take appropriate actions.

Accountability:
Organizations (both processors and controllers) demonstrate accountability and responsibility when processing personal information, e.g. by having a data protection officer or a dedicated data protection team/office responsible for the compliance of the organization.

Compliance:
Ensuring that organizations comply with the data processing and data protection requirements of their respective applicable jurisdictions, including their adherence to data privacy principles.

Ethics principles:
The digital economy is built on massive streams of data being processed. With the application of AI, traditional governance frameworks and strategies can be insufficient. A set of data ethics principles for building programmes and AI solutions can reinforce processes such as decision-making and ethical controls that can mitigate the new risks and challenges that AI encounters.

Data breach and security incident management:
With the growth of the digital economy and of data processing, there are increasing incidents of personal data breaches that impact both public and private entities, entailing significant economic and legal costs for those involved in the processing of personal data, and exposing data subjects to identity theft, crime and other harm. To afford protection of personal data, reasonable and appropriate measures are implemented to ensure that organizations are ready for data breaches and security incidents when they happen.


Annex A
(informative)

Additional use cases

A.1 General
This annex provides additional examples of use cases, elaborated by experts in the scope of this
document, which are not listed in ISO/IEC TR 24030.

A.2 Abnormal transaction


The use case in Table A.1 follows the template described in Clause 6. Figure A.1 summarizes the impact
of the use case on security and privacy.

Table A.1 — Abnormal transaction use case

ID: SC27–1
Use case name: Abnormal transactions of internal control and compliance employees in bank system

Ecosystem (describe the ecosystem: identify the systems of interest, the stakeholders, and the stakeholders’ assets that are impacted by AI):
Systems of interest:
— Abnormal transactions monitoring system
Stakeholders:
— Bank
— Bank regulator
Stakeholder assets that are impacted by AI:
— Core banking system
— Internal control and compliance data

Assessment of system of interest (assessment on security and privacy concerns):
System of interest: Abnormal transactions monitoring system
— Security concerns on abnormal transactions monitoring system are significant
— Privacy concerns on abnormal transactions monitoring system are significant

Security and privacy concerns (highlight security and privacy concerns that are impacted by AI):
— All security and privacy protection goals to consider for abnormal transactions monitoring system (confidentiality, integrity, availability, unlinkability, transparency, intervenability)
— All security framework concepts to consider for abnormal transactions monitoring system (Identify, Protect, Detect, Respond, Recover)
— All privacy framework concepts to consider for abnormal transactions monitoring system (Identify-P, Govern-P, Control-P, Communicate-P, Protect-P)

Security and privacy risks (identify security and privacy risks that are impacted by AI):
— Privacy risks related to abnormal transactions monitoring system (e.g. disclosure of identity information and sensitive legal information while performing AI training and reasoning operations)
— Security risks related to abnormal transactions monitoring system (e.g. alteration of learning data with wrong information, security of training operation, security of reasoning operation)

Security and privacy controls (identify security and privacy controls that are impacted by AI):
— Security controls from ISO/IEC 27001 or ISO/IEC 27002 to be considered for abnormal transactions monitoring system (e.g. information security policies, asset management, physical and environmental security, access control, operation security, information security incident management)
— Privacy controls from ISO/IEC 27701 to be considered for abnormal transactions monitoring system

Security and privacy assurance (identify security and privacy assurance aspects that are impacted by AI):
— Organization using abnormal transactions monitoring system to ensure that the system can be audited (see ISO/IEC 27006-1 and ISO/IEC 27006-2). This includes organizational and technical evidence.

Security and privacy plan (identify security and privacy plan aspects that are impacted by AI):
— Organization using abnormal transactions monitoring system to establish a security plan that will be validated and reviewed periodically for continual improvement.

Impact summary (picture summarizing the impact of the use case on security and privacy):
Figure A.1 shows the impact of the use case on security and privacy.

Picture source code (Sequencediagram.org):
participant system of interest
participant asset
participant security impact
participant privacy impact
box over system of interest,asset:abnormal transactions monitoring system
box right of asset:Core banking system
parallel
box right of asset:internal control and compliance data
rbox right of security impact #lightgrey:significant
rbox right of privacy impact #lightgrey:significant
parallel off


Figure A.1 — UC SC27–1 Abnormal transactions of internal control and compliance employees
in bank system

A.3 Financial risk control


The use case in Table A.2 follows the template described in Clause 6. Figure A.2 summarizes the impact
of the use case on security and privacy.

Table A.2 — Financial risk control

ID: SC27–2
Use case name: Financial risk control

Ecosystem (describe the ecosystem: identify the systems of interest, the stakeholders, and the stakeholders’ assets that are impacted by AI):
Systems of interest:
— Financial risk management system
Stakeholders:
— Financial institution (such as a bank)
— Financial regulator
Stakeholder assets that are impacted by AI:
— Financial business management system
— Financial performance data

Assessment of system of interest (assessment on security and privacy concerns):
System of interest: Financial risk management system
— Security concerns on financial risk management system are significant
— Privacy concerns on financial risk management system are significant

Security and privacy concerns (highlight security and privacy concerns that are impacted by AI):
— All security and privacy protection goals to consider for financial risk management system (confidentiality, integrity, availability, unlinkability, transparency, intervenability)
— All security framework concepts to consider for financial risk management system (Identify, Protect, Detect, Respond, Recover)
— All privacy framework concepts to consider for financial risk management system (Identify-P, Govern-P, Control-P, Communicate-P, Protect-P)

Security and privacy risks (identify security and privacy risks that are impacted by AI):
— Privacy risks related to financial risk management system (e.g. disclosure of identity information and sensitive legal information while performing AI training and reasoning operations)
— Security risks related to financial risk management system (e.g. alteration of learning data with wrong information, security of training operation, security of reasoning operation)

Security and privacy controls (identify security and privacy controls that are impacted by AI):
— Security controls from ISO/IEC 27002 to be considered for financial risk management system (e.g. information security policies, asset management, physical and environmental security, access control, operation security, information security incident management)
— Privacy controls from ISO/IEC 27701 to be considered for financial risk management system

Security and privacy assurance (identify security and privacy assurance aspects that are impacted by AI):
— Organization using financial risk management system to ensure that the system can be audited (see ISO/IEC 27006-1 and ISO/IEC TS 27006-2). This includes organizational and technical evidence.

Security and privacy plan (identify security and privacy plan aspects that are impacted by AI):
— Organization using financial risk management system to establish a security plan that will be validated and reviewed periodically for continual improvement.

Impact summary (picture summarizing the impact of the use case on security and privacy):
Figure A.2 illustrates the impact of the use case on security and privacy.

Picture source code (Sequencediagram.org):
participant system of interest
participant asset
participant security impact
participant privacy impact
box over system of interest,asset:Financial risk management system
box right of asset:Financial business management system
parallel
box right of asset:Financial performance data
rbox right of privacy impact#lightgrey:significant
rbox right of security impact#lightgrey:significant
parallel off

Figure A.2 — UC SC27–2 Financial risk control


A.4 AI webcam employee monitoring


The use case in Table A.3 follows the template described in Clause 6.

Table A.3 — AI webcam employee monitoring

ID: SC27–5
Use case name: AI webcam employee monitoring

Ecosystem (describe the ecosystem: identify the systems of interest, the stakeholders, and the stakeholders’ assets that are impacted by AI):
Systems of interest:
— AI webcam employee monitoring
Stakeholders:
— Business processing outsourcing companies
Stakeholder assets that are impacted by AI:
— Employees
— Productivity of employees
— ICT resources

Assessment of system of interest (assessment on security and privacy concerns):
System of interest: AI webcam employee monitoring
— Privacy concerns for employees are significant, as they are monitored while working, especially those in a telecommuting setup.

Security and privacy concerns (highlight security and privacy concerns that are impacted by AI):
Security concerns:
— Confidentiality and integrity
Privacy concerns:
— Unlinkability, transparency, purpose legitimacy and proportionality. All privacy framework concepts to consider for monitoring productivity of employees, ensuring that they are aware of the processing and that the data being processed are proportional to the declared purpose (Identify-P, Govern-P, Control-P, Communicate-P, Protect-P).

Security and privacy risks (identify security and privacy risks that are impacted by AI):
Security risks:
— Repudiation, information disclosure, spoofing
Privacy risks:
— Identifiability, detectability, disclosure of information, unawareness, non-compliance, lack of transparency, unsolicited tracking

Security and privacy controls (identify security and privacy controls that are impacted by AI):
— Controls from ISO/IEC 27001 apply (e.g. mobile devices and teleworking, logging and monitoring, user responsibilities, compliance with legal and contractual requirements)
— Controls from ISO/IEC 27701 apply (e.g. identify and document purpose, identify lawful basis, privacy impact assessment, obligations to PII principals, privacy by design and privacy by default, records of PII disclosure to third parties)

Security and privacy assurance (identify security and privacy assurance aspects that are impacted by AI):
— Assurance approach and metrics for assurance

Security and privacy plan (identify security and privacy plan aspects that are impacted by AI):
— All security and privacy plan requirements apply (governance process, data management process, risk management process, engineering process, citizen engagement process).

A.5 Training with privacy-sensitive data


The use case in Table A.4 follows the template contained in ISO/IEC TR 24030.


Table A.4 — Training with privacy-sensitive data


ID: SC27–3
Use case name: Training with privacy-sensitive data
Application domain: Citizen security
Independently of the purpose or of the conditions of use of AI-based algorithms, the usage of privacy-sensitive data is often critical in the training of the algorithms themselves. This is especially true when the situations to monitor cover low-probability random real-life events, implying training with a huge volume of real-life data.
This is typically the case in citizen security applications, where abnormal or dangerous situations are expected to be detected to alert the first responders (who will then analyse the situation and have the last word); data can be as varied as video-surveillance data or social network traffic.
Such privacy-sensitive training (typically because data cannot be made available or are destroyed after a few days) has today become the bottleneck for the usage of advanced AI tools, while such tools are expected by first responders and law-enforcement entities as a support to the missions they are mandated by law to conduct.
This use case, although it does not fully match the intent of the proposed template, is nevertheless summarized below, using the example of a hypothetical tool designed to detect pickpockets in a crowd, using video-surveillance data.
Deployment model: Organization-wide, possibly decentralized at each data-source level
Status: Available, but lacking reliability due to the lack of training with real data
Scope: Detect abnormal situations and alert the relevant first responders
Objective(s): Multiply the staff's incident-detection capacity
Short description (not more than 150 words): Help the staff in charge of security in a major station to detect pickpockets in the crowd using the hundreds of CCTV cameras in place (the same process can apply to many other types of detection).
Narrative (complete description): Each camera has a different field of view, lighting (which can change with the time of day and the weather), diverse backgrounds, etc. For each of them, the challenge is to recognize in the video actions which have a good probability of being the act of a pickpocket, rather than of an individual searching in his or her own pocket or of children playing together; if, typically, more than 50 % of the detections prove to be false detections, the system will be rapidly rejected by the operators and its new detections ignored (a minimal sketch of this acceptance criterion is given after this table).
The system is trained with thousands of hours of videos in real conditions for each camera, covering the variety of conditions such cameras can encounter. It involves thousands of individuals who happen to be in the field of view of the cameras, plus a few actors and/or real pickpockets. This mass of data is archived and replayed upon request.
In many countries, and especially in Europe, it is unlawful to collect such videos with an objective which goes beyond their intended purpose and to keep them longer than a few days, even if the final result benefits these same citizens using the station on a daily basis.
Stakeholders: The stakeholders are the operators of the station and its security staff, the authorities which will directly or indirectly use the system to identify and prosecute the pickpockets, the general public using the station and the local privacy authority.
A similar split applies to other types of structures open to the public and other types of crimes or threats.
Stakeholders' assets, values: In theory, the risk for the public is that the videos collected are misused (e.g. to recognize the presence of an individual where and with whom she or he is not supposed to be…). The benefit for the same public, and more generally the whole population, is better security.


System's threats and vulnerabilities: In the systems considered, humans generally remain in the loop and have the last word, limiting any risk of uncontrolled bias.
Key performance indicators (KPIs): (none specified)
AI features:
Task(s): Recognition, anomaly detection
Method(s): (not specified)
Hardware: (not specified)
Topology: AI can be decentralized in the nodes to limit bandwidth consumption, with video clips retrieved only for detected events
Terms and concepts used: (not specified)
Standardization opportunities / requirements: (not specified)
Challenges and issues: Regulations (e.g. GDPR) tend to put the priority on the protection of citizen privacy, even if this has a negative impact on other citizen expectations, such as their security.
Societal concerns:
Description: Safe cities (SDGs 3 and 16)
SDG to be achieved: Sustainable Development Goal 3: Good Health and Well-being; Sustainable Development Goal 16: Peace, Justice and Strong Institutions[34]
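The acceptance criterion mentioned in the narrative (operators rejecting the tool if more than 50 % of its detections prove false) can be monitored with a simple precision check over operator feedback. The sketch below is illustrative only; the data structure, function names and the 0.5 threshold are assumptions and are not part of the use case.

# Illustrative sketch (assumptions only): monitoring the operator-acceptance
# criterion described in the narrative of Table A.4, i.e. the tool risks being
# rejected if more than 50 % of its detections are false detections.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    timestamp_s: float
    confirmed_by_operator: bool  # operator feedback: True means a genuine incident

def false_detection_rate(detections: list[Detection]) -> float:
    """Fraction of detections that operators marked as false alarms."""
    if not detections:
        return 0.0
    false_alarms = sum(1 for d in detections if not d.confirmed_by_operator)
    return false_alarms / len(detections)

def acceptance_check(detections: list[Detection], max_false_rate: float = 0.5) -> bool:
    """Return True while the false-detection rate stays within the tolerated bound."""
    return false_detection_rate(detections) <= max_false_rate

# Example: two confirmed incidents and one false alarm give a rate of about 0.33,
# which stays below the assumed 0.5 rejection threshold.
sample = [
    Detection("cam-12", 1.0, True),
    Detection("cam-12", 2.0, False),
    Detection("cam-07", 3.0, True),
]
assert acceptance_check(sample)

Such a check only quantifies the operator-facing quality of the tool; it does not address the privacy constraints on collecting and retaining the training videos discussed above.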

A.6 MisBehaviour detection for V2X


The use case in Table A.5 follows the ISO/IEC TR 24030 template.

Table A.5 — MisBehaviour detection for V2X


ID: SC27–4
Use case name: MisBehaviour detection (MBD) for V2X
Application domain: Transportation
Deployment model: Cloud services, embedded systems
Status: PoC (2 vehicles)
Scope: Vehicle-to-X communication infrastructure (X = infrastructure, vehicles, stations)
Objective(s): Intention: use AI to enable monitoring of the security of the V2X infrastructure.
What is to be accomplished: reach semantic-level security of V2X messages.
Who will benefit: operators of the V2X infrastructure can provide the level of dependability expected (essential for autonomous vehicles).
Short description (not more than 150 words): V2X technology can drastically reduce the number of road accidents, increase traffic flow and enable a number of autonomous technologies. However, cyber-attacks on V2X can reverse these effects and enable malicious actors to induce large city-wide traffic jams or even targeted accidents. MisBehaviour detection (MBD) and mitigation systems aim to detect and prevent these types of attacks.


Narrative (complete description): V2X is a technology that aims to reduce road accidents and improve road safety in general. In V2X, Intelligent Transport Systems (ITS) stations such as vehicles, roadside equipment, traffic control centres and nomadic devices are equipped with V2X transmitters. These stations communicate and share information using a standardized communication architecture. This communication enables various safety and traffic applications that can reduce road accidents and improve traffic efficiency and mobility, as well as bring a number of ecological benefits[25].
Some known examples of these applications are emergency brake light warning (EBLW), cooperative adaptive cruise control (CACC), road works warning (RWW) and shockwave damping via speed advice (ShD)[26].
The ITS infrastructure is therefore safety-critical and the failure of any of its components can prove catastrophic. Its trustworthiness is therefore an important concern, in particular when it comes to cybersecurity. This infrastructure includes[26]: the vehicle component [platform, on-board units (OBUs)][27][28], the roadside component where edge computing can take place [roadside system, roadside units (RSUs)], the central component where cloud computing can take place (service provider back office, communication provider back office, traffic information system, …)[29], and the support component (governance, test and certification, cybersecurity incident).
The problem of cyberattacks on V2X has been extensively investigated[30], justifying the development of further MisBehaviour detection (MBD) capabilities. MBD applies to the entire ITS infrastructure (vehicles, vehicular networks, RSUs and cloud computing) as shown in Figure A.3.
An MBD system includes a local detection capability, a global detection capability and a reaction capability to mitigate the effects of any suspicious activity due to cyber-attacks. The global detection is performed by an entity called the Misbehaviour Authority (MA). The MA also interacts with the vehicular Public Key Infrastructure (PKI) provider (digital keys are used by vehicles to authenticate exchanged V2X messages). MBD is a distributed system that supports the exchange of misbehaviour reports between the local and global detection entities, using a predefined reporting protocol; see Reference [31] for an example. A minimal illustrative sketch of a local plausibility check and a misbehaviour report is given after Figure A.3.
Stakeholders: Infrastructure operators in the ecosystem
Stakeholders' assets, values: Safety-related impact, reputation of OEMs, trustworthiness of the ITS infrastructure
System's threats and vulnerabilities: New axes of security attacks, new axes of privacy attacks, detection accuracy (missed detections (false negatives) or wrong detections (false positives) in reports), data bias due to training set location
Key performance indicators (KPIs):
1. V2X security: ensure dependable V2X communication operation
2. Response and recovery: detection latency, attack mitigation capability


AI features:
Task(s): Recognition based on on-board local detection and cloud-based global detection
Method(s): Machine learning for anomaly detection, integrated into an intrusion detection system (IDS)
Hardware: Embedded on-board vehicle processor, on-board vehicle sensors, cloud and edge infrastructure, communication infrastructure
Topology: Local on-board detection system reporting to a global cloud computing system
Terms and concepts used: Autonomous vehicle, cyber-physical system, security systems, intrusion detection system
Standardization opportunities / requirements: ETSI TR 103 460: ITS Security Pre-standardization study on MisBehaviour detection[32]. IEEE 1609.2.1: SCMS (security credential management system) standards and VPKI (vehicular public key infrastructure) architecture and security[33].
Challenges and issues: The following challenges were addressed:
— the local detection system is expected to implement a privacy-preserving and secure way of reporting misbehaving events;
— the global detection system is expected to detect cybersecurity attacks accurately and in real time, based on the received reports;
— the reaction function applied to the misbehaving ITS station (revocation, suspension) is expected to mitigate the effects of an attack.
The local and global detection systems are based on machine learning algorithms, which have been shown to outperform rule-based systems.
Further issues that are anticipated:
— the deployment of a MisBehaviour detection system can enable new axes of attack, such as a malicious actor causing the revocation or suspension of genuine vehicles.
Societal concerns:
Description: (not specified)
SDG to be achieved: Sustainable Development Goal 9: Industry, Innovation and Infrastructure[34]


Figure A.3 — V2X infrastructure
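The local detection capability described in the narrative of Table A.5 typically combines plausibility checks on received V2X messages with machine learning (see References [27] and [30]). The sketch below illustrates one such plausibility check, a speed-consistency test on consecutive position messages, together with a simple misbehaviour report structure; the field names, the maximum-speed threshold and the report layout are assumptions for illustration and do not reproduce the reporting protocol of Reference [31].

# Illustrative sketch (assumptions only): a local plausibility check and a
# misbehaviour report, in the spirit of the local detection capability of
# Table A.5. Field names, threshold and report layout are assumed.
import math
from dataclasses import dataclass

@dataclass
class V2XMessage:
    sender_id: str
    timestamp_s: float   # message generation time, in seconds
    x_m: float           # position in metres (local plane coordinates)
    y_m: float

@dataclass
class MisbehaviourReport:
    reporter_id: str
    suspect_id: str
    evidence: tuple      # the pair of messages that triggered the check
    reason: str

MAX_PLAUSIBLE_SPEED_MPS = 70.0  # assumed upper bound, roughly 250 km/h

def speed_consistency_check(prev: V2XMessage, curr: V2XMessage) -> bool:
    """Return True if the speed implied by two consecutive messages is plausible."""
    dt = curr.timestamp_s - prev.timestamp_s
    if dt <= 0:
        return False  # out-of-order or duplicated timestamps are treated as suspicious
    distance_m = math.hypot(curr.x_m - prev.x_m, curr.y_m - prev.y_m)
    return distance_m / dt <= MAX_PLAUSIBLE_SPEED_MPS

def local_detection(reporter_id: str, prev: V2XMessage, curr: V2XMessage):
    """Emit a report for the global Misbehaviour Authority when the check fails."""
    if speed_consistency_check(prev, curr):
        return None
    return MisbehaviourReport(
        reporter_id=reporter_id,
        suspect_id=curr.sender_id,
        evidence=(prev, curr),
        reason="implausible position jump",
    )

# Example: a 2 km jump within one second is implausible and triggers a report.
m1 = V2XMessage("vehicle-42", 10.0, 0.0, 0.0)
m2 = V2XMessage("vehicle-42", 11.0, 2000.0, 0.0)
assert local_detection("rsu-7", m1, m2) is not None

In a deployed MBD system, such rule-based checks would typically be combined with machine-learning-based anomaly scores, and the resulting reports would be sent to the Misbehaviour Authority over the standardized reporting protocol referred to in the narrative.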


Bibliography

[1] Asilomar AI Principles, Asilomar Conference. Future of Life Institute. 2017. Available
from: https://​f utureoflife​.org/​open​-letter/​ai​-principles/​
[2] Ethically Aligned Design,1st edition. IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems. 2019. Available from: https://​standards​.ieee​.org/​content/​dam/​ieee​
-standards/​standards/​web/​documents/​other/​ead1e​.pdf
[3] The Assessment List for Trustworthy Artificial Intelligence (ALTAI). July 2020. Available
from https://​f uturium​.ec​.europa​.eu/​en/​european​-ai​-alliance/​pages/​altai​-assessment​-list​
-trustworthy​-artificial​-intelligence
[4] ISO/IEC/TR 24028:2020, Information technology — Artificial intelligence — Overview of
trustworthiness in artificial intelligence
[5] ISO/IEC/TR 24030:2021, Information technology — Artificial intelligence (AI) — Use cases
[6] ISO/IEC/IEEE 15288:2015, Systems and software engineering — System life cycle processes
[7] ISO/IEC 27701:2019, Security techniques — Extension to ISO/IEC 27001 and ISO/IEC 27002 for
privacy information management — Requirements and guidelines
[8] ISO/IEC/TR 27550:2019, Information technology — Security techniques — Privacy engineering for
system life cycle processes
[9] ISO/IEC 29100:2011, Information technology — Security techniques — Privacy framework
[10] ISO/IEC 29134:2017, Information technology — Security techniques — Guidelines for privacy
impact assessment
[11] ISO/IEC/IEEE 29148:2018, Systems and software engineering — Life cycle processes —
Requirements engineering
[12] ISO/IEC 29151:2017, Information technology — Security techniques — Code of practice for
personally identifiable information protection
[13] ISO 31700-1, Consumer protection — Privacy by design for consumer goods and services — Part 1:
High-level requirements
[14] Cavoukian A., “7 Foundational Principles of Privacy by Design”, Information & Privacy
Commissioner, Ontario, Canada. Available from https://​w ww​.ipc​.on​.ca/​w p​-content/​uploads/​
Resources/​7fo​undational​principles​.pdf
[15] The NIST Privacy Framework, A Tool for Improving Privacy through Enterprise Risk
Management. Version 1.0 (January 2020), https://​doi​.org/​10​.6028/​NIST​.CSWP​.01162020
[16] ISO/IEC/TS 27570:2021, Privacy protection — Privacy guidelines for smart cities
[17] ISO/IEC 27002:2022, Information security, cybersecurity and privacy protection — Information
security controls
[18] ISO/IEC 27005:2022, Information security, cybersecurity and privacy protection — Guidance on
managing information security risks
[19] ISO/IEC 27006-1, Requirements for bodies providing audit and certification of information security
management systems — Part 1: General
[20] ISO/IEC/TS 27006-2:2021, Requirements for bodies providing audit and certification of information
security management systems — Part 2: Privacy information management systems


[21] ISO/IEC/TS 27110:2021, Information technology, cybersecurity and privacy protection — Cybersecurity framework development guidelines
[22] ISO/IEC 27400:2022, Cybersecurity — IoT security and privacy — Guidelines
[23] ISO/IEC 27402, Cybersecurity — IoT security and privacy — Device baseline requirements
[24] ISO/IEC 27403, Cybersecurity – IoT security and privacy – Guidelines for IoT-domotics
[25] Hoadley S., Polis, “Where are we with C-ITS today?,” Joint CIMEC/CODECS City Pool workshop,
Barcelona, 14 November 2016.
[26] van Sambeek M., Ophelders F., Bijlsma T., van der Kluit B. (TNO), Türetken O., Eshuis R., Traganos K., Grefen P. (TU/e), “Towards an Architecture for Cooperative ITS Applications in the Netherlands,” DITCM Innovations, 10 April 2015.
[27] So S., Sharma P., Petit J., “Integrating plausibility checks and machine learning for misbehavior
detection in vanet,” in 17th IEEE International Conference on Machine Learning and Applications
(ICMLA), Orlando, Florida, USA, 2018
[28] Singh P. K., Dash M. K., Mittal P., Nandi S. K., Nandi S., “Misbehavior detection in c-its using
deep learning approach,” Springer International Publishing, vol. Intelligent Systems Design and
Applications, p. 641–652, 2020.
[29] Mahmoudi I., Kamel J., Ben-Jemaa I., Kaiser A., Urien P., “Towards a Reliable Machine Learning
Based Global Misbehavior Detection in C-ITS: Model Evaluation Approach,” in International
Workshop on Vehicular Adhoc Networks for Smart Cities (IWVSC'2019), Paris, Nov 2019.
[30] van der Heijden R. W., Dietzel S., Leinmüller T., Kargl F., “Survey on Misbehavior Detection in Cooperative Intelligent Transportation Systems,” IEEE Communications Surveys & Tutorials, vol. 21, no. 1, pp. 779-811, 2019.
[31] Kamel J., Ben Jemaa I., Kaiser A., Urien P., “Misbehavior Reporting Protocol for C-ITS,” in
IEEE Vehicular Networking Conference (VNC), Taipei, Taiwan, 2018.
[32] European Telecommunications Standards Institute (ETSI), “ETSI TR 103 460: ITS Security Pre-standardisation study on misbehavior detection,” ITS WG5, 2020.
[33] IEEE, “IEEE Std 1609.2.1 - IEEE Standard for Wireless Access in Vehicular Environments – Security Services for Applications and Management Messages,” IEEE, 2019.
[34] United Nations Sustainable Development Goals, Available from: https://​sdgs​.un​.org/​goals


ICS 35.020
Price based on 29 pages

© ISO/IEC 2023 – All rights reserved 
