Ethical AI Framework
Version: 1.3
August 2023
© The Government of the Hong Kong Special Administrative Region of the People's Republic of China
Amendment History

Change Number: 1
Revision Description:
1. Added PCPD's "Guidance on the Ethical Development and Use of Artificial Intelligence" as a reference in Section 4.1.1.2 of the Framework.
2. Added the "Personal Information Protection Law" of the People's Republic of China as an example in Section 4.1.4.4 of the Framework.
3. Updated Appendix B.
4. Supplemented Appendix E.
Pages Affected: 4-7, 4-30, 6-4 to 6-6, Appendix E
Revision Number: 1.1
Date: 29 June 2023
SECTION 1
EXECUTIVE SUMMARY
1. EXECUTIVE SUMMARY
1.1 INTRODUCTION
Artificial intelligence ("AI") and big data analytics have the potential to enhance social well-being and are increasingly being applied to various business areas to improve operational efficiency and to provide new services; at the same time, they can bring about different challenges. It is
important for organisations to consider AI and data ethics when implementing Information
Technology (“IT”) projects and providing services.
When organisations are considering the application of AI and big data analytics, they need to
consider a range of factors such as the requirements of relevant legislation and stakeholder
expectations on the applicable ethical standards of data and technology that appropriately reflect
the value and culture of the local community.
The Ethical Artificial Intelligence Framework (called the “Ethical AI Framework” hereunder)
document consists of:
• A Tailored AI Framework for ethical use of AI and big data analytics when implementing
IT projects; and
• An assessment template (used to complete “AI Assessment”) for AI and big data analytics
to assess the implications of AI applications.
In this document, the term “AI” is used to refer to analytic operations involving big data analytics,
advanced analytics and machine learning that use massive data sets and processing capabilities to
find correlations and make predictions. The term “AI applications” has been used to refer to a
collective set of applications whose actions, decisions or predictions are empowered by AI
models. Examples of AI applications are IT projects which have prediction functionality
and/or model development involving training data. For IT projects that have AI applications,
organisations can make reference to the requirements of the Ethical AI Framework.
The adoption of the Ethical AI Framework is a step that establishes a common approach and structure to govern the development and deployment of AI applications, with the intention to maximise the benefits of the application of AI in IT projects based on the following guiding principles:
• Facilitate organisations to understand the application of AI and big data analytics in their
respective business areas;
• Complement other operating guidelines (e.g. privacy, security and data management);
• Foster and guide the ethical use of AI and big data analytics in the organisation;
Ethical AI Principles
Twelve Ethical AI Principles should be observed for all AI projects. Two of the twelve principles, (1) Transparency and Interpretability and (2) Reliability, Robustness and Security, are "Performance Principles". These fundamental principles must be achieved to create a foundation for the execution of the other principles. For example, without achieving the Reliability, Robustness and Security principle, it would be impossible to accurately verify that the other Ethical AI Principles have always been followed.
The other principles are categorised as “General Principles”, including (1) Fairness, (2) Diversity
and Inclusion, (3) Human Oversight, (4) Lawfulness and Compliance, (5) Data Privacy, (6)
Safety, (7) Accountability, (8) Beneficial AI, (9) Cooperation and Openness and (10)
Sustainability and Just Transition. They are derived from the United Nations’ Universal
Declaration of Human Rights and the Hong Kong Ordinances:
Principle: Transparency and Interpretability
Definition: Organisations should be able to explain the decision-making processes of the AI applications to humans in a clear and comprehensible manner.

Principle: Reliability, Robustness and Security
Definition: Like other IT applications, AI applications should be developed such that they will operate reliably over long periods of time using the right models and datasets, while ensuring they are both robust (i.e. providing consistent results and capable of handling errors) and remain secure against cyber-attacks as required by the relevant legal and industry frameworks.

Principle: Fairness
Definition: The recommendation/result from the AI applications should treat individuals within similar groups in a fair manner, without favouritism or discrimination and without causing or resulting in harm. This entails maintaining respect for the individuals behind the data and refraining from using datasets that contain discriminatory biases.

Principle: Diversity and Inclusion
Definition: Inclusion and diverse usership through the AI application should be promoted by understanding and respecting the interests of all stakeholders impacted.

Principle: Human Oversight
Definition: The degree of human intervention required as part of the AI application's decision-making or operations should be dictated by the level of the perceived severity of ethical issues.

Principle: Lawfulness and Compliance
Definition: Organisations responsible for an AI application should always act in accordance with the law and regulations and relevant regulatory regimes.

Principle: Data Privacy
Definition: Individuals should have the right to:
(a) be informed of the purpose of collection and potential transferees of their personal data, and be assured that personal data shall only be collected for a lawful purpose, by lawful and fair means, and that the amount of personal data collected is not excessive in relation to the purpose. Please refer to Data Protection Principle ("DPP") 1 "Purpose and Manner of Collection" of the Personal Data (Privacy) Ordinance (the "PD(P)O") 1.
(b) be assured that data users take all practicable steps to ensure that personal data is accurate and is not kept longer than is necessary. Please refer to DPP2 "Accuracy and Duration of Retention" of the PD(P)O.
(c) require that personal data shall only be used for the original purpose of collection and any directly related purposes; otherwise, the express and voluntary consent of the individuals is required. Please refer to DPP3 "Use of Personal Data" of the PD(P)O.
(d) be assured that data users take all practicable steps to protect the personal data they hold against unauthorised or accidental access, processing, erasure, loss or use. Please refer to DPP4 "Security of Personal Data" of the PD(P)O.
(e) be provided with information on (i) the data user's policies and practices in relation to personal data, (ii) the kinds of personal data held, and (iii) the main purposes for which the personal data are to be used. Please refer to DPP5 "Information to Be Generally Available" of the PD(P)O.

Principle: Safety
Definition: Throughout their operational lifetime, AI applications should not compromise the physical safety or mental integrity of mankind.

Principle: Accountability
Definition: Organisations are responsible for the moral implications of their use and misuse of AI applications. There should also be a clearly identifiable accountable party, be it an individual or an organisational entity (e.g. the AI solution provider).

Principle: Beneficial AI
Definition: The development of AI should promote the common good.

Principle: Cooperation and Openness
Definition: A culture of multi-stakeholder open cooperation in the AI ecosystem should be fostered.

Principle: Sustainability and Just Transition
Definition: The AI development should ensure that mitigation strategies are in place to manage any potential societal and environmental system impacts.

Table 1: Ethical AI Principles and Definitions
1 https://www.pcpd.org.hk/english/data_privacy_law/ordinance_at_a_Glance/ordinance.html
AI Governance
AI governance refers to the practices and direction by which AI projects and applications are
managed and controlled. The three lines of defence is a well-established governance concept in
many organisations. Figure 2 shows the different defence lines and their roles.
The governance structure is set up as follows.
• The first line of defence is the Project Team who is responsible for AI application
development, risk evaluation, execution of actions to mitigate identified risks and
documentation of AI Assessment.
• The second line of defence comprises the Project Steering Committee ("PSC") and Project Assurance Team ("PAT"), who are responsible for ensuring project quality,
defining acceptance criteria for AI applications, providing independent review and
approving AI applications. The Ethical AI Principles should be addressed through the
completion of AI Assessment before approval of the AI application.
• The third line of defence involves the IT Board, or Chief Information Officer (“CIO”) if
the IT Board is not in place, and is optionally supported by an Ethical AI Committee, which
may consist of external advisors. The purpose of the Ethical AI Committee is to provide
advice on ethical AI and strengthen organisations’ existing competency on AI adoption.
The third line of defence is responsible for reviewing, advising and monitoring of high-risk
AI applications.
AI Lifecycle
In order to structure the practices for organisations to follow when executing AI projects/creating
AI applications, practices in different stages of the AI Lifecycle have been detailed in the AI
Practice Guide (Please refer to Section 4 “AI Practice Guide” in the Ethical AI Framework for
further details). A way to conceptualise the AI Lifecycle appears in the following 6-step schematic.
The AI Lifecycle shows the different steps involved in AI projects that can
• Guide organisations to understand the different stages and requirements involved; and
• Serve as a reference for the development of AI practices to align with actual stages of how AI
is typically developed.
The AI Lifecycle is used to align the practices in the AI Practice Guide. The AI Lifecycle also
aligns to a traditional System Development Lifecycle (“SDLC”) model as depicted in Figure 4.
Within the AI Lifecycle, model training as part of the project development can be a continuous exercise. This is because an AI model can
often benefit from better or more data for iterative model training throughout the development
process. The approach of conventional software lifecycle is to program the IT application with a
set of instructions for a pre-defined set of events. Thereafter, the IT application will exploit its
computing capabilities and other resources to process the data fed into the system. This is different
from an AI application where a huge amount of data are fed into the application, which in turn
processes all the data resulting in a trained model or AI solution. This trained model is then used
to solve new problems.
There is often a continual feedback loop between the development and deployment stages, as well as between the system operation and monitoring stages, of the AI Lifecycle for iterative improvements, making this distinct from a traditional software development lifecycle.
AI Practice Guide
Section 4 “AI Practice Guide” contains detailed practices to be followed for a number of practice
areas. Such practice areas are assessed as part of the AI Application Impact Assessment. A
summary of the practice areas in the AI Practice Guide is listed below.
An AI Application Impact Assessment should be conducted regularly (e.g. annually or when major
changes take place) as AI projects progress and when the AI application is being operated.
The stages of the AI Lifecycle where AI Application Impact Assessment should be reviewed are
shown in Figure 6.
The AI Application Impact Assessment can be used as a ‘live’ document throughout the AI
Lifecycle, but the associated AI Application Impact Assessment should be reviewed at 4 key stages
of the AI Lifecycle with a copy of the AI Application Impact Assessment being retained for
historical records. The mapping of these reviews to the system development lifecycle with
responsible parties and actions to be performed is summarised below.
AI Lifecycle stage: System Operation and Monitoring. Responsible party: IT Board/CIO (or its delegates).
Table 3: Actions to be performed for completing and reviewing the AI Application Impact Assessment
Organisations can make use of the Ethical AI Framework when adopting AI and big data analytics in their IT projects or services. The Ethical AI Framework is defined not only to serve as a reference/guide for the IT project team during the development and maintenance process; it also defines the governance structure to enable organisations to demonstrate accountability in building trust with the public upon adoption of AI by evaluating the impact, safeguarding the public interest and facilitating innovation.
Furthermore, IT Planners and Executives can make reference to the Ethical AI Framework to
embed appropriate ethical AI considerations starting from the strategy formulation, planning and
establishment of the ecosystem. For details of the relevant section to different roles, please refer to
Section 3.5 “Key Components and Relationships”.
SECTION 2
PURPOSE
2. PURPOSE
This document is intended to provide readers with an understanding of the Ethical AI Framework
and procedures that should be carried out to embed ethical elements in organisations’ planning,
design and implementation of AI applications in IT projects or services (hereafter known as "ethical AI"). The major sections of this document comprise:
• Section 1 - Executive Summary provides an outline of the Ethical AI Framework
components, usage and governance structure;
• Section 2 - Purpose outlines the objectives of every section in this document;
• Section 3 - Overview of the Ethical AI Framework introduces the Ethical AI
Framework, Ethical AI Principles, vision statement, roles and responsibilities and
objectives;
• Section 4 - AI Practice Guide provides practical guidance with references to the Ethical
AI Principles as part of the Ethical AI adoption process by the organisations; and
• Section 5 - AI Assessment refers to a template on the ethical AI aspects and
considerations that require completion by the organisations to assess AI Application
Impact.
The intended audience and recommended sections are listed in the table below:
Audience: Executives of organisations
Recommended Sections: Section 1 - Executive Summary

Audience: Chief Information Officers ("CIOs"), IT Planners/IT Board, Project Steering Committee ("PSC"), Project Assurance Team ("PAT"), Business Users
Recommended Sections: Section 1 - Executive Summary; Section 3 - Overview; Section 4.1.1 - Project Strategy; Section 4.1.2 - Project Planning; Section 4.1.3 - Project Ecosystem; Section 5 - AI Assessment

Audience: Project Managers
Recommended Sections: All Sections

Audience: Project Team (including System Analysts, System Architects and Data Scientists)
Recommended Sections: Section 4.1.4 - Project Development; Section 4.1.5 - System Deployment; Section 4.1.6 - System Operation and Monitoring; Section 5 - AI Assessment
SECTION 3
OVERVIEW OF THE ETHICAL AI FRAMEWORK
3. OVERVIEW OF THE ETHICAL AI FRAMEWORK
3.1 VISION STATEMENT
The vision for the Ethical AI Framework is to enable organisations to manage potential ethical
issues and implications through assessing their AI capabilities and applications. This enables
delivery of ethical AI whilst managing the potential impact of AI applications.
The purpose of this section is to provide an overview of the Ethical AI Framework for
organisations. Examples used are taken from different sources and are for illustrative purposes only.
3.2 OBJECTIVES
3.3 BENEFITS
The adoption of the Ethical AI Framework is a foundation step that establishes a common approach
and structure to govern the subsequent development and deployment of AI applications. Benefits
of having an Ethical AI Framework include:
• Establishing common best practices to ensure organisations have guidance and references
to adopt AI in IT projects with appropriate ethical considerations.
• Identifying the benefits, risks and impacts of an AI application to enable better risk
mitigation decisions that maximise benefits.
• Acting as a bridge between the strategy and execution which helps ensure the AI
application is aligned with organisations’ vision and needs.
In the age of big data, enormous quantities of data are being generated, collected and analysed to identify insights and support decision making. Big data are often described in terms of the 'five Vs' 2, where:
• volume refers to the vast quantity of the data available;
• velocity refers to the speed at which data must be stored and/or analysed to provide the right information at the right time to make appropriate management decisions;
• variety refers to a huge variation in types and sources of data, including both structured and unstructured data (e.g. file objects, social media feeds, tags, data from sensors, audio, image and video);
• veracity refers to the trustworthiness of the data in terms of its accuracy and quality; and
• value refers to the ability to transform data to improve outcomes/values.
Analytical techniques that are used to analyse big data are often described as advanced analytics, machine learning and AI. These technical terms have similar meanings and overlap with
each other. They all refer to analytic operations that take advantage of large volume data describing
the past situations (i.e. historical data), and massive processing capabilities and advanced
algorithms, and that use them to find correlations and make predictions with acceptable accuracy.
In a broad definition, AI is a collective term for computer systems that can sense their environment,
think, learn and take actions in response to the gathered data, with the ultimate goal of fulfilling
their design objectives. AI systems are a collection of interrelated technologies used to help solve
problems autonomously and perform tasks to achieve defined objectives without explicit guidance
from a human being. We can distinguish the four main categories of AI (see Figure 7):
2 https://www.ibm.com/blogs/watson-health/the-5-vs-of-big-data/
The key components of the Ethical AI Framework are depicted in Figure 8. Details of the
components are described in the subsequent sections (i.e. Section 4 “AI Practice Guide” and
Section 5 “AI Assessment”). The Ethical AI Framework should be read in conjunction with
existing standards and practices for IT and project management.
Target readers of the Ethical AI Framework include IT Planners, System Analysts, System
Architects and Data Scientists. For the purposes of the Ethical AI Framework, the Data Scientist role encompasses the development, deployment and monitoring of AI. These areas can also be separate roles depending on the individual setup of the organisations. These target readers should leverage
the Ethical AI Framework to:
• understand Ethical AI Principles and practices;
• initiate discussions on the impact of AI;
• adopt standardised practices and terminology; and
• perform AI Assessment.
• Project Strategy
IT Planners and Executives can refer to Section 4.1.1 “Project Strategy” of the AI Practice
Guide to establish organisational AI/data strategy, and to ensure Ethical AI Principles are
embedded and relevant regulations and ordinances are considered.
• Project Planning
IT Planners in organisations can refer to Section 4.1.2 “Project Planning” of the AI Practice
Guide along with the risk gating criteria within the AI Application Impact Assessment to
ensure ethical AI requirements are met, impacts are effectively assessed, and to determine
which AI projects require further review at senior level. Ethics, roles and responsibilities for
AI should be taken into consideration.
• Project Ecosystem
IT Planners can refer to Section 4.1.3 “Project Ecosystem” of the AI Practice Guide when
evaluating the existing technology landscape, business needs and sourcing (procurement)
options to identify technology gaps. IT planners who work with the sourcing team and the
Project Team should refer to this section for considerations when deploying third-party AI
applications. Existing change management procedures are to be followed when making
changes to existing systems/AI applications.
• Project Development
System Analysts and System Architects (who are responsible for developing data plumbing,
which transforms and feeds data to the AI applications) and Data Scientists (who are
responsible for performing development of AI applications) can refer to Section 4.1.4 “Project
Development” of the AI Practice Guide to ensure aspects such as data validation,
documentation, biased data, data privacy, training, testing and AI modelling techniques are
considered for ethical AI.
• System Deployment
Data Scientists (who are responsible for managing the integration, scaling and deployment of
AI applications, managing post-deployment performance and stability of AI applications) can
refer to Section 4.1.5 “System Deployment” of the AI Practice Guide for areas specifically
related to deployment of AI applications such as integration, testing, feedback loops, tuning
metrics and performance checks.
Twelve Ethical AI Principles should be observed for all AI projects. Two of the twelve principles, (1) Transparency and Interpretability and (2) Reliability, Robustness and Security, are "Performance Principles". These fundamental principles must be achieved to create a foundation for the execution of the other principles. For example, without achieving the Reliability, Robustness and Security principle, it would be impossible to accurately verify that the other Ethical AI Principles have always been followed.
The other principles are categorised as “General Principles”, including (1) Fairness, (2) Diversity
and Inclusion, (3) Human Oversight, (4) Lawfulness and Compliance, (5) Data Privacy, (6)
Safety, (7) Accountability, (8) Beneficial AI, (9) Cooperation and Openness and (10)
Sustainability and Just Transition. They are derived from the United Nations’ Universal
Declaration of Human Rights and the Hong Kong Ordinances.
Definitions for the principles are listed in Table 4 with further details provided in subsequent
subsections.
Principle: Transparency and Interpretability
Definition: Organisations should be able to explain the decision-making processes of the AI applications to humans in a clear and comprehensible manner.

Principle: Reliability, Robustness and Security
Definition: Like other IT applications, AI applications should be developed such that they will operate reliably over long periods of time using the right models and datasets, while ensuring they are both robust (i.e. providing consistent results and capable of handling errors) and remain secure against cyber-attacks as required by the relevant legal and industry frameworks.

Principle: Fairness
Definition: The recommendation/result from the AI applications should treat individuals within similar groups in a fair manner, without favouritism or discrimination and without causing or resulting in harm. This entails maintaining respect for the individuals behind the data and refraining from using datasets that contain discriminatory biases.

Principle: Diversity and Inclusion
Definition: Inclusion and diverse usership through the AI application should be promoted by understanding and respecting the interests of all stakeholders impacted.

Principle: Human Oversight
Definition: The degree of human intervention required as part of the AI application's decision-making or operations should be dictated by the level of the perceived severity of ethical issues.

Principle: Lawfulness and Compliance
Definition: Organisations responsible for an AI application should always act in accordance with the law and regulations and relevant regulatory regimes.

Principle: Data Privacy
Definition: Individuals should have the right to:
(a) be informed of the purpose of collection and potential transferees of their personal data, and be assured that personal data shall only be collected for a lawful purpose, by lawful and fair means, and that the amount of personal data collected is not excessive in relation to the purpose. Please refer to Data Protection Principle ("DPP") 1 "Purpose and Manner of Collection" of the Personal Data (Privacy) Ordinance (the "PD(P)O") 5.
(b) be assured that data users take all practicable steps to ensure that personal data is accurate and is not kept longer than is necessary. Please refer to DPP2 "Accuracy and Duration of Retention" of the PD(P)O.
(c) require that personal data shall only be used for the original purpose of collection and any directly related purposes; otherwise, the express and voluntary consent of the individuals is required. Please refer to DPP3 "Use of Personal Data" of the PD(P)O.
(d) be assured that data users take all practicable steps to protect the personal data they hold against unauthorised or accidental access, processing, erasure, loss or use. Please refer to DPP4 "Security of Personal Data" of the PD(P)O.
(e) be provided with information on (i) the data user's policies and practices in relation to personal data, (ii) the kinds of personal data held, and (iii) the main purposes for which the personal data are to be used. Please refer to DPP5 "Information to Be Generally Available" of the PD(P)O.

Principle: Safety
Definition: Throughout their operational lifetime, AI applications should not compromise the physical safety or mental integrity of mankind.

Principle: Accountability
Definition: Organisations are responsible for the moral implications of their use and misuse of AI applications. There should also be a clearly identifiable accountable party, be it an individual or an organisational entity (e.g. the AI solution provider).

Principle: Beneficial AI
Definition: The development of AI should promote the common good.

Principle: Cooperation and Openness
Definition: A culture of multi-stakeholder open cooperation in the AI ecosystem should be fostered.

Principle: Sustainability and Just Transition
Definition: The AI development should ensure that mitigation strategies are in place to manage any potential societal and environmental system impacts.

Table 4: Ethical AI Principles and Definitions
5 https://www.pcpd.org.hk/english/data_privacy_law/ordinance_at_a_Glance/ordinance.html
3.5.1.1 Transparency and Interpretability
Principle: Organisations should be able to explain the decision-making processes of the AI applications to humans in a clear and comprehensible manner.
Explainability
Explainability refers to the degree to which a decision made by an AI application can be
understood by a human expert. Depending on whether the data, algorithms and configurations
of an AI model are available at the time of interpretation, two approaches can be taken to
generate human-readable explanations:
● Built-in Interpretation: Some models inherently have the ability to explain their behaviour. For example, decision trees are essentially a cascade of questions and work comparably to the way humans think. When executing against a data point (using its features and values), the pathway taken to reach a decision can simply be reported back to human users. Similarly, linear models like Logistic Regression are fairly intuitive and easy to explain to a non-expert in data science, as the data points can be visualised in a plot against the learned probability line, and Feature Importance metrics can be calculated and their absolute values reported back to data scientists.
Transparency
Transparency is notably the most critical characteristic for building trust in AI models. Trust is dynamic, developed and strengthened in a gradual manner. It is realised through carefully designing a process to minimise risk and therefore plays a crucial role in the widespread adoption of new, disruptive technologies like AI. Trust is hard to come by and builds upon several factors, including the purpose and performance of the AI applications, as well as the technology provider (e.g. the organisation's brand reputation and its level of transparency in design, operations and reliability); being able to explain the rationale behind the AI models' decisions is crucial for building and maintaining trust. The Transparency principle calls for the adoption of a clear, honest communication channel between an organisation and its end-users and, when needed, regulators.
Provability
Provability refers to the mathematical certainty behind AI models’ decisions. It mandates a
higher level of formalism in explaining an AI application’s behaviour. This type of
interpretability is geared more towards data scientists to place an AI model under scrutiny to
ensure the decision-making policies of the model can be mathematically proved and remain
consistent as changes in data and environment take place. Equally, in many safety-critical
AI applications, where the use of AI must be approved or certified by regulators, provability
becomes indispensable.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• The Project Team should aim to embed elements of transparency into the AI applications and translate them into human language (e.g. using a decision tree/illustration to explain to users how decisions are made). This allows humans to understand whether the decision made by the AI has errors, and to fix those errors.
• Provide examples to explain the AI decision-making process in narrative terms or graphics (for example, drawing a workflow with decision trees) whenever possible, so that a non-technical audience can understand and visualise the AI operations better. Organisations should make explicit how different factors and data determine the outcomes and conclusions of their AI models.
3.5.1.2 Reliability, Robustness and Security
Principle: Like other IT applications, AI applications should be developed such that they will
operate reliably over long periods of time using the right models and datasets while ensuring
they are both robust (i.e. providing consistent results and capable to handle errors) and remain
secure against cyber-attacks as required by the relevant legal and industry frameworks.
The overarching aim of the Reliability, Robustness and Security principle is to ensure that AI
applications behave as intended, from training data to final output, over prolonged periods of
time. Reliability is about increasing the likelihood of the system being fault-free. Robustness
is to ensure that models perform when assumptions and variables change. Security is to protect
the data and model itself.
Across both global and local literature, it is recognised there is a need to ensure AI applications
behave as intended. Areas in the literature include awareness of misuse, integrity, robustness,
resilience, effectiveness, quality, appropriateness, accuracy and security. In particular, there is
a focus on preventing harm.
It is an indispensable requirement for AI applications to be designed and developed in a way that takes into consideration that the environment, data and processes on which they rely change over time. Malevolent actors can exploit such drifts in the AI operating environment with a variety of techniques to penetrate and fool AI models into making incorrect predictions with high confidence. The adoption of the Reliability, Robustness and Security principle assists organisations to identify potential weaknesses in an AI application, improve overall performance, withstand adversarial attacks, and monitor the long-term performance of AI models throughout their operational lifetime, and verifiably so where applicable and feasible.
Reliability
Fundamental to creating reliable software applications, reliability is an engineering effort to maximise the probability that a system will perform its required functions fault-free within a specified time period and environment 8. As the complexity of AI algorithms, and of the systems built upon them, increases, the role of disciplined software engineering practices, such as standards and software tests, becomes more prominent in ensuring the two measures encompassed in reliability: quality and integrity 9.
8 http://www.mit.jyu.fi/ope/kurssit/TIES462/Materiaalit/IEEE_SoftwareEngGlossary.pdf
9 http://www.mit.jyu.fi/ope/kurssit/TIES462/Materiaalit/IEEE_SoftwareEngGlossary.pdf
While quality and integrity are applicable to individuals adhering to industry best practices and organisational codes of conduct, it is also important that organisations closely monitor and manage the quality and integrity of the data being used to develop AI applications. 'Data quality' determines the reliability of the information to serve an intended purpose and the attributes that define the usability of information, whereas 'data integrity' refers to the reliability of information in terms of its physical and logical validity, based on the accuracy and consistency of the data across its lifecycle and the absence of unintended change to the information between successive updates. The key considerations relating to data quality and integrity are consistency, accuracy, validity, timeliness, uniqueness and completeness.
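A minimal sketch of how some of these considerations (completeness, uniqueness, validity and timeliness) can be checked on a tabular dataset is shown below; the column names and thresholds are hypothetical assumptions for illustration.

```python
# Hypothetical data quality checks on a tabular dataset (illustrative only).
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    return {
        "completeness": 1.0 - df.isna().mean().mean(),     # share of non-missing cells
        "uniqueness": 1.0 - df.duplicated().mean(),        # share of non-duplicate rows
        "validity_age": df["age"].between(0, 120).mean(),  # plausible value range
        "timeliness": (df["updated_at"] >= pd.Timestamp("2023-01-01")).mean(),
    }

df = pd.DataFrame({
    "age": [34, 29, None, 150],
    "updated_at": pd.to_datetime(["2023-03-01", "2022-05-02", "2023-06-30", "2023-02-14"]),
})
print(quality_report(df))  # each measure is a ratio between 0 and 1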
Robustness
Robustness is a characteristic describing a model’s ability to effectively perform while its
variables or assumptions are altered. In order to ensure a model is robust, validations and error
handling must be incorporated at every step of the data science pipeline, from data preparation
and ingestion through to prediction.
Robust models must perform consistently while being exposed to new and independent (but similar) datasets and be able to deal with the errors and corner cases that occur at execution
time. Robustness also entails placing effective error handling measures in place to protect AI
models when exposed to malicious inputs and parameters.
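A sketch of such defensive validation around a prediction call might look as follows; the model object, the expected feature count and the plausible value range are hypothetical assumptions.

```python
# Defensive input validation and error handling around model inference
# (hypothetical model and schema; illustrative only).
import numpy as np

EXPECTED_FEATURES = 4
FEATURE_RANGE = (-10.0, 10.0)  # assumed plausible range learned from training data

def safe_predict(model, x):
    x = np.asarray(x, dtype=float)
    if x.shape != (EXPECTED_FEATURES,):
        raise ValueError(f"expected {EXPECTED_FEATURES} features, got {x.shape}")
    if not np.isfinite(x).all():
        raise ValueError("input contains NaN or infinite values")
    if (x < FEATURE_RANGE[0]).any() or (x > FEATURE_RANGE[1]).any():
        return None  # out-of-range corner case: route to human review, do not extrapolate
    return model.predict(x.reshape(1, -1))[0]

class DummyModel:  # stand-in for a trained model
    def predict(self, X):
        return ["approve"] * len(X)

print(safe_predict(DummyModel(), [0.2, 1.5, -3.0, 4.8]))   # approve
print(safe_predict(DummyModel(), [99.0, 1.5, -3.0, 4.8]))  # None -> human review
```

The design choice here is to fail loudly on malformed input and to fall back, rather than extrapolate silently, when the input drifts outside the assumptions the model was trained under.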
Security
Many existing security practices conventional to software development efforts are also applicable to AI and machine learning models. As these models largely rely on data curated from and integrated with public or third-party sources, they must be able to discern between malicious input and benign anomalous data. When designing security protocols, extra care should be taken to cleanse, secure and encrypt data, as well as to design access controls to the trained model.
Adversarial attacks, i.e. the act of introducing small, intentional perturbations to data to compel AI models to make incorrect predictions with high confidence, are one such security risk. Additionally, for organisations that do not provide direct access to their AI models but expose them as web services, designing security protocols that prevent attackers from reverse engineering their models is necessary.
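One possible control, sketched below, is to screen incoming inputs against the training distribution before they reach the model and reject anomalous requests; scikit-learn's IsolationForest is used here purely as an illustrative detector, and the data are synthetic.

```python
# Screening incoming inputs for anomalies before inference (illustrative sketch).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))        # stand-in for curated training data

screen = IsolationForest(random_state=0).fit(X_train)

def is_benign(x: np.ndarray) -> bool:
    return screen.predict(x.reshape(1, -1))[0] == 1  # +1 = inlier, -1 = outlier

query = np.array([8.0, -7.5, 9.9, -8.2])    # far outside the training distribution
if not is_benign(query):
    print("Request rejected: input inconsistent with the training distribution")
```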
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• AI systems should be technically robust, and they should be protected from any malicious use. Malicious actors can try to deceive AI models with inputs designed to learn about the AI and mislead it. Controls are necessary to ensure that the AI application will not be exposed to potential hackers who could train the AI model to perform tasks it was not intended to perform. When such cases happen, this will lead to liability and reputational risk for the organisation using the AI application.
• Continuous monitoring, which includes validation, verification of accuracy and maintenance of the model, should be performed regularly to improve the security and robustness of AI. This is because hackers usually look for outdated software with security flaws that is more vulnerable to cyberattacks. Constantly updating the AI application will minimise such security risks.
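A minimal sketch of such continuous monitoring is shown below, assuming ground-truth feedback eventually becomes available for each prediction; the window size and accuracy threshold are hypothetical.

```python
# Rolling accuracy monitor for a deployed model (hypothetical thresholds).
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 500, threshold: float = 0.90):
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def healthy(self) -> bool:
        if len(self.outcomes) < 50:            # wait for enough feedback
            return True
        return sum(self.outcomes) / len(self.outcomes) >= self.threshold

monitor = AccuracyMonitor()
for _ in range(60):
    monitor.record(prediction="approve", actual="reject")  # simulated drift
if not monitor.healthy():
    print("Alert: accuracy below threshold; trigger review and retraining")
```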
3.5.1.3 Fairness
With regards to Fairness, Article 2 of the Universal Declaration of Human Rights states that
“Everyone is entitled to all the rights and freedoms set forth in this Declaration, without
distinction of any kind, such as race, colour, sex, language, religion, political or other
opinions, national or social origin, property, birth or other status. Furthermore, no distinction
shall be made on the basis of the political, jurisdictional or international status of the country
or territory to which a person belongs, whether it be independent, trust, non-self-governing or
under any other limitation of sovereignty”. It is therefore important for AI applications to limit
discrimination or bias relative to the factors reported in the Article. The same principle against
discrimination is included in the Convention on the Elimination of All Forms of
Discrimination against Women.
The following is a list of key legal terminology for consideration when understanding Fairness
and how it applies to organisations or the implementation of AI applications: 10
● Prejudice: Prejudice means to injure or harm a person’s rights.
● Discrimination: Discrimination means the adverse, unfair or detrimental treatment,
preference, exclusion or distinction of a person because of the person’s race, colour, sex,
sexual orientation, age, physical or mental disability, marital status, family or carer’s
responsibilities, pregnancy, religion, political opinion, national extraction or social origin.
● Impartiality: Impartiality means to act or make a decision based on merit and according
to the law without bias, influence, preconception or unreasonableness.
● Positive Discrimination: Positive Discrimination means the:
○ treatment of a person;
○ taking of an action affecting a person; and/or
○ making of a decision affecting a person because of that person’s race, colour, sex,
sexual orientation, age, physical or mental disability, marital status, family or carer’s
responsibilities, pregnancy, religion, political opinion, national extraction or social
origin that is done with the intention of achieving:
■ substantive equality;
■ equal enjoyment;
■ equal exercise of human rights; and/or
■ equal exercise of fundamental freedoms
for that person.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• Training data should be free from any bias characteristics such as sample size disparity (where there is significantly less data for minority groups), selection bias (where certain groups are less likely to be selected), bias in model design and bias in model use and feedback (e.g. by checking for stereotyping of certain groups when relying on data and algorithms). If the AI application is biased, the decisions made will show preference towards certain groups of individuals.
• Ensure the integrity of source data obtained to help ensure a fair outcome from the AI application. Data which are invalid or inaccurate, when used to train an AI model, will lead to biased decisions, and affected users will be discriminated against unintentionally due to the flaw in the datasets.
10 Please note that often there is no universal legal meaning for these words/phrases and that particular definitions can differ based on their purpose and the circumstances of that situation. Therefore, in this context the definitions are purposefully broad and relatively simple, taking into account the context in which they are to be used (i.e. a broad-based decision-making framework).
11 https://www.strategy-business.com/article/What-is-fair-when-it-comes-to-AI-bias?gko=827c0
12 http://www.jennwv.com/papers/checklists.pdf
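As an illustration of checking for sample size disparity and skewed outcome rates across groups, the sketch below inspects a training set with pandas; the column names and figures are invented for the example.

```python
# Checking training data for sample size disparity and skewed outcome rates
# across groups (hypothetical data; illustrative only).
import pandas as pd

df = pd.DataFrame({
    "group":   ["A"] * 900 + ["B"] * 100,
    "outcome": [1] * 630 + [0] * 270 + [1] * 20 + [0] * 80,
})

share = df["group"].value_counts(normalize=True)   # group B is only 10% of the data
rates = df.groupby("group")["outcome"].mean()      # 0.70 for A vs 0.20 for B

print(share)
print(rates)  # large gaps on either measure warrant investigation before training
```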
3.5.1.4 Diversity and Inclusion
Principle: Inclusion and diverse usership through the AI application should be promoted by
understanding and respecting the interests of all stakeholders impacted.
This principle works under the assumption that local cultural norms do not contradict either
any general or performance principles. AI can be used globally by a great variety of people.
Their interpretation of the way an AI application behaves can consequently differ. To achieve
Diversity and Inclusion, it is important to involve the largest possible number of AI users
representing the broadest variety of cultures, interests, lifestyles and disciplines.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
3.5.1.5 Human Oversight
Human oversight is the capability of humans to make choices, that is, to think and determine an outcome and consequently act upon it. AI applications are regarded as autonomous systems to various degrees and, as they become prevalent in our lives, their function in real-world contexts is often correlated with fear and uncertainty.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• Ensure an appropriate level of human intervention based on multiple factors such as the
benefits and risks of the AI application, impacts of the AI decision, operational cost and
evolving societal norms and values. Having some level of human intervention will also
reduce job displacement risk.
• Controls should be implemented that allow for human intervention or auto-shutdown in the event of system failure, especially when the system failure will have an impact on human safety. One such example is an autonomous vehicle, where a human should be given the option to prevent the vehicle from causing an accident if it fails to detect a person on the road.
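A simple sketch of such gating, where low-confidence or high-impact decisions are routed to a human reviewer rather than auto-executed, is shown below; the threshold and labels are hypothetical.

```python
# Human-in-the-loop gating sketch (hypothetical threshold and labels).
CONFIDENCE_THRESHOLD = 0.95

def decide(prediction: str, confidence: float, high_impact: bool) -> str:
    if high_impact or confidence < CONFIDENCE_THRESHOLD:
        return "ESCALATE_TO_HUMAN"  # a human makes or confirms the decision
    return prediction               # low-risk, high-confidence: apply automatically

print(decide("approve_loan", confidence=0.80, high_impact=False))  # ESCALATE_TO_HUMAN
print(decide("approve_loan", confidence=0.99, high_impact=True))   # ESCALATE_TO_HUMAN
print(decide("approve_loan", confidence=0.99, high_impact=False))  # approve_loan
```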
3.5.1.6 Lawfulness and Compliance
Principle: Organisations responsible for an AI application should always act in accordance with the law and regulations and relevant regulatory regimes.
In all cases, the principles that AI applications must adhere to are contained in international treaties or regulations as well as in national legislation and industry standards. It is therefore indispensable for any organisation dealing with the development or implementation of AI applications to master and consistently apply all relevant obligations emanating from legislative sources at any level.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• AI systems developed must be in compliance with regulations and laws. If there are no laws to govern the use of AI, humans with bad intentions will develop AI applications that will cause harm.
• Compliance review processes for AI applications should be defined to keep track of
regulatory changes and to ensure that the policies and processes are compliant.
3.5.1.7 Data Privacy
Principle: Individuals should have the right to:
(a) be informed of the purpose of collection and potential transferees of their personal data, and be assured that personal data shall only be collected for a lawful purpose, by lawful and fair means, and that the amount of personal data collected is not excessive in relation to the purpose. Please refer to the DPP1 "Purpose and Manner of Collection" of the PD(P)O.
(b) be assured that data users take all practicable steps to ensure that personal data is accurate and is not kept longer than is necessary. Please refer to the DPP2 "Accuracy and Duration of Retention" of the PD(P)O.
(c) require that personal data shall only be used for the original purpose of collection and any directly related purposes; otherwise, the express and voluntary consent of the individuals is required. Please refer to the DPP3 "Use of Personal Data" of the PD(P)O.
(d) be assured that data users take all practicable steps to protect the personal data they hold against unauthorised or accidental access, processing, erasure, loss or use. Please refer to the DPP4 "Security of Personal Data" of the PD(P)O.
(e) be provided with information on (i) the data user's policies and practices in relation to personal data, (ii) the kinds of personal data held, and (iii) the main purposes for which the personal data are to be used. Please refer to the DPP5 "Information to Be Generally Available" of the PD(P)O.
Individuals 13 should have the right to expect that organisations will process data that pertains
to them in a manner that creates benefits for the individual or for a broader community of
people. In cases where the organisations receive most of the benefit, a demonstrable vetting
process should determine there is minimal impact on an individual. Individuals should have
the right to control data uses that are highly consequential to them. This should be facilitated
through an appropriate level and contextual application of consent and access where possible.
Where consent is not possible, suitable or less impactful, they have the right to know that
accountability processes assure the data uses are fair and responsible 14.
13 By individuals we include individuals and groups of individuals.
14 Not all uses of data are suitable for control (e.g. data used for security analysis).
It has been recognised in legislation at all levels, both globally and locally, that people have the right to be informed with respect to the use of their personally identifiable data. This is particularly important in the case of AI applications, where datasets possibly containing personal and sensitive information are used to train machine learning algorithms. Additionally, people need to understand that their digital personas, as well as the way they interact in digital environments, are profoundly different from real life.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• Communicate clearly how, why, where, and when customer data are used in AI systems.
• Ensure minimal collection and processing of personal data, and that retention policies are followed, taking into account privacy regulations such as the PD(P)O and the related PD(P)O guidance notes.
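The sketch below illustrates data minimisation and retention in code: only the fields needed for the stated purpose are kept, the direct identifier is replaced with a one-way pseudonym, and records past the retention period are dropped. The field names, retention period and truncated-hash pseudonym are hypothetical simplifications.

```python
# Data minimisation and retention sketch (hypothetical fields and period).
import hashlib
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)
NEEDED_FIELDS = {"age_band", "district", "collected_at"}  # purpose-specific

def minimise(record: dict):
    if datetime.now() - record["collected_at"] > RETENTION:
        return None  # past the retention period: do not keep the record
    kept = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    # Replace the direct identifier with a one-way pseudonym (simplified here;
    # production pseudonymisation would use a keyed hash and key management).
    kept["subject_ref"] = hashlib.sha256(record["hkid"].encode()).hexdigest()[:12]
    return kept

record = {"hkid": "A1234567", "age_band": "30-39", "district": "Sha Tin",
          "collected_at": datetime.now() - timedelta(days=30)}
print(minimise(record))  # identifier removed, purpose-specific fields retained
```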
3.5.1.8 Safety
Principle: Throughout their operational lifetime, AI applications should not compromise the
physical safety or mental integrity of mankind.
For safety, unintended risks of harm should be minimised inclusive of physical, emotional and
environmental safety. With the evolution and improvement of AI application performance in
terms of both cognitive capabilities and level of autonomy, the risk of unanticipated or
unintended behaviours increases correspondingly. Different and possibly dangerous scenarios
could arise in which AI applications attempt to take control over their own reward systems or
where the learning system fails with unpredictable consequences.
It is also necessary to determine who is responsible for what and to this regard it is possible to
say that designers and builders of advanced AI applications are stakeholders in the moral
implications of their use, misuse, and actions, with a responsibility and opportunity to shape
those implications.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• Where AI tools are used to augment humans in decision-making, they should be safe, trustworthy, and aligned with the ethics and preferences of the people who are influenced by their actions. For example, if robots are deployed to provide care for the elderly, the robots should not cause physical or mental harm to the elderly.
3.5.1.9 Accountability
Principle: Organisations are responsible for the moral implications of their use and misuse of
AI applications. There should also be a clearly identifiable accountable party, be it an
individual or an organisational entity (e.g. the AI solution provider).
Accountability ensures the responsibilities and liability of stakeholders are made clear and that
people can be held accountable. This includes ensuring that responsibilities are being fulfilled
from planning through to record-keeping.
Accountability is a fundamental principle mentioned in literature both locally and globally. It is a
cornerstone principle in most privacy frameworks and/or legislation. It is necessary to
determine who is responsible for what, and in this regard, it is possible to say that designers
and builders of advanced AI systems are stakeholders in the moral implications of their use,
misuse and actions, with a responsibility and opportunity to shape those implications.
Accountability is put into action through a comprehensive, end-to-end governance process.
Governance typically refers to the collective set of policies, procedures and oversight, internal and external, that manages the risk of systems and meets required obligations. This helps the management of ethical responsibilities and assists in tracking and mitigating risks related to big data and AI projects. Introducing an appropriate governance framework can balance the need for innovation within the organisation and the need to safeguard the public interest.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• People and organisations responsible for the creation and implementation of AI algorithms
should be identifiable and accountable for the impacts of that algorithm, even if the impacts
are unintended. When AI causes damage and there is no responsible party, the end user
affected will not be compensated fairly.
3.5.1.10 Beneficial AI
Principle: The development of AI should promote the common good.
AI applications should not cause harm to humanity and should instead positively promote the
common good and wellbeing. Technologies can be created with the best intentions, but without
considering well-being metrics, can still have dramatic negative consequences on people’s
mental health, emotions, sense of themselves, their autonomy, their ability to achieve their
goals and other dimensions of well-being.
AI applications can be ethical, legal, profitable and safe in their usage and yet not positively contribute to human well-being. Well-being metrics that include psychological, social, economic fairness and environmental factors could enable a better evaluation of the benefits of technological progress, while making it possible to test for unintended negative consequences of AI that impact and diminish human well-being.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• Development of AI should promote the progress of society and human civilisation, create smarter working methods and lifestyles, and enhance people's livelihood and welfare. If the AI application does not show any benefits to human civilisation, people may opt out of using the AI application because it does not benefit them.
• AI that does not promote the common good may hurt and destroy the well-being of society. One such example is autonomous weapons, which pose great danger to the human race.
3.5.1.11 Cooperation and Openness
Principle: A culture of multi-stakeholder open cooperation in the AI ecosystem should be fostered.
Cooperation and Openness are about building trust. This includes different stakeholders collaborating and communicating with end-users and other impacted groups on risks and plans to handle these risks. This principle is emphasised for educating the public about AI to help build trust 15. Open cooperation should involve any type of stakeholder, ranging from end-users to universities, research centres, governments and professional associations, in order to help mitigate the risk of exclusion and inherent biases within the AI application.
15 https://www.pwc.ch/en/publications/2017/pwc_responsible_artificial_intelligence_2017_en.pdf
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• Organisations should actively seek to develop and enhance AI applications across domains, sectors and organisations. If there is no cooperation between different teams, the
AI model may make biased decisions due to lack of diverse opinions/user testing. Project
Managers can work together with the Project Team as well as end-users to perform testing
before deployment.
3.5.1.12 Sustainability and Just Transition
Principle: The AI development should ensure that mitigation strategies are in place to manage
any potential societal and environmental system impacts.
AI can help transform traditional sectors and systems to address societal and environmental challenges as well as bolster human well-being. Although AI presents transformative opportunities to address some of these challenges, left unguided, it also has the capability to accelerate the degradation of society and the environment. The deployment of AI applications potentially carries profound societal and environmental impacts, as in the case of worker displacement or in the areas of caring for the elderly, sick and disabled (e.g. can AI robots replace humans, and how does this impact the people being cared for), education and planet preservation. To develop sustainable AI, the ultimate goal is to ensure that it becomes values-aligned with humanity, promising safe application of technology for humankind. In practice, this means developing checks and balances to ensure that evolving AI applications remain sustainable, and guaranteeing that AI applications with possible societal and environmental impacts are implemented with an appropriate mitigation plan.
Examples (please refer to Section 4 “AI Practice Guide” for further examples and practices):
• AI systems can be used to contribute to a smaller carbon footprint. For example, AI technology can be used to optimise power utilisation at data centres and help save energy.
• AI technology infrastructure should be scalable to sustain long-term enhancement. If an AI application were developed on infrastructure that could not be scaled up, this may lead to limitations in the future fine-tuning of the model.
3.5.2 AI Governance
AI governance refers to the practices and direction by which AI projects and applications are
managed and controlled. The following are the important elements associated with the acceptance
and success of developing and maintaining AI applications:
• Establishing governance structure to oversee the implementation of AI projects and AI
Assessment.
• Defining roles and responsibilities that affect the use and maintenance of the Ethical AI
Framework. Please refer to Section 3.5.2.1 “AI Governance Structure” for details.
• Specifying a set of practices to guide and support organisations to plan, develop, deploy
and monitor AI applications. Please refer to Section 4 “AI Practice Guide” for details.
• Assessing the adoption of those practices in terms of application impact. Please refer to
Section 5 “AI Assessment” for details.
Effective AI governance should not be a purely technology-led effort, as that only solves for concerns that the technical stakeholders may have; it does nothing to assuage concerns posed by the public, nor does it meet the requirement for end-to-end governance that integrates with the second and third lines of defence. The three lines of defence is a well-established governance concept in many organisations.
3.5.2.1 AI Governance Structure
Figure 9 shows the different defence lines and their roles. The governance structure is set up as follows.
• The first line of defence is the Project Team who is responsible for AI application
development, risk evaluation, execution of actions to mitigate identified risks and
documentation of AI Assessment.
• The second line of defence comprises the Project Steering Committee ("PSC") and
Project Assurance Team (“PAT”) who are responsible for ensuring project quality,
defining acceptance criteria for AI applications, providing independent review and
approving AI applications. The Ethical AI Principles should be addressed through the use
of AI Assessment before approval of the AI application.
• The third line of defence involves the IT Board, or the Chief Information Officer (“CIO”)
if the IT Board is not in place, and is optionally supported by an Ethical AI Committee,
which consists of external advisors. The purpose of the Ethical AI Committee is to provide
advice on ethical AI and strengthen organisations’ existing competency on AI adoption.
The third line of defence is responsible for reviewing, advising and monitoring of high-risk
AI applications.
The AI Governance Structure describes the key activities for different roles/functions and defines
their corresponding responsibilities.
Organisations make reference to the Ethical AI Framework to plan, implement, and maintain their
AI applications. The Project Team can be sourced from existing staff who are familiar with the
organisations’ business, IT project management and AI development process. The Project Team
can involve System Analysts, System Architects and Data Scientists, who execute the development of AI models and manage deployment and post-deployment performance. The size of the Project Team
will depend on the organisational structure, operating model and scope of AI application being
developed by the organisation.
Roles and responsibilities for the AI Governance Structure are listed in the table below.
Roles and Responsibilities

IT Board, or CIO if the IT Board is not in place
The IT Board is responsible for overseeing an organisation's IT applications, including AI applications, and for reviewing the AI Application Impact Assessment (please refer to Section 5 "AI Assessment" for details) for high-risk AI applications before commencement of projects. The Terms of Reference ("TOR") for the IT Board outline high-level responsibilities including steering the use of IT in the organisation, integrating the Information Strategy direction with the business objectives and ensuring that IT initiatives support the organisation's direction and policies. This can be achieved for AI projects through review of an AI Application Impact Assessment before project commencement.
Any AI projects that trigger one of the risk gating criteria, as mentioned in Section 5.1 "AI Application Impact Assessment" and Appendix C "AI Application Impact Assessment Template", within an AI Application Impact Assessment are defined as high-risk AI applications and would require the IT Board's approval. The Project Manager is responsible for verification of answers to the risk gating criteria to determine if the AI application is of high risk, with support from the Project Team as needed. The definition of high-risk AI applications can be further refined by the IT Board as the experience and AI capabilities of an organisation increase.
The IT Board has the responsibilities to:
• Review AI Application Impact Assessment for high-risk AI
applications.
• Review the recommendations and remediation actions provided
by the Project Team based on the reviewed AI Application
Impact Assessment and provide comments to the PSC/PAT and
the Project Team to ensure appropriate considerations of risks,
ethical aspects and benefits.
Ethical AI Committee (optional)
The Ethical AI Committee is responsible for advising the IT Board/CIO on ethical issues related to AI applications and AI Assessment. The Ethical AI Committee would mainly consist of external advisors with expertise in technical aspects (e.g. Data, AI Analytics), ethics, legal, risk/benefits assessment and security. Committee members should be well versed in assessing ethical type issues for AI projects and be aware of the criteria for approving an AI project. For example:
• Risks are determined to be reasonable and have been mitigated
in relation to the anticipated benefits that may reasonably be
expected to result;
• Risks to the population making up the data subjects (e.g.
children, prisoners, educationally disadvantaged persons,
mentally disabled, as well as other vulnerable groups) are
considered;
• The ethical (permissible) basis for the collection and use of the
personal data are appropriately documented (e.g. it is within the
scope of expected use, consent was obtained);
• There are adequate provisions to protect the privacy of
individuals involved in the project; and
• The related AI Assessment and decisions reached by the Project
Team and the IT Board/CIO.
• Bring oversight and external knowledge to assist organisations
when trying to provide trust and transparency over the AI
application for the public.
Project Steering Committee ("PSC") / Project Assurance Team ("PAT")
PSC/PAT have the responsibilities to:
• Define acceptance criteria for AI applications and overall requirements.
• Review AI Assessment to ensure impacts of the AI application are managed.
• Perform ongoing monitoring throughout the AI Lifecycle.
• Provide comments for high-risk AI applications.
• Provide signoff prior to AI application delivery.
• Communicate with the Office of the Privacy Commissioner for
Personal Data (“PCPD”) for high-risk AI projects that have
potential data privacy issues as appropriate.
• Seek advice from your legal department or lawyers for high-risk
AI projects that have potential legal issues as appropriate.
Project approval is always required through existing project
management structures.
Project Team
The Project Team is responsible for the delivery of AI projects and AI Assessment. Members of the Project Team can be internal or contractor resources who are assigned to complete the project tasks as directed by a Project Manager. Examples of key roles for the Project Team include System Analysts, System Architects and Data Scientists. Business users can also be included.
The Project Team has the responsibilities to:
• Assist the Project Manager in ensuring that the AI application
complies with the quality standards throughout the AI Lifecycle
(e.g. AI and data ethics, quality procedures, industry standards
and government regulations).
• Complete AI Assessment and deliver AI applications. Each AI
application being developed by the Project Team should have an
AI Application Impact Assessment completed.
• Recommend AI projects and provide AI Application Impact
Assessment (via the Project Manager) for the PSC/PAT’s
review.
• Develop organisation specific AI standards and guidelines if
necessary, leveraging the Ethical AI Framework.
• Communicate with the PCPD for high-risk AI projects that have
potential data privacy issues as appropriate.
• Seek advice from the organisation's legal department or lawyers for high-risk
AI projects that have potential legal issues.
Project Manager (Project Team)    The Project Manager has the responsibilities to:
• Ensure that the AI application complies with the quality
standards throughout the AI Lifecycle (e.g. AI and data ethics,
quality procedures, industry standards and government
regulations).
• Ensure that reports/checks/assessments are performed on the AI
Project including any data governance checks. This can form
part of the quality checks that the Project Managers are
responsible for.
• Qualify the use case and develop the end-to-end vision and
subsequent design of the AI application.
• Ensure that relevant training on AI is conducted to upskill
existing staff in relation to the AI model.
• Provide administrative support to the IT Board/CIO in arranging
meetings, preparing minutes, drafting documents and
deliverables, circulating materials to respective parties for
comment and approval, triaging inquiries and coordinating with
different stakeholders over AI initiatives.
Even for Proof of Concept ("POC") projects, a Project Manager with
similar responsibilities should be assigned.
System Analysts, System Architects (Project Team)    System Analysts and System Architects have the responsibilities to:
• Assign and track ownership of data sets used in AI models.
• Ensure licences for any purchased data are in place.
• Handle Extract, Transform and Load ("ETL") activities that prepare and transform the data for Data Scientists.
• Ensure existing processes such as archival, backup, security, privacy and retention are adhered to.
Data Scientists (Project Team)    Data Scientists have the responsibilities to:
• Determine the capabilities of the AI application and the training procedures needed to achieve the defined capability. This includes POCs.
• Follow controls and procedures defined for model testing and validation.
• Notify project management, who subsequently notify the IT Board/CIO, of changes to systems or infrastructure that impact governance and control of AI applications.
• Manage, integrate, scale and deploy AI applications.
• Transfer AI applications to production code and perform model training at scale.
• Manage post-deployment performance and stability of AI applications.
• Manage infrastructure and platforms for AI application
development, training, deployment, monitoring and
testing/validation as well as managing, monitoring and
troubleshooting AI applications.
3.5.3 AI Lifecycle
In order to structure the practices for organisations to follow when executing AI projects/creating
AI applications, practices in different stages of the AI Lifecycle will be described in Section 4.
The AI Lifecycle shows the different steps involved in AI projects that can:
• Guide organisations to understand the different stages and requirements involved; and
• Serve as a reference for the development of AI practices to align to actual stages of how AI is
typically developed.
From this AI Lifecycle, key competencies and capabilities can be implemented that serve as the
ingredients of an effective, comprehensive programmatic governance system. Further descriptions
of the corresponding coverage and capabilities for each AI Lifecycle stage are provided below.
1. Project Strategy
• The AI and Data strategy for individual organisations should be formulated with alignment
to its strategic goals and the Ethical AI Principles. Such strategy should be documented and
effectively communicated within the organisations.
• The key leadership roles accountable for the implementation of the integrated strategy and
key roles who have responsibility for execution should be formally assigned. Subsequently,
a comprehensive and effective process should be established to routinely monitor and
analyse external changes in AI and data use expectations. These changes should be
analysed and used to routinely adapt the organisation's strategy, policies and governance.
• Ethical AI Principles should be formally established, aligned with overall organisational
strategy and effectively integrated into the organisation’s policies, procedures and
governance framework.
• The overall AI application governance process includes a well-embedded assessment
process to ensure all policies are effectively applied, and the application should be evaluated
against benefits, impacts and risks with respect to all stakeholders.
2. Project Planning
3. Project Ecosystem
• A process or procedure with clear responsibilities should be defined to account for
possible future technology requirements, necessary model updates, etc. This includes a
formalised process to evaluate the existing technology landscape, needs and sourcing
options to identify gaps and to adjust the AI roadmap as required.
• Formalised responsibility and processes should be established to evaluate and ensure all
related staff are equipped with the skills and knowledge they need to take on the goals and
responsibility for AI objectives.
• The sourcing team should be routinely evaluated and allocated the right expertise to
perform the change management delivery, training and transition to business as usual.
• Third-party vendor tools, data and techniques should be evaluated to ensure alignment with
Ethical AI Principles, data use and/or AI governance.
4. Project Development
• A defined and programmatic project management plan and system, aligned with the
organisation’s ethical values, should be developed to address all required organisational
processes, functional and non-functional requirements, business value alignment and risk
and testing assessment as part of development process for all new AI initiatives. This
includes formal integration to the enterprise master data management requirements.
• The integrated development of data, analytics/AI, automation/software with Ethical AI
Principles should be embedded across all the organisational dimensions during design and
development with appropriate validation and verification.
• AI application and data suitability should be matched to the business objectives and
technology required.
• Where third-party data or technology is used, all required organisational process, risk and
testing requirements should be formally assessed. Concurrently, requirements should be
defined, and project management should incorporate strategy.
• Plans and/or systems, or a set of procedures should exist to avoid creating or reinforcing
unfair bias in the AI system, both regarding the use of input data as well as for the algorithm
design.
• Formalised processes should be established to test and monitor for potential biases during
the development, deployment and operation of the AI application.
• The AI application should be evaluated for reliability, model sensitivity and model
performance against the formally selected definition of fairness.
• Trade-offs between bias and performance, and between interpretability and
performance, should be routinely evaluated.
5. System Deployment
• A formalised process with assigned responsibility should be in place to ensure all the AI
applications are assessed across all dimensions for impact on all stakeholders at
deployment. This includes assessing whether an appropriate balance of benefits and
mitigated risks supports the AI application and data processing activity, achieves a goal of
ethical AI and that effective mitigating controls are established to reduce risk.
• A formalised decision-making structure and risk-based escalation path should be
established to resolve issues, assess the usage of high-risk data and make decisions as well
as formally approve the deployment of AI applications.
• Best practices of Machine Learning (“ML”) Engineering, ML Operations (“MLOps”)/
Model Operations (“ModelOps”), Data Operations (“DataOps”) and Ethical AI Principles
should be embedded across all dimensions of the organisation.
• A defined project management system should be established to assess and develop a
comprehensible plan to roll out AI applications and management systems and processes
should be in place to continuously identify, review and mitigate risks of using the identified
AI applications post deployment.
• The project management system and process should ensure the project continues to add
value to the services, and that benefits have been defined, are measured, and are actively
tracked and reported on.
The AI Lifecycle aligns with a traditional software lifecycle model as depicted in Figure 11.
There is often a continual feedback loop between the development and deployment stages, as well
as between system operation and monitoring, for iterative improvements; this makes the AI
Lifecycle distinct from a traditional software development lifecycle.
SECTION 4
AI PRACTICE GUIDE
4. AI PRACTICE GUIDE
Organisations can refer to this AI Practice Guide throughout various project stages of AI Lifecycle
starting from strategy, planning, ecosystem, development, deployment and ongoing monitoring of
AI applications. Organisations should continue to follow the related existing ordinances, policies,
and guidelines for IT projects which are relevant to their respective business domain or industry
and current practices. Practices listed in the AI Practice Guide are additional, ethical AI-related
guidance that organisations are advised to adopt.
This section describes good practices for organisations to consider and adopt as they progress along
the AI Lifecycle when developing AI applications. Organisations can follow the AI Practice Guide
at an early stage when conducting the planning stage of the AI Lifecycle for their projects.
Gradually, they can then use the AI Practice Guide for other AI Lifecycle areas based on
organisational needs and progress with AI.
The AI Practice Guide contains sections that require technical data and AI knowledge. The Project
Manager should work with the Project Team to ensure that the appropriate data analysis has
been done and evidenced in the assessment. Training for Project Teams involved in AI is fundamental
to ensure that these teams can achieve AI capabilities.
Examples used in the AI Practice Guide are taken from different sources and are purely
illustrative. They do not imply that the specific examples have to be followed by
organisations.
Note: Roles mentioned in Section 3.5.2 "AI Governance" are related to the AI Governance
structure, while the intended audience of the Ethical AI Framework (as listed under Section 2
"Purpose") is a wider group. Other roles, such as the procurement team, also exist and
are referenced in the AI Practice Guide because they will be involved in certain practices/regulations
(e.g. organisational procurement practices would be involved when a project team is looking to
purchase AI solutions); however, these existing roles would not be part of the AI Governance
structure and hence are not included in Section 3.5.2 "AI Governance".
The AI Lifecycle stages (please refer to Section 3.5.3 “AI Lifecycle”) are used to group the
different practices with examples. Subject to the situation and context of individual organisations,
each organisation can decide the action party among the suggested candidates.
Definition:
An organisation strategy for deployment of AI projects should be aligned with any existing
organisation goals and IT strategy. The Ethical AI Principles should be considered or included in
formulating the organisational strategy and its decision-making process.
Definition:
Relevant regulations and standards require an assessment to ensure that AI and related processes
adhere to any relevant laws or standards.
3. Lawfulness and Compliance – Ensuring industry standards and regulations are considered
in the project so that the organisation and AI application users are aware of them and can
comply with them.
Definition:
A portfolio is defined as a collection of projects. Portfolio management is performed to ensure that
the individual IT investments embedded in the organisation’s processes, people and technology
are on course.
Definition:
It is important to decide whether an AI project will need a review from the IT Board/CIO at the
outset of the project. Each project has its characteristics and features as well as business values.
Roles and responsibilities are required to operationalise AI and ensure accountability. Please refer
to Section 3.5.2 “AI Governance” for details. The Project Manager should define the quality
standard, quality control and assurance activities and the acceptance criteria for major deliverables
of the project in the Quality Management Plan. Quality control not only monitors the quality of
deliverables, but it also involves monitoring various aspects of the project as defined in the Project
Management Plan (“PMP”) to ensure that the AI application complies with the quality standards
throughout the AI Lifecycle (e.g. AI and data ethics, quality procedures, industry standards and
government regulations).
Definition:
A technology roadmap should enable organisations to plan and strategise what
technologies will be procured for AI and big data analytics, and when. An effective technology roadmap
should outline a strategy to achieve the digital transformation goals.
Definition:
AI projects can be conducted through outsourcing arrangements. Off-the-shelf products or even
external data can be procured for AI projects. In conducting such procurement exercises,
organisations should duly consider the related ethical considerations.
16. https://www.cmde.org.cn/CL0101/20139.html
17. https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf
Definition:
When developing AI applications, organisations should determine the objectives of using AI and
weigh and balance the benefits and risks of using AI in the decision-making process.
Definition:
When developing AI applications, organisations should determine the design requirements. This
includes assessing various aspects such as the suitability of data and technology and the degree of
human intervention required. Riskier decisions should incorporate a higher level of human
intervention in the process.
18. https://www.kdnuggets.com/2020/05/guide-choose-right-machine-learning-algorithm.html
Definition:
AI models rely on information from various sources, within or outside of the organisations, which
can carry high risks associated with data quality, validity, reliability and consistency. An Extract,
Transform and Load (“ETL”) tool is typically used to extract huge volumes of data from various
sources and to transform and load the data based on the AI model’s needs. Complex data cleansing
and transformation steps can be prone to unintended user errors that are difficult to identify and
may lead to erroneous modelling results. Data integrity is a necessary component to ensure data
fairness. Data integrity ensures that the results generated from the AI application are not generated
by biased or skewed datasets.
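As an illustration of such data integrity checks, the following Python sketch validates an extracted dataset for duplicates, missing values and out-of-range values before it is passed to modelling. The DataFrame, column names and value ranges are illustrative assumptions, not prescribed checks.

import pandas as pd

def validate_extract(df: pd.DataFrame) -> list:
    """Return a list of data integrity issues found in an extracted dataset."""
    issues = []
    if df.duplicated().any():
        issues.append(f"{df.duplicated().sum()} duplicate row(s) found")
    null_counts = df.isna().sum()
    for col, n in null_counts[null_counts > 0].items():
        issues.append(f"column '{col}' has {n} missing value(s)")
    # Hypothetical domain rule: ages should fall within a plausible range
    if "age" in df.columns and not df["age"].dropna().between(0, 120).all():
        issues.append("column 'age' contains out-of-range values")
    return issues

# Toy extracted dataset with deliberate defects
df = pd.DataFrame({"age": [25, 999, 30], "district": ["A", None, "B"]})
for issue in validate_extract(df):
    print("integrity check failed:", issue)

Checks of this kind can be run after each ETL step so that unintended transformation errors are caught before they distort modelling results.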
For adherence to the "Reliability, Robustness and Security" principle, organisations should
implement error handling mechanisms. For example, an AI model for counting footfall from CCTV
should be tested using actual video and re-tested whenever there are code changes. Data extracted
from videos could be broken down into different stages of extraction to identify errors (for
example, checking for objects wrongly excluded at each stage of extraction).
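One way to realise this staged testing, sketched below in Python, is to decompose the pipeline into stages and assert the expected output of each stage, so that a regression introduced by a code change can be localised. All stage functions and expected counts here are hypothetical placeholders rather than a prescribed design.

def detect_objects(frame):
    # Stage 1 (placeholder): raw object detections from one video frame
    return [{"kind": "person"}, {"kind": "car"}, {"kind": "person"}]

def filter_people(detections):
    # Stage 2: keep pedestrian detections only
    return [d for d in detections if d["kind"] == "person"]

def count_footfall(people):
    # Stage 3: aggregate the footfall count
    return len(people)

def test_pipeline_stages():
    frame = object()  # stand-in for a real video frame
    detections = detect_objects(frame)
    assert len(detections) == 3, "stage 1: unexpected detection count"
    people = filter_people(detections)
    assert len(people) == 2, "stage 2: pedestrians wrongly excluded"
    assert count_footfall(people) == 2, "stage 3: count mismatch"

test_pipeline_stages()  # re-run whenever the extraction code changes
print("all stage checks passed")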
4.1.4.4 Pre-processing
Definition:
In the data processing procedures for AI modelling purposes, the data processed need to be fair
and appropriate in terms of data samples, size and distributions to ensure the AI application
formulates meaningful and representative inferences. Representational flaws in datasets such as
overrepresentation or underrepresentation of data samples may lead to bias in the outcomes of
trained AI models.
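As a simple illustration, the following Python sketch compares the share of each group in a training sample against assumed reference population shares and flags large deviations. The group names, shares and the 20% relative tolerance are illustrative assumptions only.

from collections import Counter

# Assumed population shares (e.g. from census or service statistics)
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# Toy training sample with a skewed age distribution
sample = ["18-34"] * 700 + ["35-54"] * 250 + ["55+"] * 50

counts = Counter(sample)
total = sum(counts.values())
for group, expected in population_share.items():
    observed = counts.get(group, 0) / total
    if abs(observed - expected) > 0.2 * expected:  # >20% relative deviation
        print(f"{group}: observed {observed:.2f} vs expected {expected:.2f} "
              f"- possible representational flaw")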
Sensitive data containing an individual’s information requires extra care during solution
development to prevent data leakage as well as breaches of privacy and security policies. The
amount of personal data collected, processed and used should be minimised where feasible.
Note: For considerations related to the use of personal data and privacy in IT projects including AI
or big data projects, organisations should refer to PD(P)O guidance.
19. A hyperparameter is a parameter whose value is used to control the learning process of a machine learning model, for example the maximum tree depth of a random forest model.
20. https://github.com/dssg/aequitas
21. https://aws.amazon.com/sagemaker/clarify/
22. https://fairlearn.org/
Model building encompasses the following areas which will be discussed in detail in the
subsequent subsections:
• Model Assumptions
• Model Objectives and Incentives
• Input Variable Selection
• Model Overfitting
• Model Training Ownership
• Adversarial Attacks
Definition:
Model assumptions are conditions that should be satisfied by the model before performing the
relevant modelling analysis. The assumptions underpinning the model must be checked for
accurate interpretations and conclusions.
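As one concrete example, a linear regression assumes approximately normal residuals; the Python sketch below fits a simple model on synthetic data and tests that assumption before the coefficients are interpreted. The data, model and 0.05 threshold are illustrative choices rather than prescribed practice.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 1, 200)      # synthetic linear data

slope, intercept = np.polyfit(x, y, 1)          # fit a simple linear model
residuals = y - (slope * x + intercept)

stat, p_value = stats.shapiro(residuals)        # normality test on residuals
if p_value < 0.05:
    print("residuals deviate from normality - revisit model assumptions")
else:
    print(f"no evidence against normality (p={p_value:.2f})")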
Definition:
AI applications motivated to achieve targets can be misused to satisfy the stated objective
while failing to solve the underlying problem, resulting in undesirable behaviours.
Definition:
Input variables or features are values within datasets that are loaded into an AI application for the
purpose of training the AI model. A robust AI model relies on these informative inputs to provide
an output, often referred to as a target variable (i.e. what the AI model is trying to predict).
Selection of input variables should consider both organisational knowledge and causal
relationships.
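To illustrate one statistical aid to this selection, the Python sketch below ranks candidate input variables by mutual information with the target variable; the synthetic features are assumptions for demonstration. A high score alone does not establish a causal relationship, so any shortlist should still be reviewed against organisational knowledge.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
n = 500
informative = rng.normal(size=n)                 # genuinely related to the target
noise = rng.normal(size=n)                       # unrelated candidate variable
y = (informative + 0.1 * rng.normal(size=n) > 0).astype(int)
X = np.column_stack([informative, noise])

scores = mutual_info_classif(X, y, random_state=0)
for name, score in zip(["informative", "noise"], scores):
    print(f"{name}: mutual information = {score:.3f}")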
Definition:
Overfitting is a modelling error that emerges when an AI model is trained to closely fit a limited
set of data points. An overfitted model will often exhibit high accuracy on the training dataset but
low accuracy on new data. If the AI model does not generalise well from the training data to new
or unseen data, the AI model may perform poorly in its prediction.
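The gap between training and held-out accuracy is a simple diagnostic for this. In the Python sketch below, an unconstrained decision tree memorises synthetic training data, while limiting tree depth (one form of regularisation) narrows the gap; the dataset and depth values are illustrative.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (None, 3):           # None lets the tree grow until leaves are pure
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    model.fit(X_tr, y_tr)
    print(f"max_depth={depth}: train accuracy={model.score(X_tr, y_tr):.2f}, "
          f"test accuracy={model.score(X_te, y_te):.2f}")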
Definition:
AI models where the training process is partially or fully outsourced to the public cloud or relies
on third-party pre-trained models can introduce new security risks. Please note that if the AI model
training process is not outsourced, the practices and examples in this subsection are not applicable.
Definition
An adversarial attack takes place when malicious actors deceive the AI models to intentionally
influence the AI application’s outputs without being detected. This is attempted by modifying the
input data to induce the AI model to make an incorrect prediction. Such attacks can occur in the
training phase or the test phase. Attacks that appear during the training phase are known as
poisoning attacks whereas attacks that exist in the test phase can be identified as evasion attacks.
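The Python sketch below illustrates the mechanics of a simple evasion attack in the style of the fast gradient sign method: the input is perturbed in the direction that increases the model's loss until the prediction flips. The linear classifier weights and the attack budget (epsilon) are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0])            # weights of an assumed trained linear classifier
b = 0.1
x = np.array([1.0, 0.5])             # a legitimate input whose true label is y = 1
y = 1.0

p = sigmoid(w @ x + b)
grad_x = (p - y) * w                 # gradient of the log-loss with respect to the input
x_adv = x + 0.8 * np.sign(grad_x)    # epsilon = 0.8 (attack budget)

print("clean prediction positive:      ", bool(sigmoid(w @ x + b) > 0.5))
print("adversarial prediction positive:", bool(sigmoid(w @ x_adv + b) > 0.5))

Robustness testing against perturbations of this kind, for example with open-source toolkits such as the one referenced below, can form part of model validation.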
23. https://www.ibm.com/blogs/research/2018/04/ai-adversarial-robustness-toolbox/
Definition:
Verification, validation and testing is the process of ensuring the AI applications perform as
intended based on the requirements outlined at the beginning of the project. AI applications should
be thoroughly tested before deployment to evaluate if the application breaks down and whether it
performs as intended.
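A minimal sketch of such pre-deployment testing in Python is shown below: metric thresholds drawn from the project's requirements are asserted before the model is released. The metrics, thresholds and synthetic data are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, class_sep=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

requirements = {"accuracy": 0.80, "recall": 0.75}   # assumed acceptance criteria
results = {"accuracy": accuracy_score(y_te, pred),
           "recall": recall_score(y_te, pred)}

for metric, threshold in requirements.items():
    assert results[metric] >= threshold, f"{metric} below requirement"
    print(f"{metric}: {results[metric]:.2f} (requirement >= {threshold})")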
Definition:
Assuming the AI application may fail, mitigation steps should be incorporated to minimise
damage in the case of failure.
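One common failsafe pattern, sketched below in Python, is to route a case to human review whenever the model errors out or its confidence falls below a threshold; the threshold and model are illustrative assumptions rather than a prescribed design.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.9   # assumed minimum confidence for automated decisions

def predict_with_fallback(model, features):
    try:
        confidence = max(model.predict_proba([features])[0])
        if confidence < CONFIDENCE_THRESHOLD:
            return "ROUTE_TO_HUMAN_REVIEW"   # uncertain: do not decide automatically
        return model.predict([features])[0]
    except Exception:
        return "ROUTE_TO_HUMAN_REVIEW"       # model failure: fall back to a safe default

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(predict_with_fallback(model, X[0]))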
Definition
Provide feedback to increase learning and robustness of the AI application.
Definition
Traceability, Repeatability and Reproducibility are required to ensure the AI application is
operating correctly and to help build trust from the public and key stakeholders.
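As a minimal sketch of supporting traceability and reproducibility, the Python example below records the random seed, a hash of the training data and the hyperparameters of a run so the run can later be reproduced and audited; the file name and fields are illustrative assumptions.

import hashlib
import json
import numpy as np

SEED = 42
rng = np.random.default_rng(SEED)
X = rng.normal(size=(100, 5))                        # stand-in training data

run_record = {
    "seed": SEED,
    "data_sha256": hashlib.sha256(X.tobytes()).hexdigest(),
    "hyperparameters": {"max_depth": 3, "n_estimators": 100},
}
with open("run_record.json", "w") as f:
    json.dump(run_record, f, indent=2)               # retained as an audit record

print("recorded data hash:", run_record["data_sha256"][:16], "...")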
Definition:
AI Models (as part of AI applications) should be continuously monitored and reviewed due to the
likelihood of the AI models becoming less accurate and less relevant. This can happen when the
data and the environment are continually changing with time.
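Drift in the input data is one common cause of such degradation. The Python sketch below monitors one feature with the Population Stability Index (PSI); the distributions are synthetic and the 0.2 alert threshold is a common rule of thumb, not a requirement of this Framework.

import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)    # feature distribution at training time
live = rng.normal(0.5, 1.0, 5000)        # shifted distribution observed in operation

score = psi(baseline, live)
print(f"PSI = {score:.3f}", "- investigate drift" if score > 0.2 else "- stable")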
Definition:
Upon AI application deployment, ongoing operational support should be established to ensure
that the AI application's performance remains consistent, reliable and robust.
Definition:
The Continuous Review/Compliance function should be established to monitor and evaluate the
AI application to ensure its adequacy, efficiency and effectiveness. The Project Manager,
PSC/PAT and IT Board/CIO are responsible for monitoring risks of AI such as non-compliance with
applicable laws and regulations.
SECTION 5
AI ASSESSMENT
5. AI ASSESSMENT
The AI Assessment suggested in this section provides a set of targeted questions (aligned to the AI
Lifecycle) to assist organisations to assess, identify, analyse and evaluate the benefits and impacts
of AI applications, to ensure they meet the intent of the Ethical AI Principles, and to determine
the appropriate mitigation measures required to keep any negative impacts within an acceptable
level.
1. Risk Gating Criteria – A set of questions that are used to distinguish high-risk AI
applications. These questions should be completed at the beginning of a proposed AI project
or when conditions of the AI application change. AI applications which are considered
high-risk would subsequently require review and approval by the IT Board/CIO. A sample of the
risk gating questions is shown below. Regardless of the answers to the risk gating criteria, the
Project Team should complete the other parts of the AI Application Impact Assessment.
2. AI Application Impact Assessment Questions – The questions are provided to ensure that
the impact of the AI application is identified and managed across the AI Lifecycle stages and
that related Ethical AI Principles have been considered. The questions consider the impact of
the AI applications which includes benefits, risks, the effects on individuals’ rights and the
balancing of different interests. Answers provided by the Project Team will be assessed
qualitatively. A sample of the AI Application Impact Assessment Questions is shown in Figure
14.
An AI Application Impact Assessment should be conducted regularly (e.g. annually or when major
changes take place) as AI projects progress and when the AI application is being operated.
The stages of the AI Lifecycle where AI Application Impact Assessment should be reviewed are
shown in Figure 15.
For new AI applications, organisations should complete the AI Application Impact Assessment
and review in the Project Planning, Project Development, System Deployment and ‘System
Operation and Monitoring’ AI Lifecycle stages. These serve as checkpoints to ensure necessary
requirements are identified and incorporated in subsequent AI Lifecycle stages
appropriately. The AI Application Impact Assessment can be used as a 'live' document throughout
the AI Lifecycle, but the associated AI Application Impact Assessment should be reviewed at four
key stages of the AI Lifecycle (please refer to Figure 15), with a copy of the AI Application Impact
Assessment being retained for historical records.
AI Lifecycle: System Operation and Monitoring – IT Board/CIO (or its delegates)
5.3 RECOMMENDATION
SECTION 6
APPENDIX
6. APPENDIX
APPENDIX A – GLOSSARY
List of terms and definitions used in this document
Term Definition
Class label    The class label is the distinct attribute/feature whose value will
be predicted based on the values of other attributes/features
in the dataset. In other words, the class is the category into which
data are classified based on the common property the data share
with other data within that category, while the label is the outcome
of the AI model's classification process.
Clustering Machine learning algorithm that involves grouping similar
data points together.
Data lake Centralised repository that stores structured and unstructured
data.
Data lineage Data lineage describes the transformation of data over time
right from the beginning of its creation.
Data mart Subset of data warehouse designed for a specific business
domain such as finance and operations.
Data warehouse    Centralised and large repository, usually designed for
analytics purposes, which aggregates data from various system
sources.
Decision tree    An algorithm that uses a tree representation to solve a
problem, where each leaf node represents a class label and the
internal nodes of the tree represent attributes.
Human-in-the-loop Human-in-the-loop refers to the capability for human
intervention in every decision cycle of the system.
Human-in-command Human-in-command refers to the capability to oversee the
overall activity of the AI system (including its broader
economic, societal, legal and ethical impact) and the ability
to decide when and how to use the system in any particular
situation.
Human-out-of-the-loop    Human-out-of-the-loop refers to the capability of the AI
system to make decisions without human intervention.
K-fold    Cross-validation technique where the original sample is
randomly partitioned into k equal-sized subsamples.
Logistic regression    A type of classification algorithm used to predict a binary
outcome based on a set of independent variables.
Model-agnostic    A model-agnostic approach is a model-independent approach used to
study the underlying structure of an AI model without assuming that
it can be accurately described by the model itself.
Personally identifiable Data or information that can be used to identify an individual
information (“PII”) such as identification number, biometric and address.
Random over sampling    Technique that involves randomly duplicating examples from
the minority class and adding them back to the original
training dataset.
Random resampling    Technique that involves creating a new version of the training
dataset that has a different class distribution. This technique
aims to achieve a more balanced class distribution in the new
training dataset.
Random under sampling    Technique that involves randomly selecting examples from the
majority class to delete from the original training dataset.
Random forest    A classification method that operates by constructing
decision trees during the training stage and outputting the class that
is the mode of the classes or the mean/average prediction of the
individual trees.
Regression Supervised machine learning technique used to make
prediction by estimating the relationship between variables.
Regression testing    Testing performed to confirm that recent code/programme
changes do not negatively affect the existing AI application's
performance.
Reinforcement learning Reinforcement learning is the training of AI models to make
decisions when dealing with problems and learn by reward
and punishment based on feedback from its own actions.
Risk gating criteria    A set of questions that are used to distinguish high-risk AI
applications. These questions should be completed at the
beginning of a proposed AI project or when conditions of the
AI application change.
Support Vector Machine Supervised learning models with associated learning
algorithms that analyse data for classification and regression
analysis.
Surrogate model A surrogate model is an engineering method used when the
predictions of an AI model cannot be easily understood or
measured, so a model of the outcome is used instead.
Unseen data    Data which are new and have never been 'seen' by the AI
model.
Note
Please refer to Section 5.1.1.1 "Process for Completing AI Application Impact Assessment" for
details on the process to complete the AI Application Impact Assessment.
Legend
• Text covers the core questions that are to be addressed in a qualitative manner
• Italicised text is added context to support the core question and should be used as an aid to
provide the qualitative answer
Question Context
a - Is the AI application within an area of intense public scrutiny (e.g. because of privacy concerns) and/or frequent litigation?
For example, certain "Internet of Things" applications could have a significant impact on individuals' daily lives and privacy. Examples of such applications include Smart Cities applications:
• Smart Lighting – intelligent weather-adaptive street lighting
• Smart Traffic Management – monitoring of vehicles and pedestrians
• Smart Parking – monitoring of parking spaces
Therefore, this requires an IT Board/CIO review of the AI Application Impact Assessment. The combination of facial recognition and a new, sensitive or potentially controversial use should be considered.
b - Is the AI application applied in a new (social) domain (i.e. where AI has not been used in Hong Kong)?
For example, an AI application that is used for the first time in healthcare while previously it was only used for marketing purposes. Due to the change of domain, it is possible that the AI application will raise (new) ethical questions. When the AI application takes place in a sensitive social area, the risks and the ethical issues are potentially greater. Think of topics such as care, safety, the fight against terrorism or education. Think of vulnerable groups such as children or the disabled.
c - (i) Does the AI application have a high degree of autonomy? If the answer to question (i) is 'Yes', please proceed to question (ii).
(ii) Is it used in a complex environment? If the answer to question (ii) is 'Yes', please proceed to question (iii).
(iii) Does the AI application make automated decisions that have a significant impact on persons or entities or that have legal consequences for them? If the answer to question (iii) is 'Yes', such an application will likely be considered a higher-risk application.
The more an AI application acts independently with increased freedom to make decisions, the more important it is to properly analyse the consequences of this autonomy. In addition to the freedom to make decisions, autonomy can also lie in the possibility of selecting data sources autonomously.
When the AI application is situated in a complex environment, the risks are greater than when the AI application is in a confined environment.
When the AI application makes decisions automatically (without human intervention) and the decision can lead to someone experiencing legal consequences of that decision or being significantly affected otherwise, the risk is greater. Think of not being able to get a mortgage, losing your job, a wrong medical diagnosis or reputational damage due to a certain categorisation that can lead to the exclusion of or discrimination against individuals. Processing with little or no effect on individuals does not match this specific criterion.
Examples of such AI applications are autonomous vehicles, autonomous military drones and surgical robots. Such AI applications are considered to be used in a complex environment because the decisions made by the AI depend on various surrounding environmental factors (e.g. surrounding human activities). Applications such as autonomous vehicles and autonomous military drones are considered to have a high degree of autonomy because decisions are made entirely by the AI.
d - Is sensitive personally identifiable information used?
When sensitive personally identifiable information is used in the development and/or deployment of AI applications, the risk is higher. For example, data consisting of racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, data concerning health or data concerning a natural person's sex life or sexual orientation.
e - Does the AI application make complex decisions?
As the decision-making by the AI application becomes more complex (for example, more variables or probabilistic estimates based on profiles), the risks increase. Simple AI applications based on a limited number of choices and variables are less risky. If the way in which an AI application has come to its decisions can no longer be (fully) understood or traced back to people, then the risks resulting from the decision are potentially greater.
f - Does the AI application involve systematic observation or monitoring?
Processing used to observe, monitor or control individuals, including data collected through networks or "a systematic monitoring of a publicly accessible area". Examples would include widespread video surveillance data and network behavioural tracking. This type of monitoring is a criterion because the personal data may be collected in circumstances where individuals may not be aware of who is collecting their data and how they will be used. Additionally, it may be impossible for individuals to avoid being subject to such processing in public (or publicly accessible) space(s).
g - Does the AI application involve evaluation or scoring of individuals?
Including profiling and predicting, especially from "aspects concerning the data subject's performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements".
Examples of this could include:
• Financial institution that screens its customers against databases for credit referencing, anti-money laundering ("AML"), counterterrorism or fraud checks;
• Biotechnology company offering genetic tests directly to consumers in order to assess and predict disease/health risks; or
• Company building behavioural or marketing profiles based on usage or navigation on its website.
h - Are personal data processed on a large scale and/or are data sets combined?
While "large scale" is difficult to define, consider the following factors when determining whether the processing is carried out on a large scale:
a. the number of individuals concerned, either as a specific number or as a proportion of the relevant population;
b. the volume of data and/or the range of different data items being processed;
c. the duration, or permanence, of the data processing activity; or
d. the geographical extent of the processing activity.
9 - (i) How has "fairness" been described? (ii) What steps are in place to measure and test for achieving this?
Given there is no single definition of fairness that will apply equally well to different AI applications, the goal is to detect and mitigate fairness-related harms as much as possible. AI applications can behave unfairly due to biases inherent in the data sets used to train them or biases that are explicitly or implicitly reflected in decisions made by the development teams, or can result in unfair behaviour when these applications interact with particular stakeholders after deployment. Types of harm and risk can include allocation, quality of service, stereotyping,
Reference: Section 4.1.2.2 – Consider all Ethical AI Principles throughout the AI Lifecycle; Section 4.1.4.4 – Define and test for fairness and bias; Section 4.1.4.5.4 – Perform regularisation and other forms of model selection
Action party: IT Planners/Executives, Business Users, Data Scientists, Project Managers
Did you put in place verification and validation methods and documentation (e.g. logging) to evaluate and ensure different aspects of the AI application's reliability and reproducibility? Did you put in place measures that address the traceability of the AI application during its entire lifecycle? Did you put in place measures to continuously assess the quality of the input data to the AI application? Did you define tested failsafe fallback plans to address AI application errors of different origins and put governance procedures in place to trigger them?
Reference: Section 4.1.6.1 – Set performance metrics; Incorporate quality assessments
Action party: Data Scientists, Project Managers

Does your organisation perform active monitoring, review and regular AI model tuning when appropriate (e.g. changes to customer behaviour, commercial objectives, risks and corporate values)? This can mitigate risks related to the Ethical AI Principles such as fairness, reliability, robustness and security as models are running under changing circumstances.
Action party: Project Managers, IT Planners/Executives

40 - Have all key decision points of the AI application been mapped and do they meet all relevant legislation, internal policies or procedures?
Reference: Section 4.1.6.3 – Consult IT Board/CIO
Action party: Project Managers, IT Planners/Executives
47 - Are there potential negative impacts of the AI application on the environment? Could the AI application have a negative impact on society at large or democracy?
Did you assess the societal impact of the AI application's use beyond the (end-)user and subject, such as potentially indirectly affected stakeholders or society at large?

48 - For data or technology activities that involve third parties (e.g. receiving or sourcing technology or data as part of this activity), what are the associated risks?
Examples of third parties could include data brokers that sell blocks of information, data aggregators, providers of storage and computing tools, and data trusts. Examples of risks could include data accuracy, data protection, downstream use monitoring and control, legitimate data collection (when done through third parties), and data availability.

49 - Is there any likelihood the AI application could lead to any potential costs from the legal and business perspective?
For example, lawsuits can potentially lead to additional legal costs. Is it possible that the AI application might lead to such overheads?
Current Challenges    Identify any current issues or challenges faced throughout the existing AI project Lifecycle.
Current Change Management    Identify changes brought about from the existing AI application, such as staff impact, engagement and communication.
D. Proposed AI Applications
Requirement Specifications    Specify the requirements of business, data, application and technology (including the AI model where applicable) as well as the overarching components (e.g. information security and data privacy) that are necessary and sufficient for subsequent development and implementation.
Solution Options and Suitability    Identify the strategic options for implementation based on aligned selection criteria (e.g. cost-benefit analysis).
Recommended AI Applications    Describe the recommended projects to be implemented for meeting the business objectives and achieving the target outcomes.
E. Implementation Plan
Implementation Strategy    Define the strategic approach for implementation, including the inter-project dependencies, relative priorities of work and the strategic measures (based on the Ethical AI Principles and AI Lifecycle practices, etc.) to be adopted across all projects.
High Level Roadmap    Identify the major activities, milestones and expected deliverables alongside a timeline to achieve the target outcomes.
Resources Estimation    Provide the estimated resource requirements (including staff and expenditure) for implementing each of the recommended projects.
Benefits and Impact    Identify the intangible and tangible benefits, the anticipated impact and associated mitigation measures for implementation.
Governance    Define the governance structure and process for monitoring the progress and resolving any issues that may arise during the implementation stage.
A. Executive Summary
• Does the executive summary include necessary and sufficient information for
assessment by organisations’ senior management?
• Is the information accurate and consistent with the other parts of the report?
B. Current Situation (only applicable for organisations with existing AI projects)
• Current Ethics and Legal Considerations: Is the list of all current ethical
considerations and any legal implications of AI applications complete and accurate?
• Current Technology and Infrastructure: Is the list of all current technologies
used for supporting the current AI applications complete and accurate?
• Current Skills and Talent: Is the list of all skills and talent acquired through the
existing AI projects complete and accurate?
• Current Challenges: Is the list of all issues or challenges identified throughout the
AI project Lifecycle complete and accurate, with proper resolutions?
• Current Change Management: Is the list of all changes brought about from the
AI applications complete and accurate, with proper change management plans?
C. Key Drivers and Targets
• Target Ethics and Legal Considerations: Do the ethical considerations and legal
implications conform to the Ethical AI Principles, and are they generally accepted by
society?
• Target Technology and Infrastructure: Do the target technology and
infrastructure conform to the Ethical AI Principles, and are they capable of supporting the target
AI application?
• Target Skills and Talent: Do the target talent and skills provide the necessary
APPENDIX E – GENERATIVE AI
Generative AI is a form of artificial intelligence that generates new content, such as text, images, or other
media, based on existing data. While generative AI has the potential to be a powerful tool for creativity
and innovation, B/Ds should take note of the potential concerns and challenges when adopting the
technology.
The Ethical AI Principles, AI Governance, the practices suggested for each stage of the AI Lifecycle and
the AI Assessment in this Ethical AI Framework are applicable to the implementation of all kinds of IT
systems which adopt big data analytics and AI technologies, including generative AI. The Cyberspace
Administration of China, together with six other Mainland authorities, jointly published the
《生成式人工智能服务管理暂行办法》24 on 13 July 2023 to facilitate the healthy development and regulated
implementation of generative AI technology. The following table highlights some of the
potential areas of concern / challenges and some suggested practices as stated in this Ethical AI
Framework and the 《生成式人工智能服务管理暂行办法》 for B/Ds' consideration.
24. http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm
25. https://www.datanami.com/2023/01/17/hallucinations-plagiarism-and-chatgpt/
Article 15 of 《生成式人工智能服务管理暂行办法》
Liability and Responsibility – legal liability and obligations for the suggested actions and generated responses made by generative AI are unclear
Develop clear terms of service: Clearly state the limitations of liability for the AI system and emphasise that end users should not rely solely on the suggestions generated by the system. (Article 9 of 《生成式人工智能服务管理暂行办法》)
Security – generative AI may pose security threats if misused
Provide guidance on appropriate usage to users: Guide end users to properly utilise the system and not to use it to damage the reputation, legitimate rights and interests of others. (Article 10 of 《生成式人工智能服务管理暂行办法》)
Intellectual property rights – generative AI may lead to the risk of copyright infringement 26
Understand the third party's approach: Request and review documentation such as the algorithm's design specification, coding and techniques the system is based on, its outcomes, and the ongoing support, monitoring or maintenance of the proposed system. This helps avoid any visible procedures from infringing on intellectual property rights. (Section 4.1.3.2 "Procuring AI Services (Sourcing)"; Article 4 and Article 7 of 《生成式人工智能服务管理暂行办法》)
Privacy and Leakage of sensitive data – if the generative AI system is on a public cloud, conversations and prompts inputted by users may be reviewed by the cloud provider 27
Conduct data assessments: Identify the specific types of data and sources of data that will be collected, tracked, transferred, used, stored or processed as part of the system and whether the data involved are sensitive or person-related. Document the data lineage to understand the source, path, licence or other obligations and transformations of data which would be utilised in the system. (Section 4.1.3.1 "Technology Roadmap for AI and Data Usage"; Article 7 of 《生成式人工智能服务管理暂行办法》)
Perform data validation: Identify whether personal information exists in the dataset and whether it complies with data usage policies for personal data instituted by related regulations and policies. (Section 4.1.4.3 "Data Extraction"; Article 7 of 《生成式人工智能服务管理暂行办法》)
26. https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data
27. https://help.openai.com/en/articles/6783457-chatgpt-general-faq
Protect data submitted by end users: Protect data submitted by end users when they interact with the system, as well as their activity logs. Establish a mechanism to accept and review complaints from end users about the use of personal data and take immediate actions to correct, remove or hide the data concerned. (Article 11 and Article 15 of 《生成式人工智能服务管理暂行办法》)
Perform data anonymisation: Use safeguards such as pseudonyms and full anonymisation to prevent the connection of the personal data to an identifiable person. Data anonymisation is the process of protecting sensitive information via encrypting, masking and aggregating any information that links an individual to the stored data. Review regularly whether anonymised data can be re-identified and adopt appropriate measures to protect personal data. A similar analysis on benefits and risks may be applied to assess the loss of data utility if the data are being de-identified. (Section 4.1.4.4 "Pre-processing")
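As an illustration of the pseudonymisation safeguard described above, the Python sketch below replaces a direct identifier with a salted hash so records cannot be linked back to an individual without the separately stored secret salt; the field names and record are illustrative assumptions. Pseudonymised data may still be re-identifiable in combination with other data, so the regular re-identification reviews described above remain necessary.

import hashlib
import secrets

SALT = secrets.token_bytes(16)   # keep secret and store separately from the data

def pseudonymise(identifier: str) -> str:
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()

record = {"member_id": "A1234567", "age_band": "35-54", "visits": 12}
safe_record = {**record, "member_id": pseudonymise(record["member_id"])}
print(safe_record)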
Certain significantly harmful AI practices shall be prohibited as they contravene prevailing regulations
and laws pertaining to, in particular, personal data protection, privacy, intellectual property rights,
discrimination and national security. Related regulations and laws include -
Intellectual property rights (Cap. 528 Copyright Ordinance, Cap. 544 Prevention of Copyright
Piracy Ordinance, Cap. 559 Trade Marks Ordinance, Cap. 362 Trade Descriptions Ordinance, Cap.
514 Patents Ordinance, Cap. 522 Registered Designs Ordinance);
Anti-discrimination ordinances (Cap. 480 Sex Discrimination Ordinance, Cap. 487 Disability
Discrimination Ordinance, Cap. 527 Family Status Discrimination Ordinance, Cap. 602 Race
Discrimination Ordinance); and