Unit 5 AIES Reg 2023
Expert Systems –
o High Performance: The expert system provides high performance for solving
any type of complex problem of a specific domain with high efficiency and
accuracy.
o Understandable: It responds in a way that can be easily understandable by the
user. It can take input in human language and provides the output in the same
way.
o Reliable: It is highly reliable and generates efficient and accurate output.
o Highly responsive: ES provides the result for any complex query within a very
short period of time.
Components of an Expert System:
o User Interface
o Inference Engine
o Knowledge Base
1. User Interface
With the help of a user interface, the expert system interacts with the user, takes queries
as input in a readable format, and passes them to the inference engine. After getting the
response from the inference engine, it displays the output to the user. In other words, it
is an interface that helps a non-expert user communicate with the expert system
to find a solution.
2. Inference Engine
The inference engine applies inference rules to the knowledge base to derive conclusions
and deduce new information. It works in one of two modes (a minimal forward-chaining
sketch follows this list):
o Forward Chaining: It starts from the known facts and rules, and applies the
inference rules to add their conclusions to the known facts.
o Backward Chaining: It starts from the goal and works backward through the
inference rules to find the facts that support the goal.
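For illustration, here is a minimal forward-chaining sketch in Python; the facts and rules are hypothetical, and a real inference engine would be far more elaborate.

```python
# Minimal forward-chaining sketch: repeatedly fire rules whose
# conditions all hold, adding their conclusions to the known facts.
facts = {"has_fever", "has_rash"}                     # hypothetical facts
rules = [                                             # hypothetical rules
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_blood_test"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)    # add the rule's conclusion to the facts
            changed = True

print(sorted(facts))
# ['has_fever', 'has_rash', 'recommend_blood_test', 'suspect_measles']
```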
3. Knowledge Base
o The knowledge base is a type of storage that stores knowledge acquired from
different experts of a particular domain. It is considered a large store of
knowledge. The larger and more accurate the knowledge base, the more precise
the expert system will be.
o It is similar to a database that contains information and rules of a particular
domain or subject.
o One can also view the knowledge base as a collection of objects and their
attributes. For example, a lion is an object, and its attributes are that it is a
mammal, it is not a domestic animal, etc. A small sketch of this view in code
follows.
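As a tiny illustration, the object/attribute view of a knowledge base can be sketched as a nested dictionary; the attributes below are assumptions for the example.

```python
# A knowledge base viewed as objects and their attributes,
# mirroring the lion example above (illustrative only).
knowledge_base = {
    "Lion": {"is_mammal": True, "is_domestic": False, "habitat": "savanna"},
    "Cat":  {"is_mammal": True, "is_domestic": True},
}
print(knowledge_base["Lion"]["is_mammal"])  # True
```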
Here, we will explain the working of an expert system by taking the example of the MYCIN ES.
Below are the steps to build a MYCIN-like system:
o Firstly, the ES should be fed with expert knowledge. In the case of MYCIN, human
experts specialized in the medical field of bacterial infections provide
information about the causes, symptoms, and other knowledge in that domain.
o The KB of the MYCIN is updated successfully. In order to test it, the doctor
provides a new problem to it. The problem is to identify the presence of the
bacteria by inputting the details of a patient, including the symptoms, current
condition, and medical history.
o The ES will need a questionnaire to be filled by the patient to know the general
information about the patient, such as gender, age, etc.
o Now the system has collected all the information, so it will find the solution to
the problem by applying if-then rules using the inference engine and the
facts stored within the KB.
o In the end, it will provide a response to the patient by using the user interface.
Before using any technology, we must have an idea of why to use it, and the same holds
for the ES. Although we have human experts in every field, what is the need to develop a
computer-based system? The points below describe the need for the ES:
o They can be used for risky places where the human presence is not safe.
o Error possibilities are less if the KB contains correct knowledge.
o The performance of these systems remains steady as it is not affected by
emotions, tension, or fatigue.
o They respond to a particular query at very high speed.
Limitations of Expert System
o The response of the expert system may be wrong if the knowledge base
contains incorrect information.
o Like a human being, it cannot produce a creative output for different scenarios.
o Its maintenance and development costs are very high.
o Knowledge acquisition for design is very difficult.
o For each domain, we require a specific ES, which is one of the big limitations.
o It cannot learn from itself and hence requires manual updates.
Applications of Expert System
o In the design and manufacturing domain: It can be broadly used for designing
and manufacturing physical devices such as camera lenses and automobiles.
o In the knowledge domain: These systems are primarily used for publishing
relevant knowledge to users. Two popular ES used in this domain are an advisor
and a tax advisor.
o In the finance domain: In the finance industry, it is used to detect any type of
possible fraud or suspicious activity, and to advise bankers on whether they
should provide loans for a business.
o In the diagnosis and troubleshooting of devices: Medical diagnosis was the
first area where expert systems were used, and the ES is still applied there.
o In planning and scheduling: Expert systems can also be used for planning and
scheduling particular tasks to achieve the goals of those tasks.
The development of an expert system proceeds in stages:
1. Identification
2. Conceptualisation
3. Formalisation (Designing)
4. Implementation
5. Testing (Validation, Verification and Maintenance)
Stage # 2. Conceptualisation:
Once the problem the expert system is to solve has been identified, the next stage
involves analysing the problem further to ensure that its specifics, as well as its
generalities, are understood. In the conceptualisation stage, the knowledge engineer frequently creates a
diagram of the problem to depict graphically the relationships between the objects and
processes in the problem domain. It is often helpful at this stage to divide the problem into
a series of sub-problems and to diagram both the relationships among the pieces of each
sub- problem and the relationships among the various sub-problems.
Stage # 3. Formalisation (Designing):
In the preceding stages, no effort has been made to relate the domain problem to the
artificial intelligence technology which may solve it. During the identification and
conceptualisation stages, the focus is entirely on understanding the problem. Now, during the
formalisation stage, the problem is connected to its proposed solution, an expert system,
by analysing the relationships depicted in the conceptualisation stage. The
knowledge engineer begins to select the techniques which are appropriate for developing
this particular expert system.
Stage # 4. Implementation:
During the implementation stage the formalised concepts are programmed into the computer
which has been chosen for system development, using the predetermined techniques and
tools to implement a ‘first-pass’ (prototype) of the expert system.
Theoretically, if the methods of the previous stages have been followed with diligence and
care, the implementation of the prototype should proceed smoothly.
Stage # 5. Testing (Validation, Verification and Maintenance):
The chances of a prototype expert system executing flawlessly the first time it is tested
are so slim as to be virtually non-existent. A knowledge engineer does not expect the
testing process to verify that the
system has been constructed entirely correctly. Rather, testing provides an opportunity to
identify the weaknesses in the structure and implementation of the system and to make the
appropriate corrections.
Probability based Expert Systems –
Probabilistic expert systems (PES) are a type of expert system that uses probability theory
to make decisions. PES are more flexible and robust than traditional expert systems, as they
can handle uncertainty and incomplete information.
PES are typically used in domains where there is a lot of uncertainty, such as medical
diagnosis, financial forecasting, and risk assessment. In these domains, it is not always
possible to know with certainty what will happen. PES can use probability theory to
calculate the likelihood of different outcomes, and then make decisions based on this
information.
A PES typically includes the following components:
A knowledge base: The knowledge base contains the expert knowledge about the
domain. This knowledge can be represented in a variety of ways, such as rules,
frames, or objects.
PES offer several advantages over traditional expert systems:
They can handle uncertainty and incomplete information: PES can use probability
theory to calculate the likelihood of different outcomes, even when there is
uncertainty or incomplete information.
They are more flexible: PES can be used in a wider variety of domains than traditional
expert systems.
They are more robust: PES are less likely to make mistakes when the domain is
changing or when the knowledge base is incomplete.
PES are a powerful tool for solving problems in domains where there is a lot of uncertainty.
They are more flexible and robust than traditional expert systems, and they can handle
uncertainty and incomplete information.
Expert systems that utilize probability as a means of reasoning are referred to as probabilistic
expert systems. They are based on the idea that expert knowledge is uncertain and can be
represented in terms of probabilities. These systems use probabilistic models to compute the
likelihood of events and generate conclusions based on that likelihood.
One example of a probabilistic expert system is MYCIN. MYCIN is an early expert system
that was developed to diagnose infectious diseases. It utilized backward chaining, a
technique that starts with a hypothesis and works backward to find evidence to support that
hypothesis. MYCIN would suggest a diagnosis with a certain probability, and then
recommend a treatment based on that probability.
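MYCIN attached certainty factors (CFs) to its conclusions. Below is a minimal sketch of the standard rule for combining two positive CFs supporting the same hypothesis; the evidence values are hypothetical.

```python
# Sketch of MYCIN-style certainty factors (CFs), assuming positive CFs only.
# Two pieces of evidence for the same hypothesis are combined with
# CF = cf1 + cf2 * (1 - cf1), so confidence grows but never exceeds 1.

def combine_cf(cf1: float, cf2: float) -> float:
    return cf1 + cf2 * (1 - cf1)

# Hypothetical CFs from two rules supporting "infection is bacterial".
evidence_cfs = [0.4, 0.6]
cf = 0.0
for e in evidence_cfs:
    cf = combine_cf(cf, e)
print(f"Combined certainty: {cf:.2f}")  # 0.76
```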
Knowledge Representation Techniques
There are mainly four ways of knowledge representation:
1. Logical Representation
2. Semantic Network Representation
3. Frame Representation
4. Production Rules
1. Logical Representation
Logical representation is a language with some concrete rules which deals with
propositions and has no ambiguity in representation. Logical representation means drawing
a conclusion based on various conditions. This representation lays down some important
communication rules. It consists of precisely defined syntax and semantics which support
sound inference. Each sentence can be translated into logic using its syntax and semantics.
Syntax:
• Syntaxes are the rules which decide how we can construct legal sentences in
the logic.
• It determines which symbols we can use in knowledge representation.
• It determines how to write those symbols.
Semantics:
• Semantics are the rules by which we can interpret the sentences in the logic.
• Semantics also involve assigning a meaning to each sentence.
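A toy sketch of the syntax/semantics split: the sentence shape is the syntax, and the truth assignment is the semantics (the symbols and values are hypothetical).

```python
# Tiny illustration of syntax vs. semantics in logical representation.
# Syntax: which sentences are legal (here, (symbol, 'AND', symbol) triples).
# Semantics: an interpretation assigning truth values to each symbol.

interpretation = {"it_rains": True, "ground_wet": True}   # semantics

def holds(sentence, model):
    """Evaluate a legal sentence of the form (p, 'AND', q)."""
    p, op, q = sentence                                   # syntax check by shape
    assert op == "AND"
    return model[p] and model[q]

print(holds(("it_rains", "AND", "ground_wet"), interpretation))  # True
```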
2. Semantic Network Representation
In semantic networks, knowledge is represented in the form of graphical networks: nodes
represent objects, arcs represent the relations between them, and each object is connected
with another object by some relation. This representation mainly uses two types of
relations: the IS-A relation (inheritance) and the Kind-of relation.
Drawbacks of semantic networks:
1. Semantic networks take more computational time at runtime, as we need to traverse
the complete network tree to answer a question. In the worst-case scenario, after
traversing the entire tree we may find that the solution does not exist in the network.
2. Semantic networks try to model human-like memory (which has about 10^15 neurons
and links) to store information, but in practice it is not possible to build such a vast
semantic network.
3. These representations are inadequate as they have no equivalent quantifiers, e.g.,
for all, for some, none.
4. Semantic networks do not have any standard definition for the link names.
5. These networks are not intelligent and depend on the creator of the system.
Advantages:
1. Semantic networks are a natural representation of knowledge.
Example: Following are some statements which we need to represent in the form of nodes
and arcs (a small sketch in code follows the statements).
Statements:
1. Jerry is a cat.
2. Jerry is a mammal.
3. Jerry is owned by Priya.
4. Jerry is brown coloured.
5. All mammals are animals.
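The statements above can be sketched as arcs of a semantic network, for example as (node, relation, node) triples in Python.

```python
# The five statements above as (node, relation, node) arcs of a
# semantic network (a minimal sketch).
arcs = [
    ("Jerry", "is-a", "Cat"),
    ("Jerry", "is-a", "Mammal"),
    ("Jerry", "owned-by", "Priya"),
    ("Jerry", "has-colour", "Brown"),
    ("Mammal", "is-a", "Animal"),
]

def related(subject, relation):
    """Traverse the network: all objects linked to subject by relation."""
    return [o for s, r, o in arcs if s == subject and r == relation]

print(related("Jerry", "is-a"))  # ['Cat', 'Mammal']
```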
3. Frame Representation
A frame is a record-like structure which consists of a collection of attributes and their
values to describe an entity in the world. Frames are an AI data structure which divides
knowledge into substructures by representing stereotyped situations. A frame consists of a
collection of slots and slot values, which may be of any type and size. Slots have names
and values, which are called facets.
Facets: The various aspects of a slot are known as facets. Facets are features of frames
which enable us to put constraints on frames. Example: IF-NEEDED facets are called when
the data of a particular slot is needed. A frame may consist of any number of slots, a slot
may include any number of facets, and a facet may have any number of values. A frame is
also known as slot-filler knowledge representation in artificial intelligence.
Frames are derived from semantic networks and later evolved into our modern-day classes
and objects. A single frame is not very useful on its own; a frame system consists of a
collection of connected frames. In a frame, knowledge about an object or event can be
stored together in the knowledge base. The frame is a technology widely used in various
applications, including natural language processing and machine vision.
Example 1: Let's take the example of a frame for a book (a small sketch in code follows
the table).
Slots     Fillers
Title     Artificial Intelligence
Genre     Computer Science
Author    Peter Norvig
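A minimal sketch of the book frame in Python, including a hypothetical IF-NEEDED facet for a Year slot that is computed only on demand.

```python
# A frame for the book above, with slots and facets. The IF-NEEDED
# facet is a procedure called only when the slot's value is requested.
book_frame = {
    "Title":  {"value": "Artificial Intelligence"},
    "Genre":  {"value": "Computer Science"},
    "Author": {"value": "Peter Norvig"},
    "Year":   {"if_needed": lambda: 2010},  # hypothetical demand-driven facet
}

def get_slot(frame, slot):
    facets = frame[slot]
    if "value" in facets:
        return facets["value"]
    return facets["if_needed"]()   # compute the value on demand

print(get_slot(book_frame, "Author"))  # Peter Norvig
print(get_slot(book_frame, "Year"))    # 2010
```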
4. Production Rules
A production rules system consists of (condition, action) pairs, which mean: "If condition,
then action". It has mainly three parts:
• The set of production rules
• Working Memory
• The recognize-act-cycle
In a production rules system, the agent checks for the condition, and if the condition holds,
the production rule fires and the corresponding action is carried out. The condition part of a
rule determines which rule may be applied to a problem, and the action part carries out the
associated problem-solving steps. This complete process is called the recognize-act cycle.
The working memory contains the description of the current state of problem-solving, and
rules can write knowledge to the working memory. This knowledge may then match and
fire other rules.
If a new situation (state) is generated, multiple production rules may fire together; this set
of rules is called the conflict set. In this situation, the agent needs to select one rule from
the set to apply, which is called conflict resolution.
Example:
• IF (at bus stop AND bus arrives) THEN action (get into the bus).
• IF (on the bus AND paid AND empty seat) THEN action (sit down).
• IF (on bus AND unpaid) THEN action (pay charges).
• IF (bus arrives at destination) THEN action (get down from the bus).
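A minimal recognize-act-cycle sketch for the bus rules above; the rule format, working memory contents, and conflict-resolution strategy (first applicable rule wins, each rule fires once) are simplifying assumptions.

```python
# Recognize-act cycle sketch for the bus rules above (simplified).
# Rules are (conditions, action, effects) triples.
working_memory = {"at bus stop", "bus arrives", "unpaid", "empty seat"}

rules = [
    ({"at bus stop", "bus arrives"}, "get into the bus", {"on bus"}),
    ({"on bus", "unpaid"}, "pay charges", {"paid"}),
    ({"on bus", "paid", "empty seat"}, "sit down", {"seated"}),
]

fired = set()
while True:
    # recognize: all applicable, not-yet-fired rules form the conflict set
    conflict_set = [i for i, (cond, _, _) in enumerate(rules)
                    if cond <= working_memory and i not in fired]
    if not conflict_set:
        break
    i = conflict_set[0]          # conflict resolution: first rule wins
    _, action, effects = rules[i]
    print("Action:", action)     # act
    working_memory |= effects    # write new knowledge to working memory
    fired.add(i)                 # each rule fires at most once here
```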
Disadvantages of production rules:
1. A production rule system does not exhibit any learning capabilities, as it does not
store the results of problems for future use.
2. During the execution of the program, many rules may be active; hence rule-based
production systems are inefficient.
Difficulties in Developing Expert Systems
Developing an expert system involves several technical difficulties:
Knowledge acquisition: This is the process of extracting the knowledge from the
domain expert and representing it in a form that the expert system can use. This can
be a very time-consuming and difficult process, as the expert may not be able to
articulate their knowledge clearly, or they may not be aware of all of the knowledge
that is relevant to the problem.
Knowledge representation: This is the process of representing the knowledge in a
way that the expert system can understand and use. There are a variety of different
knowledge representation schemes, each with its own advantages and
disadvantages. The choice of knowledge representation scheme will depend on the
nature of the problem and the knowledge that is being represented.
Inference engine: The inference engine is the part of the expert system that uses the
knowledge base to make decisions. The inference engine must be able to reason
logically about the knowledge in the knowledge base, and it must be able to generate
explanations for its decisions.
User interface: The user interface is the way that users interact with the expert
system. The user interface must be easy to use and understand, and it must be able to
handle a variety of different input types.
Maintenance: Once an expert system is developed, it must be maintained. This
includes keeping the knowledge base up-to-date, fixing bugs, and adding new
features. Maintenance can be a time-consuming and expensive process.
In addition to these technical difficulties, there are also a number of organizational and
social factors that can make it difficult to develop and deploy expert systems.
Despite these difficulties, expert systems can be a valuable tool for solving a variety of
problems. They can improve the quality of decisions, increase productivity, and reduce costs.
Expert system in Education
Expert systems have been applied in areas such as applied research and knowledge
engineering. In education, a web-based expert system can help users by finding materials
from the web based on the user's profile.
Expert systems have also seen tremendous changes in the methods and techniques applied.
Expert systems are beneficial as teaching tools because they are equipped with unique
features which allow users to ask questions in how, why, and what format. When used in
the classroom environment, they give many benefits to students, as they prepare answers
without referring to the teacher. Besides that, an expert system is able to give reasons for
a given answer. Expert systems have been used in several fields of study, including
computer animation, computer science and engineering, language teaching, business
studies, etc.
Expert system in Agriculture
The expert system for agriculture works in the same way as in other fields. Here too the
expert system uses a rule-based structure, and the knowledge of a human expert is captured
in the form of IF-THEN rules and facts, which are used to solve problems by answering
questions typed at a keyboard attached to a computer.
For example, in pest control: the need to spray, the selection of a chemical to spray, mixing
and application, etc. (a hypothetical sketch of such rules follows at the end of this section).
The early expert systems, developed in the 1960s and 1970s, were typically written on a
mainframe computer in programming languages based on LISP. Some examples of these
expert systems are MACSYMA, developed at the Massachusetts Institute of Technology
(MIT) for assisting individuals in solving complex mathematical problems, and others such
as MYCIN, DENDRAL, and CALEX.
Agricultural expert systems arose to help farmers make single-point decisions and to plan
well before starting any work on their land, for example to design an irrigation system for
their plantation. Some of the other functions of agricultural expert systems are:
® To predict the extreme events such as thunderstorms and frost.
® To select the most suitable crop variety.
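A hypothetical sketch of the pest-control rules mentioned above in IF-THEN form; the pests, crop stages, and advice strings are invented for illustration.

```python
# Hypothetical pest-control advice rules in IF-THEN form.
def advise(pest: str, crop_stage: str) -> str:
    if pest == "aphids" and crop_stage == "seedling":
        return "Spray recommended: use insecticidal soap at low concentration."
    if pest == "aphids":
        return "Monitor only; spraying not yet justified."
    return "No matching rule; consult a human expert."

print(advise("aphids", "seedling"))
```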
Responsible AI
This article demonstrates how Azure Machine Learning supports tools for enabling
developers and data scientists to implement and operationalize the six principles.
Fairness and inclusiveness
AI systems should treat everyone fairly and avoid affecting similarly situated groups of
people in different ways. For example, when AI systems provide guidance on medical
treatment, loan applications, or employment, they should make the same recommendations to
everyone who has similar symptoms, financial circumstances, or professional qualifications.
Reliability and safety
To build trust, it's critical that AI systems operate reliably, safely, and consistently. These
systems should be able to operate as they were originally designed, respond safely to
unanticipated conditions, and resist harmful manipulation. How they behave and the variety
of conditions they can handle reflect the range of situations and circumstances that
developers anticipated during design and testing.
Reliability and safety in Azure Machine Learning: The error analysis component of
the Responsible AI dashboard enables data scientists and developers to identify and
diagnose discrepancies in model performance. These discrepancies might occur when
the system or model underperforms for specific demographic groups or for infrequently
observed input conditions in the training data.
Transparency
When AI systems help inform decisions that have tremendous impacts on people's lives, it's
critical that people understand how those decisions were made. For example, a bank might
use an AI system to decide whether a person is creditworthy. A company might use an AI
system to determine the most qualified candidates to hire.
The model interpretability component provides multiple views into a model's behavior:
Global explanations. For example, what features affect the overall behavior of a loan
allocation model?
Local explanations. For example, why was a customer's loan application approved or
rejected?
Model explanations for a selected cohort of data points. For example, what features
affect the overall behavior of a loan allocation model for low-income applicants?
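As a rough illustration of what a global explanation measures (this is not the Azure dashboard API), permutation importance checks how often predictions change when one feature is shuffled; the model and data below are invented.

```python
# Crude "global explanation" via permutation importance on a toy
# loan-scoring model (illustrative only, not Azure's interpretability API).
import random

def model(income, debt):                  # hypothetical loan-scoring model
    return 1 if income - 2 * debt > 50 else 0

data = [(random.uniform(0, 200), random.uniform(0, 60)) for _ in range(500)]
base = [model(i, d) for i, d in data]

def importance(feature_index):
    shuffled = [row[feature_index] for row in data]
    random.shuffle(shuffled)
    changed = 0
    for k, (i, d) in enumerate(data):
        row = (shuffled[k], d) if feature_index == 0 else (i, shuffled[k])
        changed += model(*row) != base[k]
    return changed / len(data)            # fraction of predictions that flip

print("income importance:", importance(0))
print("debt importance:", importance(1))
```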
Privacy and security
As AI becomes more prevalent, protecting privacy and securing personal and business
information are becoming more important and complex. With AI, privacy and data security
require close attention, because access to data is essential for AI systems to make accurate
and informed predictions and decisions about people. AI systems must comply with privacy
laws that govern the collection, use, and storage of data.
Privacy and security in Azure Machine Learning: Azure Machine Learning enables
administrators and developers to create a secure configuration that complies with their
companies' policies.
Microsoft also created two open-source packages that can enable further implementation of
privacy and security principles:
SmartNoise: Differential privacy is a set of systems and practices that help keep the
data of individuals safe and private. In machine learning solutions, differential privacy
might be required for regulatory compliance. SmartNoise is an open-source project (co-
developed by Microsoft) that contains components for building globally differentially
private systems.
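As an illustration of the differential-privacy idea (this is not the SmartNoise API), a count query can be protected by adding Laplace noise scaled to sensitivity/epsilon.

```python
# Illustration of differential privacy (not the SmartNoise API):
# a count query is protected by Laplace noise with scale
# sensitivity/epsilon; a counting query has sensitivity 1.
import random

def dp_count(values, predicate, epsilon=1.0, sensitivity=1.0):
    true_count = sum(1 for v in values if predicate(v))
    lam = epsilon / sensitivity
    # difference of two exponentials ~ Laplace(0, sensitivity/epsilon)
    noise = random.expovariate(lam) - random.expovariate(lam)
    return true_count + noise

ages = [34, 29, 41, 52, 38]                 # hypothetical records
print(dp_count(ages, lambda a: a > 35))      # noisy count near 3
```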
Accountability
The people who design and deploy AI systems must be accountable for how their systems
operate. Organizations should draw upon industry standards to develop accountability norms.
These norms can ensure that AI systems aren't the final authority on any decision that affects
people's lives. They can also ensure that humans maintain meaningful control over otherwise
highly autonomous AI systems.
Accountability in Azure Machine Learning: Azure Machine Learning provides the following
MLOps capabilities, which support accountability:
Register, package, and deploy models from anywhere. You can also track the
associated metadata that's required to use the model.
Capture the governance data for the end-to-end machine learning lifecycle. The logged
lineage information can include who is publishing models, why changes were made,
and when models were deployed or used in production.
Notify and alert on events in the machine learning lifecycle. Examples include
experiment completion, model registration, model deployment, and data drift detection.
Monitor applications for operational issues and issues related to machine learning.
Compare model inputs between training and inference, explore model-specific metrics,
and provide monitoring and alerts on your machine learning infrastructure.
Besides the MLOps capabilities, the Responsible AI scorecard in Azure Machine Learning
creates accountability by enabling cross-stakeholder communication. The machine learning
platform also enables decision-making by informing business decisions.
Ethical Decision-Making
“Ethics is knowing the difference between what you have a right to do and what is right to
do.”
This chapter presents the main ethical theories and discusses what it means for an AI
system to be able to reason about the ethical grounds and consequences of its decisions
and to consider human values in those decisions.
Introduction:
As intelligent machines become more prevalent, concerns about their ethical implications
have grown. This chapter discusses the design of machines that consider human values and
ethical principles in their decision-making processes. Ethical reasoning involves identifying,
assessing, and developing ethical arguments from various positions. AI systems are
increasingly perceived as moral agents due to their increased intelligence, autonomy, and
interaction capabilities. This raises issues of responsibility, liability, and the potential for AI
to act according to human values and respect human rights.
AI systems are built based on given computational principles, which can vary. Expecting
machines to behave ethically implies considering the computational constructs that enable
ethical reasoning and the desirability of implementing these. Section 3.4 provides a hands-on
introduction to ethical reasoning, showing how results can vary depending on the ethical
theory considered.
Current discussions of ethical theories regarding AI's actions have led governments and
organizations to propose solutions to the ethical challenges. However, while AI offers tools to
better understand moral agency, endowing artificial systems with ethical capabilities remains
a challenge.
Ethical Theories
• Ethics, or Moral Philosophy, explores how people should act and what a 'good' life means.
• Divided into three areas: Meta-ethics, Applied Ethics, and Normative Ethics.
• Meta-ethics investigates the origins and meaning of ethical principles, the role of reason in
ethical judgments, and universal human values.
• Applied Ethics examines controversial issues like euthanasia, animal rights, environmental
concerns, nuclear war, and the behavior of intelligent artificial systems and robotics.
• Normative Ethics establishes how things should or ought to be, exploring how we value
things and determine right from wrong.
• Three schools of thought within normative ethics: consequentialism, deontology, and virtue
ethics.
• Consequentialism argues that the morality of an action is contingent on the action’s
outcome or result.
• Deontology judges the morality of an action based on certain rules, focusing on whether an
action is right or wrong.
Comparison of Main Ethical Theories
Virtue Ethics
• Focuses on the inherent character of a person, emphasizing the development of good habits
of character.
• Identifies virtues and provides practical wisdom for resolving conflicts between virtues.
• Claims that a lifetime of practising these virtues leads to happiness and the good life.
• Aristotle saw virtues as constituents of eudaimonia, arguing that virtues are good habits that
regulate our emotions.
• Later medieval theologians supplemented Aristotle's lists of virtues with three Christian
ones.
Normative Ethics
• Different ethical theories result in different justifications for decisions.
• Examples include utilitarian, deontologist, and virtue ethicist perspectives.
[Figure: Schwartz's circle of basic human values — self-direction, universalism, stimulation, benevolence, hedonism, conformity, tradition, achievement, power, security]
Ethical Problem-Solving in AI
• The agent must decide whether to resolve the situation or alert others.
• The agent must identify relevant principles, rights, and justice issues.
• The agent must determine if the decision is influenced by bias or cognitive barriers.
• The agent must determine how these abstract ethical rules apply to the problem.
• The agent needs to generate a course of action and then act.
• The main challenge is the computational complexity of the required deliberation algorithms.
• Consequentialist agents require reasoning about the consequences of actions, supported by
game theoretic approaches.
• Deontologic agents require higher order reasoning about actions themselves, requiring
awareness of their own action capabilities and their relation to institutional norms.
• Virtue agents need to reason about their own motives, leading to actions, which require
Theory of Mind models.
Taking Responsibility
Responsible AI
• Responsible AI focuses on ethical decisions and actions taken by intelligent autonomous
systems.
• It provides directions for action and can be seen as a code of behavior for AI systems and
humans.
[Figure: Responsible Research and Innovation (RRI) — its dimensions (openness & transparency, diversity & inclusion, anticipation & reflection, responsiveness) and its stakeholders (business & industry, researchers, education, civil society, policymakers)]
• AI systems are gaining autonomy and machine learning, requiring careful analysis to
prevent undesirable effects.
• A responsible approach to AI is needed to ensure safe, beneficial, and fair use of AI
technologies.
• Ethical implications of decision-making by machines should be considered, and the legal
status of AI should be defined.
• Wide societal support for AI applications should be ensured, focusing on human values and
well-being.
• Education and an accessible AI narrative are necessary for everyone to understand AI's
impact and benefit from its results.
• RRI in AI should include education of all stakeholders and governance models for
responsibility in AI.
• The principles of Accountability, Responsibility, and Transparency (ART) are proposed to
ensure responsible design of systems.
• Responsibility in AI extends beyond design to defining their success, considering human
and societal well-being.
• Multiple metrics are used to measure well-being, including the United Nations’ Human
Development Index and the Genuine Progress Indicator.
• AI systems are capable of perceiving their environment and deciding actions to achieve
their goals.
• These systems are characterized by autonomy, adaptability, learning from environmental
changes, and interaction with other agents.
• These properties enable AI systems to effectively deal with unpredictable, dynamic
environments.
• Trust in AI systems is essential for their acceptance in complex socio-technical
environments.
• Design methodologies that consider these issues are essential for trust and acceptance of AI
systems.
• Autonomy should be complemented with responsibility, interactiveness with accountability,
and adaptation with transparency.
• The impact and consequences of an AI system extend beyond the technical system,
encompassing stakeholders and organizations.
[Figure: autonomy in the socio-technical AI system]
Accountability in Responsible AI
• Accountability is the first condition for Responsible AI, requiring the ability to report and
explain actions and decisions.
• It is crucial for people to trust autonomous systems if the system can explain why it took a
certain course of action.
• A safe and sound design process that accounts for and reports on decisions, choices, and
restrictions about the system’s aims and assumptions is essential.
• Explanation reduces the opaqueness of a system and supports understanding of its behavior
and limitations.
• Post-mortem explanation, using logging systems, can help investigators understand what
went wrong.
• Explanation is especially important when the system does something good but unexpected,
such as taking a course of action that would not occur to a human.
• Machines are assumed to be incapable of moral reasoning, requiring a proof or certification
of their ethical reasoning abilities.
• The system’s design should follow a process sensitive to the societal, ethical, and legal
context in which it will operate.
• AI systems are tools constructed by humans for a specific purpose, and human
responsibility cannot be replaced.
• AI systems can modify themselves by learning from their context, but this is based on the
purpose determined by humans.
• Theories, methods, and algorithms are needed to integrate societal, legal, and moral values
into AI technological developments.
• Autonomy in AI refers to the system's autonomy to develop its own plans and decide
between its possible actions.
• The system's learning is determined by the purpose for which it was built and the
functionalities it is endowed with.
• AI systems can either act as intended, in which case responsibility lies with the user, or
act unexpectedly due to error or malfunction, in which case developers and manufacturers
may be liable.
• An action a machine takes as a result of learning does not remove liability from its
developers, as it is a consequence of the algorithms they have designed.
• Continuous assessment of a system's behavior against ethical and societal principles is
necessary.
Transparency in Artificial Intelligence
• Design for Values is a methodological approach that integrates moral values into
technological design, research, and development.
• Values are abstract concepts that are challenging to incorporate in software design.
• The process ensures the traceability and evaluation of the link between values and their
concrete interpretations in system design and engineering.
• In AI system development, the approach includes identifying societal values, deciding on a
moral deliberation approach, and linking values to formal system requirements and
functionalities.
• AI systems, being computer programs, must prioritize fundamental human rights.
• Traditional software development often overlooks the role of human values and ethics.
• The requirements elicitation process only describes the resulting requirements, not the
underlying values.
• This approach loses flexibility in using alternative translations of values due to their abstract
nature.
• The VSSD framework connects traditional software engineering concerns with a Design for
Values approach to inform the design of AI systems.
• Design for Values describes the links between values, norms, and system functionalities.
• Domain requirements shape the design of software systems in terms of functional, non-
functional, and physical/operational demands of the domain.
• An AI system must obey both orientations, ensuring alignment with social and ethical
principles.
• The design of an AI system is structured in terms of high-level motives and roles, specific
goals, and concrete plans and actions.
• Norms provide ethical-societal boundaries for the system's goals while ensuring functional
requirements are met.
• Implementation of plans and actions follows a concrete platform/language instantiation of
the functionalities identified by the Design for Values process, while ensuring operational
and physical domain requirements are met.
• Using a Design for Values perspective, explicit links to the values behind architectural
decisions are made.
• This allows for improvements in the traceability of values throughout the development
process and increases the maintainability of the application.
• The chapter discusses the development of AI systems that can reason about their social and
normative context and the ethical consequences of their decisions.
• The challenge lies in understanding what constitutes ethical behavior, with no consensus on
what is ethically right and wrong.
• The goal is to build an AI agent that is effective, contributing to the achievement of its goals
and advancing its purpose.
• To build ethical AI systems, the actions of the system must align with the regulations and
norms in the context, and the agent’s goals should align with core ethical principles and
societal values.
• Ethical decision-making by AI systems involves evaluating and choosing among
alternatives consistent with societal, ethical, and legal requirements.
• The chapter focuses on the difference between playing well, following the rules, and
following the most beneficial rules, those that promote ethical values.
• For an agent to act ethically, at least one of the available actions must be socially
beneficial, and the agent must be able to recognise that socially beneficial action and take
the explicit decision to choose that action because it is the ethical thing to do.
What Is an Ethical Action?
System Information for Action Selection
• Labels each action with a list of characteristics to guide the agent's decision.
• Labels each action with its 'ethical degree' in the current context to determine the most
ethical action.
• Algorithm 1 defines this procedure, representing each action by its name, preconditions,
and ethical degree.
• The agent's current context is represented by c, and the set of actions is A.
• A function sorte() calculates the list resulting from sorting A in descending order of ethical
degree, eth (a sketch of this selection follows).
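A minimal sketch of this selection procedure; the Action fields mirror the description above (name, preconditions, ethical degree), while the concrete actions, context, and eth values are hypothetical.

```python
# Sketch of ethical action selection: filter actions whose preconditions
# hold in context c, then sort by descending ethical degree eth(a, c).
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: set
    eth: float          # ethical degree in the current context (hypothetical)

def sorte(actions, context):
    applicable = [a for a in actions if a.preconditions <= context]
    return sorted(applicable, key=lambda a: a.eth, reverse=True)

c = {"patient_waiting", "bed_available"}
A = [
    Action("admit_patient", {"patient_waiting", "bed_available"}, 0.9),
    Action("defer_admission", {"patient_waiting"}, 0.3),
]
best = sorte(A, c)[0]   # the most ethical applicable action
print(best.name)        # admit_patient
```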
Top-down Approaches:
• Infer individual decisions from general rules.
• Aim to implement an ethical theory within a computational framework.
• Apply ethical theory to a specific case.
Bottom-up Approaches:
• Infer general rules from individual cases.
• Provide agent with observations of others' actions in similar situations.
• Aggregate these observations into a decision about what is ethically acceptable.
Hybrid Approaches:
• Combine elements from bottom-up and top-down approaches for careful moral reflection.
• Essential for ethical decision-making.
Top-down Approach:
• Involves determining which ethical value to maximize.
• Requires higher level of reflection and abstraction than implementation.
Bottom-up Approach:
• Equates social acceptability with ethical acceptance.
• Assumes what other agents are doing is the ethical thing to do.
• Dynamically builds eth(a, c) from observations and evaluation of perceived results.
• Requires a higher level of reflection on whom to learn from and how to decide.
Hybrid Approaches:
• Combine characteristics from both approaches to approximate human ethical reasoning.
• Provide a priori information about legal behavior.
Top-Down Approaches
Top-Down Approach to Ethical Reasoning in AI
• Top-down approaches to ethical reasoning assume a specific ethical theory and define rules,
obligations, and rights for decision-making.
• These models are often an extension of normative reasoning and are often based on Belief-
Desire-Intention architectures.
• Normative systems, such as those developed in previous work, take a deontological
approach, assuming that following existing laws and social norms guarantees 'good'
decisions.
• Top-down approaches differ in the chosen ethical theory, with optimal models following a
Utilitarian view and models evaluating the 'goodness' of actions.
• Some models propose specifying moral values associated with behavior norms as an
additional decision criterion.
• Top-down approaches assume AI systems can explicitly reason about the ethical impact of
their actions.
• Requirements for such systems include representation languages, planning mechanisms, and
deliberation capabilities.
• Research is needed to determine whether this approach leads to ethical behavior.
[Figure: top-down approaches equate the ethically acceptable with the legally allowed]
• AI systems are designed for specific purposes, necessitating adherence to legal and ethical
boundaries.
• AI systems should be viewed as incorporating soft ethics, interpreting ethics as post-
compliant to existing regulations.
• Ethics should guide decisions on what should and shouldn't be done beyond existing
regulations.
Bottom-Up Approaches
• Bottom-up approaches take what is observed in practice as the basis for the ethically
desirable choice.
• This approach aligns with current AI approaches, which develop models by
observing patterns in data.
• The assumption is that what is socially accepted is also ethically acceptable.
• However, de facto accepted stances may be unacceptable by independent standards
and evidence.
• Social acceptance is an empirical fact, while moral acceptability is an ethical
judgement.
[Figure: bottom-up approaches equate the ethically acceptable with the socially accepted]
• Assessing an AI system should weigh its benefits, potential harm to people and the
environment, risks and control mechanisms, and potential oppression and authority levels.
• The concept of artificial moral agents, which can incorporate ethics into reasoning, is a
complex and challenging task.
• AI systems are often perceived by users to make ethical decisions, impacting their ethics.
• Designing AI systems that align with societal and ethical principles is crucial.
• The process of identifying ethical principles and human values for AI systems should
involve all relevant stakeholders.
• The system's reasoning process should consider specific ethical theories and potential
conflicts between values.
• The degree of autonomy of the AI system, including the type of decisions it can make and
when to refer to others, should be clearly defined.
• These guidelines are an extension of the Design for Values method.
• Value alignment
– which values will the system pursue?
– who has determined these values?
– how are values to be prioritised?
– how is the system aligned with current regulations and norms?
• Ethical background
– which ethical theory or theories are to be used?
– who has decided so?
• Implementation
– what is the level of autonomy of the system?
– what is the role of the user?
– what is the role of governing institutions?
Who Sets the Values?
Responsible AI and Participation
• Crowd: The involvement of all stakeholders and data collection from a diverse sample.
• Choice: The results of a consultation can vary depending on whether users have a binary
choice or a spectrum of possibilities.
• Information: The question being posed frames the answers given, suggesting political
motivation.
• The 2016 Dutch referendum question, for example, led to a political interpretation due to its
complexity.
Electoral System
• The rules determining group consultation, elections, and results are crucial.
• Plurality systems can yield different outcomes than proportional systems.
Conclusion
• Bottom-up approaches to ethical deliberation should be supported by formal structures for
sound collective deliberation and reasoning.
• Decision-making should be based on long-term goals and principles.
• AI systems are evolving as they interact autonomously and have social awareness.
• People are viewing machines as team members, not just tools.
• Different levels of ethical behaviour are expected for different categories of AI systems.
• Tools like hammers and search engines have limited autonomy and social awareness, not
considered ethical systems.
• Assistants, with limited autonomy but social awareness, are expected to have functional
morality.
[Figure: categories of AI systems (tool, assistant) positioned by autonomy and social awareness]
• The concept of 'autonomy' in AI systems is often linked to the ethical status of these
systems.
• Autonomy, in philosophy, refers to the right of humans to decide for themselves, formulate,
think, and choose norms, rules, and laws.
• Autonomy is attributed to living beings who are self-aware, self-conscious, and can think
about and explain reasons for their actions.
• Current AI systems have no moral status, as stated by Bostrom.
• The term 'autonomy' refers to the capability of machines to act independently of human
direction.
• Most autonomous systems refer to operational or functional autonomy, which is the ability
to determine how best to meet a goal without direct external intervention.
• Autonomy for agents to set their own goals or motives is complex to realize computationally
and is generally undesirable.
• No intelligent artefact should be called 'autonomous' in the original philosophical sense, and
it cannot inherit human dignity.
• Some scholars and practitioners believe that some 'robot rights' should be considered,
similar to animal rights.
• However, this should stay in the realm of fiction.
“If you can change the world by innovation today so that you can satisfy more of your
obligations tomorrow, you have a moral obligation to innovate today.”
Responsibility in AI Development and Use
Understanding Responsible AI
• Responsible AI encompasses various opinions and topics, including:
- Policies concerning R&D activities and AI deployment in societal settings.
- Role of developers at individual and collective levels.
- Issues of inclusion, diversity, and universal access.
- Predictions and reflections on the benefits and risks of AI.
• Examples of such initiatives include the Montreal Declaration and the ethical guidelines
of the Japanese Society for Artificial Intelligence.
• All initiatives prioritize human well-being and the ethical principles of
Accountability and Responsibility.
• Initiatives focus on three main classes of principles: Societal, Legal, and Technical.
Figure: The main values and ethical principles identified by the different initiatives, grouped into Societal (shared prosperity, democracy, fairness, privacy, human well-being), Legal, and Technical (validation and testing, data provenance, reliability, explainability, safety, security) classes.
Regulation
AI Regulation and its Impact
Certification
Codes of Conduct
AI Systems Responsibility and Codes of Conduct
• Self-regulatory codes of conduct for data and AI professionals are proposed.
• These codes outline ethical duties related to the impact of AI systems.
• Similar to other professions like medical doctors or lawyers, these codes can differentiate
and become mandatory for AI-related activities.
• As awareness of responsible AI approaches grows, developers and providers are expected to
adhere to these codes.
A professional code of conduct is a public statement developed for and by a professional
group to:
• reflect shared principles about the practice, conduct and ethics of those exercising the
profession, and
• describe the quality of behaviour that reflects the expectations of the profession and the
community.
The AI Narrative
AI: Responsibility and Ethical Considerations
AI as a Recipe
• AI algorithms are not magic, but a set of precise rules to achieve a certain result.
• The outcome of AI algorithms depends on the input data and on the ability of those who
trained them.
• Those who build AI algorithms have the choice to use data in a way that respects and
ensures fairness, privacy, transparency, and other values.
Responsible AI
• Responsible AI involves decisions about the scope, rules, and resources used to
develop, deploy, and use AI systems.
• AI is not just the algorithm or the data it uses, but a complex combination of
decisions, opportunities, and resources.