
UNIT - V EXPERT SYSTEM AND RESPONSIBLE AI

Expert Systems – Stages in the Development of an Expert System – Probability Based Expert Systems – Expert System Tools – Difficulties in Developing Expert Systems – Applications of Expert Systems – Responsible AI – Ethical Decision Making – Need for Responsible AI – Approaches to Ethical Reasoning – Ensuring Responsible AI in Practice.

Expert Systems –

Introduction to Expert Systems


• An expert system is a computer program designed to solve complex problems and to provide decision-making ability like a human expert. It does this by extracting knowledge from its knowledge base and applying reasoning and inference rules to the user's queries.
• The performance of an expert system depends on the expert knowledge stored in its knowledge base (KB). The more knowledge the KB contains, the better the system performs. A common example of an ES is the suggestion of spelling corrections while typing in the Google search box.

Capabilities of the Expert System


Below are some capabilities of an expert system:
• Advising: It can advise a human user on queries in any domain covered by the particular ES.
• Providing decision-making capabilities: It supports decision making in any domain, such as financial decisions or decisions in medical science.
• Demonstrating a device: It can demonstrate a new product, including its features, specifications, and how to use it.
• Problem-solving: It has problem-solving capabilities.
• Explaining a problem: It can provide a detailed description of an input problem.
• Interpreting the input: It can interpret the input given by the user.
• Predicting results: It can be used to predict a result.
• Diagnosis: An ES designed for the medical field can diagnose a disease without using multiple components, as it already contains various inbuilt medical tools.

Advantages of Expert System


• These systems are highly reproducible.
• They can be used in risky environments where human presence is unsafe.
• The probability of error is low, provided the KB contains correct knowledge.
• The performance of these systems remains steady, as it is not affected by emotions, tension, or fatigue.
• They respond to queries at very high speed.

Limitations of Expert System


• The expert system may give wrong responses if the knowledge base contains incorrect information.
• Unlike a human being, it cannot produce creative output for unfamiliar scenarios.
• Its maintenance and development costs are very high.
• Knowledge acquisition for designing the system is very difficult.
• A specific ES is required for each domain, which is a big limitation.
• It cannot learn by itself and hence requires manual updates.

Applications of Expert System


• In the designing and manufacturing domain - It can be broadly used for designing and manufacturing physical devices such as camera lenses and automobiles.
• In the knowledge domain - These systems are primarily used to publish relevant knowledge to users. Two popular expert systems in this domain are the advisor and the tax advisor.
• In the finance domain - In the finance industry, it is used to detect possible fraud and suspicious activity, and to advise bankers on whether they should grant business loans.
• In the diagnosis and troubleshooting of devices - Expert systems are used in medical diagnosis, which was the first area where these systems were applied.
• Planning and scheduling - Expert systems can also be used to plan and schedule particular tasks in order to achieve a goal.
Figure: Architecture of an Expert System

Below are some popular examples of the Expert System:

o DENDRAL: An artificial intelligence project developed as a chemical analysis expert system. It was used in organic chemistry to identify unknown organic molecules from their mass spectra and a knowledge base of chemistry.
o MYCIN: One of the earliest backward-chaining expert systems, designed to identify the bacteria causing infections such as bacteraemia and meningitis. It was also used to recommend antibiotics and to diagnose blood-clotting diseases.
o PXDES: An expert system used to determine the type and level of lung cancer. To determine the disease, it takes an image of the upper body, which looks like a shadow; this shadow is used to identify the type and degree of harm.
o CaDeT: The CaDet expert system is a diagnostic support system that can detect cancer at early stages.

Characteristics of Expert System

o High performance: The expert system solves any type of complex problem in a specific domain with high efficiency and accuracy.
o Understandable: It responds in a way the user can easily understand. It can take input in human language and provides the output in the same way.
o Reliable: It is highly reliable in generating efficient and accurate output.
o Highly responsive: An ES provides the result for any complex query within a very short period of time.

Components of Expert System

An expert system mainly consists of three components:

o User Interface
o Inference Engine
o Knowledge Base

1. User Interface

With the help of the user interface, the expert system interacts with the user, takes queries as input in a readable format, and passes them to the inference engine. After getting the response from the inference engine, it displays the output to the user. In other words, it is an interface that helps a non-expert user communicate with the expert system to find a solution.

2. Inference Engine (Rule Engine)


o The inference engine is known as the brain of the expert system as it is the main
processing unit of the system. It applies inference rules to the knowledge base to
derive a conclusion or deduce new information. It helps in deriving an error-free
solution of queries asked by the user.
o With the help of an inference engine, the system extracts the knowledge from
the knowledge base.
o There are two types of inference engine:
o Deterministic inference engine: The conclusions drawn from this type of inference engine are assumed to be true. It is based on facts and rules.
o Probabilistic inference engine: This type of inference engine allows uncertainty in its conclusions, which are based on probability.

Inference engine uses the below modes to derive the solutions:

o Forward Chaining: It starts from the known facts and rules, and applies inference rules to add their conclusions to the known facts.
o Backward Chaining: It is a backward reasoning method that starts from the goal and works backward to prove the known facts.
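As an illustration of these two modes, here is a minimal, hypothetical Python sketch of forward chaining (the facts and rule names are invented for illustration): rules whose conditions are all satisfied fire and add their conclusions to the known facts until nothing new can be derived.

```python
# Minimal forward-chaining sketch. Facts and rules are invented examples,
# not taken from any real expert system.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_lab_test"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # keep firing rules until no new fact is derived
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: its conclusion becomes a known fact
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}, rules))
# {'has_fever', 'has_rash', 'suspect_measles', 'recommend_lab_test'}
```

Backward chaining would instead start from a goal such as recommend_lab_test and recursively try to prove each of its conditions from the known facts.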

3. Knowledge Base
o The knowledge base is a type of storage that stores knowledge acquired from different experts in a particular domain. It is considered a big store of knowledge. The larger and more accurate the knowledge base, the more precise the expert system will be.
o It is similar to a database that contains information and rules about a particular domain or subject.
o One can also view the knowledge base as a collection of objects and their attributes. For example, a lion is an object, and its attributes are that it is a mammal, it is not a domestic animal, and so on.

Components of Knowledge Base

o Factual Knowledge: Knowledge which is based on facts and accepted by knowledge engineers comes under factual knowledge.
o Heuristic Knowledge: This knowledge is based on practice, the ability to guess, evaluation, and experience.

Knowledge Representation: It is used to formalize the knowledge stored in the knowledge base using If-else rules.

Knowledge Acquisition: It is the process of extracting, organizing, and structuring the domain knowledge, specifying the rules to acquire knowledge from various experts, and storing that knowledge in the knowledge base.

Development of Expert System

Here, we will explain the working of an expert system by taking MYCIN ES as an example. Below are the steps to build MYCIN:

o Firstly, the ES should be fed with expert knowledge. In the case of MYCIN, human experts specialised in the medical field of bacterial infection provide information about the causes, symptoms, and other knowledge of that domain.
o Once the KB of MYCIN has been updated, the doctor provides a new problem to test it. The problem is to identify the presence of the bacteria by inputting the details of a patient, including the symptoms, current condition, and medical history.
o The ES requires a questionnaire to be filled in by the patient, capturing general information such as gender and age.
o Once the system has collected all the information, it finds the solution to the problem by applying if-then rules using the inference engine and the facts stored in the KB.
o In the end, it provides a response to the patient through the user interface.
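This workflow can be sketched end to end. The following hypothetical Python example (the symptoms, rules, and conclusions are invented, not MYCIN's actual rules) mirrors the three parts described above: a questionnaire step, an inference-engine step applying if-then rules, and a user-interface step reporting the result.

```python
# Illustrative sketch of the consultation loop described above.
# Symptoms, rules, and conclusions are invented for illustration.
knowledge_base = [
    ({"stiff_neck", "high_fever"}, "possible_meningitis"),
    ({"possible_meningitis"}, "recommend_lab_culture"),
]

def consult():
    findings = set()
    for symptom in ("stiff_neck", "high_fever"):           # questionnaire step
        if input(f"Does the patient have {symptom}? (y/n) ").strip().lower() == "y":
            findings.add(symptom)
    changed = True                                          # inference-engine step
    while changed:
        changed = False
        for conditions, conclusion in knowledge_base:
            if conditions <= findings and conclusion not in findings:
                findings.add(conclusion)
                changed = True
    print("Conclusions:", findings)                         # user-interface step

if __name__ == "__main__":
    consult()
```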

Participants in the development of Expert System

There are three primary participants in the building of Expert System:

1. Expert: The success of an ES depends greatly on the knowledge provided by human experts. These experts are people who specialise in the specific domain.
2. Knowledge Engineer: The knowledge engineer is the person who gathers knowledge from the domain experts and then codifies that knowledge into the system according to the chosen formalism.
3. End-User: A particular person, or a group of people, who may not be experts and who use the expert system to obtain solutions or advice for their complex queries.

Why Expert System?

Before using any technology, we should understand why that technology is needed, and the same holds for the ES. Although we have human experts in every field, what is the need to develop a computer-based system? The points below describe the need for the ES:

1. No memory limitations: It can store as much data as required and recall it at the time of application, whereas human experts face limits on how much they can remember at any given time.
2. High efficiency: If the knowledge base is updated with correct knowledge, it provides highly efficient output, which may not be possible for a human.
3. Expertise in a domain: There are many human experts in each domain, all with different skills and different experiences, so it is not easy to obtain a single final answer to a query. But if we put the knowledge gained from human experts into the expert system, it provides an efficient output by combining all the facts and knowledge.
4. Not affected by emotions: These systems are not affected by human emotions such as fatigue, anger, depression, or anxiety, so their performance remains constant.
5. High security: These systems provide high security while resolving any query.
6. Considers all the facts: To respond to any query, it checks and considers all the available facts and provides the result accordingly, whereas a human expert may overlook some facts for various reasons.
7. Regular updates improve performance: If there is an issue with a result provided by the expert system, we can improve the system's performance by updating the knowledge base.


Stages in the development of an Expert System –


The following points highlight the five main stages in the development of an expert system. The stages are:
1. Identification
2. Conceptualisation
3. Formalisation (Designing)
4. Implementation
5. Testing (Validation, Verification and Maintenance)

Figure: The five stages in the development of an expert system

Stage # 1. Identification: Before we can begin to develop an expert system, it is important to describe, with as much precision as possible, the problem which the system is intended to solve. It is not enough simply to feel that an expert system would be helpful in a certain situation; we must determine the exact nature of the problem and state the precise goals which indicate exactly how the expert system is expected to contribute to the solution.

Stage # 2. Conceptualisation:

Once the problem the expert system is to solve has been identified, the next stage involves analysing the problem further to ensure that its specifics, as well as its generalities, are understood. In the conceptualisation stage, the knowledge engineer frequently creates a diagram of the problem to depict graphically the relationships between the objects and processes in the problem domain. It is often helpful at this stage to divide the problem into a series of sub-problems and to diagram both the relationships among the pieces of each sub-problem and the relationships among the various sub-problems.
Stage # 3. Formalisation (Designing):

In the preceding stages, no effort has been made to relate the domain problem to the artificial intelligence technology which may solve it. During the identification and conceptualisation stages, the focus is entirely on understanding the problem. Now, during the formalisation stage, the problem is connected to its proposed solution, an expert system, by analysing the relationships depicted in the conceptualisation stage. The knowledge engineer begins to select the techniques which are appropriate for developing this particular expert system.
Stage # 4. Implementation:
During the implementation stage the formalised concepts are programmed into the computer
which has been chosen for system development, using the predetermined techniques and
tools to implement a ‘first-pass’ (prototype) of the expert system.
Theoretically, if the methods of the previous stages have been followed with diligence and
care, the implementation of the prototype should proceed smoothly.
Stage # 5. Testing (Validation, Verification and Maintenance): The chances of a prototype expert system executing flawlessly the first time it is tested are so slim as to be virtually non-existent. A knowledge engineer does not expect the testing process to verify that the system has been constructed entirely correctly. Rather, testing provides an opportunity to identify the weaknesses in the structure and implementation of the system and to make the appropriate corrections.
Probability based Expert Systems –
Probabilistic expert systems (PES) are a type of expert system that uses probability theory
to make decisions. PES are more flexible and robust than traditional expert systems, as they
can handle uncertainty and incomplete information.

PES are typically used in domains where there is a lot of uncertainty, such as medical
diagnosis, financial forecasting, and risk assessment. In these domains, it is not always
possible to know with certainty what will happen. PES can use probability theory to
calculate the likelihood of different outcomes, and then make decisions based on this
information.

PES are made up of three main components:

 A knowledge base: The knowledge base contains the expert knowledge about the domain. This knowledge can be represented in a variety of ways, such as rules, frames, or objects.
 A probability model: The probability model represents the uncertainty in the domain. It can be used to calculate the likelihood of different outcomes (see the sketch after this list).
 An inference engine: The inference engine uses the knowledge base and the probability model to make decisions. It can use a variety of reasoning strategies, such as forward chaining and backward chaining.
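As a concrete, hedged illustration of these three components, the sketch below hard-codes a toy knowledge base and probability model (all numbers are invented) and uses Bayes' rule as the inference step to compute the likelihood of a disease given a positive test.

```python
# Toy probabilistic reasoning step: Bayes' rule. All probabilities are invented.
p_disease = 0.01                # prior: P(disease) — part of the probability model
p_pos_given_disease = 0.95      # P(positive test | disease)
p_pos_given_healthy = 0.05      # P(positive test | no disease)

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

p = posterior(p_disease, p_pos_given_disease, p_pos_given_healthy)
print(f"P(disease | positive test) = {p:.3f}")   # ≈ 0.161
```

Even with a highly sensitive test, the low prior keeps the posterior modest; this is exactly the kind of reasoning a probabilistic expert system automates.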

PES have a number of advantages over traditional expert systems:

 They can handle uncertainty and incomplete information: PES can use probability
theory to calculate the likelihood of different outcomes, even when there is
uncertainty or incomplete information.
 They are more flexible: PES can be used in a wider variety of domains than traditional
expert systems.
 They are more robust: PES are less likely to make mistakes when the domain is
changing or when the knowledge base is incomplete.

PES are a powerful tool for solving problems in domains where there is a lot of uncertainty.
They are more flexible and robust than traditional expert systems, and they can handle
uncertainty and incomplete information.

Expert systems that utilize probability as a means of reasoning are referred to as probabilistic
expert systems. They are based on the idea that expert knowledge is uncertain and can be
represented in terms of probabilities. These systems use probabilistic models to compute the
likelihood of events and generate conclusions based on that likelihood.

One example of a probabilistic expert system is MYCIN. MYCIN is an early expert system
that was developed to diagnose infectious diseases. It utilized backward chaining, a
technique that starts with a hypothesis and works backward to find evidence to support that
hypothesis. MYCIN would suggest a diagnosis with a certain probability, and then
recommend a treatment based on that probability.
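Strictly speaking, MYCIN handled uncertainty with certainty factors (values in [-1, 1]) rather than pure probabilities. The short sketch below shows MYCIN's standard rule for combining two positive certainty factors that support the same hypothesis; the numeric values are invented.

```python
def combine_cf(cf1, cf2):
    """MYCIN's combination rule for two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

# Two independent rules support the same diagnosis (illustrative values):
print(combine_cf(0.6, 0.5))   # 0.8 — combined belief exceeds either rule alone
```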

Another example is DENDRAL, an artificial intelligence-based expert system used for chemical analysis. DENDRAL utilized probabilistic models to make predictions about the molecular structure of organic compounds. It combined both forward and backward chaining techniques, and used probability as a means of reasoning. The system would provide a list of possible structures with associated probabilities, which could then be validated by a human expert.

The development of probabilistic expert systems can be challenging. It requires modeling uncertainty, designing algorithms for reasoning with probabilities, and validating the accuracy of the system. Techniques such as graphical models and Bayesian networks have been developed to facilitate the modeling of probabilistic systems [2]. However, despite the difficulties, probabilistic expert systems have been successfully applied in a variety of fields, including medicine, engineering, and finance.



Expert System Tools –


Techniques of Knowledge representation
There are mainly four ways of knowledge representation which are given as follows:
1. Logical Representation
2. Semantic Network Representation

3. Frame Representation
4. Production Rules

1. Logical Representation
Logical representation is a language with some concrete rules which deals with propositions and has no ambiguity in representation. Logical representation means drawing a conclusion based on various conditions. This representation lays down some important communication rules. It consists of precisely defined syntax and semantics which support sound inference. Each sentence can be translated into logic using syntax and semantics.
Syntax:
• Syntax comprises the rules which decide how we can construct legal sentences in the logic.
• It determines which symbols we can use in knowledge representation, and how to write those symbols.
Semantics:
• Semantics are the rules by which we can interpret sentences in the logic.
• Semantics also involves assigning a meaning to each sentence.

Logical representation can be categorised into mainly two logics:


1. Propositional Logics
2. Predicate logics
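A minimal sketch of propositional reasoning, assuming invented symbols: it checks whether a set of premises entails a conclusion by enumerating all truth assignments.

```python
# Entailment by truth-table enumeration (propositional logic).
from itertools import product

symbols = ["rains", "ground_wet"]
premises = [
    lambda v: (not v["rains"]) or v["ground_wet"],   # rains -> ground_wet
    lambda v: v["rains"],                            # rains
]
conclusion = lambda v: v["ground_wet"]

def entails(premises, conclusion):
    for values in product([False, True], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False    # a model satisfies the premises but not the conclusion
    return True

print(entails(premises, conclusion))   # True — modus ponens
```

Predicate logic extends this with quantifiers and variables, which plain truth-table enumeration cannot handle.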

Advantages of logical representation:


1. Logical representation enables us to do logical reasoning.
2. Logical representation is the basis of programming languages.

Disadvantages of logical Representation:


1. Logical representations have some restrictions and are challenging to work with.
2. Logical representation technique may not be very natural, and inference may not
be so efficient.

2. Semantic Network Representation
Semantic networks represent knowledge in the form of graphical networks consisting of nodes, which represent objects, and arcs, which describe the relations between them. Each object is connected with another object by some relation. Semantic networks can use two types of relations:
1. IS-A relation (Inheritance)
2. Kind-of relation

Advantages of semantic networks:
1. Semantic networks are a natural representation of knowledge.
2. Semantic networks convey meaning in a transparent manner.
3. These networks are simple and easily understandable.

Drawbacks of semantic networks:
1. Semantic networks take more computational time at runtime, as we need to traverse the complete network tree to answer a question. In the worst case, we may traverse the entire tree only to find that the solution does not exist in the network.
2. Semantic networks try to model human-like memory (which has about 10^15 neurons and links) to store information, but in practice it is not possible to build such a vast semantic network.
3. These representations are inadequate, as they have no equivalent of quantifiers, e.g., for all, for some, none.
4. Semantic networks do not have any standard definition for the link names.
5. These networks are not intelligent and depend on the creator of the system.

Example: Following are some statements which we need to represent in the form of nodes and arcs.
Statements:
1. Jerry is a cat.
2. Jerry is a mammal.
3. Jerry is owned by Priya.
4. Jerry is brown coloured.
5. All mammals are animals.
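These statements can be encoded directly as arcs between nodes; a minimal sketch (the relation names are chosen for illustration):

```python
# The statements above as (node, relation, node) arcs of a semantic network.
arcs = [
    ("Jerry", "is_a", "cat"),
    ("Jerry", "is_a", "mammal"),
    ("Jerry", "owned_by", "Priya"),
    ("Jerry", "has_colour", "brown"),
    ("mammal", "is_a", "animal"),
]

def is_a(entity, category):
    """Follow is_a arcs transitively (inheritance along the IS-A relation)."""
    for subject, relation, obj in arcs:
        if subject == entity and relation == "is_a":
            if obj == category or is_a(obj, category):
                return True
    return False

print(is_a("Jerry", "animal"))   # True: Jerry is_a mammal, mammal is_a animal
```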

3. Frame Representation
A frame is a record-like structure which consists of a collection of attributes and their values to describe an entity in the world. Frames are the AI data structure which divides knowledge into substructures by representing stereotyped situations. A frame consists of a collection of slots and slot values. These slots may be of any type and size. Slots have names and values, which are called facets.

Facets: The various aspects of a slot are known as facets. Facets are features of frames which enable us to put constraints on frames. Example: IF-NEEDED facets are called when the data of a particular slot is needed. A frame may consist of any number of slots, a slot may include any number of facets, and a facet may have any number of values. A frame is also known as slot-filler knowledge representation in artificial intelligence.

Frames are derived from semantic networks and later evolved into our modern-day classes and objects. A single frame is not much use on its own. A frame system consists of a collection of connected frames. In a frame, knowledge about an object or event can be stored together in the knowledge base. The frame is a technology widely used in various applications, including natural language processing and machine vision.

Example 1: Let's take the example of a frame for a book.

| Slots   | Fillers                 |
|---------|-------------------------|
| Title   | Artificial Intelligence |
| Genre   | Computer Science        |
| Author  | Peter Norvig            |
| Edition | Third Edition           |
| Year    | 1996                    |
| Page    | 1152                    |
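The book frame above maps naturally onto a small data structure. A minimal sketch, with a hypothetical Age slot added only to illustrate an IF-NEEDED facet:

```python
# The book frame as a dict: slots map to facets such as a stored value
# or an IF-NEEDED procedure evaluated on demand.
book_frame = {
    "Title":   {"value": "Artificial Intelligence"},
    "Genre":   {"value": "Computer Science"},
    "Author":  {"value": "Peter Norvig"},
    "Edition": {"value": "Third Edition"},
    "Year":    {"value": 1996},
    "Page":    {"value": 1152},
    "Age":     {"if_needed": lambda: 2024 - 1996},   # hypothetical IF-NEEDED slot
}

def get_slot(frame, slot):
    facets = frame[slot]
    if "value" in facets:
        return facets["value"]
    return facets["if_needed"]()   # call the IF-NEEDED facet when no value is stored

print(get_slot(book_frame, "Author"))  # Peter Norvig
print(get_slot(book_frame, "Age"))     # 28
```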
Advantages of frame representation:
1. The frame knowledge representation makes the programming easier by
grouping the related data.
2. The frame representation is comparably flexible and used by many applications in AI.
3. It is very easy to add slots for new attributes and relations.
4. It is easy to include default data and to search for missing values.
5. Frame representation is easy to understand and visualize.

Disadvantages of frame representation:


1. In a frame system, the inference mechanism is not easily processed.
2. The inference mechanism cannot proceed smoothly with frame representation.
3. Frame representation takes a much generalized approach.

4. Production Rules
A production rules system consists of (condition, action) pairs, meaning "If condition, then action". It has mainly three parts:
• The set of production rules
• Working memory
• The recognize-act cycle

In a production system, the agent checks for the condition, and if the condition holds, the production rule fires and the corresponding action is carried out. The condition part of a rule determines which rule may be applied to a problem, and the action part carries out the associated problem-solving steps. This complete process is called the recognize-act cycle.

The working memory contains the description of the current state of problem-solving, and rules can write knowledge to the working memory. This knowledge may then match and fire other rules.

If a new situation (state) is generated, multiple production rules may be triggered together; this set of rules is called the conflict set. In this situation, the agent needs to select one rule from the set, which is called conflict resolution.

Example:
• IF (at bus stop AND bus arrives) THEN action (get into the bus).
• IF (on the bus AND paid AND empty seat) THEN action (sit down).
• IF (on bus AND unpaid) THEN action (pay charges).
• IF (bus arrives at destination) THEN action (get down from the bus).
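The bus rules above can be run through a tiny recognize-act cycle; a minimal sketch in which conflict resolution simply picks the first matching rule:

```python
# Recognize-act cycle over the bus rules (condition set, action) — illustrative.
rules = [
    ({"at_bus_stop", "bus_arrives"}, "get_into_bus"),
    ({"on_bus", "paid", "empty_seat"}, "sit_down"),
    ({"on_bus", "unpaid"}, "pay_charges"),
    ({"bus_at_destination"}, "get_down"),
]

working_memory = {"at_bus_stop", "bus_arrives"}

def recognize_act(memory, rules):
    # recognize: build the conflict set of rules whose conditions all hold
    conflict_set = [(cond, act) for cond, act in rules if cond <= memory]
    if not conflict_set:
        return None
    _, action = conflict_set[0]    # trivial conflict resolution: first match wins
    return action

print(recognize_act(working_memory, rules))   # get_into_bus
```

A fuller system would also apply the chosen action to working memory and loop until no rule fires.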

Advantages of Production rule:


1. The production rules are expressed in natural language.
2. The production rules are highly modular, so we can easily remove, add or
modify an individual rule.

Disadvantages of Production rule:


1. A production rule system does not exhibit any learning capability, as it does not store the results of problems for future use.
2. During the execution of the program, many rules may be active; hence rule-based production systems can be inefficient.

Difficulties in Developing Expert Systems –

 Knowledge acquisition: This is the process of extracting knowledge from the domain expert and representing it in a form that the expert system can use. This can be a very time-consuming and difficult process, as the expert may not be able to articulate their knowledge clearly, or may not be aware of all the knowledge that is relevant to the problem.
 Knowledge representation: This is the process of representing the knowledge in a
way that the expert system can understand and use. There are a variety of different
knowledge representation schemes, each with its own advantages and
disadvantages. The choice of knowledge representation scheme will depend on the
nature of the problem and the knowledge that is being represented.
 Inference engine: The inference engine is the part of the expert system that uses the
knowledge base to make decisions. The inference engine must be able to reason
logically about the knowledge in the knowledge base, and it must be able to generate
explanations for its decisions.
 User interface: The user interface is the way that users interact with the expert
system. The user interface must be easy to use and understand, and it must be able to
handle a variety of different input types.
 Maintenance: Once an expert system is developed, it must be maintained. This
includes keeping the knowledge base up-to-date, fixing bugs, and adding new
features. Maintenance can be a time-consuming and expensive process.

In addition to these technical difficulties, there are also a number of organizational and social
factors that can make it difficult to develop and deploy expert systems. These factors include:

 Lack of management support: Expert systems can be expensive to develop and


deploy, and they may not always be seen as a priority by management.
 Resistance to change: Some people may be resistant to change, and they may not be
willing to use an expert system, even if it is more efficient than their current
methods.
 Lack of domain knowledge: The developers of an expert system must have a good
understanding of the domain that the expert system is being developed for. If they
do not have this knowledge, they may not be able to develop an effective expert
system.

Despite these difficulties, expert systems can be a valuable tool for solving a variety of
problems. They can improve the quality of decisions, increase productivity, and reduce costs.

APPLICATIONS OF EXPERT SYSTEM


There are several major application areas of expert systems, such as agriculture, education, environment, law, manufacturing, medicine, power systems, etc. Expert systems are used to develop a large number of new products as well as new configurations of established products.
When established products are modified to include an expert system as a component, or when an established product is replaced with an expert system, the expert-system-supported entity is called intelligent.
Expert systems are designed and created to facilitate tasks in the fields of accounting,
medicine, process control, financial service, production, education etc. The foundation of a
successful expert system depends on a series of technical procedures and development that
may be designed by certain related experts.

Expert Systems are for everyone


Everyone can find application potential in the field of expert systems. Contrary to the belief that expert systems may pose a threat to job security, expert systems can actually help to create opportunities for new job areas. No matter which area of business one is engaged in, expert systems can fulfil the need for higher productivity and reliability of decisions. Some job opportunities offered by expert systems are listed below:
• Basic Research
• Applied Research
• Knowledge Engineering
• The development of inference engines
• Training
• Sales and marketing


Expert System in Education
In the field of education, many expert system applications are embedded inside Intelligent Tutoring Systems (ITS), using techniques from adaptive hypertext and hypermedia. Most of these systems assist students in their learning by using adaptation techniques to personalise the environment to the student's prior knowledge and ability to learn.
Expert systems in education have expanded steadily, from microcomputers to web-based and agent-based technology. A web-based expert system can provide an excellent alternative to private tutoring at any time, from any place where internet access is available. An agent-based expert system helps users by finding materials from the web based on the user's profile.
Expert systems have also seen tremendous changes in the methods and techniques applied. Expert systems are beneficial as teaching tools because they are equipped with unique features which allow users to ask questions in how, why, and what formats. When used in a classroom environment, they give many benefits to students, as they prepare answers without referring to the teacher. Besides that, an expert system is able to give reasons for the given answer. Expert systems have been used in several fields of study, including computer animation, computer science and engineering, language teaching, business studies, etc.
Expert system in Agriculture
The expert system for agriculture is much like those in other fields. Here too the expert system uses a rule-based structure, and the knowledge of a human expert is captured in the form of IF-THEN rules and facts, which are used to solve problems by answering questions typed at a keyboard attached to a computer — for example, in pest control: the need to spray, selection of a chemical to spray, mixing and application, etc.
The early expert systems, developed in the 1960s and 1970s, were typically written on mainframe computers in programming languages based on LISP. One example is MACSYMA, developed at the Massachusetts Institute of Technology (MIT) for assisting individuals in solving complex mathematical problems. Other examples include MYCIN, DENDRAL, and CALEX.
Agricultural expert systems arose to help farmers make single-point decisions and to plan well before starting any work on their land, for example to design an irrigation system for their plantation. Some of the other functions of agricultural expert systems are:
• To predict extreme events such as thunderstorms and frost.
• To select the most suitable crop variety.
• To diagnose livestock disorders, and many more.

Expert System for a particular decision problem


The expert system can be used as a stand-alone advisory system for a specific knowledge domain. It can also provide decision support for a high-level human expert. The main purposes of such expert systems are to serve as a delivery system for extension information, to provide management education for decision makers, and to disseminate up-to-date scientific information in a readily accessible and easily understood form to agricultural researchers, advisers, and farmers. With the help of an expert system, farmers can deliver higher-quality products to citizens.

Expert System for Text Animation (ESTA)


The idea behind creating an expert system is that it can enable many people to benefit from
the knowledge of one person – the expert. By providing it with a knowledge base for a
certain subject area, ESTA can be used to create an expert system for the subject:
ESTA + Knowledge base = Expert System
Each knowledge base contains rules for a specific domain. A knowledge base for an expert
system to give tax advice might contain rules relating marital status, mortgage commitments
and age to the advisability of taking out a new life insurance policy. ESTA has all facilities to
write the rules that will make up a knowledge base.
ESTA has an inference engine which can use the rules in the knowledge base to determine
which advice is to be given to the expert system user.
ESTA also features the ability for the expert system user to obtain answers to questions such as "how" and "why". ESTA is used by a knowledge engineer to create a knowledge base, and by the expert system user to consult a knowledge base. Knowledge representation in ESTA is based on items such as sections, parameters, and titles.

Responsible AI

Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. AI systems are the product of many decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, Responsible AI can help proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.

Microsoft developed a Responsible AI Standard. It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For Microsoft, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in products and services that people use every day.

The following demonstrates how Azure Machine Learning supports tools for enabling developers and data scientists to implement and operationalize the six principles.

Fairness and inclusiveness

AI systems should treat everyone fairly and avoid affecting similarly situated groups of
people in different ways. For example, when AI systems provide guidance on medical
treatment, loan applications, or employment, they should make the same recommendations to
everyone who has similar symptoms, financial circumstances, or professional qualifications.

Fairness and inclusiveness in Azure Machine Learning: The fairness assessment component of the Responsible AI dashboard enables data scientists and developers to assess model fairness across sensitive groups defined in terms of gender, ethnicity, age, and other characteristics.
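Outside Azure, the same kind of group-level check can be sketched with the open-source Fairlearn library; the labels, predictions, and sensitive feature below are invented toy data.

```python
# Per-group accuracy with Fairlearn's MetricFrame (toy data, illustrative only).
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.by_group)       # accuracy per group: F = 0.75, M = 0.50
print(mf.difference())   # 0.25 — the accuracy gap a fairness assessment would flag
```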

Reliability and safety

To build trust, it's critical that AI systems operate reliably, safely, and consistently. These
systems should be able to operate as they were originally designed, respond safely to
unanticipated conditions, and resist harmful manipulation. How they behave and the variety
of conditions they can handle reflect the range of situations and circumstances that
developers anticipated during design and testing.

Reliability and safety in Azure Machine Learning: The error analysis component of
the Responsible AI dashboard enables data scientists and developers to:

 Get a deep understanding of how failure is distributed for a model.


 Identify cohorts (subsets) of data with a higher error rate than the overall benchmark.

These discrepancies might occur when the system or model underperforms for specific
demographic groups or for infrequently observed input conditions in the training data.

Transparency

When AI systems help inform decisions that have tremendous impacts on people's lives, it's
critical that people understand how those decisions were made. For example, a bank might
use an AI system to decide whether a person is creditworthy. A company might use an AI
system to determine the most qualified candidates to hire.

A crucial part of transparency is interpretability: the useful explanation of the behavior of AI systems and their components. Improving interpretability requires stakeholders to comprehend how and why AI systems function the way they do. The stakeholders can then identify potential performance issues, fairness issues, exclusionary practices, or unintended outcomes.

Transparency in Azure Machine Learning: The model interpretability and counterfactual what-if components of the Responsible AI dashboard enable data scientists and developers to generate human-understandable descriptions of the predictions of a model.

The model interpretability component provides multiple views into a model's behavior:

 Global explanations. For example, what features affect the overall behavior of a loan
allocation model?
 Local explanations. For example, why was a customer's loan application approved or
rejected?
 Model explanations for a selected cohort of data points. For example, what features
affect the overall behavior of a loan allocation model for low-income applicants?

The counterfactual what-if component enables understanding and debugging of a machine learning model in terms of how it reacts to feature changes and perturbations.
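Independently of the dashboard, the counterfactual idea can be sketched in a few lines: search for the smallest change to a feature that flips a model's decision. The "model" below is an invented loan rule, not a real component.

```python
# Toy counterfactual search: how much must income rise to flip a rejection?
def model(income, debt):
    return "approved" if income - 0.5 * debt >= 50 else "rejected"

def counterfactual_income(income, debt, step=1, limit=200):
    original = model(income, debt)
    for extra in range(0, limit, step):
        if model(income + extra, debt) != original:
            return extra           # smallest perturbation that changes the decision
    return None

print(model(40, 20))                   # rejected (40 - 10 = 30 < 50)
print(counterfactual_income(40, 20))   # 20 -> an income of 60 would be approved
```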

Azure Machine Learning also supports a Responsible AI scorecard. The scorecard is a customizable PDF report that developers can easily configure, generate, download, and share with their technical and non-technical stakeholders to educate them about their dataset and model health, achieve compliance, and build trust. This scorecard can also be used in audit reviews to uncover the characteristics of machine learning models.

Privacy and security

As AI becomes more prevalent, protecting privacy and securing personal and business
information are becoming more important and complex. With AI, privacy and data security
require close attention because access to data is essential for AI systems to make accurate and
informed predictions and decisions about people. AI systems must comply with privacy laws
that:

 Require transparency about the collection, use, and storage of data.


 Mandate that consumers have appropriate controls to choose how their data is used.

Privacy and security in Azure Machine Learning: Azure Machine Learning enables
administrators and developers to create a secure configuration that complies with their
companies' policies. With Azure Machine Learning and the Azure platform, users can:
 Restrict access to resources and operations by user account or group.


 Restrict incoming and outgoing network communications.
 Encrypt data in transit and at rest.
 Scan for vulnerabilities.
 Apply and audit configuration policies.

Microsoft also created two open-source packages that can enable further implementation of
privacy and security principles:

 SmartNoise: Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy might be required for regulatory compliance. SmartNoise is an open-source project (co-developed by Microsoft) that contains components for building differentially private systems that are global (a sketch of the underlying idea follows this list).

 Counterfit: Counterfit is an open-source project that comprises a command-line tool and a generic automation layer to allow developers to simulate cyberattacks against AI systems. Anyone can download the tool and deploy it through Azure Cloud Shell to run in a browser, or deploy it locally in an Anaconda Python environment. It can assess AI models hosted in various cloud environments, on-premises, or at the edge. The tool is agnostic to AI models and supports various data types, including text, images, and generic input.
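Without invoking the SmartNoise API itself, the core differential-privacy idea behind it can be sketched with the classic Laplace mechanism: add noise scaled to sensitivity/ε to a query result. The epsilon value and the data below are illustrative.

```python
# Laplace mechanism for a differentially private count (illustrative, stdlib only).
import random

def dp_count(records, predicate, epsilon=0.5):
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1   # one person added/removed changes a count by at most 1
    # difference of two Exp(epsilon/sensitivity) draws ~ Laplace(scale=sensitivity/epsilon)
    noise = (random.expovariate(epsilon / sensitivity)
             - random.expovariate(epsilon / sensitivity))
    return true_count + noise

ages = [34, 45, 29, 61, 52, 38]
print(dp_count(ages, lambda a: a > 40))   # true count is 3, reported with Laplace noise
```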

Accountability

The people who design and deploy AI systems must be accountable for how their systems
operate. Organizations should draw upon industry standards to develop accountability norms.
These norms can ensure that AI systems aren't the final authority on any decision that affects
people's lives. They can also ensure that humans maintain meaningful control over otherwise
highly autonomous AI systems.

Accountability in Azure Machine Learning: Machine learning operations (MLOps) is


based on DevOps principles and practices that increase the efficiency of AI workflows. Azure
Machine Learning provides the following MLOps capabilities for better accountability of
your AI systems:

 Register, package, and deploy models from anywhere. You can also track the
associated metadata that's required to use the model.
 Capture the governance data for the end-to-end machine learning lifecycle. The logged
lineage information can include who is publishing models, why changes were made,
and when models were deployed or used in production.
 Notify and alert on events in the machine learning lifecycle. Examples include
experiment completion, model registration, model deployment, and data drift detection.
 Monitor applications for operational issues and issues related to machine learning.
Compare model inputs between training and inference, explore model-specific metrics,
and provide monitoring and alerts on your machine learning infrastructure.

Besides the MLOps capabilities, the Responsible AI scorecard in Azure Machine Learning
creates accountability by enabling cross-stakeholder communications. The scorecard also
creates accountability by empowering developers to configure, download, and share insights about their AI data and model health with technical and non-technical stakeholders. Sharing these insights can help build trust.

The machine learning platform also enables decision-making by informing business decisions
through:

 Data-driven insights, to help stakeholders understand causal treatment effects on an outcome, by using historical data only. For example, "How would a medicine affect a patient's blood pressure?" These insights are provided through the causal inference component of the Responsible AI dashboard.
 Model-driven insights, to answer users' questions (such as "What can I do to get a
different outcome from your AI next time?") so they can take action. Such insights are
provided to data scientists through the counterfactual what-if component of
the Responsible AI dashboard.

Ethical Decision-Making
“Ethics is knowing the difference between what you have a right to do and what is right to
do.”
This section introduces the main ethical theories and discusses what it means for an AI system to be able to reason about the ethical grounds and consequences of its decisions and to consider human values in those decisions.
Introduction:
As intelligent machines become more prevalent, concerns about their ethical implications
have grown. This section discusses the design of machines that consider human values and
ethical principles in their decision-making processes. Ethical reasoning involves identifying,
assessing, and developing ethical arguments from various positions. AI systems are
increasingly perceived as moral agents due to their increased intelligence, autonomy, and
interaction capabilities. This raises issues of responsibility, liability, and the potential for AI
to act according to human values and respect human rights.

AI systems are built based on given computational principles, which can vary. Expecting
machines to behave ethically implies considering the computational constructs that enable
ethical reasoning and the desirability of implementing these. The discussion below provides a hands-on introduction to ethical reasoning, showing how results can vary depending on the ethical theory considered.

Current discussions of ethical theories regarding AI's actions have led governments and
organizations to propose solutions to the ethical challenges. However, while AI offers tools to
better understand moral agency, endowing artificial systems with ethical capabilities remains
a challenge.

Ethical Theories

• Ethics, or Moral Philosophy, explores how people should act and what a 'good' life means.
• Divided into three areas: Meta-ethics, Applied Ethics, and Normative Ethics.
• Meta-ethics investigates the origins and meaning of ethical principles, the role of reason in
ethical judgments, and universal human values.
• Applied Ethics examines controversial issues like euthanasia, animal rights, environmental
concerns, nuclear war, and the behavior of intelligent artificial systems and robotics.
• Normative Ethics establishes how things should or ought to be, exploring how we value
things and determine right from wrong.
• Three schools of thought within normative ethics: consequentialism, deontology, and virtue
ethics.
• Consequentialism argues that the morality of an action is contingent on the action’s
outcome or result.
• Deontology judges the morality of an action based on certain rules, focusing on whether an
action is right or wrong.
Comparison of Main Ethical Theories

| | Consequentialism | Deontology | Virtue Ethics |
|---|---|---|---|
| Description | An action is right if it promotes the best consequences, i.e. maximises happiness | An action is right if it is in accordance with a moral rule or principle | An action is right if it is what a virtuous person would do in the circumstances |
| Central Concern | The results matter, not the actions themselves | Persons must be seen as ends and may never be used as means | Emphasises the character of the agent performing the actions |
| Guiding Value | Good (often seen as maximum happiness) | Right (rationality is doing one's moral duty) | Virtue (leading to the attainment of eudaimonia) |
| Practical Reasoning | The best for most (means-ends reasoning) | Follow the rule (rational reasoning) | Practise human qualities (social practice) |
| Deliberation Focus | Consequences (What is the outcome of the action?) | Action (Is the action compatible with some imperative?) | Motives (Is the action motivated by virtue?) |

Virtue Ethics and Normative Ethics

Virtue Ethics
• Focuses on the inherent character of a person, emphasising the development of good habits of character.
• Identifies virtues and provides practical wisdom for resolving conflicts between virtues.
• Claims that a lifetime of practising these virtues leads to happiness and the good life.
• Aristotle saw virtues as constituents of eudaimonia, arguing that virtues are good habits that regulate our emotions.
• Later medieval theologians supplemented Aristotle's list of virtues with three Christian ones.

Normative Ethics
• Different ethical theories result in different justifications for decisions.
• Examples include utilitarian, deontologist, and virtue ethicist perspectives.

Problems of Normative Ethics


• Principle of Double Effect (DDE): Deliberately inflicting harm is wrong, even if it leads to good.
• Human Rights Ethics: Humans have absolute, natural rights inherent in the nature of ethics,
not contingent on human actions or beliefs.
• Principle of Lesser Evils: The only way out of a moral conflict is to violate one of the moral
positions and choose the lesser evil.

Values as Key Drivers in Decision-Making


• Values, such as honesty, beauty, respect, environmental care, and self-enhancement, are key
drivers in human decision-making.
• Basic values are desirable goals that motivate action and transcend specific actions and
situations.

Key Properties of Values


• Genericity: Values are generic and can be instantiated in a wide range of concrete
situations.
• Comparison: Values allow comparison of different situations with respect to that value.
• Values are abstract and context-independent, meaning they cannot be measured directly, only through their interpretations or implementations.

Value Systems and Contradictions


• Value systems internally order values along two orderings: relative relation between values
and intrinsic opposition between values.
• Contradictions do not mean one value excludes another, but rather they are pulling in
different directions and a balance must be found.

Schwartz's Classification of Basic Values

• Schwartz identified ten basic values and classified them along four dimensions: Openness to change (Self-Direction, Stimulation), Self-enhancement (Hedonism, Achievement, Power), Conservation (Security, Conformity, Tradition), and Self-transcendence (Benevolence, Universalism).
Figure: Schwartz's value model — the ten basic values (Self-Direction, Stimulation, Hedonism, Achievement, Power, Security, Conformity, Tradition, Benevolence, Universalism) arranged in a circular structure.


Values and Moral Decision-Making

• Values guide action selection and evaluation, considering their relative priority (see the sketch after this list).


• Personal preference influences decision-making, with people preferring alternatives
satisfying their most important values.
• Moral values are largely consistent across cultures, but different cultures prioritise them differently.
• Societal values influence individual moral decision-making.
• Identifying societal values is crucial for determining moral deliberation rules in AI systems
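The sketch referenced above: one simple way to operationalise the "relative priority of values" is a weighted score over candidate actions. The weights, actions, and satisfaction scores below are all invented.

```python
# Value-based action selection: weight each value by its priority (illustrative).
value_weights = {"safety": 0.5, "efficiency": 0.2, "fairness": 0.3}

actions = {
    "route_A": {"safety": 0.9, "efficiency": 0.4, "fairness": 0.7},   # score 0.74
    "route_B": {"safety": 0.5, "efficiency": 0.9, "fairness": 0.6},   # score 0.61
}

def score(action_values, weights):
    return sum(weights[v] * action_values[v] for v in weights)

best = max(actions, key=lambda a: score(actions[a], value_weights))
print(best)   # route_A — with these priorities, safety dominates the choice
```

Two agents with the same options but different weight vectors can rationally choose differently, which is the point made here about personal and societal value priorities.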

Ethics in Practice in AI Systems

• Ethical reasoning is crucial in moral dilemmas where moral requirements conflict.


• Different ethical theories lead to distinct solutions to these dilemmas.
• AI systems, due to their increasing autonomy, will encounter ethical dilemmas.
• The trolley problem, a hypothetical scenario, illustrates the moral dilemma in self-driving
cars.
• Values guide the selection or evaluation of actions, considering their relative priority.
• Personal preference also plays a role in decision-making, with people tending to prefer
alternatives that satisfy their most important values.
• Societal values influence moral decision-making, with different values leading to different
decisions.
• Identifying societal values is crucial when determining moral deliberation rules for AI
systems.

AI Systems and Moral Dilemmas: A Metaphor and Analysis

• The trolley problem is a hypothetical scenario to illustrate ethical challenges faced by


autonomous machines.
• Other AI systems may face similar dilemmas, such as choosing between patients or
pedestrians.
• The dilemma is a metaphor to highlight the ethical challenges autonomous machines may
face.
• Automated solutions to these dilemmas are not the only possibility, and human-in-the-loop
solutions are often the most suitable.
• Responsibility for decision-making is not solely with the individual but also influenced by
societal, legal, and physical infrastructures.
• Moral dilemmas do not have a single optimal solution; they arise from the need to choose
between two 'bad' options.
• Ethical theories are applied to determine potential reactions of self-driving cars in the
dilemma.
• Consequentialist, utilitarian, deontological, and virtuous cars are used to determine potential reactions (a computational sketch of this contrast follows this list).
• The decision an individual takes in a moral dilemma is also influenced by their
prioritization of values.
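As flagged in the list above, the contrast between theories can be made computational. In this invented sketch, a consequentialist agent ranks actions by net outcome, while a deontological agent first filters out any action that deliberately inflicts harm, whatever the outcome.

```python
# Two toy ethical agents facing the same dilemma (all numbers invented).
actions = {
    "do_nothing": {"harm_done": 0, "harm_prevented": 0},
    "swerve":     {"harm_done": 1, "harm_prevented": 5},
}

def consequentialist_choice(actions):
    # best net outcome, regardless of how it is brought about
    return max(actions, key=lambda a: actions[a]["harm_prevented"] - actions[a]["harm_done"])

def deontological_choice(actions):
    # never actively inflict harm, even for a better outcome
    permitted = [a for a in actions if actions[a]["harm_done"] == 0]
    return permitted[0] if permitted else None

print(consequentialist_choice(actions))  # swerve     (net outcome +4)
print(deontological_choice(actions))     # do_nothing (rule forbids inflicting harm)
```

A virtue-ethics agent would instead ask which choice a virtuous driver would make, which requires modelling motives rather than just outcomes or rules.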

Implementing Ethical Reasoning in AI Systems

Obstacles to Ethical Reasoning in AI Systems


• Objections include lack of regret and creativity in responding to moral dilemmas.
• The process of reasoning about ethical aspects of any given situation is complex and
requires capabilities far beyond what can be implemented in AI systems.

Reasons for Ethical Reasoning in AI Systems


• To apply moral reasoning suggested by ethical theories, an agent must identify a situation
with an ethical dimension.
• This process requires sophisticated reasoning capabilities, powerful sensors, actuators, and
sufficient computational capability.

Steps for Evaluating and Choosing AI Systems



• Recognize that there is an event to react to.


• Identify the possible ethical dimensions of the situation.
• Determine which parties would be affected by potential responses.
• For each action, determine the potential positive and negative consequences.

Ethical Problem-Solving in AI

• The agent must decide whether to resolve the situation or alert others.
• The agent must identify relevant principles, rights, and justice issues.
• The agent must determine if the decision is influenced by bias or cognitive barriers.
• The agent must determine how these abstract ethical rules apply to the problem.
• The agent needs to generate a course of action and then act.
• The main challenge is the computational complexity of the required deliberation algorithms.
• Consequentialist agents require reasoning about the consequences of actions, supported by game-theoretic approaches.
• Deontological agents require higher-order reasoning about the actions themselves, requiring awareness of their own action capabilities and their relation to institutional norms.
• Virtue agents need to reason about their own motives leading to actions, which requires Theory of Mind models.
Taking Responsibility

AI: Ethical and Practical Development

Understanding AI and Ethical Theories


• AI has potential to improve accuracy, efficiency, cost savings, and speed in human
activities.
• The development and deployment of AI significantly influence its impact on society.
• Automated classification systems can lead to privacy and bias issues, while self-driving
vehicles raise safety and responsibility concerns.

AI's Impact on Society


• AI's use in various areas of society, including labor, well-being, social interactions, healthcare, and income distribution, requires ethical, legal, societal, and economic consideration.
• Responsible AI requires inclusive and diverse AI systems, considering all humankind.
Education and Responsible AI
• Education plays a crucial role in spreading knowledge of AI's potential impact and
promoting participation in shaping societal development.
• The core of AI development should be 'AI for Good' and 'AI for All'.
Design and Engineering Approaches for Responsible AI
• Researchers, policymakers, industry, and society are recognizing the need for design and
engineering approaches that ensure safe, beneficial, and fair use of AI technologies.
• These approaches include system design and implementation, governance and regulatory
processes, and consultation and training activities.
Responsible AI
• Responsible AI focuses on ethical decisions and actions taken by intelligent autonomous
systems.
• It provides directions for action and can be seen as a code of behavior for AI systems and
humans.
Responsible Research and Innovation in AI Systems Development
• AI technology development requires a comprehensive understanding of its socio-technical and societal impacts.
• A Responsible Research and Innovation (RRI) vision can be applied to AI system
development.
• RRI involves a research and innovation process considering environmental and societal
effects.
• RRI is based on participation, requiring collaboration among all societal actors to align the
process with societal values and expectations.
• RRI is a continuous process, from the drawing table to the market introduction of resulting
products and services.
Understanding the RRI Process
• Defined as a transparent, interactive process.
• Promotes mutual responsiveness between societal actors and innovators.
• Aims for ethical acceptability, sustainability, and societal desirability of innovation process
and products.
Figure: The Responsible Research and Innovation process, connecting societal actors (business and industry, researchers, education, civil society, policymakers) through the RRI dimensions of Openness and Transparency, Diversity and Inclusion, Anticipation and Reflection, and Responsiveness.
RRI Process Overview

• Involves all parties in defining research and innovation directions.
• Encourages Diversity and Inclusion: Involve diverse stakeholders in early innovation
process.
• Promotes Openness and Transparency: Clear communication about project nature, including
funding, decision-making, and governance.
• Builds Public Trust: Open data and results ensure accountability and critical scrutiny.
• Anticipation and Reflexivity: Understanding current context from diverse perspectives,
considering environmental, economic, and social impacts.
• Reflects on individual and institutional values.
RRI in AI Systems Development
• AI systems are gaining autonomy and learning capabilities, requiring careful analysis to prevent undesirable effects.
• A responsible approach to AI is needed to ensure safe, beneficial, and fair use of AI
technologies.
• Ethical implications of decision-making by machines should be considered, and the legal
status of AI should be defined.
• Wide societal support for AI applications should be ensured, focusing on human values and
well-being.
• Education and an accessible AI narrative are necessary for everyone to understand AI's
impact and benefit from its results.
• RRI in AI should include education of all stakeholders and governance models for
responsibility in AI.
• The principles of Accountability, Responsibility, and Transparency (ART) are proposed to
ensure responsible design of systems.
• Responsibility in AI extends beyond design to defining their success, considering human
and societal well-being.
• Multiple metrics are used to measure well-being, including the United Nations’ Human
Development Index and the Genuine Progress Indicator.
The ART of AI: Accountability, Responsibility, and Transparency

• AI systems are capable of perceiving their environment and deciding actions to achieve
their goals.
• These systems are characterized by autonomy, adaptability, learning from environmental
changes, and interaction with other agents.
• These properties enable AI systems to effectively deal with unpredictable, dynamic
environments.
• Trust in AI systems is essential for their acceptance in complex socio-technical
environments.
• Design methodologies that consider these issues are essential for trust and acceptance of AI
systems.
• Autonomy should be complemented with responsibility, interactiveness with accountability,
and adaptation with transparency.
• The impact and consequences of an AI system extend beyond the technical system,
encompassing stakeholders and organizations.
Figure: The ART principles (Accountability, Responsibility, Transparency) in relation to the autonomy of an AI system within its socio-technical context.
Accountability in Responsible AI
• Accountability is the first condition for Responsible AI, requiring the ability to report and
explain actions and decisions.
• People are more likely to trust an autonomous system if it can explain why it took a certain course of action.
• A safe and sound design process that accounts for and reports on decisions, choices, and restrictions about the system’s aims and assumptions is essential.

The Importance of Explanation in Trusting AI Systems
• Explanation reduces the opaqueness of a system and supports understanding of its behavior
and limitations.
• Post-mortem explanation, using logging systems, can help investigators understand what
went wrong.
• Explanation is especially important when the system does something good but unexpected,
such as taking a course of action that would not occur to a human.
• Machines are assumed to be incapable of moral reasoning, requiring a proof or certification
of their ethical reasoning abilities.

Developing Explanation Mechanisms

• Explanations should be comprehensible and useful to a human, considering the relevant social sciences literature.
• Explanations should be contrastive, selective, and social, and should follow Grice’s conversational maxims of quality, quantity, manner, and relevance.
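As a minimal illustration of a contrastive and selective explanation (respecting the maxim of quantity by reporting only what differs), consider the sketch below; the decision factors and wording are invented for illustration.

# Hypothetical sketch of a contrastive explanation: answer "why P rather
# than Q?" by reporting only the decision factors on which P and Q differ.

def contrastive_explanation(chosen, foil, factors):
    # factors maps each action to {factor_name: value}; report differences only.
    differences = [
        f"{name} is {factors[chosen][name]} for {chosen} "
        f"but {factors[foil].get(name)} for {foil}"
        for name in factors[chosen]
        if factors[chosen][name] != factors[foil].get(name)
    ]
    return f"Chose '{chosen}' over '{foil}' because: " + "; ".join(differences)

factors = {
    "approve_loan": {"repayment_risk": "low", "expected_return": "high"},
    "reject_loan":  {"repayment_risk": "low", "expected_return": "none"},
}
print(contrastive_explanation("approve_loan", "reject_loan", factors))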
Accountability Beyond the Design of the System
• The system’s design should follow a process sensitive to the societal, ethical, and legal impact, and to the characteristics of the context in which it will operate.
• Decisions made during the design process have ethical implications.

AI Systems and Their Responsibility
• AI systems are tools constructed by humans for a specific purpose, and human
responsibility cannot be replaced.
• AI systems can modify themselves by learning from their context, but this is based on the purpose determined by humans.
• Theories, methods, and algorithms are needed to integrate societal, legal, and moral values
into AI technological developments.
• Autonomy in AI refers to the system's autonomy to develop its own plans and decide
between its possible actions.
• The system's learning is determined by the purpose for which it was built and the
functionalities it is endowed with.
• An AI system can either act as intended, in which case responsibility rests with the user, or act unexpectedly due to error or malfunction, in which case developers and manufacturers may be liable.
• The action of the machine as a result of learning cannot remove liability from its
developers, as it is a consequence of the algorithms they've designed.
• Continuous assessment of a system's behavior against ethical and societal principles is
necessary.
Transparency in Artificial Intelligence

• Algorithmic transparency is a principle aiming to make algorithms’ decisions transparent to users and regulators.
• However, this solution may violate intellectual property and business models of algorithm
developers and make the code less understandable to most users.
• Machine Learning algorithms, or 'black-box' algorithms, often lack transparency due to their
complexity and the need to fine-tune outputs to specific inputs.
• Machine Learning algorithms are trained with data generated by humans, which can contain
biases and mistakes.
• Researchers, practitioners, and policymakers are increasingly recognizing the need to
address bias in data and algorithms.
• Heuristics, simple rules for efficient processing of inputs, can induce bias and stereotypes
when they reinforce a misstep in thinking or a basic misconception of reality.
• Bias is inherent in human thinking and an unavoidable characteristic of data collected from
human processes.
• Current Machine Learning algorithms may follow existing biases in the data, such as
correlations between race and address.
• The aim of algorithmic transparency is to ensure that the machine will not be prejudiced,
but removing the algorithmic black box will not eliminate bias.
• Different measures of bias exist, and they are in tension.
Checklist for Transparency

1. Openness about data
• What type of data was used to train the algorithm?
• What type of data does the algorithm use to make decisions?
• Does training data resemble the context of use?
• How is this data governed (collection, storage, access)?
• What are the characteristics of the data? How old is the data, where
was it collected, by whom, how is it updated?
• Is the data available for replication studies?
2. Openness about design processes
• What are the assumptions?
• What are the choices? And the reasons for choosing and the reasons not
to choose?
• Who is making the design choices? And why are these groups involved
and not others?
• How are the choices being determined? By majority, consensus, is veto
possible?
• What are the evaluation and validation methods used?
• How are noise, incompleteness and inconsistency dealt with?
3. Openness about algorithms
• What are the decision criteria we are optimising for?
• How are these criteria justified? What values are being considered?
• Are these justifications acceptable in the context we are designing for?
• What forms of bias might arise? What steps are taken to assess, identify
and prevent bias?
4. Openness about actors and stakeholders
• Who is involved in the process, what are their interests?
• Who will be affected?
• Who are the users, and how are they involved?
• Is participation voluntary, paid or forced?
• Who is paying and who is controlling?

Figure 4.3: Checklist for Transparency
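One lightweight way to operationalise such a checklist is to attach a machine-readable transparency record to every system; the sketch below is a hypothetical Python rendering whose field names loosely mirror the checklist and are not a standard.

# A minimal, hypothetical machine-readable transparency record that
# mirrors the checklist above. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TransparencyRecord:
    training_data_description: str            # 1. openness about data
    data_collection_period: str
    data_governance: str                       # collection, storage, access
    replication_data_available: bool
    design_assumptions: list = field(default_factory=list)     # 2. design process
    evaluation_methods: list = field(default_factory=list)
    optimisation_criteria: list = field(default_factory=list)  # 3. algorithms
    bias_mitigation_steps: list = field(default_factory=list)
    stakeholders: list = field(default_factory=list)           # 4. actors

record = TransparencyRecord(
    training_data_description="Loan applications, one region, 2015-2020",
    data_collection_period="2015-2020, updated yearly",
    data_governance="Consent-based collection; access restricted to auditors",
    replication_data_available=False,
    design_assumptions=["past repayment predicts future repayment"],
    evaluation_methods=["k-fold cross-validation", "fairness audit"],
    optimisation_criteria=["accuracy", "demographic parity"],
    bias_mitigation_steps=["re-weighting under-represented groups"],
    stakeholders=["applicants", "bank", "regulator"],
)
print(record.optimisation_criteria)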
Design for Values in AI Systems Development
• Design for Values is a methodological approach that integrates moral values into
technological design, research, and development.
• Values are abstract concepts that are challenging to incorporate in software design.
• The process ensures the traceability and evaluation of the link between values and their
concrete interpretations in system design and engineering.
• In AI system development, the approach includes identifying societal values, deciding on a moral deliberation approach, and linking values to formal system requirements and functionalities.
• AI systems, being computer programs, must prioritize fundamental human rights.
• Traditional software development often overlooks the role of human values and ethics.
• The requirements elicitation process only describes the resulting requirements, not the
underlying values.
• This approach loses flexibility in using alternative translations of values due to their abstract
nature.
A Design for Values approach provides guidelines for how AI applications should be designed, managed and deployed, so that values can be identified and incorporated explicitly into the design and implementation processes. Design for Values methodologies therefore provide means to support the following processes:
• Identify the relevant stakeholders;
• Elicit values and requirements of all stakeholders;
• Provide means to aggregate the values and value interpretations from all
stakeholders;
• Maintain explicit formal links between values, norms and system functionalities that enable adaptation of the system to evolving perceptions and justification of implementation decisions in terms of their underlying values;
• Provide support to choose system components based on their underlying societal
and ethical conceptions, in particular when these components are built or
maintained by different organisations, holding potentially different values.
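A minimal sketch of how such explicit value-norm-functionality links might be kept machine-checkable is shown below; the three-level structure and all concrete entries are assumptions made for illustration.

# Hypothetical traceability structure for Design for Values: each abstract
# value links to concrete norms, and each norm to system functionalities,
# so implementation decisions stay justifiable in terms of their values.

design_links = {
    "privacy": {
        "minimise data collection": ["collect only consented fields"],
        "restrict access": ["role-based access control", "audit logging"],
    },
    "fairness": {
        "equal treatment across groups": ["bias test in release pipeline"],
    },
}

def justify(functionality):
    # Trace a functionality back to the norms and values it implements.
    return [f"'{functionality}' implements norm '{norm}' for value '{value}'"
            for value, norms in design_links.items()
            for norm, funcs in norms.items()
            if functionality in funcs]

print(justify("audit logging"))
# ["'audit logging' implements norm 'restrict access' for value 'privacy'"]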
Design Methodology for Responsible AI Based on the Value Sensitive Software Development (VSSD) Framework
• The VSSD framework connects traditional software engineering concerns with a Design for
Values approach to inform the design of AI systems.
• Design for Values describes the links between values, norms, and system functionalities.
• Domain requirements shape the design of software systems in terms of the functional, non-functional, and physical/operational demands of the domain.
• An AI system must obey both orientations, ensuring alignment with social and ethical
principles.
• The design of an AI system is structured in terms of high-level motives and roles, specific
goals, and concrete plans and actions.
• Norms provide ethical-societal boundaries for the system's goals while ensuring functional
requirements are met.
• Implementation of plans and actions follows a concrete platform/language instantiation of the functionalities identified by the Design for Values process while ensuring operational and physical domain requirements.
• Using a Design for Values perspective, explicit links to the values behind architectural
decisions are made.
• This allows for improvements in the traceability of values throughout the development
process and increases the maintainability of the application.
Responsible AI Use in Development
• AI use should reduce risks and burdens, ensuring societal and ethical values are central to
development.
• The development life cycle for AI systems should include analysis, design, implementation,
evaluation, and maintenance.
• A responsible approach requires continuous evaluation throughout the development process,
considering the dynamic and adaptable nature of AI systems.
• The development life cycle should center around evaluation and justification processes.
Can AI Systems Be Ethical?

“The issue is not whether we can make machines that are ethical, but the ethics of the people behind the machines.”

AI Systems: Ethical Decision-Making and Ethical Challenges
• The chapter discusses the development of AI systems that can reason about their social and
normative context and the ethical consequences of their decisions.
• The challenge lies in understanding what constitutes ethical behavior, with no consensus on
what is ethically right and wrong.
• The goal is to build an AI agent that is effective, contributing to the achievement of its goals
and advancing its purpose.
• To build ethical AI systems, the actions of the system must align with the regulations and
norms in the context, and the agent’s goals should align with core ethical principles and
societal values.
• Ethical decision-making by AI systems involves evaluating and choosing among
alternatives consistent with societal, ethical, and legal requirements.
• The chapter focuses on the difference between playing well, following the rules, and
following the most beneficial rules, those that promote ethical values.
What Is an Ethical Action?

In order to determine whether we can implement ethical agents, we first need to understand whether it is possible to provide a formal computational definition of an ethical action.
Dennett identifies three requirements for ethical action:
1. it must be possible to choose between different actions;
2. there must be some societal consensus that at least one of the possible choices is socially beneficial;
3. the agent must be able to recognise that socially beneficial action and take the explicit decision to choose that action because it is the ethical thing to do.
System Information for Action Selection
• Labels each action with a list of characteristics to guide the agent's decision.
• Labels each action with its 'ethical degree' in the current context to determine the most
ethical action.
• Algorithm 1 formalises this, representing each action by its name, preconditions, and ethical degree.
• The agent's current context is represented by c, and the set of actions is A.
• A function sorte() returns the list obtained by sorting A in descending order of ethical degree, eth.

Algorithm 1 Naive Ethical Reasoning

 1: E = sorte(A)
 2: choice = 0
 3: i = 0
 4: while (i < length(E)) do
 5:     if holds(precond(E[i]), c) and choice == 0 then
 6:         most_ethical = E[i]
 7:         choice = 1
 8:     else
 9:         i++
10: do(most_ethical)
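A direct, runnable rendering of Algorithm 1 might look as follows; the representation of actions, holds(), and precond() are assumptions about the surrounding system, not part of the original formulation.

# Runnable sketch of Algorithm 1: sort actions by descending ethical
# degree eth, then execute the first action whose preconditions hold
# in the current context c. The action format is an assumption.

def sorte(actions):
    # Sort actions by descending ethical degree.
    return sorted(actions, key=lambda a: a["eth"], reverse=True)

def holds(precondition, context):
    # Assume a precondition is a set of facts that must all be in the context.
    return precondition <= context

def naive_ethical_reasoning(actions, context):
    for action in sorte(actions):              # most ethical first
        if holds(action["precond"], context):
            return action["name"]              # do(most_ethical)
    return None                                # no applicable action

actions = [
    {"name": "brake",  "eth": 0.9, "precond": {"enough_distance"}},
    {"name": "swerve", "eth": 0.6, "precond": {"free_lane"}},
    {"name": "honk",   "eth": 0.3, "precond": set()},
]
print(naive_ethical_reasoning(actions, {"free_lane"}))  # 'swerve'

Note that, like the pseudocode, this naive procedure simply picks the most ethical applicable action; it does not weigh trade-offs between actions or explain its choice.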
AI Ethics and Ethical Reasoning

Issues with AI Ethics
• Dennett's definition of ethics is complex due to the need for social consensus on the ethical
grounds of actions.
• Even if a consensus is reached on an ethical theory, the outcome depends on the values the
agent considers.
• Ethics cannot be imposed; an ethical individual should be able to evaluate its decisions and
learn from them.
• In the case of machines, imposing ethics requires a consensus on which values and ethical
theories to apply and understanding how these rules evolve over time and context.
Different Ethical Theories and Their Computational Demands
• Deontological ethics evaluate actions, using a labelling system, while consequentialist
ethics evaluate results of actions.
• Deontological ethics can be determined a priori based on the laws holding in the context.
• Consequentialism and deontological ethics are rational systems, while virtue ethics focuses on human character and personal motivation.
• The implementation of virtue ethics is less clear due to its relational character and the
importance of context in its implementation.
Ethical Reasoning by AI Systems: Current Approaches and Their Consequences
Top-down Approaches:
• Infer individual decisions from general rules.
• Aim to implement an ethical theory within a computational framework.
• Apply ethical theory to a specific case.

Bottom-up Approaches:
• Infer general rules from individual cases.
• Provide the agent with observations of others' actions in similar situations.
• Aggregate these observations into a decision about what is ethically acceptable.

Hybrid Approaches:
• Combine elements from bottom-up and top-down approaches for careful moral reflection.
• Essential for ethical decision-making.

Top-down Approach:
• Involves determining which ethical value to maximize.
• Requires a higher level of reflection and abstraction than implementation.

Bottom-up Approach:
• Equates social acceptability with ethical acceptance.
• Assumes that what other agents are doing is the ethical thing to do.
• Dynamically builds eth(a, c) from observations and evaluation of perceived results.
• Requires a higher level of reflection on whom to learn from and who decides.

Hybrid Approaches:
• Combine characteristics from both approaches to approximate human ethical reasoning.
• Provide a priori information about legal behavior.
Top-Down Approaches

Top-Down Approach to Ethical Reasoning in AI
• Top-down approaches to ethical reasoning assume a specific ethical theory and define rules,
obligations, and rights for decision-making.
• These models are often an extension of normative reasoning and are often based on Belief-Desire-Intention architectures.
• Normative systems, such as those developed in previous work, take a deontological
approach, assuming that following existing laws and social norms guarantees 'good'
decisions.
• Top-down approaches differ in the chosen ethical theory, with optimal models following a
Utilitarian view and models evaluating the 'goodness' of actions.
• Some models propose specifying moral values associated with behavior norms as an
additional decision criterion.
• Top-down approaches assume AI systems can explicitly reason about the ethical impact of
their actions.
• Requirements for such systems include representation languages, planning mechanisms, and deliberation capabilities.
• Research is needed to determine whether this approach reliably leads to ethical behavior.
Reflection on the Top-down Approach in Ethical Reasoning
• Ethical theories offer an abstract understanding of moral reasoning motives and decision-
making.
• Decisions are influenced by moral and societal values.
• Practical application of top-down models requires understanding which moral and societal
values should guide deliberation in different situations.
• Consequentialist approaches aim for 'the best for the most', but understanding societal values is crucial.
• Top-down approaches impose a system of ethics on the agent, assuming a similarity
between ethics and law.
• Law provides the rules of the game but does not tell you how to play it well; ethics guides behavior beyond mere compliance.
• Legality and rightness can vary, making it crucial to consider both legal and ethical
considerations.
Figure: Top-down approaches assume alignment between law and ethics (the ethically acceptable is taken to coincide with the legally allowed).
AI Systems and Ethical Considerations

• AI systems are designed for specific purposes, necessitating adherence to legal and ethical
boundaries.
• AI systems should be viewed as incorporating soft ethics, interpreting ethics as what comes after compliance with existing regulations.
• Ethics should guide decisions on what should and shouldn't be done beyond existing
regulations.
5.3.1 Bottom-Up Approaches

Bottom-Up Approaches to Ethical Reasoning in AI
• Bottom-up approaches suggest that ethical behavior is learned from observation of others' behavior.
• Malle suggests that robots need to learn norms and morality, similar to children.
• Moral acceptability is determined by social agreement on propositions, not
external expert evaluation.
• An example of a bottom-up approach is described in [96], where an agent learns a model of social preferences and efficiently aggregates these preferences to identify a desirable choice.
• This approach aligns with current AI approaches, which develop models by
observing patterns in data.
• The assumption is that what is socially accepted is also ethically acceptable.
• However, de facto accepted stances may be unacceptable by independent standards
and evidence.
• Social acceptance is an empirical fact, while moral acceptability is an ethical
judgement.
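As a toy illustration of the bottom-up idea, the sketch below estimates an ethical degree eth(a, c) for each action as the frequency with which observed peers chose it in context c; every name and number is invented, and the sketch deliberately inherits the approach's core weakness of equating "commonly done" with "ethically acceptable".

# Toy bottom-up sketch: estimate eth(a, c) as the observed frequency
# with which peers chose action a in context c. Purely illustrative.
from collections import Counter, defaultdict

observations = [                 # (context, action chosen by observed agent)
    ("school_zone", "slow_down"), ("school_zone", "slow_down"),
    ("school_zone", "maintain_speed"), ("highway", "maintain_speed"),
]

counts = defaultdict(Counter)
for context, action in observations:
    counts[context][action] += 1

def eth(action, context):
    total = sum(counts[context].values())
    return counts[context][action] / total if total else 0.0

print(eth("slow_down", "school_zone"))       # 0.67: frequent, so 'ethical'
print(eth("maintain_speed", "school_zone"))  # 0.33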
Figure: Bottom-up approaches assume alignment between social practice and ethics (the socially accepted is taken to coincide with the ethically acceptable).
Bottom-Up Approaches and the Decision Spectrum
• Views decision spectrum as a two-dimensional space along ethical acceptability and social
acceptance axes.
• Consensus on 'good' and 'bad' behaviors is culture and context-dependent.
• Crowd wisdom can lead to accepted but unacceptable decisions, like tax avoidance or
speeding.
• Morally acceptable stances may not be accepted due to perceived extra efforts or costs.
• Each opinion is sustained by arguments of different acceptability, and conflicting opinions
can be sustained by equally acceptable ethical principles.
5.3.2 Hybrid Approaches
Hybrid Approaches to Ethical Reasoning in AI Systems

• Hybrid approaches aim to combine the benefits of top-down and bottom-up approaches to ensure that ethical reasoning by AI systems is legally and socially accepted.
• Moral behavior is based on pragmatic social heuristics, emphasizing both nature and nurture.
• A hybrid approach combining programmed rules (nature) and context observations
(nurture) is needed to implement ethical agents.
• Both top-down and bottom-up approaches are needed for ethical decision making.
• Examples include Conitzer and colleagues' approach, which integrates game-theoretic solution concepts and machine learning on human-labelled instances.
• The OracleAI system is another example of a hybrid approach.
• MOOD, a hybrid approach to ethical reasoning, is based on 'collective intelligence'
and embeds concepts of social acceptance and moral acceptability in the
deliberation process.
• Ethical acceptability concerns the fairness of decisions, distributions of costs and benefits, potential harm to people and environment, risks and control mechanisms, and potential oppression and authority levels.
5.2 Designing Artificial Moral Agents

AI Systems and Ethical Reasoning
• The concept of artificial moral agents, which can incorporate ethics into reasoning, is a
complex and challenging task.
• AI systems are often perceived by users as making ethical decisions, which affects the ethics of their use.
• Designing AI systems so that they align with societal and ethical principles is crucial.
• The process of identifying ethical principles and human values for AI systems should
involve all relevant stakeholders.
• The system's reasoning process should consider specific ethical theories and potential
conflicts between values.
• The degree of autonomy of the AI system, including the type of decisions it can make and
when to refer to others, should be clearly defined.
• These guidelines are an extension of the Design for Values method.
• Value alignment
– which values will the system pursue?
– who has determined these values?
– how are values to be prioritised?
– how is the system aligned with current regulations and norms?
• Ethical background
– which ethical theory or theories are to be used?
– who has decided so?
• Implementation
– what is the level of autonomy of the system?
– what is the role of the user?
– what is the role of governing institutions?
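These questions can be captured as an explicit, auditable design artefact; the sketch below shows one hypothetical way to record the answers, with all field names and example values invented for illustration.

# Hypothetical record of the moral-agent design decisions listed above,
# keeping the answers explicit and auditable. Names are illustrative.
from dataclasses import dataclass

@dataclass
class MoralAgentDesign:
    values_pursued: tuple          # value alignment
    values_decided_by: str
    value_priority_order: tuple
    ethical_theories: tuple        # ethical background
    theories_decided_by: str
    autonomy_level: str            # implementation
    user_role: str
    governing_institution: str

design = MoralAgentDesign(
    values_pursued=("safety", "privacy"),
    values_decided_by="stakeholder panel",
    value_priority_order=("safety", "privacy"),
    ethical_theories=("deontological", "consequentialist"),
    theories_decided_by="ethics board",
    autonomy_level="human-in-command",
    user_role="can override any decision",
    governing_institution="sector regulator",
)
print(design.autonomy_level)  # 'human-in-command'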
Who Sets the Values?
Responsible AI and Participation
• Importance of participation in AI systems for societal and ethical responsibility.
• Analysis of opinion collection and aggregated data is crucial.
• Results are influenced by the question being posed and by how votes are counted.
• Design of AI systems should consider cultural and individual values of the people and
societies involved.
Aspects to Consider in AI Decisions
• Crowd: The involvement of all stakeholders and data collection from a diverse sample.
• Choice: The results of a consultation can vary depending on whether users have a binary
choice or a spectrum of possibilities.
• Information: The question being posed frames the answers given, suggesting political
motivation.
• The 2016 Dutch referendum question, for example, led to a political interpretation due to its
complexity.
Ethical Deliberation in AI Systems

Involvement and Legitimacy
• All votes count equally, regardless of involvement.
• Involvement can lead to surprising results, as seen in Colombia's 2016 peace referendum.
• Voting decisions are often influenced by social identities and partisan loyalties, not honest
examination of reality.
Electoral System
• The rules determining group consultation, elections, and results are crucial.
• Plurality systems can yield different outcomes than proportional systems.
Value Priorities and Cultural Preferences
• Different values lead to different decisions, and it's often impossible to fulfill all desired
values.
• Values are individual, and societies and cultures prioritize them differently.
• Cultural preferences are important when developing systems for use across different
cultures.
Values and Their Interpretations
• Values are abstract concepts that allow for different interpretations depending on the user
and context.
• Identifying values and their normative interpretations is important for system
functionalities.
• Development should follow a Design for Values method.
Conclusion
• Bottom-up approaches to ethical deliberation should be supported by formal structures for
sound collective deliberation and reasoning.
• Decision-making should be based on long-term goals and principles.
• Information: Accurate and relevant data is made available to all participants.
• Substantive balance: Different positions can be compared based on their supporting evidence.
• Diversity: All major positions relevant to the matter at hand are available to all participants and considered in the decision-making process.
• Conscientiousness: Participants sincerely weigh all arguments.
• Equal consideration: Views are weighed based on evidence, not on who is advocating a particular view.
5.3 Implementing Ethical Deliberation
AI Ethics and Decision-Making Mechanisms
• AI ethics focuses on developing algorithms that consider the ethical aspects of decisions.
• The spectrum of decision-making possibilities for AI systems is wide; fully automated moral reasoning is often too complex or simply unnecessary.
• Four possible approaches to design decision-making mechanisms for autonomous
systems are identified:
• Algorithmic: Incorporates moral reasoning fully in the system’s deliberation mechanisms.
• Human in command: Involves a person or group in the decision process.
• Regulation: Incorporates ethical decisions in the systemic infrastructure of the
environment.
• Random: AI system randomly chooses its course of action when faced with a moral
decision.
• The random mechanism can be seen as an approximation of human behavior, and
research is needed to understand the acceptability of random approaches.
• The implementation approach is influenced by who is consulted and how individual
values are aggregated.
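The four options can be read as interchangeable decision strategies behind a single interface; the sketch below makes this explicit with invented stand-ins for each mechanism (the option format and scores are assumptions).

# Sketch: the four decision-making mechanisms as interchangeable
# strategies behind one interface. All implementations are stand-ins.
import random

def algorithmic(options):        # moral reasoning fully inside the system
    return max(options, key=lambda o: o["eth"])

def human_in_command(options):   # defer the choice to a person
    names = [o["name"] for o in options]
    return {"name": input(f"Choose one of {names}: ")}

def regulated(options):          # infrastructure restricts the option set
    allowed = [o for o in options if o.get("permitted", False)]
    return allowed[0] if allowed else None

def randomised(options):         # random choice in the face of a dilemma
    return random.choice(options)

options = [{"name": "brake", "eth": 0.9, "permitted": True},
           {"name": "swerve", "eth": 0.6, "permitted": False}]
strategy = algorithmic           # selected per system design
print(strategy(options)["name"])  # 'brake'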
Levels of Ethical Behaviour

Ethical Behaviour in AI Systems
• AI systems are evolving as they interact autonomously and have social awareness.
• People are viewing machines as team members, not just tools.
• Different levels of ethical behaviour are expected for different categories of AI systems.
• Tools like hammers and search engines have limited autonomy and social awareness and are not considered ethical systems.
• Assistants, with limited autonomy but social awareness, are expected to have functional
morality.
Figure 5.3: Ethics design stances for different categories of AI systems, plotted along autonomy and social awareness: tools call for operational ethics, assistants for functional ethics, and partners for full ethical behaviour.
5.7 The Ethical Status of AI Systems
Ethical Status of AI Systems and Autonomy

• The concept of 'autonomy' in AI systems is often linked to the ethical status of these
systems.
• Autonomy, in philosophy, refers to the right of humans to decide for themselves, formulate,
think, and choose norms, rules, and laws.
• Autonomy is attributed to living beings who are self-aware, self-conscious, and can think
about and explain reasons for their actions.
• Current AI systems have no moral status, as stated by Bostrom.
• In current AI systems, the term 'autonomy' refers to the capability of machines to act independently of human direction.
• Most autonomous systems refer to operational or functional autonomy, which is the ability
to determine how best to meet a goal without direct external intervention.
• Autonomy in the sense of agents setting their own goals or motives is complex to realize computationally and is generally undesirable.
• No intelligent artefact should be called 'autonomous' in the original philosophical sense, and
it cannot inherit human dignity.
• Some scholars and practitioners believe that some 'robot rights' should be considered,
similar to animal rights.
• However, this should stay in the realm of fiction.
Ensuring Responsible AI in Practice

“If you can change the world by innovation today so that you can satisfy more of your obligations tomorrow, you have a moral obligation to innovate today.”

Responsibility in AI Development and Use
Understanding Responsible AI
• Responsible AI encompasses various opinions and topics, including:
- Policies concerning R&D activities and AI deployment in societal settings.
- Role of developers at individual and collective levels.
- Issues of inclusion, diversity, and universal access.
- Predictions and reflections on the benefits and risks of AI.
Importance of Understanding AI Values
• AI systems reflect our interests, weaknesses, and differences.
• Understanding the values behind AI requires deciding on ethical guidelines, governance
policies, incentives, and regulations.
• Certification is an extension of regulation and quality assurance.
Determining AI Behavior and Capabilities
• Researchers and developers of AI systems determine how systems behave and exhibit
capabilities.
• Codes of conduct guide decisions, procedures, and systems that contribute to the welfare of
stakeholders and respect the rights of all constituents affected by operations.
• Software engineers play a significant role in shaping AI systems and applications; hence it is time to expect professional standards of conduct from them.
6.1 Governance for Responsible AI

Ethical, Societal, and Legal Impact of AI
• Increased efforts by national and transnational governance bodies, including the European Union, OECD, UK, France, Canada, and others.
• Initiatives include the IEEE initiative on Ethics of Autonomous and Intelligent
Systems, the High Level Expert Group on AI of the European Commission, the
Partnership on AI, the French AI for Humanity strategy, and the Select Committee on
AI of the British House of Lords.
• Initiatives aim to provide recommendations, standards, and policy suggestions for AI
system development, deployment, and use.
• Examples include the Asilomar principles, the Barcelona declaration, the Montreal declaration, and the ethical guidelines of the Japanese Society for Artificial Intelligence.
• All initiatives prioritize human well-being and the ethical principles of
Accountability and Responsibility.
• Initiatives focus on three main classes of principles: Societal, Legal, and Technical.
Figure: The main values and ethical principles identified by the different initiatives, grouped as Technical (validation and testing, data provenance, reliability, explainability, safety, security), Societal (shared prosperity, democracy, fairness, privacy), Legal (auditability, adherence to law, redress), and General (human well-being, accountability, responsibility, transparency).
Regulation

AI Regulation and its Impact
• Fear of stifling innovation and progress is a common concern when discussing AI regulation.
• Current laws and regulations are insufficient to handle the complexities of AI.
• AI's dynamic nature necessitates immediate regulation, as it is already affecting individuals
and society.
• There is no established definition of AI, making it difficult to determine the focus of
regulation.
• Regulation in specific areas like healthcare or military can provide more suitable
instruments for its proper application.
• Not all regulation is negative, especially when it takes the form of incentives or investment
programs.
• AI is an artefact, and product and service liability laws apply to its use.
• Close collaboration between legal and AI experts is needed to evaluate and update existing
laws for specific AI applications.
• Regulation can also be seen as a means to further scientific development of AI.
• Current approaches based on neural networks and deep learning may not meet regulatory requirements such as explainability, leading to complaints and delays.
• Regulation requires a culture of openness and cooperation between scientists, developers,
policymakers, and ethicists.
Certification

Ethical Certification in AI Systems
• Consider the analogy of identifying organic and free-range eggs.
• Certification stamps indicate that these eggs have been evaluated against specific metrics.
• Users can make informed decisions about which eggs to buy based on these certifications.
• Similar mechanisms can be used for AI systems, with independent institutions validating
and testing algorithms, applications, and products.
• Regulation can specify minimum principles and their interpretation for all systems in a
given country or region.
• Certification supports business differentiation and ensures consumer protection.
• Initiatives like the IEEE's Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) aim to advance transparency and accountability and to reduce algorithmic bias in AI systems.
• A new EU oversight agency is proposed for the protection of public welfare through
scientific evaluation and supervision of AI products, software, systems, and services.
Codes of Conduct

AI Systems Responsibility and Codes of Conduct
• Self-regulatory codes of conduct for data and AI professionals are proposed.
• These codes outline ethical duties related to the impact of AI systems.
• As in other professions such as medicine or law, these codes can differentiate professionals and may become mandatory for AI-related activities.
• As awareness of responsible AI approaches grows, developers and providers are expected to
adhere to these codes.
A professional code of conduct is a public statement developed for and by a professional group to:
• reflect shared principles about practice, conduct and ethics of those exercising the profession,
• describe the quality of behaviour that reflects the expectations of the profession and the community,
• provide a clear statement to society about these expectations, and
• enable professionals to reflect on their own ethical decisions.
Inclusion and Diversity

AI Development and Diversity
• Inclusion and diversity are crucial in AI development, including gender, cultural background, and ethnicity.
• Cognitive diversity aids in better decision-making.
• Development teams should include social scientists and philosophers, and should reflect gender, ethnic, and cultural diversity.
• Regulation and codes of conduct can foster diversity in AI teams.
• Expertise diversification is essential for understanding the ethical, social, legal, and
economic impact of AI.
• Education plays a crucial role in addressing the transdisciplinary nature of AI.
• Current curricula often deliver engineers with a narrow task view, requiring a broadening of
engineering education.
AI Applications: Distributed Nature and Human-Agent Interaction
• Analysis of AI applications' integration of socio-technical systems.
• Reflection on the global impact of autonomous, emergent, decentralized, self-organizing
entities.
• Insight into incremental design and development frameworks.
• Impact of individual decisions on human rights, democracy, and education.
• Consequences of inclusion and diversity in design.
• Understanding governance and normative issues in terms of competences, responsibilities,
health, safety, risks, explanations, and accountability.
The AI Narrative

AI: Responsibility and Ethical Considerations

AI Narrative and its Evolution
• The AI field has experienced ups and downs, but the current level of excitement and
fear is unprecedented.
• AI is gaining popularity in various application domains due to the availability of
large amounts of data, improved algorithms, and substantial computational power.
• The AI field's own contribution is the improvement of algorithms; the abundance of data and computational power are fortunate contingencies.
AI's Ethical, Legal, Societal, and Economic Impact
• AI's potential to impact our lives and the world raises questions about its ethical,
legal, societal, and economic effects.
• Governments, corporations, and social organizations are committing to an
accountable, responsible, transparent approach to AI, focusing on human values and
ethical principles.
AI as a Recipe
• AI algorithms are not magic, but a set of precise rules to achieve a certain result.
• The outcome of AI algorithms depends on the input data and the ability of those who
trained it.
• Those who build and train AI algorithms have the choice to use data that respects and ensures fairness, privacy, transparency, and other values.
Responsible AI
• Responsible AI involves decisions about the scope, rules, and resources used to
develop, deploy, and use AI systems.
• AI is not just the algorithm or the data it uses, but a complex combination of
decisions, opportunities, and resources.
Benefits and Dangers of AI Animatronic Representations
• AI systems are increasingly impersonating humans, with different levels of success.
• Concerns about misleading and unrealistic expectations cannot be ignored.
