
UNIT-1

Problem-Solving agents:

 In AI, search techniques are universal problem-solving methods. Rational agents or problem-solving agents mostly use these search strategies or algorithms to solve a specific problem and provide the best result.
 Problem-solving agents are goal-based agents and use atomic representation.

Problem Solving Techniques:

1) Search
Searching is a step-by-step procedure to solve a search problem in a given search
space. A search problem can have three main factors:
a) Search space
Search space represents the set of possible solutions which a system may have.
b) Start state
It is the state from where the agent begins the search.
c) Goal test:
It is a function which observes the current state and returns whether the goal
state is achieved or not.
2) Search tree
A tree representation of a search problem is called a search tree. The root of the search
tree is the root node, which corresponds to the initial state.
3) Actions
It gives the description of all the available actions to the agent.
4) Transition model
A description of what each action does; it can be represented as a transition model.
5) Path cost
It is a function which assigns a numeric cost to each path.
6) Solution
It is an action sequence which leads from the start node to the goal node.
7) Optimal Solution
A solution that has the lowest cost among all solutions.
Properties of Search Algorithms:

The four essential properties of a search algorithm are:

 Completeness
 Optimality
 Time Complexity
 Space Complexity

Completeness:

A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists.

Optimality:

If a solution found by an algorithm is guaranteed to be the best (lowest-cost) solution among all other solutions, then such a solution is said to be an optimal solution.

Time Complexity:

Time Complexity is a measure of time for an algorithm to complete its task.

Space Complexity:

It is the maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.

Search Algorithms:

Search algorithms are classified into uninformed search (blind search) and informed search (heuristic search) algorithms.

 Search Algorithms:
 Uninformed/Blind
o Breadth First Search
o Uniform Cost Search
o Depth First Search
o Depth Limited Search
o Iterative Deepening Depth-First Search
o Bidirectional Search
 Informed Search
o Best First Search
o A*Search

Uninformed/Blind Search:

The uninformed search does not use any domain knowledge, such as closeness or the
location of the goal.

 It is also called blind search


 It has 6 main types

Breadth first search:

Breadth-first search is the most common search strategy for traversing a tree or graph. The
algorithm searches breadth wise in a tree or graph.

The Breadth-first search implemented using FIFO queue data structure

Advantages:

BFS will provide a solution if any solution exists.

Disadvantages:

BFS needs lots of time if the solution is far away from the root node.

Example:

Traversing the tree using the BFS algorithm from the root node S to the goal node K:

S → A → B → C → D → G → H → E → F → K
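A minimal sketch of breadth-first search in Python follows; the graph, node names and adjacency structure are illustrative assumptions of this sketch, not the exact tree from the example above.

from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search using a FIFO queue; returns a path or None."""
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: expand the shallowest node first
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# Illustrative graph (assumed shape, not the figure from the notes)
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
         "G": ["E", "F"], "H": ["K"]}
print(bfs(graph, "S", "K"))              # ['S', 'B', 'H', 'K']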

Depth-first Search:

 It starts from the root node and follows each path to its greatest depth node before
moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.
Advantages:

DFS requires very little memory and takes less time to reach the goal node than BFS (see the properties summary later in this unit).

Examples:

Root node → left node → right node

It will start searching from root node S and traverse A, then B, then D and E. After
traversing E, it will backtrack the tree, as E has no other successor and the goal node has
still not been found. It will then traverse node C and then G, where it will terminate because
it has found the goal node.
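A minimal sketch of depth-first search with an explicit LIFO stack in Python; the example graph below is an assumption for illustration, not the tree described above.

def dfs(graph, start, goal):
    """Depth-first search using a LIFO stack; returns a path or None."""
    stack = [[start]]                    # stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()               # LIFO: follow one branch to its greatest depth
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in reversed(graph.get(node, [])):
            stack.append(path + [neighbour])
    return None

graph = {"S": ["A", "C"], "A": ["B", "D"], "B": ["E"], "C": ["G"]}
print(dfs(graph, "S", "G"))              # ['S', 'C', 'G'] once A's subtree is exhausted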

Depth-Limited Search Algorithm:

 A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit.
 Depth-limited search can solve the drawback of the infinite path in depth-first search.
 Depth-limited search is memory efficient.

Uniform-cost Search Algorithm:

 It is a searching algorithm used for traversing a weighted tree or graph.


 This algorithm comes into play when a different cost is available for each
edge.
 A Uniform-cost search algorithm is implemented by the priority queue.

Iterative deepening depth-first search:

 It is a combination of DFS and BFS algorithms.


 This search algorithm finds out the best depth limit and does it by
gradually increasing the limit until a goal is found.
1st iteration => A
2nd iteration=> A, B, C
3rd iteration=> A,B,D,E,C,F,G
4th iteration=>A,B,D,H,I,E,C,F,K,G
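A minimal sketch of iterative deepening depth-first search in Python; the tree encoding and node names are assumptions of this sketch.

def depth_limited(tree, node, goal, limit):
    """Recursive DFS that never goes deeper than `limit`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in tree.get(node, []):
        result = depth_limited(tree, child, goal, limit - 1)
        if result:
            return [node] + result
    return None

def iddfs(tree, start, goal, max_depth=10):
    """Run depth-limited DFS with increasing limits until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(tree, start, goal, limit)
        if result:
            return limit, result
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iddfs(tree, "A", "G"))             # (2, ['A', 'C', 'G'])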
Bidirectional Search:

The Bidirectional Search algorithm runs two simultaneous searches, one from the initial state called the forward search and the other from the goal node called the backward search, to find the goal node.

Bidirectional search can use search technique such as BFS, DFS, DLS, etc.

Example:

This algorithm divides one graph/tree into two sub-graphs. It starts traversing from the root node in the forward direction and from goal node 16 in the backward direction.

The algorithm terminates at the node where the two searches meet.

Importance of Search Algorithms in AI

Solving Problems:

AI systems solve problems using logical search mechanisms; they employ such algorithms to determine the quickest or shortest path between two locations.

Search Programming:

Many AI activities can be coded in terms of searching, which improves the formulation of a
given problem's solution.

Goal-based agents:

Their efficiency is improved through search algorithms, which can offer the finest resolution to an issue.

Support production systems:

Production systems use search algorithms in AI to find the rules that can lead to the required
action

Neural network systems:

Search algorithms are also used in neural network systems, for example to find the connection weights that produce the desired input-output behaviour.

Properties of the uninformed search algorithms:

1. BFS:
a. The time complexity of the BFS algorithm can be obtained by the number of nodes
traversed in BFS.
b. T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d), where d is the depth of the shallowest solution and b is the branching factor (nodes at every state).
c. The space complexity of the BFS algorithm is O(b^d).
d. BFS is complete and optimal.
2. DFS
a. The time complexity of DFS is equivalent to the number of nodes traversed by the
algorithm: T(n) = 1 + n^2 + n^3 + ... + n^m = O(n^m), where m is the maximum depth of any node.
b. The space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
c. The DFS algorithm is complete within a finite state space.
d. The DFS algorithm is non-optimal, as it may generate a large number of steps.

Advantages:

DFS requires very little memory. It takes less time to reach the goal node.

Disadvantages:

There is no guarantee of finding the solution.

3. DLS:
a. The traversal path will be S → A → C → D → B → I → J
b. The time complexity of the DLS algorithm is O(b^l), where l is the depth limit.
c. The space complexity of the DLS algorithm is O(b×l).
d. The DLS algorithm is complete if the solution is above the depth limit.
Advantages:
DLS is memory efficient.
Disadvantages:
DLS also has the disadvantage of incompleteness.
4. UCS:
a. The traversal path will be S → A → D → G
b. The worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the minimum edge cost.
c. The worst-case space complexity of uniform-cost search is also O(b^(1 + ⌊C*/ε⌋)).
d. Uniform-cost search is optimal.
Advantages:
At every state the path with the least cost is chosen, which is why uniform-cost search is optimal.
Disadvantages:
The algorithm may be stuck in an infinite loop.
5. IDDFS:
a. IDDFS algorithm is optimal.
b. This algorithm is complete.
c. The space complexity of IDDFS will be O(bd).
d. The worst-case time complexity is O(b^d).

b-branching factors d-depth.

Advantages:

It combines the benefits of the BFS and DFS search algorithms.

Disadvantage:

The main drawback of IDDFS is that it repeats all the work of the previous phase.

6. BSA:
a. The time complexity of bidirectional search using BFS is O(b^d).
b. The space complexity of bidirectional search is O(b^d).
c. Bidirectional search is optimal.
d. Bidirectional search is complete if we use BFS in both searches.

Advantages:

It is fast and requires less memory.

Disadvantages:

Implementation of the bidirectional search tree is difficult.

Informed search:

 Informed search algorithms in AI use domain expertise.


 It is also known as Heuristic search.
 It is a method that, while not always guaranteed to find the best solution, can often find a good solution in a
reasonable amount of time.
i. Greedy Best-First Search:
Greedy Best-First is an informed search algorithm where the evaluation
function is strictly equal to the heuristic function.
The solution from a greedy best-first search may not be optimal, since a
shorter path may exist.
It is called “greedy” because at each step it tries to get as close to the goal as it
can.
The evaluation function f(x) for the greedy best-first search is

f(x) = h(x), i.e. the evaluation function is equal to the heuristic function.

h(x) evaluates each successive node based on how close it is to the target node.

Even though the total cost for the path (A → C → E) evaluates to 10, which is the lowest cost, greedy best-first search may ignore this path, because it does not consider the edge weights.

A*Search:

A* tree search, also known simply as A* search, combines the strengths of uniform-cost search and
greedy search.

A heuristic function is used to determine the estimated cost, i.e. the estimated distance
between the current node and the desired node.

A* search determines the lowest cost between any two nodes. This algorithm is a variant of
Dijkstra's algorithm.

The evaluation function for A* is f(x) = g(x) + h(x), where g(x) is the actual cost from the start node to node x and h(x) is the estimated cost from x to the goal node.
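A minimal sketch of A* search in Python, ordering the frontier by f(x) = g(x) + h(x); the graph, edge costs and heuristic values below are illustrative assumptions of this sketch.

import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f = g + h."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + h[neighbour], new_g,
                                          neighbour, path + [neighbour]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}             # assumed admissible heuristic
print(a_star(graph, h, "S", "G"))                # (4, ['S', 'A', 'B', 'G'])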

Constraint Satisfaction Problems (CSP):

A constraint satisfaction problem is defined by variables, domains and constraints; those three components make up a constraint satisfaction technique in its entirety.

The elements of CSPs:

CSP’s are characterized by 3 main components:

 Variables
 Domains
 Constraints
1) Variables:
Variables represent the entities or components of the problem that need to be
assigned values. For example, in a scheduling problem, variables might represent time slots
or tasks.
2) Domains:
Each variable has an associated domain, which defines the set of values that the
variable can take. For instance, in scheduling, the domain of a time-slot variable
might be a list of available times.
3) Constraints:
Constraints are the rules or conditions that specify relationships between variables.
They restrict the combinations of values that the variables can take. Constraints can be
unary (over a single variable) or binary (over two variables).
Example:
1) Sudoku puzzles: In Sudoku, the variables are the empty cells, the domains are the
numbers from 1 to 9, and the constraints ensure that no number is repeated in a
row, column or 3×3 subgrid.
2) Map Colouring : In the map colouring problem, Variables represent regions or
countries, domains represent available colours, and constraints ensure that adjacent
regions must have different colours.
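A minimal backtracking sketch of the map-colouring CSP in Python; the regions, adjacency constraints and colour domain used here are assumptions for illustration.

def consistent(region, colour, assignment, neighbours):
    """Binary constraint: adjacent regions must get different colours."""
    return all(assignment.get(n) != colour for n in neighbours[region])

def backtrack(assignment, regions, domains, neighbours):
    if len(assignment) == len(regions):
        return assignment
    region = next(r for r in regions if r not in assignment)   # pick an unassigned variable
    for colour in domains[region]:
        if consistent(region, colour, assignment, neighbours):
            assignment[region] = colour
            result = backtrack(assignment, regions, domains, neighbours)
            if result:
                return result
            del assignment[region]                              # undo and try the next value
    return None

regions = ["WA", "NT", "SA", "Q"]
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
domains = {r: ["red", "green", "blue"] for r in regions}
print(backtrack({}, regions, domains, neighbours))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red'}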

Local Search Strategies:

Local search algorithms are optimization algorithms that are used to find the best possible solution
within a given search space.

Working of a Local Search Algorithm

The basic working principle of a local search algorithm involves the following steps.

Initialization:

Start with an initial solution, which can be generated randomly or through some heuristic
method.

Evaluation:

Evaluate the quality of the initial solution using an objective function or a fitness
measure.

Neighbour Generation:

Generate a set of neighbouring solutions by making minor changes to the current solution.
These changes are typically referred to as “moves”.
Selection:

Choose one of the neighbouring solutions based on a criterion, such as the
improvement in the objective function value. This step determines the direction in which
the search proceeds.

Termination:

Continue the process iteratively, moving to the selected neighbouring solution and repeating
steps 2 to 4 until a termination condition is met.

Example

1) Hill climbing:
a) Hill climbing is a straightforward local search algorithm that starts with an
initial solution and iteratively moves to the best neighboring solution that
improves the objective function.
b) Initialization, Evaluation, Neighbour Generation, Selection, Termination (e.g.
reaching a maximum number of iterations or finding a satisfactory solution).
c) Hill climbing has a limitation in that it can get stuck in local optima, which
are solutions that are better than their neighbours but not necessarily the best
overall solution.
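A minimal hill-climbing sketch in Python; the objective function and the way neighbours are generated are assumptions of this sketch.

def hill_climbing(objective, start, step=1, max_iters=1000):
    """Repeatedly move to the best neighbour until no neighbour improves."""
    current = start
    for _ in range(max_iters):
        neighbours = [current - step, current + step]      # neighbour generation
        best = max(neighbours, key=objective)               # selection
        if objective(best) <= objective(current):           # termination: local optimum
            return current
        current = best
    return current

# Illustrative objective with a single peak at x = 7
objective = lambda x: -(x - 7) ** 2
print(hill_climbing(objective, start=0))                     # 7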

Types of Local Search Algorithm:

 Hill climbing: It repeatedly makes small improvements to the current solution by


moving to the neighbouring solution with the highest objective function value.
 Simulated Annealing: It allows the algorithm to accept moves that worsen the objective function, which helps it escape local optima.
 Tabu Search: It keeps track of recently visited solutions and avoids revisiting them to
avoid getting stuck in cycles.
 Genetic Algorithm: It uses natural selection and genetic operators to iteratively
generate and improve a population of candidate solutions.
 Local Beam Search:
 This is a variation of beam search that maintains k states instead of one and performs
a local search on all k states.
UNIT – II
REASONING:
 Reasoning is the mental process of deriving logical conclusions and making
predictions from available knowledge, facts, and beliefs.
 “Reasoning is a way to infer facts from existing data”. It is a general process of
thinking rationally, to find valid conclusions.
 In AI the reasoning is essential so that the machine can also think rationally as a
human brain and can perform like a human.

Types Of Reasoning:
 Deductive Reasoning
 Common Sense Reasoning
 Inductive Reasoning
 Monotonic Reasoning
 Abductive Reasoning
 Non-Monotonic Reasoning

1. Deductive Reasoning:
Deductive Reasoning is deducing new information from logically related known
information. It is the form of valid reasoning which means the argument’s conclusion must be
true when the premises are true.

In Deductive Reasoning, the truth of the premises guarantees the truth of the
conclusion.

Example:
Premise-1: All humans eat veggies.

Premise-2: Suresh is human.

Conclusion: Suresh eats veggies.


2. Inductive Reasoning:
Inductive Reasoning is a form of reasoning to arrive at a conclusion using limited sets
of facts by the process of generalization. It starts with the series of specific facts or data and
reaches to a general statement or conclusion.

Inductive Reasoning is a type of propositional logic, which is also known as cause-


effect reasoning or bottom-up reasoning.

In Inductive Reasoning premises provide probable supports to the conclusion, so the


truth of premises does not guarantee the truth of the conclusion.

Example:
Premise: All of the pigeons we have seen in the zoo are white.

Conclusion: Therefore, we can expect all the pigeons to be white.

3. Abductive Reasoning:
Abductive Reasoning is a form of logical reasoning which starts with single or multiple
observations and then seeks to find the most likely explanation or conclusion for the observations.

Example:
Implication: Cricket ground is wet if it is raining.

Axiom: Cricket ground is wet.

Conclusion: It is raining.

4. Common Sense Reasoning:


Common Sense Reasoning is an informal form of reasoning, which can be gained
through experiences.

Common Sense Reasoning simulates the human ability to make presumptions about
events which occur every day.

Example:
1. One can be at one place at a time.

5. Monotonic Reasoning:
In this reasoning, once a conclusion is drawn, it will remain the same even if we
add other information to the existing information in our knowledge base. Monotonic reasoning is not useful for
real-time systems.

In this, adding knowledge does not decrease the set of propositions that can be derived.

It is used in conventional reasoning systems, and logic-based systems are monotonic.

Example:
Earth revolves around the sun.

It is a true fact, and it cannot be changed even if we add another sentence in knowledge
base like, “The Moon revolves around the Earth”, or “Earth is not round”, etc.

6. Non-Monotonic Reasoning:
In Non-Monotonic Reasoning some conclusions may be invalidated if we add some
more information to our knowledge base.

Non-Monotonic Reasoning deals with incomplete and uncertain models.

Example:
1) Birds can fly.
2) Penguins cannot fly.
3) Pitty is a bird.

We conclude “Pitty can fly”.

If we add another sentence to the knowledge base, "Pitty is a penguin", it concludes
"Pitty cannot fly", which invalidates the above conclusion.

2) Symbolic Reasoning:
Symbolic Artificial Intelligence (AI) is a sub-field of AI that focuses on the processing
and manipulation of symbols or concepts, rather than numerical data.

Example: If the patient reports having a fever, the system might use the following rule.
If patient has a fever AND patient has a cough AND patient has difficulty in breathing
THEN patient may have pneumonia.

Uncertainty:
Knowledge representation with first-order logic and propositional logic is based on
certainty, means we are sure about the predicates.

Example:

A  B, which means if A is true then B is true, but what about the situation where we
are not sure about whether A is true or not then we cannot express this statement, this
situation comes under uncertainty.

Causes Of Uncertainty:
1) Information occurred from unreliable sources.
2) Experimental Errors.
3) Equipment Fault.
4) Temperature Variation.
5) Climate Change.

Probabilistic Reasoning:
Probabilistic reasoning is needed when there are:
 Unpredictable outcomes.
 Predicates that are too large to handle.
 Unknown errors.

There are two methods:

 Bayes’ rule.
 Bayesian Statistics.

Bayes’ Rule:
 It determines the probability of an event with uncertain knowledge.
 It allows us to calculate the value of P(B/A) with the knowledge of P(A/B).

P(A/B) = P(B/A) · P(A) / P(B)
 P (A) is called the Prior Probability. Probability of hypothesis before considering the
evidence.
 P (B) is called Marginal Probability, pure probability of an evidence.

P(Ai/B) = [P(Ai) · P(B/Ai)] / [Σ (i = 1 to k) P(Ai) · P(B/Ai)]

Example:
A Doctor is aware that disease meningitis causes a patient to have a stiff neck, and it
occurs 80% of the time. He is also aware of some more facts, which are given as follows:

The known probability that a patient has meningitis disease is 1/30,000. The known
probability that a patient has a stiff neck is 2%.

P(a/b) = 0.8
P(b) = 1/30000
P(a) = 0.02
P(b/a) = P(a/b) · P(b) / P(a) = (0.8 × (1/30000)) / 0.02 = 0.001333
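A minimal sketch of the same calculation in Python, using the figures from the example above.

def bayes(p_a_given_b, p_b, p_a):
    """Bayes' rule: P(b|a) = P(a|b) * P(b) / P(a)."""
    return p_a_given_b * p_b / p_a

# Probability that a patient with a stiff neck (a) has meningitis (b)
p_b_given_a = bayes(p_a_given_b=0.8, p_b=1 / 30000, p_a=0.02)
print(round(p_b_given_a, 6))   # 0.001333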

Applications Of Bayes’ Theorem:


 It is used to calculate the next step of the robot when the already executed step is
given.
 Bayes' Theorem is helpful in weather forecasting.
 It can solve the Monty Hall problem.

Bayesian Network:
 A Bayesian Network is a probabilistic graphical model which represents a set of variables
and their conditional dependencies using a directed acyclic graph.

Bayesian Network consist of two parts:

1) Directed Acyclic Graph.


2) Table of Conditional Probabilities.
 A Bayesian Network graph is made up of nodes and arcs.

[Diagram: a directed acyclic graph over random variables A, B, C, D]

 In the diagram, A, B, C and D are random variables represented by the nodes of the network graph.
 If node B is connected with node A by a directed arrow, then node A is called the parent of node B.
 Node C is independent of node A.

3. Slot And Filler Structures:


Monotonic Reasoning can be performed more effectively than with pure logic and
Non-Monotonic Reasoning is easily supported.

Weak Slot And Filler Structure:


These structures are called "knowledge-poor" or "weak" because very little importance is given to the specific knowledge
the structure should contain.
Attribute = Slot, and its Value = Filler

Semantic Network:
 A Semantic Network (SN) is a simple notation scheme for logical knowledge
representation.
 An SN consists of concepts and the relations between them.
 Representing a SN with a directed graph,
1) Vertices: denote concepts.
2) Edges: represent relation between concepts.
 The graphical depiction associated with a SN is a significant reason for their
popularity.
 Semantic Networks can show natural relationships between objects/concepts.
 Uses of Semantic Nets,
1) Coding static World Knowledge.
2) Built-in fast inference method.
3) Localization of information.

Example:
“A Sparrow is a bird”.
Two concept: “Sparrow” and “bird”.
Sparrow is a kind of bird, so connect the two concepts with an IS-A
relation.
 This is a higher-lower relation or abstract-concrete relation.

Frames:
 Frame is a collection of attributes called as slots and associated values that describe
some entity in the world (filler).
 It contains information as attributes – value pairs, default values, etc.
Example:
An example frame corresponding to the semantic net:
(Tweety
(SPECIES (VALUE bird))
(COLOUR (VALUE yellow))
(ACTIVITY (VALUE fly)))

Strong Slot And Filler Structure:


 Conceptual Dependency.
 Scripts.
 CYC.
 Conceptual Dependency:
 Conceptual Dependency originally developed to represent knowledge acquired from
natural language input.
 CD is collection of symbols which are used to represent knowledge.
 CD is a theory of how to represent the kind of knowledge about events that is usually
contained in natural language sentences.

Various Primitives (Symbols) used in CD:

ATRANS – Transfer of an abstract entity. (E.g.: Give).


PTRANS – Transfer of a physical location of an object. (E.g.: go).
PROPEL – Applying of physical force to an object. (E.g.: push).
MTRANS – Transfer of Mental information. (E.g.: tell).
MBUILD – Building new information out of old. (E.g.: Decide).
SPEAK – Utter a sound. (E.g.: Say).
ATTEND – Focus a sense on a stimulus. (E.g.: Listen, watch).
INGEST – Actor ingesting an object. (E.g.: eat).
MOVE – Movement of a body part by owner. (E.g.: punch, kick).
GRASP – Actor grasping an object. (E.g.: catch).
EXPEL – Actor getting rid of an object from body. (E.g.: cry, sweat).

Six Primitive Conceptual Categories:


 PP => Real World objects.
 ACT => Real World actions.
 PA => Attributes of objects.
 AA => Attributes of actions.
 T => Times.
 LOC => Locations.

Example:

 It should be noted that this representation is the same for different sentences with the same
meaning.

Example:

 I gave the man a book,


 The Man got book from me,
 The Book was given to man by me, etc.
 SCRIPTS:
 A script is a structure that describes a sequence of events in a particular context.
 Scripts are frame-like structures used to represent commonly occurring experiences,
such as going to the movies, shopping in a supermarket, eating in a restaurant, banking, etc.
 A script consists of a set of slots and the information (knowledge) contained in them.

Various Components of Scripts are:

Script Name: Title.


Track: Variations on the script. Roles: People involved in the events described in the script.
Entry Condition: The pre-situation required to execute the script.
Props: Non-living objects involved in the script.
Scenes: The actual sequence of events that occur.
Result: Conditions that will be true after the events described in the script have occurred.
 CYC:
 An ambitious attempt to form a very large knowledge base aimed at capturing
commonsense reasoning.
 The initial goal was to capture the knowledge contained in a hundred randomly selected encyclopedia articles.
 Both implicit and explicit knowledge encoded.

Example:

Suppose we read that “Wellington learned of Napoleon’s death”, then we (humans)


can conclude Napoleon never knew that Wellington had died.

We require special implicit knowledge or commonsense such as

 We only die once.


 You stay dead.
 You cannot learn of anything when dead.
 Time cannot go backward.

UNIT-III

KNOWLEDGE REPRESENTATION

The way knowledge is encoded and how machines use it to reason come under knowledge
representation and reasoning.

Kinds of knowledge which need to be represented in an AI system:

I. Object : All the facts about objects in our world domains.


II. Events: Events are actions which occur in our world.
III. Performance: It describes behavior which involves knowledge about how to do
things.
IV. Meta-knowledge: It is knowledge about what we know.
V. Facts: Facts are the truths about the real world and what we represent.
VI. Knowledge base: The central component of the knowledge-based agent is the
knowledge base.

Type of knowledge Representation

 Declarative knowledge(object facts)


 Heuristic knowledge(Rule of thumb)
 Meta knowledge(knowledge about knowledge)
 Procedural knowledge(Rules procedure)
 Structural knowledge(Relationship between object, concept)

I. Declarative knowledge
 It includes concepts, facts and objects.
 It is also called descriptive knowledge and is expressed in declarative sentences.
 It is simpler than procedural knowledge.
Example:
Medical diagnosis: Declarative knowledge includes a knowledge base of
symptoms, diseases and their relationships, enabling a system to provide
accurate diagnoses
II. Heuristic knowledge
 Heuristic knowledge is representing knowledge of some experts in a field or
subject.
 It consists of rules of thumb based on previous experience and awareness of approaches,
which work well but are not guaranteed.
Examples:
A study of planning, tagging and learning.
III. Meta knowledge
 Knowledge about the other types of knowledge is called meta-knowledge.
Example:
A study at planning, tagging and learning.
IV. Procedural knowledge
 It is also known as imperative knowledge.
 It is a type of knowledge which is responsible for knowing how to do
something.
 It includes rules, strategies, procedures, agendas, etc.
Example
Writing a sorting algorithm: procedural knowledge involves understanding the
specific steps, such as bubble sort or quicksort, used to sort a list of elements.
V. Structural knowledge
 It describes relationships between concepts, such as kind-of, part-of, and the
grouping of something.
 It describes the relationship that exists between concepts or objects.
Example:
A Computer keyboard has keys and an acoustic instrument has strings.

Ai Knowledge Cycle

An AI system has the following components

 Perception
 Learning
 Knowledge representation and reasoning
 Planning
 Execution

The diagram below shows how an AI system interacts with the real world and which components help
it to show intelligence.

[Diagram: Perception → Learning → Knowledge Representation & Reasoning → Planning → Execution]
 An AI system has a perception component by which it retrieves information from its
environment. It can be visual, audio or another form of sensory input.
 The learning component is responsible for learning from the data captured by the perception
component.
 The main components are knowledge representation and reasoning; they are involved in
showing human-like intelligence in machines. These components are independent of each
other but also coupled together.
 The planning and execution depend on analysis of knowledge representation and
reasoning.

Knowledge Representation Techniques

 Logical representation
 Semantic Network representation
 Production rules
 Frame representation
 Logical representation
i. Logical representation
 Logical representation is a language with some definite rules which
deal with propositions and have no ambiguity in representation.
 It consists of defined syntax and semantics.

SYNTAX: It refers to the formal structure or arrangement of symbols and rules.
SEMANTICS: It refers to the meaning conveyed by syntactic structures.

Advantages

 It helps to perform logical reasoning.

Disadvantage

 This technique may not be very natural, and inference may not be very efficient.
ii. Semantic network representation
 Semantic networks work as an alternative to predicate logic for knowledge
representation. In semantic networks, we can represent our knowledge in the form of
graphical networks.
 It consists of two types of relations:
 IS-A relation (Inheritance)
 Kind-of relation

Example:

[Semantic network diagram: "car" and "pick-up truck" connected by a Kind-of relation; "Audi" and "BMW" connected to "car" by IS-A relations; each Has a "headlight"]
Advantages:

 It conveys meaning in a transparent manner.

Disadvantage:

 It takes more computational time at runtime.


iii. Frame representation
 A Frame is a record like structure that consists of a collection of attributes and values
to describe an entity in the world.
 It consists of a collection of slots and slot values of any type and size.

Advantage:

 It is easy to understand and visualize.

Disadvantage:

 In frame system inference the mechanism cannot be easily processed.


iv. Production Rules
 In production rules, the agent checks for the condition, and if the condition exists, then the
production rule fires and the corresponding action is carried out.
 The production rules system consists of 3 main parts.
 The set of production rules
 Working Memory
 The recognize-act-cycle.

Advantage:

 The Production rules are highly modular and can be easily removed or modified.

Disadvantage:

 During the execution of the program many rules may be active. Thus rule-based
production systems are inefficient.

Issues In Knowledge Representation

 The issues that arise while using KR techniques are many.


 Important attributes
There are two attributes, "instance" and "isa", that are of general significance. These
are important because they support property inheritance.
 Single value attributes
This is about a specific attribute that is guaranteed to take a unique value.
 Finding the right structures
This is about access to the right structure for describing a particular situation.
It requires selecting an initial structure and then revising the choice.

Predicate Logic

 A predicate is an expression of one or more variables defined over some
specific domain. A predicate with variables can be made a proposition by either
assigning a value to the variable or by quantifying the variable.
 A predicate is a truth assignment given for a particular statement, which is
either true or false. Predicate logic is used to solve common-sense problems by computer systems.

Logic symbols used in predicate logic

∀ - For all ˅- OR

∃ -There exists ˄-AND

→ - Implies
¬ -Not

Example

 ∀x (Cat(x) → Mammal(x)): All cats are mammals.

First-Order Logic

It is another way of knowledge representation in AI. It is an extension to propositional logic.

Basic components of first-order logic

1. Constants: Objects in the domain of discourse.


Example: a, b, c
2. Variables: Symbols that can represent any object in the domain.

Example: x, y, z

3. Predicates: Functions that return true or false, depending on the objects to which they
are applied.

Example: P(x), Q(x, y)

4. Functions: Map objects to other objects within the domain.

Example: f(x), g(x, y)

5. Logical connectives:
 Conjunction (˄): P˄Q (P and Q )
 Disjunction (˅):P˅Q (P or Q)
 Negation(¬):¬ P (not P)
 Implication (→): P → Q (if P then Q)
 Biconditional(↔):P↔ Q(P if and only if Q)
6. Quantifiers
 Universal Quantifier(∀):∀xP(x)
(for all x, P(x) is true)
 Existential Quantifer(∃):∃xP(x)
(there exists an x such that P(x) is true)

Knowledge Representation Using Rules


 Knowledge representation using rules in AI involves using a set of rules to
represent knowledge in a formal and explicit way.
 These rules are used to reason and make decisions based on the knowledge
represented.

Rules-Based Systems

It consists of a set of rules, a working memory and an inference engine. The rules are used to
reason about the knowledge in the working memory to derive conclusion or make decisions.

Types of Rules

1) Production Rules
If-then rules that specify a conclusion or action based on conditions
Example: If it is raining, then take an umbrella.
2) Semantic Rules:
Define relationships between concepts and Objects.
Example: A car is a type of vehicle.
3) Procedural Rules:
Specify a sequence of actions to perform a task.
Example: To make a cup of coffee, first, boil water, then add coffee powder, sugar,
milk etc..
Components of a Rule:

1.Antecedent: The condition or premise of the rule.

2. Consequent: The conclusion or action of the rule.

3. Confidence Factor: A measure of the rule's certainty or reliability.

Advantages:

1) Easy to represent knowledge


Rules are easy to understand and represent knowledge in a structured way.
2) Reasoning and decision Making
Rules enable reasoning and decision making capabilities.
3) Flexibility
Rules can be easily added, removed or modified.
Syntactic Representation

Syntactic representation refers to the formal structure or arrangement of symbols and rules
used to construct valid statements or expressions in a given language.

Key aspects of syntactic representation

 Symbols: Basic units of representation.


(e.g., Characters, words, tokens)
 Grammar : A set of rules that define the structure of valid expressions.
 Syntax trees: Hierarchical structures that represent the syntactic structure of
expressions.
 Parsing: The process of analyzing a string of symbols according to the grammar
rules.

Examples:

Syntax: ∀ x (P(x)→Q (x))

The structure and placement of quantifiers predicates, connectives follow specific


syntactic rules.

Semantic Representation

Semantic representation refers to the meaning conveyed by syntactic structures. It deals


with the interpretation of symbols expressions and their relationships.

Key aspects of semantic representation

 Meaning: What the symbols and structures represent.


 Interpretation: Assigning meaning to syntactic elements.
 Models: Structures that satisfy the interpretations of the syntactic elements.
 Ontologies: Formal representations of a set of concepts within a domain and the
relationships between those concepts.
Examples:
Semantics: ∀ x (P(x)→Q (x))
 Here, P(x) might mean “ x is a human” and Q(x) might mean
“ x is mortal. ”
 The statement means “For all x, if x is a human, then x is mortal.”
Logical And Slot Fillers

 Logic is used to represent knowledge using logical statements, rules and inference
techniques.
 It provides a formal way to reason about knowledge, making it possible to draw
conclusions, make decisions and solve problems.
 A slot is an attribute-value pair in its simplest form. A filler is a value that a slot can take;
it could be a numeric value, a string, or a pointer to another frame.
 Slot fillers are used to represent knowledge using frame-based structures.
 Slots represent attributes or properties.
 Fillers represent the values or instances of those attributes.

Slot fillers are often used in semantic networks, frames, and ontologies to represent
knowledge in a structured and organized way.

Examples:

Frame: person

Slots: name, age, occupation

Fillers: John, 30, doctor

For e.g : Consider a knowledge base that represents information about people, including their
name, age and occupation.

Logic can be used to represent rules such as

 "If a person is a doctor, then they have a medical degree."


 “If a person is over 30, then they are considered experienced.”

Slot fillers can be used to represent the attributes and values of individual people,such as

 John: name=John, age=30,occupation=doctor.


 Jane: name=Jane, age=25,occupation=engineer.

By combining logic and slot fillers, the knowledge base can reason about the knowledge and make inferences, such as

 John has a medical degree because he is a doctor.


 Jane is not considered experienced because she is under 30.
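A minimal sketch in Python of combining frames (slot-filler structures) with simple rules; the frame contents mirror the John/Jane example above, while the helper names and the rule encoding are assumptions of this sketch.

# Frames: each person is a set of slot -> filler pairs
frames = {
    "John": {"name": "John", "age": 30, "occupation": "doctor"},
    "Jane": {"name": "Jane", "age": 25, "occupation": "engineer"},
}

# Rules expressed as (condition, conclusion) pairs over a frame
rules = [
    (lambda f: f["occupation"] == "doctor", "has a medical degree"),
    (lambda f: f["age"] > 30, "is considered experienced"),
]

def infer(frame):
    """Return the conclusions whose conditions hold for this frame."""
    return [conclusion for condition, conclusion in rules if condition(frame)]

for name, frame in frames.items():
    print(name, infer(frame))
# John ['has a medical degree']
# Jane []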
Game Playing

Game playing is an important domain of artificial intelligence. Games don't require much
knowledge; the only knowledge we need to provide is the rules, the legal moves, and the
conditions of winning or losing the game.

Computer programs which play two-player games:

 Game-playing as search
 With the complication of an opponent

General principles of game- playing and search

 Evaluation functions
 Minmax principle
 Alpha-beta pruning
 Heuristic techniques
 Generate Procedure : So that only good moves are generated.
 Test procedure: So that the best move can be explored first.
TWO MAIN APPROACHES
 Rule-based Systems
 Machine Learning-based systems

Rule-based systems

 Rule-based systems, a cornerstone of game playing in AI, rely on a predefined set of
rules to govern the behavior of AI agents during gameplay.
 Rule based approaches are particularly effective in games with well-defined rules
and relatively simple decision trees, such as Tic-Tac-Toe.

Machine Learning-based systems

 Machine Learning-based systems, on the other hand, represent a paradigm shift in


game playing in AI.
 Instead of relying on fixed rules, these systems utilize algorithms to learn from
experience and adapt their strategies accordingly.

Advantages of Game playing

 Enhanced strategic thinking


 Adaptive Learning
 Real-world relevance
 Efficient Decision-making

Disadvantage of Game playing

 Computational Complexity
 Limited Generalization
 Lack of Creativity
 Complexity of Game Rules

Game: TIC-TAC-TOE

Computer to play a game against a human opponent.

Objective: Max aims to win or draw the game by making optimal moves

AI Program:Max

Game play

Initial state: The game starts with an empty board.

Max's turn: Max makes the first move, placing its symbol (X) in position 5.

Human's turn: The human places their symbol (O) in position 2.

Max's turn: Max analyzes the board and decides to place its symbol (X) in position 9.

Human's turn: The human places their symbol (O) in position 8.

Max's turn: Max analyzes the board again and decides to place its symbol (X) in position 6.

Human's turn: The human places their symbol (O) in position 3.

Max's turn: Max analyzes the board and decides to place its symbol (X) in position 7.

Game over: Max wins the game!

Minimal Search

Minimal search in knowledge representation is a technique used in AI to find the shortest


path to a goal state.

Key Components

1) Graph: A graph represents the search space, consisting of nodes & edges.
2) Initial State: The starting point of the search.
3) Goal State: The desired outcome or solution.

Minimal Search Works

1) Initialize: Start at the initial state


2) Evaluate: Calculate the heuristic value for each neighbouring node.
3) Select: Choose the node with the minimum heuristic value.
4) Expand: Move to the selected node and repeat steps 2-4
5) Goal Test: Check if the current node is the goal state
6) Solution: If the goal is reached, reconstruct the path from the initial state to the goal
state.
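A minimal sketch of these steps in Python, where the frontier is ordered by the heuristic value so the most promising node is expanded first; the graph and heuristic values are assumptions for illustration.

import heapq

def minimal_search(graph, heuristic, start, goal):
    """Expand the neighbouring node with the minimum heuristic value first."""
    frontier = [(heuristic[start], [start])]      # (heuristic value, path)
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)         # select: minimum heuristic value
        node = path[-1]
        if node == goal:                          # goal test
            return path                           # solution: the reconstructed path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):     # expand neighbouring nodes
            heapq.heappush(frontier, (heuristic[neighbour], path + [neighbour]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}
print(minimal_search(graph, heuristic, "A", "D"))   # ['A', 'C', 'D']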

Applications:

1) Pathfinding: Finding the shortest path in video games, GPS navigation, robotics.
2) Problem solving: Solving puzzles, such as the sliding-tile puzzle.

Advantage:

1) Optimality: Guarantees the shortest path to the goal state.


2) Efficiency: Reduces the search space by focusing on promising nodes.

Alpha-Beta Cutoff

Alpha cutoffs are applied by the maximizing player and cut off moves at the minimizing level.
Beta cutoffs are applied by the minimizing player (the opponent) and cut off moves at the maximizing
level.

Alpha-Beta Cutoff

When the algorithm determines that a node's score is outside the alpha-beta window, it stops
exploring that branch of the tree. This is known as an alpha-beta cutoff.

Two Parameters

ALPHA: The best (highest-value) choice we have found so far at any point along the path of the
maximizer. The initial value of alpha is -∞.

BETA: The best (lowest-value) choice we have found so far at any point along the path of the
minimizer. The initial value of beta is +∞.

How It works:

Consider a game tree with following nodes

Node A (score=5)

|-- Node B(score=3)

|-- Node C(score=7)

|-- Node D(score=2)

We have a game tree with four nodes: A, B, C and D. Node A is the root node, and nodes B, C and D are its children.

Alpha (α) = 4

Beta (β) = 8

We start by evaluating node A. Since its score (5) is within the alpha-beta window (4–8), we
expand its children.

Next, we evaluate node B. Its score(3) is less than alpha(4), so we can prune this branch.

This is an alpha-beta cutoff

We don’t need to evaluate nodes C and D because the score of node B is already outside the
alpha-beta window
Diagram:

[Game tree: root A (score 5) with children B (score 3), C (score 7), D (score 2)]

In this diagram, the alpha-beta cutoff occurs at node B because its score (3) is less than
alpha(4). We prune the branch and don’t evaluate nodes C and D.

After evaluating a few nodes, we reach a node with a score of 3, which is less than alpha (4).
We can stop exploring that branch.

By applying alpha-beta cutoff, we reduce the number of nodes to be evaluated making the
algorithm more efficient.

Iterative Deepening Planning

 Iterative deepening planning is a planning technique that systematically explores the


search spaces by gradually increasing the depth limit until a solution is found.
 It starts with a shallow depth limit and iteratively increases it until the goal is reached.

How it works

1) Initialize the depth limit to a shallow value (e.g. 1).


2) Perform a BFS or DFS up to the current depth limit to find a plan
3) If the goal is not found, increase the depth limit by 1 and repeat step 2
4) Continue this process until the goal is found or the maximum depth limit is
reached.

Key Components

1. Planning problems: A planning problem consists of an initial state, a goal state, and a
set of actions.
2. Actions: A plan is a sequence of actions that achieves the goal.

Advantages:

 Complete: Iterative deepening planning is a complete algorithm, meaning it will


always find a solution if one exists.
 Optimal: It is an optimal algorithm, meaning it will find the shortest plan to the
goal.

Example:

Initial State: Robot is at location A

Goal State: Robot is at location C

Actions:

Actions: Move Forward

Preconditions: Robot is at location A

Effects: Robot is at location B

Actions: Turn left

Preconditions: Robot is at location B

Effects: Robot is at location C

Using iterative deepening planning, we can generate a plan to achieve the goal.

Depth limit=1:

Move Forward

Depth limit=2:

Move Forward

Turn left

… and so on, until the goal is reached.

In this example, iterative deepening planning is used to generate a plan to move the robot
from location A to location C. The depth limit is gradually increased until the goal is
achieved.
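A minimal sketch of iterative deepening over action sequences for the robot example in Python; the action encoding (name, precondition state, effect state) is an assumption of this sketch, not a standard planner API.

# Actions as (name, precondition_state, effect_state)
actions = [
    ("Move Forward", "A", "B"),
    ("Turn Left", "B", "C"),
]

def plan_dfs(state, goal, limit):
    """Depth-limited search over action sequences."""
    if state == goal:
        return []
    if limit == 0:
        return None
    for name, pre, eff in actions:
        if pre == state:
            rest = plan_dfs(eff, goal, limit - 1)
            if rest is not None:
                return [name] + rest
    return None

def iterative_deepening_plan(start, goal, max_depth=10):
    """Increase the depth limit until a plan is found."""
    for limit in range(max_depth + 1):
        plan = plan_dfs(start, goal, limit)
        if plan is not None:
            return plan
    return None

print(iterative_deepening_plan("A", "C"))   # ['Move Forward', 'Turn Left']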
Component Of Planning System

Planning is the process of coming up with a series of actions or procedures to accomplish a


particular goal.

Types of planning in AI

 Classical planning
 Temporal planning
 Hierarchical planning
 CLASSICAL PLANNING
A series of actions is created to accomplish a goal in a predetermined setting.
It assumes that everything is static and predictable.
 HIERARCHICAL PLANNING
By dividing large problems into smaller ones, hierarchical planning makes
planning more effective.
TEMPORAL PLANNING
 Planning for the future considers time restrictions and interdependencies between
actions. It ensures that the plan is workable within a certain time limit by taking
into account the duration of tasks.

Components of planning systems in AI

A planning system in AI is made up of many crucial parts that cooperate to produce


successful plans.

 Representation: The Component that describes how the planning problem is


represented is called representation. The state space, actions, objectives and
limitations must all be defined.
 Search: To locate the best plans, a variety of search techniques, including depth-first
search & A*search, can be used.
 Heuristics: It is used to direct search efforts and gauge the expense or benefit of
certain actions.

Benefits of AI planning

 Resource Allocation
 Better Decision Making
 Automation of Complex tasks

Application of AI planning

 Robotics: To enable autonomous robots to properly navigate their surroundings, carry


out activities, achieve goals.
 Gaming: AI planning is essential to the gaming industry because it enables game
characters to make thoughtful choices and design difficult and interesting gameplay
scenarios.

Goal Stack Planning (Gsp)

 Goal Stack Planning(GSP) is a technique used in AI to manage multiple goals by


organizing them in a specific order.
 It works backwards from the goal state to the initial state.
 We keep solving these “goals” and “sub goals” until we finally arrive at the initial
state.
 Apart from the "initial state" and the "goal state", we maintain a "world state"
configuration as well. Goal stack planning uses this world state to work its way from the goal
state to the initial state.

List of Predicates

 ON(A, B): Block A is on B


 ONTABLE(A): A is on table
 CLEAR(A):Nothing is on top of A
 HOLDING(A): Arm is holding A
 ARMEMPTY: Arm is holding nothing

Robot Arm can perform 4 operations

 STACK(X, Y): Stacking Block X on Block Y
 UNSTACK(X, Y): Picking up Block X which is on top of Block Y
 PICKUP(X): Picking up Block X which is on top of the table
 PUTDOWN(X): Putting Block X on the table
Initial state: B is on A; A, C and D are on the table.
Goal state: C is on A and B is on D.

The plan found by goal stack planning is executed as the following sequence of operations:

1 - PICKUP(C)
2 - PUTDOWN(C)
3 - UNSTACK(B, A)
4 - PUTDOWN(B)
5 - PICKUP(C)
6 - STACK(C, A)
7 - PICKUP(B)
8 - STACK(B, D)

After step 8 the goal state is reached.
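A minimal sketch of how the block-world operators above might be encoded as STRIPS-style structures in Python, together with the plan from this example; the dictionary encoding and the exact precondition/add/delete lists are assumptions of this sketch.

# Each operator: preconditions that must hold, predicates added, predicates deleted
operators = {
    "STACK(X, Y)":   {"pre": ["CLEAR(Y)", "HOLDING(X)"],
                      "add": ["ARMEMPTY", "ON(X, Y)"],
                      "del": ["CLEAR(Y)", "HOLDING(X)"]},
    "UNSTACK(X, Y)": {"pre": ["ON(X, Y)", "CLEAR(X)", "ARMEMPTY"],
                      "add": ["HOLDING(X)", "CLEAR(Y)"],
                      "del": ["ON(X, Y)", "ARMEMPTY"]},
    "PICKUP(X)":     {"pre": ["ONTABLE(X)", "CLEAR(X)", "ARMEMPTY"],
                      "add": ["HOLDING(X)"],
                      "del": ["ONTABLE(X)", "ARMEMPTY"]},
    "PUTDOWN(X)":    {"pre": ["HOLDING(X)"],
                      "add": ["ONTABLE(X)", "ARMEMPTY"],
                      "del": ["HOLDING(X)"]},
}

# The operation sequence from the example above, applied in order
plan = ["PICKUP(C)", "PUTDOWN(C)", "UNSTACK(B, A)", "PUTDOWN(B)",
        "PICKUP(C)", "STACK(C, A)", "PICKUP(B)", "STACK(B, D)"]
for step in plan:
    print(step)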

Example Of Minmax Search Algorithm

I. There are two players: one is called the Maximizer and the other is called the Minimizer.
II. The Maximizer will try to get the maximum possible score, and the Minimizer will try to get
the minimum possible score.
III. This algorithm applies DFS, we have to go all the way through the leaves to reach the
terminal nodes.
IV. At the terminal node, the terminal values are given so we will compare those and
backtrack the tree until the initial state occurs.
V. Main steps involved in solving the two-player game tree.

Step 1: The algorithm generates the entire game-tree and apply the utility function to get
the utility values for the terminal states.

[Game tree: root A (Maximizer); B and C (Minimizer); D, E, F, G (Maximizer); terminal values -1, 4, 2, 6, -3, -5, 0, 7]
Step 2: Now we find the utility values for the Maximizer. Its initial value is -∞, so we
compare each value in the terminal state with the initial value of the Maximizer and determine the
higher node values. It will find the maximum among them all.

 For node D: max(-1, -∞) => max(-1, 4) = 4
 For node E: max(2, -∞) => max(2, 6) = 6
 For node F: max(-3, -∞) => max(-3, -5) = -3
 For node G: max(0, -∞) => max(0, 7) = 7
[Tree after Step 2: D = 4, E = 6, F = -3, G = 7]

Step 3: It is the turn of the Minimizer, so it will compare all node values with +∞ and find the
third-layer node values.

 For node B=min(4,6)=4


 For node C=min(-3, 7)=-3

[Tree after Step 3: B = 4, C = -3]
Step 4: For node A, max(4, -3) = 4.

[Final tree: the value of root node A is 4]

This was the complete workflow of the minimax two-player game.
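A minimal sketch of the minimax procedure in Python, using the terminal values from the example above; encoding the game tree as nested lists is an assumption of this sketch.

def minimax(node, maximizing):
    """Recursively evaluate a game tree given as nested lists of terminal values."""
    if not isinstance(node, list):          # terminal node: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Terminal values from the worked example: D=(-1, 4), E=(2, 6), F=(-3, -5), G=(0, 7)
tree = [[[-1, 4], [2, 6]],                  # subtree under B
        [[-3, -5], [0, 7]]]                 # subtree under C
print(minimax(tree, maximizing=True))       # 4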

ALPHA-BETA PRUNING EXAMPLE-2

Step 1: Max player will start first move from node A where α=-∞ and β=+∞.

[Initial game tree: root A (Max), B and C (Min), D, E, F, G (Max); terminal values 2, 3, 5, 9, 0, 1, 7, 5]

Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is
compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D;
the node value will also be 3.

Step 3: min(∞, 3) = 3, hence the value at node B is 3.

Now at node B, α = -∞ and β = 3.

[Tree after Step 3: node D = 3; node B has α = -∞, β = 3]

Step 4: At node E, it is Max's turn, so the value of alpha will change: max(-∞, 5) = 5, hence
α = 5 and β = 3. Since α >= β, the right successor of E is pruned.
[Tree after Step 4: node E has α = 5, β = 3, so its remaining child is pruned]

Step 5: Backtracking to node A, the alpha value is changed to the maximum available value, max(-∞, 3) = 3, and β = +∞; these values are passed down to node C and then to node F.

Step 6: At node F, max(3, 0) = 3 for the left child (which is 0), and max(3, 1) = 3 for the right child (which is 1).

α remains 3, but the node value of F becomes 1.

[Tree after Step 6: node F = 1]

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = min(+∞, 1) = 1; since α >= β, the remaining successor of C is pruned.


α =3 => Max
A β =∞

α =3 => Min
B C
3 β =1
=> Max

D G => Terminal node


E F
G
2 3 0 1
5
5 9 7

Step 8: Node C now returns the value 1 to A; the value of A is max(3, 1) = 3.

Hence the optimal value for the maximizer is 3.

[Final pruned tree: the optimal value at root A is 3]
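A minimal sketch of alpha-beta pruning in Python on a tree with the terminal values used in this example; the nested-list encoding is an assumption of this sketch.

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta cutoffs on a tree given as nested lists."""
    if not isinstance(node, list):              # terminal node
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                   # cutoff: prune remaining children
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                       # cutoff: prune remaining children
            break
    return value

# Terminal values from this example: D=(2, 3), E=(5, 9), F=(0, 1), G=(7, 5)
tree = [[[2, 3], [5, 9]],                       # subtree under B
        [[0, 1], [7, 5]]]                       # subtree under C
print(alphabeta(tree, maximizing=True))         # 3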

EXAMPLE OF ITERATIVE DEEPENING DEPTH FIRST SEARCH


[Tree: depth 0: A; depth 1: B, C, D; depth 2: E, F, G, H; depth 3: I, J, K, L, M, N; depth 4: O, P, R, S]
The starting node is A and the depth is initialized to 0. The goal node is R; we want to find the depth and the
path needed to reach it.

The tree can be visited as A B E F C G D H (the depth-2 traversal).

DEPTH={0,1,2,3,4}

DEPTH LIMIT    IDDFS TRAVERSAL
0              A
1              A B C D
2              A B E F C G D H
3              A B E I F J K C G L D H M N
4              A B E I F J K O P C G L D H M N S
UNIT-IV

NATURAL LANGUAGE PROCESSING (NLP)

NLP:

NLP is a field of computer science, artificial intelligence and linguistics that studies how
humans and computers interact with language.

Nlp Used For:


NLP is a machine learning technology that gives computers the ability to interpret,
manipulate, and comprehend human language.

Is Nlp Ai Or Ml?
NLP is a subfield of computer science and artificial intelligence(AI) that uses machine
learning to enable computers to understand and communicate with human language.

Is Nlp The Future Of Ai?


NLP stands as an advancing domain with extensive applications across diverse industrial
sectors. Its surge in popularity within these sectors can be attributed to the exponential
growth of AI-driven technology.

Is Nlp A Programming Language?


It is an ontology assisted way of programming in terms of natural language sentences.

Layers Of Natural Language Processing(Nlp).


Lexical Analysis:
 This phase scans the source code as a stream of characters and
converts it into meaningful lexemes.

 It divides the whole text into paragraphs, sentences, and words.

Syntactic Analysis:
 Syntactic analysis is used to check grammar and word arrangement, and shows
the relationships among the words.

Semantic Analysis:
 Semantic analysis is concerned with the meaning
representation. It mainly focuses on the literal meaning of
words, phrases and sentences.
Discourse Integration:
 Discourse integration depends upon the sentences that precede a given sentence
and also invokes the meaning of the sentences that follow it.

Pragmatic Analysis:
 It helps you to discover the intended effect by applying a set of
rules that characterize cooperative dialogues.

Advantages Of Nlp:

 NLP helps users to ask questions about any subject and get a
direct response within seconds.

 NLP helps computers to communicate with humans in their


language.

 It is very time efficient.

Disadvantages Of Nlp:
 NLP is unpredictable.
 NLP may not show context
 NLP may require more keystrokes.
Nlp Techniques:
NLP encompasses a wide array of techniques aimed at enabling computers to
process and understand human language.

• Text processing and preprocessing in NLP (a small preprocessing sketch follows the Question Answering items below)

• Dividing text into smaller units, such as words or sentences (tokenization).

• Reducing words to their base or root forms (stemming/lemmatization).

• Removing common words (like "and", "the", "is") that may not carry
significant meaning (stop-word removal).

• Syntax and parsing in NLP


• Assigning parts of speech to each word in a sentence(eg.,
noun,verb,adjective)

• Analyzing the grammatical structure of a sentence to identify relationships


between words.

• Breaking down a sentence into its constituent parts or phrases(eg., noun


phrases, verb phrases).

• Semantic Analysis
• Identifying and classifying entities in text, such as names of people,
organizations,locations,dates,etc.,

• Determining which meaning of a word is used in a given context.


• Identifying when different words refer to the same entity in a text.(eg., “he”
refers to “John”).

• Text classification in NLP


• Identifying topics or themes within a large collection of documents.
• Classifying text as spam or not spam.

• Question Answering
• Retrieval-based QA: Finding and returning the most relevant text passage in
response to a query.

• Generating an answer based on the information available in a text corpus.
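A minimal sketch of the preprocessing steps listed above (tokenization, crude suffix stripping and stop-word removal) in plain Python; the tiny stop-word list and the suffix rule are assumptions of this sketch, not a full stemmer.

import re

STOP_WORDS = {"and", "the", "is", "on", "a", "an"}     # assumed minimal list

def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def stem(word):
    """Very crude suffix stripping to approximate a base form."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    tokens = tokenize(text)                             # dividing text into words
    tokens = [t for t in tokens if t not in STOP_WORDS] # removing common words
    return [stem(t) for t in tokens]                    # reducing words to root forms

print(preprocess("The cats are sitting on the mats"))   # ['cat', 'are', 'sitt', 'mat']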

Future Scope Of Nlp

• NLP has a promising future and is expected to improve existing


technologies and make interactions with technology more natural.

• It has numerous possibilities and applications, with advancements in fields like
speech recognition, automated machine translation, sentiment analysis and
chatbots.
Future Enhancements Of Nlp
• The future of NLP holds exciting possibilities across various sectors.
• Healthcare: NLP can transform patient care through improved diagnostics,
personalized treatment plans and efficient patient doctor communication.

Syntactic Process In Nlp

 Syntactic processing is the process of analyzing the grammatical structure of


a sentence to understand its meaning.

 This involves identifying the different parts of speech in a sentence, such as nouns,
verbs, adjectives, adverbs and how they relate to each other in order to give proper
meaning to the sentence.

Syntactic Processing Work

“The cat sat on the mat”

“cat” as a noun

“sat” as a verb

“on” as a preposition

“mat” as a noun
It would also involve understanding that “cat” is the subject of the sentence and “mat” is the
object.
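A minimal sketch of tagging the same sentence with a toy dictionary-based tagger in Python; the tag lexicon is an assumption of this sketch (real systems use trained taggers).

# Toy lexicon mapping words to parts of speech
LEXICON = {"the": "DET", "cat": "NOUN", "sat": "VERB",
           "on": "PREP", "mat": "NOUN"}

def pos_tag(sentence):
    """Assign a part-of-speech tag to each word using the lexicon."""
    return [(word, LEXICON.get(word, "UNK")) for word in sentence.lower().split()]

print(pos_tag("The cat sat on the mat"))
# [('the', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB'),
#  ('on', 'PREP'), ('the', 'DET'), ('mat', 'NOUN')]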

Applications Of Syntactic Processing

1. Language translation:

Understanding the syntactic structure of a sentence is crucial for accurate


translation.

2. Sentiment analysis:

Identifying the relationships between words and phrases helps determine


the sentiment of a text.
3. Question answering:

Syntactic processing helps identify the relationships between entities and


actions in a text.

4. Text summarization:

Understanding the syntactic structure of a text helps identify the most important
information.

Semantic Analysis In Nlp


Semantic analysis is a technique used in NLP to help machines understand the
meaning of words, sentences and texts by considering the context of the text.
EXAMPLE:

“The boy ate the apple” defines an apple as a fruit. while,

“The boy went to Apple” define Apple as a brand or store.

Applications Of Semantic Analysis

1. Information Retrieval:

It improves search engines by understanding the meaning behind user


queries and document content.

2. Question Answering:

Semantic analysis helps answer complex questions by identifying relevant


information and relationships in text.

3. Text Summarization:

Semantic analysis summarizes long documents by extracting key concepts


and relationships.

4. Sentiment Analysis:
It determines the sentiment and emotional tone behind text, such as
detecting positive or negative opinions.

5. Named Entity Recognition:

Semantic analysis identifies and categorizes named entities (people,


places, organizations) in text.

6. Relationship Extraction:

Semantic analysis identifies relationships between entities, such as


"person A is a colleague of person B".

Advantages Of Semantic Analysis

1. Improved understanding:

Semantic analysis provides a deeper understanding of the meaning


and context of text, leading to more accurate interpretations.

2. Enhanced information retrieval:

Semantic analysis improves search results by capturing the nuances


of language and returning more relevant information.

3. Better sentiment analysis:

Semantic analysis accurately identifies sentiment and emotional tone,


helping businesses and organizations understand public opinion.

4. More accurate entity recognition:

Semantic analysis identifies and categorizes named entities, improving


data quality and facilitating data integration.
5. Increased accuracy in question answering:

It provides more accurate answers to complex questions by understanding


relationships and context.

Semantic Analysis Works

Elements Of Semantic Analysis


• Hyponyms:

o This refers to a specific lexical entity having a relationship


with a more generic verbal entity called hypernym.

For example: red, blue, and green are all hyponyms of color, their hypernym.

• Meronomy:

o Refers to the arrangement of words and text that denote a


minor component of something.

For example: mango is a meronomy of a mango tree.

• Polysemy:

o It refers to a word having more than one meaning. However, it is represented under one entry.
For example: the term ‘dish’ is a noun. In the sentence, ‘arrange the dishes on the shelf,’
the word dishes refers to a kind of plate.

• Synonyms:

o This refers to similar-meaning words.

For example: abstract (noun) has the synonyms summary and synopsis.

• Antonyms:

o This refers to words with opposite meanings.

For example: cold has the antonyms warm and hot.

• Homonyms:

o This refers to words with the same spelling and


pronunciation, but reveal a different meaning altogether.

For example: bark (tree) and bark (dog).

Tasks Involved In Semantic Analysis


• Word sense Disambiguation

• Relationship extraction

WORD SENSE DISAMBIGUATION:


It refers to an automated process of determining the sense or meaning of the
word in a given context.
Example:

‘Raspberry Pi’ can refer to a fruit, a single-board computer, or even a company (the UK-based foundation). Hence, it is critical to identify which meaning suits the word depending on its usage.
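A simple baseline for this task is the Lesk algorithm, which picks the WordNet sense whose dictionary gloss overlaps most with the surrounding words. The sketch below is illustrative only and assumes nltk with the WordNet and punkt data downloaded; Lesk will not always select the intended sense.

# Minimal sketch: word sense disambiguation with the Lesk algorithm (NLTK).
import nltk
nltk.download("punkt", quiet=True)
nltk.download("wordnet", quiet=True)
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

sentence = "I deposited the cheque at the bank"
sense = lesk(word_tokenize(sentence), "bank", pos="n")   # disambiguate "bank" in context
print(sense, "-", sense.definition())                     # expected: a financial-institution sense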

Relationship Extraction:
It determines the semantic relationships between entities in a text. These relationships involve entities such as an individual's name, place, company, designation, and so on.

Example:

Elon Musk is one of the co-founders of Tesla, which is based in Austin, Texas.
This sentence illustrates two different relationships.

Elon Musk [Person] is the co-founder of Tesla [Company]

Tesla [Company] is based in Austin, Texas [Place]
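A lightweight way to approximate this in practice is to run named-entity recognition first and then pair entities with a heuristic or a learned relation classifier. The sketch below uses spaCy's small English model purely as an illustration; the "co-founder-of" pairing rule is an assumption for demonstration, not a real relation extractor.

# Minimal sketch: entity recognition as a first step toward relationship extraction.
# Assumes `pip install spacy` and the en_core_web_sm model have been installed.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Elon Musk is one of the co-founders of Tesla, "
          "which is based in Austin, Texas.")

for ent in doc.ents:
    print(ent.text, ent.label_)        # e.g. Elon Musk PERSON, Tesla ORG, Austin GPE

# Naive illustrative heuristic: pair each PERSON with each ORG in the sentence.
people = [e for e in doc.ents if e.label_ == "PERSON"]
orgs = [e for e in doc.ents if e.label_ == "ORG"]
for p in people:
    for o in orgs:
        print(f"({p.text}) --co-founder-of--> ({o.text})  [assumed relation]")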

PARALLEL AND DISTRIBUTED AI PSYCHOLOGICAL MODELLING


It refers to the use of parallel and distributed computing techniques to model human cognition and behavior. This involves:

1. Parallel processing:

Using multiple processing units to perform tasks simultaneously, mimicking the brain's parallel processing capabilities.

2. Distributed processing:

Breaking down complex tasks into smaller sub-tasks and distributing them across multiple processing units or agents, similar to how cognitive tasks are distributed across different brain regions.

GOALS:

• Scalability:
Model complex cognitive phenomena that require large-scale processing.

• Flexibility:
Accommodate individual differences and adapt to changing environments.
• Real-time processing:
Enable real-time interaction and feedback.

APPLICATIONS:

• Cognitive architectures:
Integrate parallel and distributed processing into cognitive models.

• Neural networks:
Use parallel and distributed computing to train and deploy neural
networks.

• Multi-agent systems:
Model social behavior and interactions using distributed AI.

• Human-computer interaction:
Develop more natural and intuitive interfaces using parallel and
distributed AI.

• Cognitive robotics:
Control and coordinate robotic behavior using distributed AI.

BENEFITS:
 Improved scalability and flexibility

 Enhanced real-time processing capabilities

 More accurate and comprehensive cognitive models


 Better human-computer interaction and human-robot interaction

 Potential applications in fields like education, healthcare, and social sciences.

CHALLENGES:
 Complexity:
Managing and coordinating parallel and distributed processes

 Communication:
Ensuring efficient data exchange between processing units.

 Synchronization:
Coordinating tasks and maintaining consistency across processing units

 Scalability:
Adapting to large-scale problems and datasets

 Interpretability:
Understanding and explaining complex AI models and behaviors.

PARALLELISM AND DISTRIBUTED PROCESSING IN REASONING SYSTEMS


Parallelism and distributed processing in reasoning systems refer to the use of
multiple processing units or agents to perform reasoning tasks simultaneously, improving
efficiency and scalability.

PARALLELISM:
1. Task parallelism: Dividing a reasoning task into smaller sub-tasks and executing
them concurrently.

2. Data parallelism: Distributing data across multiple processing units and performing
the same reasoning task on each unit.

3. Pipelined parallelism: Breaking down a reasoning task into a series of stages and
executing them concurrently.
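As a concrete illustration of the data-parallelism pattern above, the sketch below farms the same check out to several worker processes with Python's multiprocessing module; the rule_check function is a hypothetical stand-in for a real reasoning step.

# Minimal sketch of data parallelism: the same "reasoning" function is applied
# to different portions of the data on separate worker processes.
from multiprocessing import Pool

def rule_check(fact):
    # Hypothetical per-fact reasoning step (illustrative only).
    return fact % 2 == 0

if __name__ == "__main__":
    facts = list(range(1_000))
    with Pool(processes=4) as pool:
        results = pool.map(rule_check, facts)   # each worker handles a slice of the data
    print(sum(results), "facts satisfied the rule")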

DISTRIBUTED PROCESSING:

1. Decentralized reasoning: Distributing reasoning tasks across multiple agents or nodes, each contributing to the overall solution.

2. Distributed knowledge representation: Storing knowledge across multiple nodes, enabling efficient access and reasoning.

3. Communication and coordination: Ensuring data exchange and synchronization between nodes.

BENEFITS:

1. Scalability: Handle large-scale reasoning tasks and knowledge bases.

2. Efficiency: Reduce processing time through concurrent execution.

3. Flexibility: Adapt to changing environments and requirements.

APPLICATIONS:

1. Artificial intelligence: Enhance reasoning capabilities in AI systems.

2. Expert systems: Improve performance and scalability in expert systems.

3. Multi-agent systems: Enable distributed reasoning and decision-making.


CHALLENGES:

1. Coordination and communication: Manage data exchange and synchronization.

2. Consistency and coherence: Ensure consistent and coherent reasoning results.

3. Scalability and efficiency: Balance computational resources and reasoning performance.

LEARNING CONNECTIONIST MODELS:


 The Connectionist Model in AI is a cognitive architecture that posits that cognitive
processes can be understood in terms of the connections and interactions between
simple computational units or nodes.

 This model is inspired by the structure and function of the brain and is often used to
describe the processing of information in neural networks.

KEY FEATURES:

 Distributed Representation: Information is represented across multiple nodes, rather than being localized in a single location.

 Parallel Processing: Multiple nodes process information simultaneously, allowing for fast and efficient processing.

 Learning and Adaptation: Connections between nodes can be modified based on experience, allowing the system to learn and adapt.

 Activation and Inhibition: Nodes can be activated or inhibited by other nodes, allowing for complex patterns of activity to emerge.

TYPES OF CONNECTIONIST MODELS:

 Feedforward Networks: Information flows only in one direction, from input nodes to output nodes.

 Recurrent Networks: Information can flow in a loop, allowing for feedback and recurrent processing.

 Attractor Networks: The network converges to a stable state, or attractor, which represents the processed information.

APPLICATIONS OF CONNECTIONIST MODELS:

 Pattern Recognition: Connectionist models can learn to recognize patterns in data, such as images or speech.

 Language Processing: Connectionist models can learn to process and generate natural language.

 Memory and Learning: Connectionist models can learn and remember new information.

 Decision-Making: Connectionist models can make decisions based on patterns and associations learned from data.

BENEFITS OF CONNECTIONIST MODEL:

 Flexibility: Can be used to model a wide range of cognitive tasks and processes.

 Scalability: Can be applied to large-scale problems and datasets.


 Biological Plausibility: Is inspired by the structure and function of the brain.

LIMITATIONS OF CONNECTIONIST MODEL:

 Complexity: Can be difficult to understand and analyze due to the complex interactions between nodes.

 Lack of Interpretability: Can be challenging to interpret the results of connectionist models.

 Training Requirements: Requires large amounts of data and computational resources to train.

HOPFIELD NETWORKS
 Hopfield networks are a type of recurrent artificial neural network that serve as a
content-addressable ("associative") memory system with binary threshold nodes.

 They are a simple example of a neural network that can store and recall memories.

CHARACTERISTICS OF HOPFIELD NETWORKS:

 Recurrent: Hopfield networks have feedback connections, which allow the network
to settle into a stable state.

 Binary threshold nodes: Each node in the network has a binary output (0 or 1) and
a threshold value that determines its output.

 Symmetric weights: The weights connecting nodes are symmetric, meaning that the
weight from node A to node B is the same as the weight from node B to node A.

HOW HOPFIELD NETWORKS WORK:

 Initialization: The network is initialized with a set of random weights and biases.
 Training: The network is trained on a set of patterns, where each pattern is a binary
vector.

 Storage: The network stores the patterns in its weights and biases.

 Recall: When a noisy or incomplete pattern is presented to the network, it settles into a stable state that corresponds to the closest stored pattern.
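The storage and recall steps above can be made concrete with a small NumPy sketch. This is a minimal illustration, assuming bipolar (+1/-1) patterns, a Hebbian (outer-product) learning rule, and asynchronous updates; the specific patterns are made up for the example.

# Minimal sketch of a Hopfield network: Hebbian storage and asynchronous recall.
import numpy as np

def train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)            # Hebbian rule: strengthen co-active units
    np.fill_diagonal(W, 0)             # no self-connections
    return W / len(patterns)

def recall(W, state, steps=20):
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):   # asynchronous node updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])   # corrupted copy of the first pattern
print(recall(W, noisy))                     # should settle near a stored pattern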

APPLICATIONS OF HOPFIELD NETWORKS:


 Associative memory: Hopfield networks can store and recall patterns, making them
useful for associative memory tasks.

 Optimization: Hopfield networks can be used to solve optimization problems by encoding the problem into a pattern and using the network to find the optimal solution.

 Machine learning: Hopfield networks can be used as a building block for more
complex machine learning models.

LIMITATIONS OF HOPFIELD NETWORKS:

 Capacity: Hopfield networks have a limited capacity for storing patterns.

 Convergence: The network may not always converge to a stable state.

 Sensitivity to noise: The network can be sensitive to noise in the input patterns.

NEURAL NETWORKS:
A neural network is a machine learning model inspired by the structure and function of the human brain. It consists of layers of interconnected nodes, or “neurons”, which process and transmit information.

Input layer: Receives the data to be processed.

Hidden layer: Performs complex calculations and transformations.

Output layer: Generates the final prediction or result.

A NEURAL NETWORK CAN:

Learn: Adapt to new data and improve performance.

Classify: Categorize data into classes or groups.

Predict: Forecast future values or outcomes.

Generate: Create new data, like text or images.

APPLICATIONS OF NEURAL NETWORKS:

 Image recognition
 Natural language processing
 Speech recognition
 Predictive analysis.

ADVANTAGES OF NEURAL NETWORKS:

 Adaptability: Neural networks can adapt to new situations and learn from data, which makes them useful for activities where the link between inputs and outputs is complex or not well defined.

 Pattern Recognition: Their proficiency in pattern recognition makes them effective in tasks such as audio and image identification, natural language processing, and other intricate data patterns.

 Parallel Processing: Because neural networks are capable of parallel processing by nature, they can process numerous jobs at once, which speeds up and improves the efficiency of computations.

 Non-Linearity: The non-linear activation functions found in neurons allow neural networks to model and comprehend complicated relationships in data, overcoming the drawbacks of linear models.

DISADVANTAGES OF NEURAL NETWORKS:

 Computational Intensity: Training large neural networks can be a laborious and computationally demanding process that requires a lot of computing power.

 Black Box Nature: As “black box” models, neural networks pose a problem in important applications, since it is difficult to understand how they make decisions.

 Overfitting: Overfitting is a phenomenon in which neural networks memorize the training material rather than learning general patterns in the data. Regularization approaches help to alleviate this, but the problem still exists.

TYPES OF NEURAL NETWORKS:


 Convolutional neural networks
 Feedforward neural networks
 Recurrent neural networks
 Generative adversarial networks
 Modular neural networks

Feedforward Neural Network :


 The feedforward neural network is one of the most basic artificial neural networks.
 In this ANN, the data or the input provided travels in a single direction.
 It enters into the ANN through the input layer and exits through the output layer
while hidden layers may or may not exist.

 So the feedforward neural network has a front-propagated wave only and usually
does not have backpropagation.
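A minimal sketch of such a forward-only pass is shown below, assuming NumPy, one hidden layer, and randomly initialised weights; no training (backpropagation) is included.

# Minimal sketch of a feedforward pass: input layer -> hidden layer -> output layer.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # input layer: 4 features

W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)   # hidden layer: 5 neurons
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)   # output layer: 3 units

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = sigmoid(W1 @ x + b1)               # hidden activations
y = sigmoid(W2 @ h + b2)               # output; data flows forward only
print(y)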

Convolutional Neural Network :


 A Convolutional neural network has some similarities to the feed-forward neural
network, where the connections between units have weights that determine the
influence of one unit on another unit.

 But a CNN has one or more than one convolutional layer that uses a convolution
operation on the input and then passes the result obtained in the form of output to the
next layer.

 CNN has applications in speech and image processing which is particularly useful in
computer vision.
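To illustrate just the convolution operation itself, the sketch below slides a hypothetical 3x3 edge-detection kernel over a toy 5x5 array with NumPy; in a real CNN these kernel weights are learned from data rather than hand-coded.

# Minimal sketch of the convolution operation a CNN layer applies:
# slide a small kernel over a 2D input and compute weighted sums.
import numpy as np

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])              # simple vertical-edge filter

out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        patch = image[i:i + 3, j:j + 3]
        out[i, j] = np.sum(patch * kernel)         # one unit of the feature map
print(out)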

Modular Neural Network:


 A Modular Neural Network contains a collection of different neural networks that
work independently towards obtaining the output with no interaction between them.

 Each of the different neural networks performs a different sub-task by obtaining unique inputs compared to other networks.

 The advantage of this modular neural network is that it breaks down a large and
complex computational process into smaller components, thus decreasing its
complexity while still obtaining the required output.

Radial basis function Neural Network:


 Radial basis functions are those functions that consider the distance of a point
concerning the center.

 RBF networks have two layers. In the first layer, the input is mapped onto all the radial basis functions in the hidden layer, and then the output layer computes the output in the next step.

 Radial basis function nets are normally used to model the data that represents any
underlying trend or function.

Recurrent Neural Network:


 The Recurrent Neural Network saves the output of a layer and feeds this output back
to the input to better predict the outcome of the layer.

 The first layer in the RNN is quite similar to the feed-forward neural network and the
recurrent neural network starts once the output of the first layer is computed.

 After this layer, each unit will remember some information from the previous step so
that it can act as a memory cell in performing computations.
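The "memory cell" idea can be sketched in a few lines: the hidden state computed at one time step is fed back in at the next. The NumPy snippet below is illustrative only, with made-up weight shapes and random inputs.

# Minimal sketch of a recurrent step: the previous hidden state is fed back
# in alongside the current input, acting as memory across the sequence.
import numpy as np

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(3, 2))       # input -> hidden weights
W_hh = rng.normal(size=(3, 3))       # hidden -> hidden (recurrent) weights

h = np.zeros(3)                      # initial hidden state ("memory cell")
sequence = rng.normal(size=(4, 2))   # 4 time steps, 2 features each

for x_t in sequence:
    h = np.tanh(W_xh @ x_t + W_hh @ h)   # output of the layer is fed back in
print(h)                              # final hidden state summarizing the sequence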

UNIT 5 (EXPERT SYSTEM)


What are Expert Systems?
Expert systems are a crucial subset of artificial intelligence (AI) that simulate the decision-
making ability of a human expert. These systems use a knowledge base filled with domain-
specific information and rules to interpret and solve complex problems. Expert systems are
widely used in fields such as medical diagnosis, accounting, coding, and even in games.

Block diagram of Expert System

Examples of expert systems:


• CaDet (Cancer Decision Support Tool) is used to identify cancer in its earliest stages.
• DENDRAL helps chemists identify unknown organic molecules.
• DXplain is a clinical support system that diagnoses various diseases.
• MYCIN identifies bacteria causing infections such as bacteremia and meningitis, and recommends antibiotics and dosages.
• PXDES determines the type and severity of lung cancer a person has.
• R1/XCON is an early manufacturing expert system that automatically selects and orders computer components based on customer specifications.

Characteristics of Expert System:


• High Performance: The expert system provides high performance for solving any type of complex problem of a specific domain with high efficiency and accuracy.
• Understandable: It responds in a way that can be easily understood by the user. It can take input in human language and provides the output in the same way.
• Reliable: It is highly reliable for generating efficient and accurate output.
• Highly responsive: An ES provides the result for any complex query within a very short period of time.

Components of Expert System:


An Expert System mainly consists of three Components

• User Interface
• Inference Engine
• Knowledge Base

>User Interface

• With the help of a user interface, the expert system interacts with the user, takes queries as input in a readable format, and passes them to the inference engine.
• After getting the response from the inference engine, it displays the output to the user. In other words, it is an interface that helps a non-expert user communicate with the expert system to find a solution.

>Inference Engine

• The inference engine is known as the brain of the expert system as it is the main
Processing unit of the system. It applies inference rules to the knowledge base to
derive a conclusion or deduce new information.
• With the help of an Inference engine, the system extracts the knowledge from the
Knowledge base.
• Two types of inference Engine

1. Deterministic inference engine:


The conclusions drawn from this type of inference engine are assumed to be true. It is based
on facts and rules

2. Probabilistic inference engine:

This type of inference engine contains uncertainty in its conclusions and is based on probability.

>Knowledge Base

• The Knowledge base is a type of storage that stores knowledge acquired from the
different experts of the particular domain. It is considered as big storage of
knowledge.
• It is similar to a database that contains information and rules of a particular domain or
subject.
• One can also view the knowledge base as Collections of objects and their attributes.
Such as a lion is an object and its attributes are it is a mammal, it is not a domestic
animal.

Capabilities of the Expert System


• Advising: It is capable of advising a human being on queries from the particular ES's domain.
• Provide decision-making capabilities: It provides the capability of decision making in any domain, such as making financial decisions, decisions in medical science, etc.
• Demonstrate a device: It is capable of demonstrating any new product, such as its features, specifications, how to use that product, etc.
• Problem-solving: It has problem-solving capabilities.
• Explaining a problem: It is also capable of providing a detailed description of an input problem.
• Interpreting the input: It is capable of interpreting the input given by the user.

Advantages of Expert System


• These systems are highly reproducible.
• They can be used in risky places where human presence is not safe.
• Error possibilities are low if the KB contains correct knowledge.
• The performance of these systems remains steady as it is not affected by emotions, tension, or fatigue.
• They respond to a particular query at very high speed.
Limitations of Expert System
• The response of the ES may be wrong if the knowledge base contains wrong information.
• Like a human being, it cannot produce creative output for different scenarios.
• Its maintenance and development costs are very high.
• Knowledge acquisition for designing is very difficult.
• For each domain, we require a specific ES, which is one of the big limitations.
• It cannot learn by itself and hence requires manual updates.

Applications of Expert System


• Designing and manufacturing domain: It can be broadly used for designing and
manufacturing physical devices such as camera Lenses and automobiles.
• Knowledge Domain: These systems are primarily used for publishing relevant knowledge to the users. Two popular expert systems used in this domain are advisory and tax-advisory systems.
• Finance Domain: In the finance industry, it is used to detect any type of possible fraud or suspicious activity and to advise bankers on whether they should provide a business loan or not.
• Planning and scheduling: Expert systems can also be used for planning and scheduling particular tasks to achieve the goal of that task.

Types of Expert Systems:


• Rule-Based Expert Systems.
• Frame-Based Expert Systems.
• Fuzzy Logic Systems.
• Neural Network-Based Expert Systems.
• Neuro-Fuzzy Expert Systems.

1. Rule-Based Expert Systems:


Use a set of "if-then" rules to process data and make decisions. These rules are typically
written by human experts and capture domain-specific knowledge.
Example: MYCIN, an early system for diagnosing bacterial infections.

2. Frame-Based Expert Systems:


Represent knowledge using frames, which are data structures similar to objects in
Programming. Each frame contains attributes and values related to a particular concept.
Example: Systems used for knowledge Representation in areas Like NLP

3. Fuzzy Logic Systems:


These handle uncertain or imprecise information using fuzzy logic, which allows for partial truths rather than binary true/false values.
Example: Fuzzy control systems for managing household appliances like washing machines and air conditioners.

4. Neural Network-Based Expert Systems:


These use artificial neural networks to learn from data and make predictions or decisions based on learned patterns. They are often used for tasks involving pattern recognition and classification.
Example: Deep learning models for image and speech recognition.

5. Neuro-Fuzzy Expert Systems:


These combine neural networks and fuzzy logic, pairing the learning capabilities of neural networks with the handling of uncertainty and imprecision offered by fuzzy logic.
This hybrid approach helps in dealing with complex problems where both pattern recognition
and uncertain reasoning are required.

Example: Automated control systems that adjust based on uncertain environmental conditions
or financial forecasting models that handle both quantitative data and fuzzy inputs.

How Expert Systems Work?


• Input Data: Users provide data or queries related to a specific problem or scenario.
• Processing: The inference engine processes the input data using the rules in the
knowledge base to generate conclusions or recommendations.
• Output: The system presents the results or solutions to the user through the user
interface.
• Explanation: If applicable, the system explains how the conclusions were reached, providing insights into the reasoning process.

Fundamental Methods of Expert System


• Forward chaining
• Backward chaining

1. Forward chaining:
• "What can happen next?" The inference engine follows the chain of conditions and
derivations and finally deduces the outcome.
• It considers all the facts and rules, and sorts them before concluding to a solution
• This strategy is followed for working forward to a conclusion, result, or effect.
Example: prediction of share market status as an effect of changes in interest rates (sketched in code after the backward-chaining description below).

2. Backward chaining:
• An expert system finds out the answer to the question, "Why did this happen?"

• On the basis of what has already happened, the inference engine tries to find out which conditions could have held in the past to produce this result.
• This strategy is followed for finding out a cause or reason.
Example: diagnosis of blood cancer in humans.
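The forward-chaining strategy can be sketched with a handful of if-then rules in plain Python. The rules below (interest rates -> borrowing -> profits -> share prices) are invented purely to echo the share-market example; a real system would hold many more rules in its knowledge base.

# Minimal sketch of forward chaining: start from known facts and keep firing
# rules until no new conclusions can be derived.
rules = [
    ({"interest_rates_rise"}, "borrowing_falls"),
    ({"borrowing_falls"}, "company_profits_fall"),
    ({"company_profits_fall"}, "share_prices_fall"),
]
facts = {"interest_rates_rise"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)      # fire the rule
            changed = True

print(facts)   # includes the derived outcome "share_prices_fall"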

Common Sense
• Common sense is the mental skills that most people share.
• Common Sense is ability to analyze a situation based on its context, using millions of
integrated pieces of common knowledge.

Common sense is what people come to know in the process of growing and living in
the world.
• Common sense knowledge includes the basic facts about events and their effects, facts
about knowledge and how it is obtained, facts about beliefs and desires. It includes the
basic facts about material objects and their properties
• Example: Everyone knows that if you drop a glass of water, the glass will break and the water will spill. However, this information is not obtained from a formula or equation for a falling body or from the equations governing fluid flow.
• The goal of the formal common sense reasoning community is to encode this implicit knowledge using formal logic.

Common sense is identified as:

>) Common sense knowledge: what everyone knows

>) Common sense reasoning: the ability to use common sense knowledge

Common sense knowledge:


What one can express as a fact using a richer ontology.

Example:

1. Every person is younger than the person's mother.

2. If you hold a knife by its blade then the blade may cut you.
3. If you drop paper into a flame then the paper will burn.

Common sense Reasoning:


What one builds as a reasoning method into his program.
Example:

1. If you have a problem, think of a past situation where you solved a similar problem
2. If you fail at something, imagine how you might have done things differently.
Common sense Architecture:
• The system takes as input a template produced by an information extraction system about certain aspects of a scenario.
The template is a frame with slots and slot fillers.

• The template is fed to a script classifier, which classifies what script is active in the
template
• The template and the script are passed to a reasoning problem builder specific to the
script, which converts the template into a commonsense reasoning problem.
• The problem and a commonsense knowledge base are passed to a commonsense
reasoner. It infers and fills in missing details to Produce a model of the input text.

• The model provides a deeper representation of the input, than is provided by the
template alone.

Importance of Common Sense in Expert System


• Improved decision-making
• Enhanced problem-solving
• Increased accuracy
• Better understanding of human behavior
• More natural human-computer interaction

Challenges of Implementing Common Sense


• Knowledge Acquisition
• Representation and organization
• Integration with domain - specific knowledge
• Reasoning and inference
• Evaluation and validation

Techniques for Implementing Common Sense


• Knowledge graph-based approaches
• Machine learning and deep Learning
• Natural Language Processing and understanding
• Cognitive architectures and modeling
• Hybrid approaches

Applications of common Sense


Natural language Processing & generation 
Computer vision and image understanding.

• Robotics and autonomous systems


• Expert systems and decision support
• Human-computer interaction & dialogue systems

Qualitative physics
• Qualitative physics in AI is a way to study the physical world by representing and
reasoning about it.

• Qualitative physics uses dimensional analysis to solve problems, which requires knowledge of physical variables and their dimensional representation.
• This allows reasoning about systems and devices without needing to know the physical laws that govern them.
• For example, if a problem involves a time period, a mass, and a spring constant, the variables and their dimensions are t [T], m [M], and k [M T^-2].
• Qualitative physics can also be used to solve physical system problems, and can be used for tasks like behavior analysis and conceptual design.
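The spring-mass example above can be worked as a tiny dimensional-analysis problem: assume t is proportional to m^a * k^b and solve for the exponents so that the mass and time dimensions balance. The NumPy sketch below does just that and recovers t proportional to sqrt(m/k) without appealing to any physical law.

# Minimal sketch of dimensional analysis for the spring-mass period.
# Dimension exponents are (mass, time): m = [1, 0], k = [1, -2], t = [0, 1].
import numpy as np

A = np.array([[1.0, 1.0],    # mass:  a + b = 0
              [0.0, -2.0]])  # time: -2b   = 1
target = np.array([0.0, 1.0])

a, b = np.linalg.solve(A, target)
print(a, b)   # 0.5, -0.5  ->  t is proportional to sqrt(m / k)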

Techniques used in Qualitative physics


• Qualitative simulation: Simulating physical phenomena using qualitative models, such
as qualitative differential equations
• Model-based reasoning: Using qualitative models to reason about physical systems
and predict behaviors.
• Case-based reasoning: Storing and retrieving cases of physical phenomena to reason about new situations
• Ontologies: Representing physical knowledge using ontologies, which provide a
framework for organizing and reasoning about qualitative knowledge

Applications of Qualitative Physics


• Robotics: Enables robots to understand and interact with their physical environment
• Computer vision: Allows for qualitative understanding of visual scenes and events
• Natural Language Processing: Enables understanding of physical phenomena described in natural language.
• Expert Systems: Provides a foundation for building expert systems that reason about physical domains.

Benefits of Qualitative Physics


• Improved explainability: Provides insights into Physical phenomena and system
behaviors.
• Robustness: Less sensitive to numerical errors and uncertainties
• Flexibility: Can handle incomplete or uncertain knowledge
• Efficiency: Reduces computational complexity in certain applications

Common Sense Ontologist


• An Ontologist is a professional who specializes in designing, developing, and
managing ontologies.

• The primary goal is to create a structured and comprehensive representation of
Knowledge that can be easily understood and utilized by both humans and machines.
• Ontologies are used to facilitate Knowledge sharing, communication, collaboration
between humans and machines.

How are ontologies used in AI?


In AI, ontologies are used to model Knowledge about the world in a structured form. This
allows AI systems to understand and reason about the relationships between concepts,
improving their ability to process natural language, make decisions, and learn from data

Applications of Ontologists in AI
• Expert Systems: Ontologists in AI develop Ontologies that power expert systems,
which mimic human decision-making in a particular domain
• Natural Language Processing: Ontologists in AI use ontologies to improve natural
language processing systems, enabling machines to understand human language
• Decision Support Systems: Ontologies support decision-making in complex domains.
• Data Integration: Ontologists in AI use ontologies to integrate data from multiple sources, enabling machines to reason about the integrated data.
• Robotics and autonomous system: It enable robots and autonomous systems to
understand their environment and make decisions

Key Components of Ontology


• Concepts: Representing entities, objects, or ideas within the domain
• Relationships: Defining how concepts interact, relate, or connect
• Rules: Specifying constraints, logic, or axioms governing the domain
• Axioms: Fundamental truths or assumptions underlying the ontology
• Instances: Specific examples or individuals within the domain.
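As an illustration only, these components can be mocked up with plain Python data structures and a one-rule reasoner; the concepts, relations, and instance below are invented for the example and are not taken from any standard ontology.

# Minimal sketch: a toy ontology with concepts, relationships, instances,
# and a single transitive "is_a" reasoning rule.
ontology = {
    "concepts": {"Animal", "Mammal", "Lion"},
    "relationships": [("Mammal", "is_a", "Animal"),
                      ("Lion", "is_a", "Mammal")],
    "instances": {"simba": "Lion"},
}

def is_a(concept, ancestor, rels):
    """Follow is_a edges transitively (a simple reasoning rule)."""
    if concept == ancestor:
        return True
    return any(is_a(parent, ancestor, rels)
               for c, rel, parent in rels if c == concept and rel == "is_a")

print(is_a(ontology["instances"]["simba"], "Animal", ontology["relationships"]))  # True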

Benefits of ontology
• Improved knowledge organization and sharing
• Enhanced reasoning and decision-making capabilities
• Increased data consistency and quality
• Better interoperability and integration
• More explainable and transparent AI systems

Memory Organization
• Memory Organization provides a framework within which to understand the
relationship between individual identification, duplicate records, and the proper
functioning of AI
• Types of Memory Organization:
There are several types of memory organization used in computer systems.

Memory organization in Artificial Intelligence can include a variety of techniques that help AI systems.

1. E-MOPs (Episodic Memory Organization Packets): A network of frames that contain conceptual information about different types of episodic events.
2. Replay memory: A technique used in AI, Particularly in reinforcement learning, where
an agent learns to make decisions by interacting with its environment.
3. Working memory: Another critical aspect of AI inspired by the human cognitive system. Memory is central to common sense behavior and is also the basis for learning. Human memory is still not fully understood; however, psychologists have proposed several ideas.
4. Short-Term Memory (STM):
• Only a few items at a time can be held here.
• Perceptual information is stored directly here.
5. Long-Term Memory (LTM):
• Capacity for storage is very large and fairly permanent.
• LTM is often divided up further:
Episodic Memory: Contains information about personal experiences.
Semantic Memory: General facts with no personal meaning.

Three distinct Memory Organization Packets (MOPs) encode knowledge about an event sequence:

• The 1st MOP represents the physical sequence of events
• The 2nd MOP represents the set of social events that take place
• The 3rd MOP revolves around the goals of the person in the particular episode

Expert System Shells


• Expert system shells are a collection of software packages and tools used to develop expert systems.
• A shell provides developers with knowledge acquisition, an inference engine, a user interface, and an explanation facility.
• Initially, each expert system was built from scratch (e.g., in LISP).
• An example of a shell is EMYCIN (Empty MYCIN, derived from MYCIN). Other expert system shells include CLIPS, JESS, and Drools.

Expert System has two main parts


• A Knowledge Base: This is where all the expert info is stored
• An inference Engine: This part uses the stored knowledge to figure out solutions

Benefits of Expert System Shell


• Time and Cost Savings: Less time spent on development means lower overall costs.
• User-Friendly Interface:
1. Easier knowledge input: We can add expert knowledge without complex coding.
2. Simple maintenance: Updating the system is straightforward, even for non-technical users.
• Flexibility and Customization:
1. Versatile applications: We can use them for different industries like healthcare, finance, or engineering.
2. Easy modifications: As your needs change, you can quickly adjust your expert system.

Built-in Features
• Inference Engines: They are already included, so you don't need to build one from scratch.
• Explanation Facilities: These help users understand how the system reaches its conclusions.
• Knowledge Base Editors: They make organizing and updating information simple.

>User Interface:

• Easy-to-use screens for inputting data and getting results


• Often includes menus, buttons, forms
• Helps non-technical users interact with the system

>Knowledge Base Editor:

• A tool for adding and organizing expert Knowledge


• Allows, users to input rules, facts and relationships
• Makes it easy to update information as needed

>Explanation Facility:

• Helps users understand how the system reached its decision


• shows the steps and rules used in the reasoning process
• Builds trust by making the system's logic transparent

>Inference Engine:
The "brain" of the Expert System

• Uses the knowledge base to solve problems or make decision


• Applies logical reasoning to reach conclusions

>Knowledge Acquisition Tools:

• Assist in gathering and structuring expert Knowledge


• May include interview templates or questionnaires


• Helps convert human expertise into a format the system can use

>Rule Builder:

• Allows users to create if-then rules easily
• Helps in setting up the logic the system will follow
• Often includes a visual interface for creating rules.

Examples of Expert System Shells


>CLIPS (C Language Integrated Production System)

• Developed by NASA
• Free and open-source
• Good for building rule-based expert systems
• Used in space shuttle operations >Jess (Java Expert System Shell):

• Based on CLIPS but works with Java


• Fast and powerful
• popular in academic and research settings >Prolog:
• It is not just a shell but is often used to build expert systems
• Good for logical reasoning tasks
• Used in NLP >Drools:
• Open-source business rules management system.  Integrates well with Java
applications
Used in many industries for decision automation

Future of Expert System Shells in AI


• The future of ES shells in AI looks bright and full of possibilities. We can expect to see these tools become more powerful and user-friendly.
• They will likely merge with machine learning, creating systems that not only use expert knowledge but also learn and adapt.
• As AI continues to evolve, these shells could play a crucial role in developing more advanced and flexible systems across various fields.
