Unit 3 Artificial Intelligence

The document discusses reasoning in Artificial Intelligence (AI), highlighting types such as monotonic, non-monotonic, deductive, inductive, and abductive reasoning. It explains the role of Truth Maintenance Systems (TMS) in managing knowledge consistency and uncertainty, and introduces concepts like sample space in probability and certainty factors in rule-based systems. Additionally, it covers applications of Bayes' Theorem and Dempster Shafer Theory for handling uncertainty in decision-making.

Unit-3 Uncertainty and Reasoning
Reasoning
Reasoning in Artificial Intelligence (AI) is the logical process of drawing conclusions, making
predictions, or constructing solutions based on existing knowledge.
 It plays a crucial role in enabling AI systems to simulate human-like decision-making and problem-solving capabilities.

Monotonic Reasoning
Monotonic Reasoning is a reasoning process whose conclusions move in only one direction: once a conclusion has been drawn, it stays valid.
•Adding new facts to the knowledge base can never invalidate or change a conclusion that has already been derived.

•Because Monotonic Reasoning depends only on established knowledge and facts, the set of valid conclusions can only increase and will never decrease.

•Example:

• The Sun rises in the East and sets in the West.


Non-monotonic Reasoning

Non-monotonic Reasoning is a process in which conclusions can change or be withdrawn as the knowledge base grows.
•It is also known as NMR in Artificial Intelligence.
•Conclusions may be added or retracted depending on the conditions that become known.
•Because Non-monotonic Reasoning depends on assumptions, it revises its conclusions as knowledge and facts improve.
•Example:

• Consider a bowl of water: if we put it on the stove and turn on the flame, the water will boil; when we turn off the flame, it will gradually cool down again.
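The bowl-of-water example above, and the dinner-guest example that appears later in this unit, both describe conclusions being revised when new knowledge arrives. The following minimal Python sketch illustrates that idea; the belief names and the menu rule are invented for this example and are not part of any standard library:

```python
# Minimal sketch of non-monotonic reasoning: a conclusion drawn from a
# default assumption is withdrawn once contradicting knowledge arrives.
beliefs = {"guest_is_vegetarian": None}   # nothing known about the guest yet

def menu_conclusion(beliefs):
    # Default assumption: in the absence of information, assume the guest eats meat.
    if beliefs.get("guest_is_vegetarian"):
        return "serve a vegetarian dish"
    return "serve chicken"

print(menu_conclusion(beliefs))           # -> serve chicken (default conclusion)

# New knowledge is added: the guest is a vegetarian.
beliefs["guest_is_vegetarian"] = True
print(menu_conclusion(beliefs))           # -> serve a vegetarian dish (old conclusion withdrawn)
```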
Types of Reasoning Mechanisms in AI

Deductive Reasoning?
Deductive reasoning is a logical process where one draws a specific conclusion from a
general premise. It involves using general principles or accepted truths to reach a
specific conclusion.
For example, if the premise is "All birds have wings," and the specific observation is
"Robins are birds," then deducing that "Robins have wings" is a logical conclusion.
•In deductive reasoning, the conclusion is necessarily true if the premises are true.

•It follows a top-down approach, starting with general principles and applying them to specific situations to derive conclusions.

•Deductive reasoning is often used in formal logic, where the validity of arguments is assessed based on the structure of the reasoning rather than the content.

•It helps in making predictions and solving puzzles by systematically eliminating possibilities until only one logical solution remains.
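As a small illustration of this top-down style, the robins example can be sketched in a few lines of Python; the rule and fact dictionaries below are invented purely for illustration:

```python
# Minimal sketch of deductive reasoning: apply a general rule to a specific case.
general_rules = {"bird": ["has_wings"]}   # premise: "All birds have wings"
category_facts = {"robin": ["bird"]}      # observation: "Robins are birds"

def deduce(entity):
    """Derive properties of an entity from its categories and the general rules."""
    derived = []
    for category in category_facts.get(entity, []):
        derived.extend(general_rules.get(category, []))
    return derived

print(deduce("robin"))   # -> ['has_wings'], i.e. "Robins have wings"
```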
Inductive Reasoning?
Inductive reasoning is a logical approach to making inferences, or conclusions. People often use inductive reasoning informally in everyday situations. When you use a specific set of data or existing knowledge from past experiences to make decisions, you are using inductive reasoning.
Example: AI-Based Email Classification
Scenario: An AI system is designed to classify emails into categories such as "urgent," "important,"
"normal," and "spam."
Process:

Data Collection: The AI starts by analyzing thousands of emails that are already labeled by users. It
observes various features such as keywords, sender information, time of email, and user interactions
(like whether emails are opened quickly and replied to, or marked as spam).
Pattern Recognition: Through its analysis, the AI notices certain patterns:
Emails containing words like "urgent" or "immediately" and sent from recognized contacts are often labeled as
"urgent."
Emails from known commercial sources containing words like "sale" or "offer" are frequently marked as "spam."
Emails that are not from contacts but contain formal language and no promotional content are often classified as
"important."
Generalization: Using these observations, the AI develops a general set of rules or a model to
predict the category of new emails. For example, it might generalize that any email from a recognized
contact that includes the word "urgent" should be classified as "urgent."
 Application: When new emails arrive, the AI applies these generalized rules to classify them
based on the learned patterns.
 Outcome: The AI uses inductive reasoning to generalize from specific instances to broader rules, which it then applies to classify new emails (a minimal sketch follows below).
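A rough, self-contained Python sketch of this kind of rule induction is shown below. The example emails, the keyword tokenisation, and the 80% threshold are assumptions made up for this illustration; they are not the actual system described above:

```python
# Sketch of inductive generalisation for email classification: learn simple
# keyword -> label rules from labelled examples, then apply them to new emails.
import re
from collections import Counter, defaultdict

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

labeled_emails = [
    ("Urgent: server down, reply immediately", "urgent"),
    ("Big sale this weekend, special offer", "spam"),
    ("Urgent meeting moved to 9am", "urgent"),
    ("Limited offer: 50% off everything", "spam"),
]

# Pattern recognition: count how often each keyword co-occurs with each label.
counts = defaultdict(Counter)
for text, label in labeled_emails:
    for word in tokens(text):
        counts[word][label] += 1

# Generalisation: keep keywords that predict a single label at least 80% of the time.
rules = {}
for word, label_counts in counts.items():
    label, hits = label_counts.most_common(1)[0]
    if hits >= 2 and hits / sum(label_counts.values()) >= 0.8:
        rules[word] = label

def classify(text):
    """Application: classify a new email using the induced rules."""
    for word in tokens(text):
        if word in rules:
            return rules[word]
    return "normal"

print(rules)                                    # e.g. {'urgent': 'urgent', 'offer': 'spam'}
print(classify("Urgent: please call me back"))  # -> urgent
```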
Abductive Reasoning?
Abductive reasoning is a type of reasoning that draws inferences from the available data. There is no assurance that the conclusion drawn is accurate, though, because the information at hand may not be complete.
 Conclusions drawn from abductive reasoning are only likely to be true: this type of reasoning selects the most likely explanation for a set of incomplete facts.
Although abductive reasoning resembles deductive reasoning in form, the accuracy of its conclusion cannot be guaranteed by the information at hand.
Example of Abductive Reasoning : Let's take an example: Suppose you wake up one morning and find that
the street outside your house is wet.

1.Observation: The street is wet.

2.Possible Hypotheses:

It rained last night.

A water pipe burst.

A street cleaning vehicle just passed by.

Additional Information: You recall that the weather forecast predicted rain for last night.

Abductive Reasoning Conclusion: The most plausible explanation for the wet street, given the forecast and
the lack of any other visible cause, is that it rained last night.
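A tiny Python sketch of this inference to the most plausible explanation; the candidate hypotheses and their plausibility scores are values assumed purely to mirror the story above:

```python
# Sketch of abductive reasoning: choose the hypothesis that best explains an
# observation, given incomplete evidence. The scores are illustrative guesses,
# not measured probabilities, so the conclusion is plausible rather than certain.
observation = "the street outside is wet"

plausibility = {
    "it rained last night": 0.8,                   # supported by the weather forecast
    "a water pipe burst": 0.1,                     # no leak or repair crew visible
    "a street cleaning vehicle just passed": 0.1,  # no other sign of cleaning
}

best_explanation = max(plausibility, key=plausibility.get)
print(f"Most plausible explanation for '{observation}': {best_explanation}")
```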
Sources of Uncertainty in Reasoning
Uncertainty is omnipresent because knowledge may be incomplete or incorrect.
Uncertainty in Data: data may be derived from assumptions rather than direct observation.
Uncertainty in Knowledge Representation: the representation mechanism has limited expressiveness.
Uncertainty in Rules: rules may conflict with one another, and may be incomplete because some conditions are unknown.
Reasoning and KR
Non-Monotonic Reasoning:
In a non-monotonic reasoning system new information can be added which
will cause the deletion or alteration of existing knowledge. For example,
imagine you have invited someone to your house for dinner.
In the absence of any other information you may make an assumption that
your guest eats meat and will like chicken. Later you discover that the guest
is in fact a vegetarian and the inference that your guest likes chicken
becomes invalid.
A system to deal with such non-monotonic knowledge is the Truth Maintenance System (TMS).
The main objective of the TMS is the maintenance of the knowledge base.
A TMS is a mechanism for keeping track of dependencies and detecting inconsistencies. It is also called a reason maintenance system.
A TMS supports a form of non-monotonic reasoning by allowing beliefs to be added to, changed in, or retracted from the knowledge base.
A Truth Maintenance System (TMS), also known as a Reason
Maintenance System, is a type of artificial intelligence (AI) system
designed to handle situations where information might be contradictory
or uncertain. It essentially helps manage the knowledge base of an AI
system by tracking how beliefs and assumptions are formed.
 Here's how a TMS works:
1.Knowledge Representation: The TMS maintains a record of all the
facts and beliefs within the system. This includes both base facts (initial
assumptions) and derived facts (conclusions reached through reasoning).
2.Dependency Tracking: The key aspect of a TMS is that it tracks the
dependencies between these facts. For each derived fact, the TMS
stores the specific base facts and reasoning steps that led to its
conclusion. This creates a network of relationships between beliefs.
3.Maintaining Consistency: Imagine a scenario where a new piece of
information contradicts existing beliefs. This can lead to inconsistencies
in the knowledge base. The TMS detects such inconsistencies and tries to
maintain a coherent view.
Architecture of a Truth Maintenance System
 Role of the Truth Maintenance System
 1. The main job of the TMS is to maintain the “consistency of knowledge” being used by the problem solver, not to perform any inference functions.
 2. The TMS also gives the inference component the latitude to perform non-monotonic inferences.
 3. When discoveries are made, this more recent information can displace previous conclusions that are no longer valid.
 4. The TMS maintains dependency records for all such conclusions.
 5. The procedure used to perform this process is called “Dependency-Directed Backtracking”.
 6. The records are maintained in the form of a “Dependency Network”.
 The nodes in the network represent KB entries such as premises, conclusions, inference rules and the like.
 Attached to the nodes are justifications that represent the inference steps from which the node was derived.
Working Principle of TMS
The Inference Engine (IE) solves domain problems based on its current belief set, while the TMS maintains the currently active belief set. The updating process is incremental: after each inference, information is exchanged between the two components. The IE tells the TMS what deductions it has made. The TMS, in turn, asks questions about current beliefs and reasons for failures. It maintains a consistent set of beliefs for the IE to work with even as new knowledge is added or removed.

Step 1: Say the KB contains the propositions P, P->Q and the rule of modus ponens.
Step 2: From these, the IE would rightfully conclude Q and add this conclusion to the KB.
Step 3: Later, if it were learned that ~P holds, it would be added to the KB, resulting in a contradiction.
Step 4: Consequently, it would be necessary to retract P (and the conclusion Q that depends on it) to eliminate the inconsistency.
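The retraction in Steps 3 and 4 can be sketched with a very small dependency-tracking structure in Python; the data structures below are invented for this illustration and are far simpler than a real TMS:

```python
# Sketch of TMS-style dependency tracking for the example above. Each derived
# belief records the beliefs that justify it; retracting a belief also retracts
# everything that depends on it.
beliefs = {}   # belief -> set of supporting beliefs (None for premises/assumptions)

def assert_belief(b, justification=None):
    beliefs[b] = justification

def retract(b):
    """Remove a belief and, recursively, every belief justified by it."""
    beliefs.pop(b, None)
    for derived, just in list(beliefs.items()):
        if just and b in just:
            retract(derived)

# Steps 1-2: the KB contains P and P->Q; the IE derives Q by modus ponens.
assert_belief("P")
assert_belief("P->Q")
assert_belief("Q", justification={"P", "P->Q"})
print(sorted(beliefs))   # ['P', 'P->Q', 'Q']

# Steps 3-4: ~P is learned, so P is retracted and Q falls with it.
retract("P")
assert_belief("~P")
print(sorted(beliefs))   # ['P->Q', '~P']  (Q is no longer believed)
```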
 Truth Maintenance Systems can have different characteristics:

 Justification-Based Truth Maintenance System (JTMS)
 It is a simple TMS where one can examine the consequences of the current set of assumptions. The meaning of sentences is not known.
 Assumption-Based Truth Maintenance System (ATMS)
 It allows one to maintain and reason with a number of simultaneous, possibly incompatible, current sets of assumptions. Otherwise it is similar to the JTMS, i.e. it does not recognise the meaning of sentences.
 Logic-Based Truth Maintenance System (LTMS)
 Like the JTMS in that it reasons with only one set of current assumptions at a time. More powerful than the JTMS in that it recognises the propositional semantics of sentences, i.e. it understands the relations between p and ~p, p and q, p&q, and so on.
Justification-Based Truth Maintenance
• A Justification-based truth maintenance system (JTMS) is a simple TMS where one can examine the
consequences of the current set of assumptions.
• In a JTMS, labels are attached to arcs from sentence nodes to justification nodes. This label is either "+" or "-". Then, for a justification node we can talk of its in-list, the list of its inputs with "+" label, and of its out-list, the list of its inputs with "-" label.
• The meaning of sentences is not known. We can have a node representing a sentence p and one representing
~p and the two will be totally unrelated, unless relations are established between them by justifications.
• For example, we can write a justification node whose in-list contains both p and ~p (and whose out-list is empty), supporting a CONTRADICTION node, which says that if both p and ~p are IN we have a contradiction.


The association of IN or OUT labels with the nodes in a dependency network defines an in-out-labeling function.
This function is consistent if:
• The label of a justification node is IN iff the labels of all the sentence nodes in its in-list are IN and the labels of all the sentence nodes in its out-list are OUT.
• The label of a sentence node is IN iff it is a premise, or an enabled assumption node, or it has an input from a
justification node with label IN.
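These labelling rules can be sketched as a small fixed-point computation in Python; the two-justification network below is a made-up example, and a real JTMS would also handle incremental relabelling and circular support:

```python
# Sketch of JTMS in/out labelling. A justification is IN iff every node in its
# in-list is IN and every node in its out-list is OUT; a sentence node is IN iff
# it is a premise/enabled assumption or is supported by an IN justification.
premises = {"p"}                        # premises / enabled assumptions are IN

justifications = [
    # (supported sentence, in_list, out_list)
    ("q", {"p"}, set()),                # p  ==>  q
    ("r", set(), {"q"}),                # r holds by default while q is OUT
]

def label_nodes():
    status = {node: True for node in premises}          # True = IN, absent = OUT
    changed = True
    while changed:                                       # propagate until stable
        changed = False
        for sentence, in_list, out_list in justifications:
            justification_in = (all(status.get(n, False) for n in in_list)
                                and all(not status.get(n, False) for n in out_list))
            if justification_in and not status.get(sentence, False):
                status[sentence] = True
                changed = True
    return status

print(label_nodes())   # {'p': True, 'q': True} -> q is IN; r stays OUT because q is IN
```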
Sample Space
Sample Space is a key concept in probability theory and is used to determine the
likelihood of different results occurring in a random experiment or event, by
representing all possible outcomes or events that can occur.
The sample space in probability refers to the set of all possible outcomes or
results that can arise from a random experiment. It serves as the foundation for
calculating probabilities and understanding the variability of outcomes.
How to Find Sample Space in Probability
To find the sample space in probability, follow the steps below:

Identify all possible outcomes of the experiment.


 List these outcomes in a set, ensuring each one is unique.
 For a single die roll, the sample space is {1, 2, 3, 4, 5, 6}.
 For drawing a card from a standard deck, the sample space is 52
unique cards.
 Combining sample spaces when multiple events occur helps calculate
complex probabilities.
 What is Sample Space Diagram
A sample space diagram is a visual representation
that illustrates all the possible outcomes of a
random experiment. It is a valuable tool in
probability theory for visualising and understanding
the different potential results of an event.
 Sample Space Diagram for Rolling Two Dice
 The diagram lists all the possible outcomes, i.e., the sample space of rolling two dice: the 36 ordered pairs (1,1), (1,2), ..., (6,6).
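A short Python sketch that enumerates the same 36 outcomes and uses the sample space to compute a probability (the event "the two dice sum to 7" is chosen just as an example):

```python
# Enumerate the sample space for rolling two dice and compute P(sum = 7).
from itertools import product

sample_space = list(product(range(1, 7), repeat=2))   # all 36 ordered pairs (a, b)
print(len(sample_space))                               # 36

favourable = [(a, b) for a, b in sample_space if a + b == 7]
print(favourable)                                      # [(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)]
print(len(favourable) / len(sample_space))             # 0.1666... = 6/36
```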
ATMS-based problem solver
An ATMS-based problem solver reasons with several simultaneous, possibly incompatible, sets of assumptions, such as the following:
A1. Hotel register was forged.
A2. Hotel register was not forged.
A3. Babbitt's brother-in-law lied.
A4. Babbitt's brother-in-law did not lie.
A5. Cabot lied.
A6. Cabot did not lie.
A7. Abbott, Babbitt, and Cabot are the only possible suspects.
A8. Abbott, Babbitt, and Cabot are not the only suspects.
Applications of Bayes' Theorem
 Statistical inference: to calculate the probability that a new drug is effective in treating a particular disease.
 Bayesian statistics: to update beliefs about parameters and hypotheses.
 Machine learning: to make predictions, such as classifying an email as spam or not spam.
 Medical diagnosis: to update the probability of a disease given new test results or symptoms.
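Bayes' Theorem itself states that P(H | E) = P(E | H) * P(H) / P(E), where H is a hypothesis and E is the observed evidence. A worked Python sketch for the medical-diagnosis application is shown below; the prevalence, sensitivity and false-positive rate are made-up numbers used purely for illustration:

```python
# Worked sketch of Bayes' Theorem:
#   P(disease | positive test) = P(positive | disease) * P(disease) / P(positive)
# All numbers below are illustrative assumptions, not real clinical data.
p_disease = 0.01              # prior: 1% of patients have the disease
p_pos_given_disease = 0.95    # test sensitivity
p_pos_given_healthy = 0.05    # false-positive rate

# Total probability of a positive test (law of total probability).
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Posterior probability of disease after seeing a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # ~0.161: a positive test raises 1% to about 16%
```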
Dempster Shafer Theory
Dempster Shafer Theory (DST) is an evidence theory: it combines all possible outcomes of a problem. It is therefore used to solve problems where different pieces of evidence may point to different results.
The uncertainty in this model is handled by:
Considering all possible outcomes.
 Belief: evidence brought forward for a possibility increases the belief committed to it.
 Plausibility: the degree to which the evidence remains compatible with a possible outcome.
 Mass function m(K): an assignment of evidence to a set of outcomes; for example, m({K or B}) means there is evidence for {K or B} which cannot be divided among the more specific beliefs K and B.
 Belief in K: the belief in an element K of the power set is the sum of the masses of all elements which are subsets of K.
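An illustrative Python sketch of belief and plausibility computed from a mass function: Bel(K) sums the masses of all subsets of K, while Pl(K) sums the masses of all sets that intersect K. The frame {K, B} and the mass values are assumptions made up for this example:

```python
# Sketch of Dempster-Shafer belief and plausibility for a mass function m.
mass = {
    frozenset({"K"}): 0.4,        # evidence specifically for K
    frozenset({"B"}): 0.1,        # evidence specifically for B
    frozenset({"K", "B"}): 0.5,   # m({K or B}): evidence that cannot be split between K and B
}

def belief(hypothesis):
    # Bel(K): sum of the masses of all focal elements that are subsets of K.
    return sum(m for focal, m in mass.items() if focal <= hypothesis)

def plausibility(hypothesis):
    # Pl(K): sum of the masses of all focal elements that intersect K.
    return sum(m for focal, m in mass.items() if focal & hypothesis)

K = frozenset({"K"})
print(belief(K))          # 0.4  (only the mass committed exactly to K)
print(plausibility(K))    # 0.9  (everything not strictly against K: 0.4 + 0.5)
```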
Certainty Factors and Rule-Based Systems

Certainty factors are a way to manage uncertainty in rule-based systems.

Rule-based systems are a type of artificial intelligence (AI) system that uses a set of rules to make decisions.
The certainty factor of a rule's conclusion is calculated by multiplying the certainty factor of the rule
by the minimum of the certainty factors of its premises.
How they work:
Rule-based systems operate by applying a set of "if-then" rules to input data, and the certainty
factor is used to adjust the confidence level of the conclusion based on the strength of the evidence
provided by the rules.
Range of certainty factors:
Certainty factors are usually represented as a value between -1 (completely disbelieved) and +1 (completely certain), with 0 meaning no evidence either way.
Example scenarios using certainty factors:
Credit card fraud detection:
Rule: “If purchase location is significantly different from usual location AND purchase amount is
unusually large, then flag transaction as potentially fraudulent (certainty factor 0.7)”.
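A minimal Python sketch of that calculation applied to the fraud rule above; the two premise certainty values are assumptions chosen only for illustration:

```python
# CF(conclusion) = CF(rule) * min(CF(premise_1), ..., CF(premise_n))
def conclusion_cf(rule_cf, premise_cfs):
    return rule_cf * min(premise_cfs)

cf_location_unusual = 0.9   # how certain we are the purchase location is unusual
cf_amount_large = 0.8       # how certain we are the amount is unusually large

cf_fraud = conclusion_cf(0.7, [cf_location_unusual, cf_amount_large])   # rule CF = 0.7
print(round(cf_fraud, 2))   # 0.56 -> transaction flagged as potentially fraudulent with CF 0.56
```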
How certainty factors are used
Certainty factors were first used in the MYCIN medical expert system.
Certainty factors were one of the most popular ways to
represent and manipulate uncertain knowledge in rule-based
expert systems in the 1980s.
However, some researchers have criticized the certainty-
factor model and have stopped using it.
What are rule-based systems?
Rule-based systems are a basic type of AI model that use a
set of prewritten rules to make decisions and solve problems.
For example, a rule-based system can be used in a customer
service chatbot.
