AI_Unit - 4
Artificial Intelligence
AI&ML – BTMT2502 /
AI& DS – BTAT2502
Prepared by:
Prof. D. Jagadeesan, B.E (CSE), M.Tech (CSE), Ph.D (CSE),
MISTE, MIEEE,MCSI
Unit 4
Logic & Knowledge Representation: First order logic, Inference in
first order logic, Propositional vs. first order inference, Unification &
Lifting, Forward chaining, Backward chaining, Resolution, Learning from
observation, Explanation-based learning, Statistical Learning methods,
and Reinforcement Learning.
First order logic
• First Order Logic also known as Predicate logic, is a formal system used in artificial intelligence,
mathematics and logic to represent relationships between objects and make inferences. It extends
Propositional Logic by incorporating quantifiers, predicates and variables, making it more
expressive.
• Components of Predicate Logic
1.Constants (Objects): Represent specific entities in the domain. Example: Alice, 5, Paris
2.Variables: Represent unspecified elements in the domain. Example: x, y, z
3.Predicates (Relations/Properties): Functions that define a property or a relation between
objects. Example: Loves(Alice, Bob) (Alice loves Bob)
4.Functions: Define deterministic mappings from objects to objects. Example: Father(John) = Robert
(John’s father is Robert)
5.Logical Connectives: ¬ (Negation), ∧ (AND), ∨ (OR), → (Implication), ↔ (Biconditional)
6.Quantifiers:
1. Universal Quantifier (∀): "For all" – expresses generality. Example: ∀x Loves(x, Chocolate)
(Everyone loves chocolate)
2. Existential Quantifier (∃): "There exists" – expresses existence. Example: ∃x Loves(x, Pizza)
(At least one person loves pizza)
First Order Logic
• Examples of Predicate Logic Sentences
1. Basic Statements
• "John is a student." → Student(John)
• "Mary is a doctor." → Doctor(Mary)
2. Relationships Between Objects
• "Alice loves Bob." → Loves(Alice, Bob)
• "Tom is taller than Jerry." → Taller(Tom, Jerry)
• "Paris is the capital of France." → Capital(Paris, France)
• 3. Universal Quantifier (∀) – "For all"
• "All humans are mortal." → ∀x (Human(x) → Mortal(x))
• "Every student studies." → ∀x (Student(x) → Studies(x))
• "If a person is a teacher, they teach." → ∀x (Teacher(x) → Teaches(x))
• 4. Existential Quantifier (∃) – "There exists"
• "Some people like ice cream." → ∃x (Person(x) ∧ Likes(x, IceCream))
• "There exists a person who is a millionaire." → ∃x (Person(x) ∧ Millionaire(x))
• "Some dogs are friendly." → ∃x (Dog(x) ∧ Friendly(x))
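The quantified sentences above can be checked mechanically over a small finite domain. The following is a minimal sketch; the domain and the membership sets standing in for the Human, Mortal, and Likes predicates are illustrative assumptions, not part of the formal definition of FOL.

```python
# Evaluating quantified FOL sentences over a finite domain of objects.
domain = ["Socrates", "Plato", "Fido"]

human = {"Socrates", "Plato"}           # objects for which Human(x) holds
mortal = {"Socrates", "Plato", "Fido"}  # objects for which Mortal(x) holds
likes_ice_cream = {"Plato"}             # objects for which Likes(x, IceCream) holds

# Universal quantifier: ∀x (Human(x) → Mortal(x))
# An implication P → Q is equivalent to ¬P ∨ Q.
all_humans_mortal = all((x not in human) or (x in mortal) for x in domain)

# Existential quantifier: ∃x (Person(x) ∧ Likes(x, IceCream))
someone_likes_ice_cream = any(x in likes_ice_cream for x in domain)

print(all_humans_mortal)        # True
print(someone_likes_ice_cream)  # True
```

Note how `all()` and `any()` mirror ∀ and ∃ directly when the domain is finite.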
Inference in first order logic
• Inference in First-Order Logic is used to deduce new facts or sentences from
existing sentences. Before understanding the FOL inference rule, let's understand
some basic terminologies used in FOL.
• Substitution:
• Substitution is a fundamental operation performed on terms and formulas, and it occurs in all
inference systems in first-order logic. Substitution becomes more complex in the presence of
quantifiers. Writing F[a/x] means substituting the constant "a" for the variable "x" in F.
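Substitution can be sketched in a few lines of Python. The term encoding below (plain strings for variables and constants, tuples for compound terms) is an illustrative assumption, not a standard API.

```python
# A minimal sketch of substitution F[a/x] on terms.
def substitute(term, subst):
    """Apply a substitution {variable: replacement} to a term."""
    if isinstance(term, str):                 # a variable or constant
        return subst.get(term, term)          # replace it if it is mapped
    functor, *args = term                     # a compound term f(t1, ..., tn)
    return (functor,) + tuple(substitute(a, subst) for a in args)

# Loves(x, Chocolate)[Alice/x]  →  Loves(Alice, Chocolate)
print(substitute(("Loves", "x", "Chocolate"), {"x": "Alice"}))
```

Constants and unmapped variables pass through unchanged, which is exactly the behaviour substitution requires.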
• FOL inference rules for quantifier:
• As in propositional logic, we also have inference rules in first-order logic. Following
are some basic inference rules in FOL:
• Universal Generalization
• Universal Instantiation
• Existential Instantiation
• Existential introduction
Inference in first order logic
• Universal Generalization
• Universal generalization is a valid inference rule which states that if premise P(c) is true for any arbitrary
element c in the universe of discourse, then we can conclude ∀x P(x).
• It can be represented as: P(c) ⟹ ∀x P(x)
• Universal Instantiation
• Universal instantiation, also called universal elimination or UI, is a valid inference rule: from ∀x P(x)
we can infer P(c) for any constant c. It can be applied multiple times to add new sentences.
• It can be represented as: ∀x P(x) ⟹ P(c)
• Existential Instantiation
• Existential instantiation, also called existential elimination, is a valid inference rule in first-
order logic: from ∃x P(x) we can infer P(c) for a new constant c that does not appear elsewhere in
the knowledge base (a Skolem constant).
• It can be represented as: ∃x P(x) ⟹ P(c), for a new constant symbol c
• Existential Introduction
• Existential introduction, also known as existential generalization, is a valid inference rule
in first-order logic: from P(c) we can infer ∃x P(x).
• It can be represented as: P(c) ⟹ ∃x P(x)
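Two of the rules above can be sketched as naive string rewrites. This is a toy illustration only (real systems manipulate parsed terms, not raw strings, and the sentences used are made up for the example):

```python
# Toy sketches of two FOL quantifier inference rules as string rewrites.
def universal_instantiation(body, variable, constant):
    """From ∀x P(x), derive P(c) by replacing the variable with a constant."""
    return body.replace(variable, constant)

def existential_generalization(sentence, constant, variable):
    """From P(c), derive ∃x P(x) by replacing the constant with a variable."""
    return "∃" + variable + " " + sentence.replace(constant, variable)

# ∀x King(x) ∧ Greedy(x) → Evil(x), instantiated with x = John
print(universal_instantiation("King(x) ∧ Greedy(x) → Evil(x)", "x", "John"))
# Millionaire(Kate)  →  ∃x Millionaire(x)
print(existential_generalization("Millionaire(Kate)", "Kate", "x"))
```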
Propositional vs. First Order Inference
• Propositional inference works only on ground (variable-free) sentences, while first-order
inference must handle variables and quantifiers.
• A first-order KB can be reduced to propositional inference by instantiating universally
quantified sentences with all possible ground terms, but this generates many irrelevant
sentences; lifted inference rules instead work directly with variables, using unification.
• Unification is the process of finding a substitution θ that makes two sentences identical:
UNIFY(p, q) = θ, where SUBST(θ, p) = SUBST(θ, q)
θ is called the unifier.
Unification
• Following are some basic conditions for unification:
• Predicate symbol must be same, atoms or expression with different
predicate symbol can never be unified.
• No. of arguments in both expressions must be identical
• Unification will fail if there are two similar variables present in the same
expression.
• Example
• UNIFY(Knows(John, x), Knows(John, kumar))
SUBST θ = {x/kumar}
(Knows(John, kumar), Knows(John, kumar))
Successfully unified
Unification
• Example
• UNIFY(Knows(John, x), Knows(y, kumar))
SUBST θ = {x/kumar}
S1 = (Knows(John, kumar), Knows(y, kumar))
SUBST θ = {John/y}
S2 = (Knows(John, kumar), Knows(John, kumar))
Unifier θ = {x/kumar, John/y}
Successfully unified
• For example, unification can help determine that "he" refers to a specific
person or entity mentioned earlier in a text.
• Logic Programming: Unification is a cornerstone of logic programming
languages like Prolog. In logic programming, unification is used to match
query predicates with database predicates.
• Example: UNIFY(Q(a, g(x, a), f(y)), Q(a, g(f(b), a), x))
SUBST θ = {f(b)/x}
SUBST θ = {b/y}
S1 => {Q(a, g(f(b), a), f(b)); Q(a, g(f(b), a), f(b))}, Successfully Unified.
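The unification procedure can be sketched compactly in Python. This is a simplified version of Robinson's algorithm: the term encoding (lowercase strings as variables, capitalized strings as constants, tuples as compound terms) is an assumption for illustration, and the occurs check is omitted for brevity.

```python
# A compact sketch of the unification algorithm.
def is_var(t):
    """Lowercase strings are variables; everything else is a constant/term."""
    return isinstance(t, str) and t[0].islower()

def unify(p, q, theta=None):
    """Return the most general unifier of p and q, or None on failure."""
    if theta is None:
        theta = {}
    if p == q:
        return theta
    if is_var(p):
        return unify_var(p, q, theta)
    if is_var(q):
        return unify_var(q, p, theta)
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q):
        for a, b in zip(p, q):        # same predicate symbol, same arity
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None                       # predicate or arity mismatch

def unify_var(var, t, theta):
    if var in theta:                  # variable already bound: unify bindings
        return unify(theta[var], t, theta)
    theta[var] = t                    # note: occurs check omitted for brevity
    return theta

# UNIFY(Knows(John, x), Knows(y, Kumar))  →  {x/Kumar, y/John}
print(unify(("Knows", "John", "x"), ("Knows", "y", "Kumar")))
```

Different predicate symbols or argument counts fail immediately, matching the unification conditions listed above.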
Inference engine
• The inference engine is the component of the intelligent system in artificial
intelligence, which applies logical rules to the knowledge base to infer new
information from known facts.
• The first inference engine was part of the expert system.
• Inference engine commonly proceeds in two modes, which are:
1.Forward chaining
2.Backward chaining
Forward chaining
• Forward chaining is also known as a forward deduction or forward reasoning method when using
an inference engine.
• Forward chaining is a form of reasoning which starts with the atomic sentences in the knowledge base
and applies inference rules in the forward direction to extract more data until a goal is reached.
• The forward-chaining algorithm starts from known facts, triggers all rules whose premises are
satisfied, and adds their conclusions to the known facts. This process repeats until the problem is
solved.
• Properties of Forward-Chaining:
• It is a bottom-up approach.
• It is a process of making a conclusion based on known facts or data, starting from the
initial state and reaching the goal state.
• The forward-chaining approach is also called data-driven, as we reach the goal using the
available data.
• The forward-chaining approach is commonly used in expert systems (such as CLIPS), business
rule systems, and production rule systems.
Forward chaining
• Example:
• Facts: It is raining
• Rule: If it is raining, the street is wet
• Conclusion: The Street is wet
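The rain example above can be run by a tiny forward-chaining loop. This is a minimal sketch; the second rule is an extra, assumed rule added only to show the chaining over two steps.

```python
# A minimal forward-chaining engine: rules fire whenever all their premises
# are in the fact base, and their conclusions are added until nothing new
# can be inferred (a fixed point is reached).
facts = {"it is raining"}
rules = [
    ({"it is raining"}, "the street is wet"),
    ({"the street is wet"}, "driving is slippery"),  # extra rule, assumed
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)     # data-driven: conclusions become facts
            changed = True

print(sorted(facts))
```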
• Example: "It is a crime for an American to sell weapons to hostile nations. Country A, an enemy of
America, has some missiles, all of which were sold to it by Robert, who is an American. Missiles are
weapons. Prove that Robert is a criminal."
• The facts and rules in FOL:
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
Owns(A, T1) ...(2)
Missile(T1) ...(3)
Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ...(4)
Missile(p) → Weapon(p) ...(5)
Enemy(p, America) → Hostile(p) ...(6)
Enemy(A, America) ...(7)
American(Robert) ...(8)
Step-1: we start with the known facts: American(Robert), Missile(T1), Owns(A, T1), and Enemy(A, America).
Step-2: we will see which facts can be inferred from the available facts with satisfied premises.
Rule-(1) does not satisfy its premises, so it will not be added in the first iteration.
Rules (2) and (3) are already added.
Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is inferred from the
conjunction of Rules (2) and (3).
Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from Rule-(7).
Forward chaining
• Step-3:
• we can check that Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add
Criminal(Robert), which is inferred from all the available facts. Hence we have reached our goal statement.
Backward chaining
• Backward chaining proof:
• Step-1: we start with the goal fact Criminal(Robert), placed at the top as the statement to prove.
• Step-2: we will infer other facts from the goal fact which satisfy the rules. As we can see in Rule-(1), the goal
predicate Criminal(Robert) is present with the substitution {Robert/p}, so we will add all the conjunctive facts below
the first level and replace p with Robert.
Backward chaining
• Backward chaining proof:
• Step-3: we will extract the further fact Missile(q), which is inferred from Weapon(q), as it satisfies Rule-(5).
Weapon(q) is also true with the substitution of the constant T1 at q.
• Step-4: we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4) with
the substitution of A in place of r. So these two statements are proved here.
Backward chaining
• Backward chaining proof:
• At Step-5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule-(6). Hence all
the statements are proved true using backward chaining.
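The backward-chaining proof above can be sketched in Python with ground (already-instantiated) rules; a full implementation would use unification to bind the variables p, q, and r instead of hard-coding them.

```python
# A minimal backward-chaining sketch: to prove a goal, either find it among
# the known facts or find a rule that concludes it and recursively prove
# each of that rule's premises (goal-driven reasoning).
facts = {"Missile(T1)", "Owns(A, T1)", "Enemy(A, America)", "American(Robert)"}
rules = [
    (["Missile(T1)", "Owns(A, T1)"], "Sells(Robert, T1, A)"),
    (["Missile(T1)"], "Weapon(T1)"),
    (["Enemy(A, America)"], "Hostile(A)"),
    (["American(Robert)", "Weapon(T1)", "Sells(Robert, T1, A)", "Hostile(A)"],
     "Criminal(Robert)"),
]

def prove(goal):
    if goal in facts:                      # base case: goal is a known fact
        return True
    return any(all(prove(p) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(prove("Criminal(Robert)"))  # True
```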
Forward and Backward chaining
• Problems:
• Problem 1: As per medical law, it is a violation for a licensed doctor to prescribe banned drugs to
patients. Drug X is a banned drug. Dr. Smith is a licensed doctor and has prescribed Drug X to a
patient. (Using Forward Chaining)
• Problem 3: If a student has passed high school and scored above 90% in their final exams, they are
eligible for admission. If a student is eligible for admission and has extracurricular achievements,
they are shortlisted for an interview. If a student passes the interview, they are admitted to the
university. (Using Forward and Backward Chaining)
Resolution
• Resolution is a theorem-proving technique that proceeds by building
refutation proofs, i.e., proofs by contradiction. It was invented by the
mathematician John Alan Robinson in 1965.
• Resolution is used when several statements are given and we
need to prove a conclusion from those statements. Unification is a key
concept in proofs by resolution. Resolution is a single inference rule
which can efficiently operate on conjunctive normal form or clausal
form.
• Clause: A disjunction of literals (atomic sentences or their negations) is called a clause. A
clause containing a single literal is known as a unit clause.
• Conjunctive Normal Form: A sentence represented as a conjunction of
clauses is said to be conjunctive normal form or CNF.
Example:
We can resolve the two clauses given below:
[Animal(g(x)) ∨ Loves(f(x), x)] and [¬Loves(a, b) ∨ ¬Kills(a, b)]
• where the two complementary literals are:
Loves(f(x), x) and ¬Loves(a, b)
• These literals can be unified with the unifier θ = [a/f(x), b/x], and resolution
will generate the resolvent clause: [Animal(g(x)) ∨ ¬Kills(f(x), x)].
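A single resolution step can be sketched on clauses whose literals have already been unified; the `~` prefix for negation and the set-of-strings clause encoding are assumptions made for the sketch.

```python
# One propositional resolution step: given two clauses (sets of literals),
# cancel a complementary pair and return the resolvent clause(s).
def resolve(c1, c2):
    """Return the resolvents of two clauses (frozensets of literals)."""
    resolvents = []
    for lit in c1:
        # The complement of ~P is P; the complement of P is ~P.
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:
            resolvents.append((c1 - {lit}) | (c2 - {neg}))
    return resolvents

# Resolving {Animal(g(x)), Loves(f(x), x)} with {~Loves(f(x), x), ~Kills(f(x), x)}
# (the unifier already applied) yields {Animal(g(x)), ~Kills(f(x), x)}.
c1 = frozenset({"Animal(g(x))", "Loves(f(x),x)"})
c2 = frozenset({"~Loves(f(x),x)", "~Kills(f(x),x)"})
print(resolve(c1, c2))
```

A full refutation prover repeats this step on the CNF clauses plus the negated goal until the empty clause is derived.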
The Resolution Steps
• Step-1: Conversion of the given facts into first-order logic.
• Step-2: Convert the FOL statements into CNF (conjunctive normal form).
• Step-3: Negate the statement which needs to be proved (proof by contradiction).
• Step-4: Draw the resolution graph, resolving clauses (with unification) until the empty clause is derived.
Supervised Learning
• Supervised learning is a type of machine learning that learns from labelled data, i.e., examples paired with
their correct outputs.
• Types of Supervised Learning: Supervised learning is classified into two categories of algorithms:
• Regression: A regression problem is when the output variable is a real value, such as “dollars” or
“weight”.
• Classification: A classification problem is when the output variable is a category, such as “Yes” or “No”
, “disease” or “no disease”.
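Both task types can be illustrated on tiny, made-up data: a 1-nearest-neighbour classifier for the "Yes"/"No" case and a least-squares line for the real-valued case. This is a sketch for intuition, not a production approach.

```python
# Classification: 1-nearest-neighbour on labelled 2-D points (made-up data).
train = [((1.0, 1.0), "No"), ((1.2, 0.8), "No"),
         ((4.0, 4.2), "Yes"), ((4.5, 3.9), "Yes")]

def classify(point):
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    # Predict the label of the closest training example.
    return min(train, key=lambda ex: sq_dist(ex[0], point))[1]

# Regression: least-squares line y = w*x + b fitted to made-up (x, y) pairs.
xs, ys = [1, 2, 3, 4], [12, 22, 32, 42]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - w * mx

print(classify((4.1, 4.0)))  # "Yes"
print(w * 5 + b)             # prediction for x = 5 → 52.0
```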
Unsupervised Learning
• Unsupervised learning in artificial intelligence is a type of machine learning that learns from
data without human supervision.
• Unsupervised learning is a type of machine learning that works with data that has no labels or
categories. The main goal is to find patterns and relationships in the data without any guidance.
• For example, an unlabeled dataset of images of elephants, camels and cows would have
no tags at all; the algorithm must group the similar images together on its own.
• Types of Unsupervised Learning: Unsupervised learning is classified into two categories of algorithms:
• Clustering: A clustering problem is where you want to discover the inherent groupings in the data,
such as grouping customers by purchasing behavior.
• Association: An association rule learning problem is where you want to discover rules that describe
large portions of your data, such as "people who buy X also tend to buy Y".
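Clustering can be sketched with a minimal k-means loop (k = 2) on made-up one-dimensional purchase amounts; the data values and initial centroids are illustrative assumptions.

```python
# A minimal k-means sketch (k = 2): assign each point to its nearest
# centroid, then move each centroid to the mean of its group, and repeat.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
c1, c2 = 0.0, 10.0                       # initial centroid guesses

for _ in range(10):                      # a few refinement iterations
    g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
    c1 = sum(g1) / len(g1)               # recompute centroid of group 1
    c2 = sum(g2) / len(g2)               # recompute centroid of group 2

print(round(c1, 1), round(c2, 1))        # the two discovered cluster centres
```

No labels were given; the two groupings emerge from the data alone, which is the defining trait of unsupervised learning.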
Reinforcement learning
• Reinforcement Learning (RL) is a branch of machine learning that focuses on how agents can learn to
make decisions through trial and error to maximize cumulative rewards. RL allows machines to learn by
interacting with an environment and receiving feedback based on their actions. This feedback comes in the
form of rewards or penalties.
• Reinforcement Learning revolves around the idea that an agent (the learner or decision-maker) interacts with an
environment to achieve a goal. The agent performs actions and receives feedback to optimize its decision-making
over time.
• Agent: The decision-maker that performs actions.
• Environment: The world or system in which the agent operates.
• State: The situation or condition the agent is currently in.
• Action: The possible moves or decisions the agent can make.
• Reward: The feedback or result from the environment based on the agent’s action.
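The agent–environment loop above can be sketched with tabular Q-learning on a toy 5-cell corridor; the environment, reward scheme, and hyperparameters are illustrative assumptions, not part of the slides.

```python
# Tabular Q-learning on a 5-cell corridor: the agent starts in cell 0 and
# is rewarded only for reaching cell 4. Actions move one cell left/right.
import random

random.seed(0)
n_states, actions = 5, [-1, +1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(200):                     # training episodes
    s = 0
    while s != 4:
        # Epsilon-greedy action selection: explore sometimes, else exploit.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)   # environment transition
        r = 1.0 if s2 == 4 else 0.0             # reward only at the goal
        # Q-learning update toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# The learned greedy policy should prefer moving right, toward the goal.
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(4)}
print(policy)
```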
Statistical Learning methods
• Statistics is the science of collecting, organizing, analyzing, interpreting, and
presenting data. It encompasses a wide range of techniques for summarizing
data, making inferences, and drawing conclusions.
• Statistical methods help quantify uncertainty and variability in data, allowing
researchers and analysts to make data-driven decisions with confidence.
Types of Statistics
• There are commonly two types of statistics, which are discussed below:
• Descriptive Statistics: It helps us simplify and organize big chunks of
data. This makes large amounts of data easier to understand.
• Inferential Statistics: It is a little different. It uses smaller data to draw
conclusions about a larger group. It helps us predict and draw
conclusions about a population.
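The two types can be contrasted on a small made-up sample: descriptive statistics summarize the sample itself, while inferential statistics use it to estimate a population quantity. The 1.96 factor below assumes a normal approximation for a rough 95% confidence interval.

```python
# Descriptive vs. inferential statistics on a made-up sample.
sample = [12, 15, 14, 10, 13, 14, 16, 12]
n = len(sample)

# Descriptive: summarize this particular data set.
mean = sum(sample) / n
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
std = variance ** 0.5

# Inferential: estimate the population mean from the sample,
# using the standard error and the normal-approximation factor 1.96.
se = std / n ** 0.5
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(round(mean, 2), round(std, 2))
print(tuple(round(v, 2) for v in ci))
```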
Statistical Learning methods
• Applications of Statistics in Machine Learning
• Statistics is a key component of machine learning, with broad applicability in various
fields.
• Feature engineering relies heavily on statistics to convert geometric features into
meaningful predictors for machine learning algorithms.
• In image processing tasks like object recognition and segmentation, statistics
accurately reflect the shape and structure of objects in images.
• Anomaly detection and quality control benefit from statistics by identifying
deviations from norms, aiding in the detection of defects in manufacturing
processes.
• Environmental observation and geospatial mapping leverage statistical analysis to
monitor land cover patterns and ecological trends effectively.