
UNIT 2

Games - Optimal Decisions in Games, Alpha–Beta Pruning, Defining Constraint Satisfaction Problems,
Constraint Propagation, Backtracking Search for CSPs, Knowledge-Based Agents, Logic- Propositional Logic,
Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn clauses and definite clauses.
Game Theory
Game Theory is a mathematical framework used to analyse and understand strategic interactions between
rational decision-makers, known as players, in various scenarios. It provides a systematic approach to
studying decision-making in competitive situations where the outcome of one player's actions depends on the
actions of others.
Game Theory focuses on interactions between rational decision-makers, known as players. Players can
be individuals, companies, nations, or any entities capable of making strategic decisions. Each
player has a set of strategies or actions they can take, and their objective is to maximize their utility
or payoff based on the outcomes of their actions and the actions of others.

Game Playing
Adversarial search, or game-tree search, is a technique for analyzing an adversarial game in order to try
to determine who can win the game and what moves the players should make in order to win.
Adversarial search is one of the oldest topics in Artificial Intelligence. The original ideas for
adversarial search were developed by Shannon in 1950 and independently by Turing in 1951, in the
context of the game of chess—and their ideas still form the basis for the techniques used today.
2- Person Games:
o Players: We call them Max and Min.
o Initial State: Includes board position and whose turn it is.
o Operators: These correspond to legal moves.
o Terminal Test: A test applied to a board position which determines whether the game is over. In
chess, for example, this would be a checkmate or stalemate situation.
o Utility Function: A function which assigns a numeric value to a terminal state. For example, in
chess the outcome is win (+1), lose (-1) or draw (0). Note that by convention, we always measure
utility relative to Max.
Optimal Decision Making in Games
Let us start with games with two players, whom we’ll refer to as MAX and MIN for obvious reasons. MAX
is the first to move, and then they take turns until the game is finished. At the conclusion of the game, the
victorious player receives points, while the loser receives penalties. A game can be formalized as a type of
search problem that has the following elements:
 S0: The initial state of the game, which describes how it is set up at the start.
 Player (s): Defines which player in a state has the move.
 Actions (s): Returns a state’s set of legal moves.
 Result (s, a): A transition model that defines a move’s outcome.
 Terminal-Test (s): A terminal test that returns true if the game is over but false otherwise.
Terminal states are those in which the game has come to a conclusion.
 Utility (s, p): A utility function (also known as a payoff function or objective function)
determines the final numeric value for a game that concludes in the terminal state s for player p.
The result in chess is a win, a loss, or a draw, with values +1, 0, or 1/2. Backgammon’s payoffs
range from 0 to +192, but certain games have a greater range of possible outcomes. A zero-sum
game is defined (confusingly) as one in which the total payoff to all players is the same for every
game instance. Chess is a zero-sum game because each game has a total payoff of 0 + 1, 1 + 0, or
1/2 + 1/2. “Constant-sum” would have been a preferable name, but zero-sum is the usual term and
makes sense if each participant is charged an entry fee of 1/2.
The game tree for the game is defined by the beginning state, ACTIONS function, and RESULT function—a
tree in which the nodes are game states and the edges represent moves. The figure below depicts a
portion of the tic-tac-toe (noughts and crosses) game tree. MAX has nine possible first moves from the
initial position. Play alternates between MAX placing an X and MIN placing an O until we reach leaf
nodes corresponding to terminal states, such as one player having three in a row or all of the squares being
filled. The number on each leaf node is the utility value of the terminal state from the perspective of
MAX; high values are good for MAX and bad for MIN.

Mini-Max Algorithm

The Mini-Max algorithm is a decision-making algorithm used in artificial intelligence, particularly in game
theory and computer games. It is designed to minimize the possible loss in a worst-case scenario (hence
"min") and maximize the potential gain (therefore "max").
In a two-player game, one player is the maximizer, aiming to maximize their score, while the other is the
minimizer, aiming to minimize the maximizer's score. The algorithm operates by evaluating all possible
moves for both players, predicting the opponent's responses, and choosing the optimal move to ensure the
best possible outcome.
Working of Min-Max Process in AI
The Min-Max algorithm is a decision-making process used in artificial intelligence for two-player games. It
involves two players: the maximizer and the minimizer, each aiming to optimize their own outcomes.
Players Involved
Maximizing Player (Max):
 Aims to maximize their score or utility value.
 Chooses the move that leads to the highest possible utility value, assuming the opponent will play
optimally.
Minimizing Player (Min):
 Aims to minimize the maximizer's score or utility value.
 Selects the move that results in the lowest possible utility value for the maximizer, assuming the
opponent will play optimally.
The interplay between these two players is central to the Min-Max algorithm, as each player attempts to
outthink and counter the other's strategies.
Steps Involved in the Mini-Max Algorithm
The Min-Max algorithm involves several key steps, executed recursively until the optimal move is
determined. Here is a step-by-step breakdown:
Step 1: Generate the Game Tree
 Objective: Create a tree structure representing all possible moves from the current game state.
 Details: Each node represents a game state, and each edge represents a possible move.
Step 2: Evaluate Terminal States
 Objective: Assign utility values to the terminal nodes of the game tree.
 Details: These values represent the outcome of the game (win, lose, or draw).
Step 3: Propagate Utility Values Upwards
 Objective: Starting from the terminal nodes, propagate the utility values upwards through the tree.
 Details: For each non-terminal node:
o If it's the maximizing player's turn, select the maximum value from the child
nodes.
o If it's the minimizing player's turn, select the minimum value from the child
nodes.
Step 4: Select Optimal Move
 Objective: At the root of the game tree, the maximizing player selects the move that leads to the
highest utility value.
Min-Max Formula
The Min-Max value of a node in the game tree is calculated using the following recursive formulas:
1. Maximizing Player's Turn:
 Max(s) = max over a ∈ A(s) of Min(Result(s, a))
 Here:
o Max(s) is the maximum value the maximizing player can achieve from state s.
o A(s) is the set of all possible actions from state s.
o Result(s, a) is the resulting state from taking action a in state s.
o Min(Result(s, a)) is the value for the minimizing player from the resulting state.
2. Minimizing Player's Turn:
 Min(s) = min over a ∈ A(s) of Max(Result(s, a))
 Here:
o Min(s) is the minimum value the minimizing player can achieve from state s.
o The other terms are defined as above.
Terminal States
For terminal states, the utility value is directly assigned:
 Utility(s) = +1 if the maximizing player wins from state s
 Utility(s) = 0 if the game is a draw from state s
 Utility(s) = -1 if the minimizing player wins from state s
Example Calculation
Consider a simple game where the utility values of terminal states are given. To illustrate the Min-Max
calculations:
1. Start from the terminal states and calculate the utility values.
2. Propagate these values up the tree using the Min-Max formulas.
For example, if the terminal states have utility values U1, U2, …, Un, then:
 For the maximizing player's node: Max(s) = max(U1, U2, …, Un)
 For the minimizing player's node: Min(s) = min(U1, U2, …, Un)
Pseudocode for Min-Max Algorithm
This pseudocode demonstrates the recursive nature of the Min-Max algorithm, alternating between the
maximizing and minimizing players, and evaluating utility values until the optimal move is determined.
function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then                  // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth - 1, false)
            maxEva = max(maxEva, eva)         // maximum of the values
        return maxEva
    else                                      // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth - 1, true)
            minEva = min(minEva, eva)         // minimum of the values
        return minEva
Example of Min-Max in Action
Consider a simplified version of a game where each player can choose between two moves at each turn.
Here's a basic game tree:
         Max
        /   \
     Min     Min
     / \     / \
   +1  -1   0  +1
 At the leaf nodes, the utility values are +1, -1, 0, and +1.
 The minimizing player will choose the minimum values from the child nodes: -1 (left subtree) and
0 (right subtree).
 The maximizing player will then choose the maximum value between -1 and 0, which is 0.
Thus, the optimal move for the maximizing player, considering optimal play by the minimizer, leads to a
utility value of 0.
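The procedure above can be written as a short recursive function. This is a minimal Python sketch (my own illustration, not from the original text): the game tree is encoded as nested lists, and a plain number is a terminal utility value.

```python
# Minimal minimax sketch: a node is either a number (terminal utility)
# or a list of child nodes; the players strictly alternate.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):          # terminal node
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The example tree: Max chooses between two Min nodes.
tree = [[+1, -1], [0, +1]]
print(minimax(tree, True))  # -> 0
```

As in the example, the two Min nodes evaluate to -1 and 0, and the maximizer picks 0.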
Example

Alpha-beta pruning algorithm:

• Pruning: eliminating a branch of the search tree from consideration without exhaustively
examining each node.
• The basic idea is to prune portions of the search tree that cannot improve the
utility value of the max or min node, by considering only the values of nodes seen so far.
• Alpha-beta pruning is used on top of minimax search to detect paths that do not need to be
explored. The intuition is:
• The MAX player is always trying to maximize the score. Call this alpha.
• The MIN player is always trying to minimize the score. Call this beta.
• Alpha cutoff: Given a Max node n, cut off the search below n (i.e., don't generate or examine any
more of n's children) if alpha(n) >= beta(n)
(alpha increases and passes beta from below).
• Beta cutoff: Given a Min node n, cut off the search below n (i.e., don't generate or examine any
more of n's children) if beta(n) <= alpha(n)
(beta decreases and passes alpha from above).
• Carry alpha and beta values down during the search. Pruning occurs whenever alpha >= beta.
Algorithm:

Pseudo-code for Alpha-beta Pruning


function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then                  // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth - 1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break
        return maxEva
    else                                      // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth - 1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, minEva)
            if beta <= alpha then
                break
        return minEva
Let’s make the above algorithm clear with an example.

 The initial call starts from A. The value of alpha here is -INFINITY and the value of beta is
+INFINITY. These values are passed down to subsequent nodes in the tree. At A the maximizer
must choose the max of B and C, so A calls B first.
 At B, the minimizer must choose the min of D and E, and hence
calls D first.
 At D, it looks at its left child, which is a leaf node. This node returns a
value of 3. Now the value of alpha at D is max( -INF, 3) which is 3.
 To decide whether it is worth looking at its right node, D checks
the condition beta <= alpha. This is false since beta = +INF and alpha
= 3, so it continues the search.
 D now looks at its right child, which returns a value of 5. At D, alpha =
max(3, 5) which is 5. Now the value of node D is 5.
 D returns a value of 5 to B. At B, beta = min( +INF, 5) which is 5. The
minimizer is now guaranteed a value of 5 or less. B now calls E to
see if it can get a value lower than 5.
 At E, the values of alpha and beta are not -INF and +INF but -INF
and 5 respectively, because the value of beta was changed
at B and that is what B passed down to E.
 Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6)
which is 6. Here the condition becomes true: beta is 5 and alpha is 6,
so beta <= alpha is true. Hence E breaks and returns 6 to B.
 Note how it did not matter what the value of E’s right child is. It could
have been +INF or -INF; it still wouldn’t matter. We never even had to
look at it because the minimizer was guaranteed a value of 5 or
less. So as soon as the maximizer saw the 6, it knew the minimizer
would never come this way, because it can get a 5 on the left side
of B. This way we didn’t have to look at that 9, and hence saved
computation time.
 E returns a value of 6 to B. At B, beta = min( 5, 6) which is 5. The
value of node B is also 5.
So far this is how our game tree looks. The 9 is crossed out because it was
never computed.

 B returns 5 to A. At A, alpha = max( -INF, 5) which is 5. Now the
maximizer is guaranteed a value of 5 or greater. A now calls C to see
if it can get a value higher than 5.
 At C, alpha = 5 and beta = +INF. C calls F.
 At F, alpha = 5 and beta = +INF. F looks at its left child, which is a 1.
alpha = max( 5, 1), which is still 5.
 F looks at its right child, which is a 2. Hence the best value of this
node is 2. Alpha still remains 5.
 F returns a value of 2 to C. At C, beta = min( +INF, 2). The condition
beta <= alpha becomes true as beta = 2 and alpha = 5. So C breaks,
and it does not even have to compute the entire subtree of G.
 The intuition behind this cutoff is that at C the minimizer was
guaranteed a value of 2 or less. But the maximizer was already
guaranteed a value of 5 if it chose B. So why would the maximizer
ever choose C and get a value of at most 2? Again, it did not matter
what those last two values were. We also saved a lot of
computation by skipping a whole subtree.
 C now returns a value of 2 to A. Therefore the best value at A is
max( 5, 2), which is 5.
 Hence the optimal value that the maximizer can get is 5.
This is how our final game tree looks. As you can see, G has been crossed
out as it was never computed.
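The walkthrough above can be reproduced with a small Python sketch (my own illustration, not from the original text). The tree is encoded as nested lists: A's children are B and C, whose children are D, E, F, and G, with leaves holding the static values. Since G's two leaves were pruned in the example and never computed, the 0 and 7 below are arbitrary placeholders; the result does not depend on them.

```python
import math

# Alpha-beta sketch over the walkthrough's tree: a node is either a
# number (static value) or a list of children; A is Max, B and C are Min.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break                       # cutoff: prune remaining children
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break                           # cutoff: prune remaining children
    return value

# B = [D, E] = [[3, 5], [6, 9]]; C = [F, G]; G's leaves are placeholders.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, 7]]]
print(alphabeta(tree, -math.inf, math.inf, True))  # -> 5
```

Swapping G's placeholder leaves for any other values leaves the answer at 5, which is exactly the point of the cutoff at C.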

Example:

1) Setup phase: Assign to each left-most (or right-most) internal node of the tree, variables:
alpha = -infinity, beta = +infinity
2) Look at first computed final configuration value. It’s a 3. Parent is a min
node, so set the beta (min) value to 3.
3) Look at next value, 5. Since parent is a min node, we want the minimum of
3 and 5 which is 3. Parent min node is done – fill alpha (max) value of its parent max
node. Always set alpha for max nodes and beta for min nodes. Copy the state of the max
parent node into the second unevaluated min child.

4) Look at the next value, 2. Since the parent node is a min node with beta = +inf and 2 is smaller, change beta to 2.
5) Now the min parent node has alpha = 3 and beta = 2. The value of the
2nd child does not matter. If it is > 2, 2 will be selected for the min node. If it is < 2, it will be
selected for the min node, but since it is < 3 it will not get selected by the parent max node.
Thus, we prune the right subtree of the min node. Propagate the max value up the tree.

6) Max node is now done and we can set the beta value of its parent and
propagate node state to sibling subtree’s left-most path.
7) The next node is 10. Since 10 is not smaller than 3, the state of the parent does not change. We still
have to look at the 2nd child since alpha is still -inf.

8) The next node is 4. The smallest value goes to the parent min node. The min subtree is
done, so the parent max node gets the alpha (max) value from the child. Note that
if the max node had a 2nd subtree, we could prune it, since alpha > beta.
9) Continue propagating value up the tree, modifying the corresponding alpha/beta values.
Also propagate the state of root node down the left-most path of the right subtree.

10) Next value is a 2. We set the beta (min) value of the min parent to 2. Since
no other children exist, we propagate the value up the tree.
11) We have a value for the 3rd-level max node, so now we can modify the
beta (min) value of the min parent to 2. Now we have a situation where alpha > beta,
and thus the value of the rightmost subtree of the min node does not
matter, so we prune the whole subtree.

12) Finally, no more nodes remain, we propagate values up the tree. The
root has a value of 3 that comes from the left-most child. Thus, the player
should choose the left-most child’s move in order to maximize his/her
winnings. As you can see, the result is the same as with the mini-max
example, but we did not visit all nodes of the tree.
Defining Constraint Satisfaction Problems
A Constraint Satisfaction Problem is a mathematical problem where the solution must
meet a number of constraints. In a CSP, the objective is to assign values to variables such
that all the constraints are satisfied. CSPs are used extensively in artificial intelligence for
decision-making problems where resources must be managed or arranged within strict
guidelines.
Components of CSP
1. Variables: The things that need to be determined. Variables in a
CSP are the objects that must have values assigned to them in order to satisfy a
particular set of constraints. Variables come in various types, such as Boolean,
integer, and categorical. For instance, the variables could stand for
the many puzzle cells that need to be filled with numbers in a Sudoku puzzle.
2. Domains: The range of potential values that a variable can take is represented
by its domain. Depending on the problem, a domain may be finite or infinite. For
instance, in Sudoku, the set of numbers from 1 to 9 can serve as the domain of a
variable representing a puzzle cell.
3. Constraints: The rules that govern how variables relate to one another are
known as constraints. Constraints in a CSP restrict the values that variables
may take. There are various sorts of constraints, such as unary constraints,
binary constraints, and higher-order constraints. For instance, in a
Sudoku puzzle, the constraints might be that each row, column, and 3×3 box
can contain only one instance of each number from 1 to 9.
Representation of Constraint Satisfaction Problems (CSP)
In Constraint Satisfaction Problems (CSP), the solution process involves the interaction
of variables, domains, and constraints. Below is a structured representation of how CSP is
formulated:
1. Finite Set of Variables (V1, V2, …, Vn):
The problem consists of a set of variables, each of which needs to be assigned a
value that satisfies the given constraints.
2. Non-Empty Domain for Each Variable (D1, D2, …, Dn):
Each variable has a domain: a set of possible values that it can take. For
example, in a Sudoku puzzle, the domain could be the numbers 1 to 9 for each
cell.
3. Finite Set of Constraints (C1, C2, …, Cm):
Constraints restrict the possible values that variables can take. Each constraint
defines a rule or relationship between variables.
4. Constraint Representation:
Each constraint Ci is represented as a pair <scope, relation>, where:
 Scope: The set of variables involved in the constraint.
 Relation: A list of valid combinations of variable values that satisfy
the constraint.
5. Example:
Let’s say you have two variables V1 and V2. A possible constraint could
be V1 ≠ V2, which means the values assigned to these variables must not
be equal.
 Detailed Explanation:
o Scope: The variables V1 and V2.
o Relation: A list of valid value combinations
where V1 is not equal to V2.

CSP Algorithms:
1. Backtracking Algorithm
The backtracking algorithm is a depth-first search method used to systematically
explore possible solutions in CSPs. It operates by assigning values to variables and
backtracks if any assignment violates a constraint.
How it works:
 The algorithm selects a variable and assigns it a value.
 It recursively assigns values to subsequent variables.
 If a conflict arises (i.e., a variable cannot be assigned a valid value), the
algorithm backtracks to the previous variable and tries a different value.
 The process continues until either a valid solution is found or all possibilities
have been exhausted.
This method is widely used due to its simplicity but can be inefficient for large problems
with many variables.
2. Forward-Checking Algorithm
The forward-checking algorithm is an enhancement of the backtracking algorithm that
aims to reduce the search space by applying local consistency checks.
How it works:
 For each unassigned variable, the algorithm keeps track of remaining valid
values.
 Once a variable is assigned a value, local constraints are applied to neighboring
variables, eliminating inconsistent values from their domains.
 If a neighbor has no valid values left after forward-checking, the algorithm
backtracks.
This method is more efficient than pure backtracking because it prevents some conflicts
before they happen, reducing unnecessary computations.
3. Constraint Propagation Algorithms
Constraint propagation algorithms further reduce the search space by enforcing local
consistency across all variables.
How it works:
 Constraints are propagated between related variables.
 Inconsistent values are eliminated from variable domains by leveraging
information gained from other variables.
 These algorithms refine the search space by making inferences, removing values
that would lead to conflicts.
Constraint propagation is commonly used in conjunction with other CSP algorithms, such
as backtracking, to increase efficiency by narrowing down the solution space early in the
search process.

Constraint Propagation
Constraint propagation is a fundamental concept in constraint satisfaction problems (CSPs).
A CSP involves variables that must be assigned values from a given domain while
satisfying a set of constraints. Constraint propagation aims to simplify these problems by
reducing the domains of variables, thereby making the search for solutions more efficient.
Key Concepts
1. Variables: Elements that need to be assigned values.
2. Domains: Possible values that can be assigned to the variables.
3. Constraints: Rules that define permissible combinations of values for the
variables.
How Constraint Propagation Works
Constraint propagation works by iteratively narrowing down the domains of variables based
on the constraints. This process continues until no more values can be eliminated from any
domain. The primary goal is to reduce the search space and make it easier to find a
solution.
Steps in Constraint Propagation
1. Initialization: Start with the initial domains of all variables.
2. Propagation: Apply constraints to reduce the domains of variables.
3. Iteration: Repeat the propagation step until a stable state is reached, where no
further reduction is possible.
Example
Consider a simple CSP with two variables, X and Y, each with domains {1, 2, 3}, and a
constraint X ≠ Y. Constraint propagation will iteratively reduce the domains as follows:
 If X is assigned 1, then Y cannot be 1, so Y's domain becomes {2, 3}.
 If Y is then assigned 2, X cannot be 2, so X's domain is reduced to {1, 3}.
 This process continues until a stable state is reached.
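The X ≠ Y example can be sketched in a few lines of Python (my own illustration, not from the original text): assigning a value to one variable propagates the constraint by removing that value from each neighbor's domain.

```python
# Propagating a single "not equal" constraint: once a variable is
# assigned, its value is removed from every neighbor's domain.

def assign(var, value, domains, neighbors):
    domains[var] = {value}
    for other in neighbors[var]:
        domains[other].discard(value)       # enforce var != other

domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
neighbors = {"X": ["Y"], "Y": ["X"]}

assign("X", 1, domains, neighbors)
print(domains["Y"])  # -> {2, 3}
```

Assigning Y next would shrink X's domain in the same way, continuing until a stable state is reached.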
Applications of Constraint Propagation
Constraint propagation is widely used in various AI applications. Some notable areas
include:
Scheduling
In scheduling problems, tasks must be assigned to time slots without conflicts. Constraint
propagation helps by reducing the possible time slots for each task based on constraints like
availability and dependencies.
Planning
AI planning involves creating a sequence of actions to achieve a goal. Constraint
propagation simplifies the planning process by reducing the possible actions at each step,
ensuring that the resulting plan satisfies all constraints.
Resource Allocation
In resource allocation problems, resources must be assigned to tasks in a way that meets all
constraints, such as capacity limits and priority rules. Constraint propagation helps by
narrowing down the possible assignments, making the search for an optimal allocation
more efficient.
Algorithms for Constraint Propagation
Several algorithms are used for constraint propagation, each with its strengths and
weaknesses. Some common algorithms include:
Arc Consistency
Arc consistency ensures that for every value of one variable, there is a consistent value in
another variable connected by a constraint. This algorithm is often used as a preprocessing
step to simplify CSPs before applying more complex algorithms.
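A compact AC-3 sketch of arc consistency follows (my own illustration, under the simplifying assumption that every constraint is binary and supplied as a predicate; the variable names are hypothetical). An arc (Xi, Xj) is revised by deleting every value of Xi that has no supporting value in Xj, and arcs into Xi are re-queued whenever Xi's domain shrinks.

```python
from collections import deque

def revise(domains, constraints, xi, xj):
    """Remove values of xi that have no supporting value in xj."""
    pred = constraints[(xi, xj)]
    removed = False
    for vi in set(domains[xi]):
        if not any(pred(vi, vj) for vj in domains[xj]):
            domains[xi].discard(vi)
            removed = True
    return removed

def ac3(domains, constraints):
    """constraints: dict mapping arc (Xi, Xj) -> predicate(vi, vj)."""
    queue = deque(constraints.keys())
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False                  # inconsistency detected
            for (xk, xl) in constraints:      # re-check arcs into xi
                if xl == xi and xk != xj:
                    queue.append((xk, xl))
    return True

domains = {"X": {1, 2}, "Y": {1, 2}, "Z": {1}}
ne = lambda a, b: a != b
constraints = {("X", "Z"): ne, ("Z", "X"): ne, ("Y", "Z"): ne, ("Z", "Y"): ne}
ac3(domains, constraints)
print(domains["X"], domains["Y"])  # -> {2} {2}
```

Because Z is fixed at 1, arc consistency alone prunes 1 from both X and Y, before any search begins.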
Path Consistency
Path consistency extends arc consistency by considering triples of variables. It ensures that
for every pair of variables, there is a consistent value in the third variable. This further
reduces the domains and simplifies the problem.
k-Consistency
k-Consistency generalizes the concept of arc and path consistency to k variables. It ensures
that for every subset of k-1 variables, there is a consistent value in the kth variable. Higher
levels of consistency provide more pruning but are computationally more expensive.
Extensions of CSPs
• Weighted CSPs: CSPs where each constraint has an associated weight, and the
goal is to find the assignment with the minimum or maximum total weight.
• Soft Constraints: Constraints that can be violated at a cost, and the goal is to find
an assignment that minimizes the total violation cost.
• Temporal CSPs: CSPs with temporal constraints, where variables represent events
occurring over time, and constraints specify temporal relationships between
events.
• Distributed CSPs: CSPs where variables and constraints are distributed across
multiple agents or processors, requiring communication and coordination to find
a solution.
Implementation steps of Constraint Propagation
1. Initial Domain Initialization:
- At the beginning of the constraint propagation process, each variable is
assigned an initial domain containing all possible values it can take.
2. Constraint Enforcement:
- Constraints define relationships or conditions that must be satisfied by the
assignments of values to the variables.
- Constraint propagation enforces these constraints by iteratively applying
constraint- specific techniques to update the domains of variables.
3. Local Consistency Techniques:
- Local consistency techniques are used to ensure that the assignments of values
to variables are consistent with the constraints.
- Arc consistency and domain reduction are two common local consistency
techniques employed in constraint propagation.
4. Arc Consistency:

- Arc consistency is a property that ensures that for every pair of variables
involved in a binary constraint, there exists at least one value in the domain of
each variable that satisfies the constraint.
- In arc consistency, constraints are propagated along arcs (binary constraints)
in the constraint graph to remove values from the domains of variables that are
inconsistent with the constraints.
- Arc consistency pruning is performed iteratively until no more changes can be
made to the domains.
5. Domain Reduction:
- Domain reduction techniques aim to reduce the size of variable domains by
eliminating values that are inconsistent with the constraints.
- This is achieved by iteratively applying constraint-specific algorithms to
update the domains based on the current assignments and constraints.
- Examples of domain reduction techniques include forward checking,
constraint propagation through singleton domains, and constraint
propagation through difference constraints.
6. Iterative Propagation:

- Constraint propagation is performed iteratively, with each iteration potentially


reducing the size of variable domains and enforcing consistency.
- The process continues until a fixed point is reached, where no further changes
can be made to the domains or constraints.
7. Pruning the Search Space:
- By enforcing constraints and propagating the consequences of variable
assignments, constraint propagation prunes the search space, reducing the
number of possible assignments and accelerating the search for feasible
solutions.
- Pruning the search space helps improve the efficiency of backtracking search
algorithms commonly used to solve CSPs.
Sudoku Puzzle
• Problem: Fill in a 9x9 grid with digits from 1 to 9 such that each row, each
column, and each of the nine 3x3 subgrids contain all of the digits 1 to 9.
• Variables: Each cell in the Sudoku grid represents a variable.
• Domains: The digits 1 to 9.
• Constraints: Each row, column, and 3x3 subgrid must contain unique digits.
• Explanation: In Sudoku, we need to fill in a partially filled grid with digits while
ensuring that no digit is repeated in any row, column, or subgrid.
It's a classic CSP that can be solved using backtracking search with constraint propagation
techniques such as arc consistency.
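As a small illustration of these constraints, the check below (my own sketch, not from the original text; `is_valid` is a hypothetical helper) tests whether a digit may be placed at a given cell without violating the row, column, or 3x3-box rule. Here 0 marks an empty cell.

```python
def is_valid(grid, r, c, d):
    """True if digit d may be placed at (r, c) in the 9x9 grid."""
    if d in grid[r]:
        return False                          # row constraint
    if any(grid[i][c] == d for i in range(9)):
        return False                          # column constraint
    br, bc = 3 * (r // 3), 3 * (c // 3)       # top-left of the 3x3 box
    return all(grid[br + i][bc + j] != d      # box constraint
               for i in range(3) for j in range(3))

grid = [[0] * 9 for _ in range(9)]
grid[0][0] = 5
print(is_valid(grid, 0, 8, 5))  # -> False (5 already in row 0)
print(is_valid(grid, 1, 1, 5))  # -> False (5 in the same 3x3 box)
print(is_valid(grid, 4, 4, 5))  # -> True
```

A backtracking solver would call such a check before each tentative assignment, and constraint propagation would use the same rule to prune candidate digits from cell domains.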

Job Scheduling Problem


• Problem: Assign a set of tasks to a set of workers over a period of time such that
each task is completed within its duration and workers do not exceed their capacity.
• Variables: Each task represents a variable.
• Domains: The set of available time slots for each task.
• Constraints: Each task must be scheduled within its duration, and the total
workload for each worker must not exceed their capacity.
• Explanation: The job scheduling problem arises in various scenarios, such as
project management, manufacturing, and resource allocation. It involves
assigning tasks to workers while considering task durations, worker capacities, and task
dependencies.
4 Queens Problem

The 4 Queens Problem consists of placing four queens on a 4 x 4 chessboard so that no two
queens attack each other. That is, no two queens may be placed on the same row,
the same column, or the same diagonal.
We are going to look for the solution for n=4 on a 4 x 4 chessboard.
4 Queens Problem using Backtracking Algorithm:
Place each queen one by one in different rows, starting from the topmost row. While placing
a queen in a row, check for clashes with already placed queens. For any column, if there is
no clash, mark this row and column as part of the solution by placing the queen. If no safe
cell is found due to clashes, then backtrack (i.e., undo the placement of the most recent
queen) and return false.
Illustration of 4 Queens Solution:
Step 0: Initialize a 4×4 board.

Step 1:
• Put our first queen (Q1) in the (0,0) cell.
• 'x' represents the cells which are not safe, i.e., they are under attack by the queen
(Q1).
• After this, move to the next row [0 -> 1].
Step 2:
• Put our next queen (Q2) in the (1,2) cell.
• After this, move to the next row [1 -> 2].
Step 3:
• At row 2 there is no cell which is safe to place queen (Q3).
• So, backtrack and remove queen Q2 from cell (1, 2).
Step 4:
• There is still a safe cell in row 1, i.e., cell (1, 3).
• Put queen (Q2) at cell (1, 3).
Step 5:
• Put queen (Q3) at cell (2, 1).
Step 6:
• There is no cell to place queen (Q4) at row 3.
• Backtrack and remove queen (Q3) from row 2.
• Again there is no other safe cell in row 2, so backtrack again and remove queen
(Q2) from row 1.
• Queen (Q1) will be removed from cell (0,0) and moved to the next safe cell, i.e., (0, 1).
Step 7:
• Place queen Q1 at cell (0, 1), and move to the next row.
Step 8:
• Place queen Q2 at cell (1, 3), and move to the next row.
Step 9:
• Place queen Q3 at cell (2, 0), and move to the next row.
Step 10:
• Place queen Q4 at cell (3, 2).
• This is one possible solution configuration.
Follow the steps below to implement the idea:

• Make a recursive function that takes the state of the board and the current row
number as its parameters.
• Start in the topmost row.
• If all queens are placed, return true.
• For each column in the current row:
o If the queen can be placed safely in this column, mark this [row, column] as
part of the solution and recursively check if placing the queen here leads to a
solution.
o If placing the queen in [row, column] leads to a solution, return true.
o If placing the queen does not lead to a solution, then unmark this [row,
column], backtrack, and try other columns.
• If all columns have been tried and nothing worked, return false to trigger
backtracking.
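The recursive procedure above can be sketched directly in code; for n = 4 and this column ordering it reproduces the final layout found in steps 7-10 (queens in columns 1, 3, 0, 2 of rows 0-3):

```python
def solve_n_queens(n):
    """Return one solution as a list: cols[r] is the queen's column in row r."""
    cols = []

    def safe(row, col):
        # Clash if same column or same diagonal as any earlier queen.
        return all(c != col and abs(row - r) != abs(col - c)
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            return True                     # all queens placed
        for col in range(n):
            if safe(row, col):
                cols.append(col)            # mark [row, col] as part of solution
                if place(row + 1):
                    return True
                cols.pop()                  # unmark and backtrack
        return False

    return cols if place(0) else None

print(solve_n_queens(4))    # → [1, 3, 0, 2]
```

Rows are implicit in the list index, so the "no two queens in the same row" constraint is enforced by construction, and only columns and diagonals need explicit checks.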

Travelling Salesman Problem

The travelling salesman problem is a graph computational problem where the salesman needs
to visit all cities (represented using nodes in a graph) in a list just once and the distances
(represented using edges in the graph) between all these cities are known. The solution that is
needed to be found for this problem is the shortest possible route in which the salesman visits
all the cities and returns to the origin city.

If you look at the graph below, considering that the salesman starts from the vertex a, they
need to travel through all the remaining vertices b, c, d, e, f and get back to a while making
sure that the cost taken is minimum.

There are various approaches to find the solution to the travelling salesman problem: naive
approach, greedy approach, dynamic programming approach, etc.
As the definition for greedy approach states, we need to find the best optimal solution locally
to figure out the global optimal solution. The inputs taken by the algorithm are the graph G
{V, E}, where V is the set of vertices and E is the set of edges. The shortest path of graph G
starting from one vertex returning to the same vertex is obtained as the output.

Algorithm
• The travelling salesman problem takes a graph G {V, E} as input and declares another
graph (say G') as the output, which will record the path the salesman takes from one
node to another.
• The algorithm begins by sorting all the edges in the input graph G from the least
distance to the largest distance.
• The first edge selected is the edge with the least distance, with one of its two vertices
(say A and B) being the origin node (say A).
• Then, among the adjacent edges of the node other than the origin node (B), find the
least-cost edge and add it to the output graph.
• Continue the process with further nodes, making sure there are no cycles in the output
graph and that the path reaches back to the origin node A.
• However, if the origin is mentioned in the given problem, then the solution must
always start from that node. Let us look at an example problem to understand this
better.

Examples

Consider the following graph with six cities and the distances between them −

From the given graph, since the origin is already mentioned, the solution must always start
from that node. Among the edges leading from A, A → B has the shortest distance.
Then, from B, the edge B → C is the least-cost edge to an unvisited node, so it is included
in the output graph.

There is only one edge C → D, therefore it is added to the output graph.

There are two outward edges from D. Even though D → B has a lower distance than D → E,
B has already been visited and adding it would form a cycle. Therefore, D → E is added to
the output graph.

There is only one edge from E, that is E → F, therefore it is added to the output graph.
Again, even though F → C has a lower distance than F → A, C has already been visited and
adding F → C would form a cycle, so F → A is added to the output graph.

The shortest path that originates and ends at A is A → B → C → D → E → F → A

The cost of the path is: 16 + 21 + 12 + 15 + 16 + 34 = 114.

The cost of the path could be lower if the tour originated from a different node, but the
problem fixes A as the origin.
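The greedy procedure can be sketched as a nearest-neighbour tour. The distance table below is hypothetical (the figure's full edge list is not reproduced in the text), so the code illustrates the method rather than the worked example:

```python
def nearest_neighbour_tour(dist, start):
    """Greedy TSP: repeatedly travel to the nearest unvisited city, then return.
    `dist` is a dict-of-dicts of pairwise distances."""
    tour, visited = [start], {start}
    current = start
    while len(visited) < len(dist):
        # Locally optimal choice: the closest city not yet on the tour.
        current = min((c for c in dist[current] if c not in visited),
                      key=lambda c: dist[current][c])
        tour.append(current)
        visited.add(current)
    tour.append(start)                      # close the cycle back to the origin
    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, cost

# Hypothetical symmetric distances between four cities.
dist = {"A": {"B": 10, "C": 15, "D": 20},
        "B": {"A": 10, "C": 35, "D": 25},
        "C": {"A": 15, "B": 35, "D": 30},
        "D": {"A": 20, "B": 25, "C": 30}}
print(nearest_neighbour_tour(dist, "A"))   # → (['A', 'B', 'D', 'C', 'A'], 80)
```

As with the worked example, the greedy tour is not guaranteed to be globally optimal; it only commits to the locally cheapest edge at each step.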

Water Jug Problem in AI


The Water Jug Problem is a classic puzzle in artificial intelligence and mathematics that
focuses on using two (or more) water jugs to measure out a specific amount of water. It is a
standard exercise in search and problem solving, and it appears in many variants with
different jug capacities and target amounts, making it a versatile tool for learning AI
problem-solving techniques.
Defining the water jug problem in AI
The Water Jug Problem involves two jugs, one with a capacity of 'x' liters and the other of
'y' liters, and a water source. The goal is to measure out exactly 'z' liters of water using
these jugs, which carry no volume markings. It is a test of problem solving and state-space
search: the initial state is both jugs empty, and the goal is to reach a state in which one jug
holds 'z' liters. Operations such as filling, emptying, and pouring between jugs are used to
find a sequence of moves that achieves the desired measurement.
Water Jug Problem in Artificial Intelligence
Classic Version:
o In its classic form, the problem involves two jugs, each with a fixed capacity.
o The goal is to use these jugs to measure out a specific amount of water while
respecting the capacity constraints.
o Running example: a 3-liter jug and a 5-liter jug, with the task of measuring out
exactly 4 liters of water.

Sample Problem Situation:

o Imagine a situation where you have a 3-liter jug and a 5-liter jug, and you want to
measure out 4 liters of water.
o Picture the two jugs and a water source you can fill from.
o The task is to determine the sequence of operations that ends with exactly 4 liters
in one jug.

Working through this puzzle builds intuition for how AI systems frame and solve search
problems.
Requirements and objectives: the water jug problem is defined by its constraints and its goal.
Condition 1: The jugs have fixed capacities and no volume markings.
Condition 2: Water can be moved by pouring between jugs or drawn from the water
source.
Objective: Reach a state in which one jug holds the target amount, by filling, emptying,
and transferring water between the jugs.
State Space and Action Space:
In search-based problem solving, we work with both a state space (every possible
configuration) and an action space (every possible action).
In the water jug problem, the state space consists of all possible combinations of water
levels in the two jugs.
The action space contains the actions the agent can perform: filling a jug from the source,
emptying a jug, and pouring water from one jug into the other.
Initial State, Goal State, and Actions:
The initial state is where you start; in the example scenario, both jugs are empty. The goal
state is any state in which the desired amount of water has been measured out (e.g., 4
liters in one jug). Actions are the operations on the jugs: fill, empty, and pour.
Brute-Force Approach
o The brute-force approach exhaustively explores all possible action sequences for
the water jug problem.
o This method is straightforward, but it may be inefficient on harder instances.

Basic Model and Brute-Force Solution:

Consider the task of measuring out 4 liters of water using a 3-liter jug and a 5-liter jug.
The following sequence of operations solves it step by step, writing each state as
(3-liter jug, 5-liter jug). Begin with both jugs empty (0, 0).
a. Fill the 3-liter jug: (3, 0).
b. Pour the water from the 3-liter jug into the 5-liter jug: (0, 3).
c. Fill the 3-liter jug again: (3, 3).
d. Pour from the 3-liter jug into the 5-liter jug until the latter is full: (1, 5).
e. Empty the 5-liter jug: (1, 0).
f. Pour the remaining water from the 3-liter jug into the 5-liter jug: (0, 1).
g. Fill the 3-liter jug: (3, 1).
h. Pour the water from the 3-liter jug into the 5-liter jug: (0, 4).
This example shows how systematically trying successive operations eventually reaches
the target amount. However, exhaustive enumeration like this may not scale to larger and
more complex instances.
Water Jug Example Using Search Algorithms in AI
An Introduction to Search Algorithms
Search algorithms are a key element of AI problem solving.
Two common search algorithms used for the water jug problem are breadth-first search
(BFS) and depth-first search (DFS).
o BFS explores all states at one depth before moving on to the next, level by level.
o DFS explores each branch fully before backtracking.

Step-by-Step Demonstration with BFS

To solve the water jug problem with BFS (breadth-first search), consider again the 3-liter
and 5-liter jugs with a target of 4 liters. We use BFS to explore states level by level,
writing each state as (3-liter jug, 5-liter jug).
1. Start with the initial state: (0, 0)

o At the beginning, both jugs are empty.

2. Apply the possible actions to the current state: (0, 0)

o Fill the 3-liter jug: (3, 0)
o Fill the 5-liter jug: (0, 5)

3. Expand the next level:

o There are now two new states to explore: (3, 0) and (0, 5).

4. Expand them:

o From (3, 0), pour into the 5-liter jug: (0, 3).
o From (0, 5), pour into the 3-liter jug: (3, 2).

5. Explore further:

o Continue expanding level by level. Following one branch:
o From (0, 3), fill the 3-liter jug: (3, 3).
o From (3, 3), pour into the 5-liter jug until it is full: (1, 5), then empty the
5-liter jug: (1, 0).
o From (1, 0), pour into the 5-liter jug: (0, 1), fill the 3-liter jug: (3, 1), and
pour again: (0, 4).

6. Goal State Reached:

o The search arrives at the goal state (0, 4).

7. Backtrack to Find the Solution:

o To find the solution path, we backtrack from the goal state to the initial state:

(0, 0) -> (3, 0) -> (0, 3) -> (3, 3) -> (1, 5) -> (1, 0) -> (0, 1) -> (3, 1) -> (0, 4)

This walkthrough illustrates how breadth-first search explores the state space level by
level to find an answer to the water jug problem. The path above is one valid solution
sequence; a full BFS expansion considers every branch and is guaranteed to return a
shortest solution, though BFS may not be the most efficient choice in larger problem
spaces.
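The level-by-level exploration can be sketched as a compact BFS solver (a minimal illustration; states are written (3-liter jug, 5-liter jug)). Note that an exhaustive BFS over all branches actually finds a six-move solution ending at (3, 4), shorter than the hand-traced sequence ending at (0, 4):

```python
from collections import deque

def water_jug_bfs(cap_a=3, cap_b=5, target=4):
    """Return a shortest sequence of states from (0, 0) to a jug holding target."""
    def successors(a, b):
        move_ab = min(a, cap_b - b)            # amount pourable from A into B
        move_ba = min(b, cap_a - a)
        return {(cap_a, b), (a, cap_b),        # fill either jug
                (0, b), (a, 0),                # empty either jug
                (a - move_ab, b + move_ab),    # pour A -> B
                (a + move_ba, b - move_ba)}    # pour B -> A

    start, parent = (0, 0), {(0, 0): None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if target in state:
            path = []                          # rebuild path via parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parent:              # first visit = shortest route
                parent[nxt] = state
                queue.append(nxt)
    return None

print(water_jug_bfs())
```

Because BFS visits states in order of depth, recording each state's parent on first discovery is enough to reconstruct a minimum-length move sequence.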
A Brief Note on Heuristic Search Algorithms
Both breadth-first search and depth-first search work for the water jug problem, but
uninformed search may not scale to more complex problems. In such situations, heuristic
search algorithms such as A* become important.
o Informed search: A* is an informed search algorithm that uses a heuristic to guide
exploration toward the goal. It combines guided exploration with systematic
search, ensuring both optimality (with an admissible heuristic) and efficiency.
o Heuristics: A heuristic is an estimate of how close a state is to the goal. In the
water jug problem, a simple heuristic is the difference between the current state
and the goal state.
o Optimization: Heuristic search algorithms such as A* can be applied to more
complex problems. For example, in resource allocation, A* can effectively
combine resources to achieve a goal while minimizing cost.
o Efficiency: By focusing the search, A* can greatly reduce the number of states
that must be examined in larger and more varied problem areas.

Define the State Representation:

Represent each state by the amount of water in each jug. For example, if the two jugs have
capacities of 4 liters and 3 liters respectively, the state (2, 0) means that the first jug holds
2 liters of water and the second jug holds 0 liters.
Define the Node Representation:
Each node of the search tree is associated with a state. A node stores data such as the
current state, the cost from the initial node to this node (g), the heuristic cost (h),
and the total cost (f = g + h).
Generate Successors:
Define a function that produces the successor states of a state. These successors are
obtained by applying the available operations: filling, emptying, or pouring water
between jugs.
Heuristic Function:
Define a heuristic function that estimates the cost from the current state to the goal state.
For example, one possible heuristic is the Manhattan distance between the current state and
the goal state.
Knowledge-Based Agents
Knowledge-based agents are a type of intelligent agent that uses knowledge
representation and reasoning to make decisions and achieve goals in
complex environments. The central component of a knowledge-based agent is its
knowledge base, or KB. A knowledge base is a set of sentences. (Here “sentence” is
used as a technical term. It is related but not identical to the sentences of English and
other natural languages.) Each sentence is expressed in a language called a knowledge
representation language and represents some assertion about the world. Sometimes we
dignify a sentence with the name axiom, when the sentence is taken as given without
being derived from other sentences.
INFERENCE
There must be a way to add new sentences to the knowledge base and a way to query what
is known. The standard names for these operations are TELL and ASK, respectively. Both
operations may involve inference—that is, deriving new sentences from old. Inference
must obey the requirement that when one ASKs a question of the knowledge base, the
answer should follow from what has been told (or TELLed) to the knowledge base
previously.
THE WUMPUS WORLD
• WUMPUS WORLD is an environment in which knowledge-based agents can show
their worth. The wumpus world is a cave consisting of rooms connected by
passageways.
• Lurking somewhere in the cave is the terrible wumpus, a beast that eats anyone who
enters its room.
• The wumpus can be shot by an agent, but the agent has only one arrow. Some rooms
contain bottomless pits that will trap anyone who wanders into these rooms (except
for the wumpus, which is too big to fall in). The only mitigating feature of this bleak
environment is the possibility of finding a heap of gold. Although the wumpus world
is rather tame by modern computer game standards, it illustrates some important
points about intelligence.
• The precise definition of the task environment is given by the PEAS description.
Performance measure: +1000 for climbing out of the cave with the gold, –1000 for
falling into a pit or being eaten by the wumpus, –1 for each action taken and –10 for using
up the arrow. The game ends either when the agent dies or when the agent climbs out of the
cave.
Environment: A 4×4 grid of rooms. The agent always starts in the square labeled [1,1],
facing to the right. The locations of the gold and the wumpus are chosen randomly, with a
uniform distribution, from the squares other than the start square. In addition, each square
other than the start can be a pit, with probability 0.2.
Actuators: The agent can move Forward, TurnLeft by 90◦, or TurnRight by 90◦. The
agent dies a miserable death if it enters a square containing a pit or a live wumpus. (It is
safe, albeit smelly, to enter a square with a dead wumpus.) If an agent tries to move forward
and bumps into a wall, then the agent does not move. The action Grab can be used to pick
up the gold if it is in the same square as the agent. The action Shoot can be used to fire an
arrow in a straight line in the direction the agent is facing. The arrow continues until it either
hits (and hence kills) the wumpus or hits a wall. The agent has only one arrow, so only the
first Shoot action has any effect. Finally, the action Climb can be used to climb out of the
cave, but only from square [1,1].
Sensors: The agent has five sensors, each of which gives a single bit of information:
– In the square containing the wumpus and in the directly (not diagonally) adjacent squares,
the agent will perceive a Stench.
– In the squares directly adjacent to a pit, the agent will perceive a Breeze.
– In the square where the gold is, the agent will perceive a Glitter.
– When an agent walks into a wall, it will perceive a Bump.
– When the wumpus is killed, it emits a woeful Scream that can be perceived anywhere in
the cave.
The percepts will be given to the agent program in the form of a list of five symbols; for
example, if there is a stench and a breeze, but no glitter, bump, or scream, the agent program
will get [Stench, Breeze, None, None, None].
a) The rooms adjacent to the Wumpus room are smelly, so that it would have some stench.
b) The room adjacent to PITs has a breeze, so if the agent reaches near to PIT, then he will
perceive the breeze.
c) There will be glitter in the room if and only if the room has gold.
d) The Wumpus can be killed by the agent if the agent is facing to it, and Wumpus will emit
a horrible scream which can be heard anywhere in the cave.

Logic
we said that knowledge bases consist of sentences. These sentences are expressed according
to the syntax of the representation language, which specifies all the sentences that are well
formed. The notion of syntax is clear enough in ordinary arithmetic: “x + y = 4” is a well-
formed sentence, whereas “x 4 y + =” is not.
A logic must also define the semantics or meaning of sentences. The semantics defines the
truth of each sentence with respect to each possible world. For example, the semantics for
arithmetic specifies that the sentence “x + y =4” is true in a world where x is 2 and y is 2,
but false in a world where x is 1 and y is 1. In standard logics, every sentence must be either
true or false in each possible world—there is no “in between.”

Propositional Logic
Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions. A proposition is a declarative statement which is either true or false. It is a
technique of knowledge representation in logical and mathematical form.
a) It is Sunday.
b) The Sun rises from West (False proposition)
c) 3+3= 7(False proposition)
d) 5 is a prime number.
Following are some basic facts about propositional logic:
• Propositional logic is also called Boolean logic as it works on 0 and 1.
• In propositional logic, we use symbolic variables to represent the logic, and we can
use any symbol for representing a proposition, such as A, B, C, P, Q, R, etc.
• Propositions can be either true or false, but it cannot be both.
• Propositional logic consists of an object, relations or function, and logical
connectives.
• These connectives are also called logical operators.
• The propositions and connectives are the basic elements of the propositional logic.
• Connectives can be said as a logical operator which connects two sentences.
• A proposition formula which is always true is called tautology, and it is also called a
valid sentence.
• A proposition formula which is always false is called a contradiction.
• A proposition formula which can be either true or false, depending on the values of
its variables, is called a contingency.
• Questions, commands, or opinions are not propositions: "Where is Rohini",
"How are you", and "What is your name" are not propositions.
Syntax of propositional logic:
The syntax of propositional logic defines the allowable sentences for the knowledge
representation. There are two types of Propositions:
• Atomic Propositions
• Compound propositions
Atomic Proposition: Atomic propositions are the simple propositions. It consists of a single
proposition symbol. These are the sentences which must be either true or false.
Example:
a) 2+2 is 4, it is an atomic proposition as it is a true fact.
b) "The Sun is cold" is also a proposition as it is a false fact.
Compound proposition: Compound propositions are constructed by combining simpler or
atomic propositions, using parenthesis and logical connectives.
Example:
a) "It is raining today, and street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."
Logical Connectives:
Logical connectives are used to connect simpler propositions or to represent a sentence
logically. We can create compound propositions with the help of logical connectives. There
are mainly five connectives, which are given as follows:
Negation: A sentence such as ¬ P is called negation of P. A literal can be either Positive
literal or negative literal.
Conjunction: A sentence which has ∧ connective such as, P ∧ Q is called a conjunction.
Example: Rohan is intelligent and hardworking. It can be written as,
P= Rohan is intelligent,
Q= Rohan is hardworking. P∧ Q.
Disjunction: A sentence which has ∨ connective, such as P ∨ Q. is called disjunction,
where P and Q are the propositions.
Example: "Ritika is a doctor or Engineer",
Here P= Ritika is Doctor. Q= Ritika is Engineer, so we can write it as P ∨ Q.
Implication: A sentence such as P → Q, is called an implication. Implications are also
known as if-then rules. It can be represented as
If it is raining, then the street is wet.
Let P= It is raining, and Q= Street is wet, so it is represented as P → Q
Biconditional: A sentence such as P⇔ Q is a Biconditional sentence, example If I am
breathing, then I am alive
P= I am breathing, Q= I am alive, it can be represented as P ⇔ Q.
Propositional Logic Connectives
Truth table with three propositions
Precedence of connectives:

Logical equivalence
• Logical equivalence is one of the features of propositional logic. Two propositions are
said to be logically equivalent if and only if the columns in the truth table are identical
to each other.
• Let's take two propositions A and B, so for logical equivalence, we can write it as
A⇔B. In below truth table we can see that column for ¬A∨ B and A→B, are
identical hence A is Equivalent to B
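The "identical columns" test can be run mechanically: enumerate every truth assignment and compare the two formulas. A small sketch (`implies` here encodes the truth-functional A → B):

```python
from itertools import product

def equivalent(f, g, n_vars=2):
    """Logically equivalent iff the formulas agree on every truth assignment."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n_vars))

def implies(a, b):
    return (not a) or b                      # truth table of A -> B

print(equivalent(implies, lambda a, b: (not a) or b))   # A -> B vs ¬A ∨ B: True
print(equivalent(implies, lambda a, b: implies(b, a)))  # implication vs converse: False
```

Enumerating all 2^n assignments is exactly building the two truth-table columns and checking that they match row by row.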

Properties of Operators
Commutativity:
P∧ Q= Q ∧ P, or
P ∨ Q = Q ∨ P.
Associativity:
(P ∧ Q) ∧ R= P ∧ (Q ∧ R),
(P ∨ Q) ∨ R= P ∨ (Q ∨ R)
Identity element:
P ∧ True = P,
P ∨ True= True.
Distributive:
P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
DE Morgan's Law:
¬ (P ∧ Q) = (¬P) ∨ (¬Q)
¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
Double-negation elimination:
¬ (¬P) = P.
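Each of these laws can be verified by brute force over all truth assignments, since a propositional formula has only finitely many valuations. A small sketch using Python's Booleans:

```python
from itertools import product

laws = {
    "De Morgan (and)": lambda p, q, r: (not (p and q)) == ((not p) or (not q)),
    "De Morgan (or)":  lambda p, q, r: (not (p or q)) == ((not p) and (not q)),
    "Distributive 1":  lambda p, q, r: (p and (q or r)) == ((p and q) or (p and r)),
    "Distributive 2":  lambda p, q, r: (p or (q and r)) == ((p or q) and (p or r)),
    "Double negation": lambda p, q, r: (not (not p)) == p,
}
for name, law in laws.items():
    # A law holds iff it is true under all 2^3 assignments to P, Q, R.
    assert all(law(*vals) for vals in product([True, False], repeat=3)), name
print("all laws verified")
```

This is the same truth-table argument used throughout the section, just automated.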
Inference and proofs
Inference rules are the templates for generating valid arguments. Inference rules are applied
to derive proofs in artificial intelligence, and the proof is a sequence of the conclusion that
leads to the desired goal.
In inference rules, the implication among all the connectives plays an important role.
Following are some inference rules:
• Implication: It is one of the logical connectives which can be represented as P → Q.
It is a Boolean expression.
• Converse: The converse of implication, which means the right-hand side proposition
goes to the left-hand side and vice-versa. It can be written as Q → P.
• Inverse: The negation of implication is called inverse. It can be represented as ¬ P →
¬Q
• Contrapositive: The negation of converse is termed as contrapositive, and it can be
represented as ¬ Q → ¬ P.
• From these inferences some of the compound statements are equivalent to each other,
which we can prove using truth table:

Hence from the above truth table, we can prove that P → Q is equivalent to ¬ Q → ¬ P, and
Q→ P is equivalent to ¬ P → ¬ Q.
Types of Inference rules
1. Modus Ponens:
The Modus Ponens rule is one of the most important rules of inference. It states that if P
and P → Q are true, then we can infer that Q will be true. It can be represented as:

Example:
Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q.
Hence, we can say that, if P→ Q is true and P is true then Q will be true.

Proof by Truth table:

2. Modus Tollens:
The Modus Tollens rule states that if P→ Q is true and ¬ Q is true, then ¬ P will also be
true. It can be represented as:

Statement-1: "If I am sleepy then I go to bed" ==> P→ Q


Statement-2: "I do not go to the bed."==> ~Q
Statement-3: Which infers that "I am not sleepy" => ~P
Proof by Truth table:

3. Hypothetical Syllogism:
The Hypothetical Syllogism rule states that if P→Q is true and Q→R is true, then P→R
is true. It can be represented as the following notation:
Example:
Statement-1: If you have my home key then you can unlock my home. P→Q
Statement-2: If you can unlock my home then you can take my money. Q→R
Conclusion: If you have my home key then you can take my money. P→R

4. Disjunctive Syllogism:
The Disjunctive Syllogism rule states that if P∨Q is true and ¬P is true, then Q will be
true. It can be represented as:

Example:
Statement-1: Today is Sunday or Monday. ==>P∨Q
Statement-2: Today is not Sunday. ==> ¬P
Conclusion: Today is Monday. ==> Q
Proof by truth-table:

5. Addition:
The Addition rule is one of the common inference rules, and it states that if P is true, then
P ∨ Q will be true.
• Example:
• Statement: I have a vanilla ice-cream. ==> P
Let Q = I have chocolate ice-cream.
Conclusion: I have vanilla or chocolate ice-cream. ==> (P∨Q)
• Proof by Truth-Table:

6. Simplification:
The Simplification rule states that if P ∧ Q is true, then P will also be true (and likewise
Q). It can be represented as:

Proof by Truth-Table:

7. Resolution:
The Resolution rule states that if P ∨ Q and ¬P ∨ R are true, then Q ∨ R will also be true.
It can be represented as

Proof by Truth-Table:
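The resolution step can be sketched as an operation on clauses represented as sets of literal strings, where "~P" denotes ¬P (an illustrative encoding of our own, not a full theorem prover):

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of literal strings)."""
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    resolvents = set()
    for lit in c1:
        if negate(lit) in c2:
            # Cancel the complementary pair; union the remaining literals.
            resolvents.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

# From (P ∨ Q) and (¬P ∨ R), resolution derives (Q ∨ R).
print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))
```

Repeatedly applying this step to a clause set until the empty clause appears (or no new clauses can be derived) is the basis of proof by resolution.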
Horn clauses and definite clauses
A definite clause is a disjunction of literals of which exactly one is positive; for example,
¬P ∨ ¬Q ∨ R, which can equivalently be written as the implication (P ∧ Q) → R. A Horn
clause is a disjunction of literals of which at most one is positive, so every definite clause
is a Horn clause. Horn clauses are important because inference with them can be done
efficiently using forward chaining and backward chaining, and entailment for Horn
knowledge bases can be decided in time linear in the size of the knowledge base.
Proof Strategies
