Experiment 5 - Adversarial Searching
LAB Manual
PART A
(PART A: TO BE REFERRED BY STUDENTS)
Experiment No. 5
A.1 AIM: - Write a program to study adversarial search strategy using MIN-MAX algorithm
A.2 Prerequisite
A.3 Outcome
After successful completion of this experiment, students will be able to build an AI agent that plays a perfect game of Tic-Tac-Toe using the MIN-MAX adversarial
search algorithm.
A.4 Theory:
Minimax (sometimes MinMax, MM or saddle point) is a decision rule used in artificial intelligence, decision theory, game theory, statistics and philosophy for
minimizing the possible loss for a worst case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin"—to maximize the minimum gain.
Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous
moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
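Before applying minimax to Tic-Tac-Toe, the decision rule can be illustrated on a small hand-built game tree. The sketch below is our own minimal example (the function name minimax_value and the tree values are illustrative, not part of the lab code): leaves hold payoffs for the maximizing player, and each level alternates between maximizing and minimizing.

```python
# A minimal, generic minimax sketch on a hand-built game tree.
# Leaves are integer payoffs for the maximizing player;
# internal nodes are lists of child nodes.
def minimax_value(node, maximizing):
    if isinstance(node, int):          # leaf: return its payoff
        return node
    values = [minimax_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply tree: MAX chooses a branch, then MIN replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax_value(tree, True))  # MIN yields 3, 2, 2 per branch -> MAX picks 3
```

MIN reduces each branch to its smallest leaf (3, 2, 2), and MAX then selects the branch worth 3 — exactly the "minimize the possible loss for a worst case" behaviour described above.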
Let us combine what we have learnt so far about minimax and evaluation function to write a proper Tic-Tac-Toe AI (Artificial Intelligence) that plays a perfect game.
This AI will consider all possible scenarios and make the most optimal move.
We shall be introducing a new function called findBestMove(). This function evaluates all the available moves using minimax() and then returns the best move the maximizer can make.
Minimax :
To check whether or not the current move is better than the best move so far, we take the help of the minimax() function, which considers all the possible ways the game
can go and returns the best value for that move, assuming the opponent also plays optimally.
The code for the maximizer and minimizer in the minimax() function is similar to findBestMove(); the only difference is that instead of returning a move, it returns a
value.
To check whether the game is over and to make sure there are no moves left, we use the isMovesLeft() function. It is a simple, straightforward function that checks whether any empty cells remain on the board.
One final step is to make our AI a little bit smarter. Even though the following AI plays perfectly, it might choose to make a move that results in a slower victory or a faster loss.
Assume that there are 2 possible ways for X to win the game from a given board state.
Our evaluation function will return a value of +10 for both moves A and B. Even though move A is better because it ensures a faster victory, our AI may choose B
sometimes. To overcome this problem we subtract the depth value from the evaluated score. This means that in case of a victory it will choose the victory which takes
the least number of moves, and in case of a loss it will try to prolong the game and play as many moves as possible. So the new evaluated value will be the raw score minus the depth at which the win occurs (for example, a win at depth 2 scores 10 − 2 = 8, while a win at depth 4 scores only 10 − 4 = 6).
Now, since move A has a higher score than move B, our AI will choose move A over move B. The same adjustment must be applied to the minimizer: instead of
subtracting the depth we add the depth value, as the minimizer always tries to get as negative a value as possible. We can subtract the depth either inside the evaluation
function or outside it; either works. Here it is done outside the function.
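As a concrete check of the depth adjustment described above, the small sketch below (the helper name adjusted is ours, not part of the lab program) recomputes the scores for the two winning moves:

```python
# Illustrative helper (not part of the lab program) showing the
# depth-adjusted scoring rule applied to evaluate()'s raw +10/-10 scores.
def adjusted(raw, depth):
    if raw == 10:        # maximizer (X) wins: subtract depth, faster is better
        return raw - depth
    if raw == -10:       # minimizer (O) wins: add depth, X prefers to prolong
        return raw + depth
    return raw           # draw or non-terminal: unchanged

# Move A wins at depth 2, move B wins at depth 4:
print(adjusted(10, 2))   # 8 -> move A preferred
print(adjusted(10, 4))   # 6
# For X, a loss at depth 4 (-6) beats a loss at depth 2 (-8):
print(adjusted(-10, 4))  # -6
```

With the adjustment, the faster win scores 8 versus 6, so the maximizer now reliably picks move A, and among losing lines it picks the one that lasts longest.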
PART B
(PART B : TO BE COMPLETED BY STUDENTS)
(Students must submit the soft copy as per the following segments within two hours of the practical.)
Batch : 3
4. Conclude the learning done through the tasks that are implemented.
Tasks to be implemented:
Code:
def print_board(board):
    for i, row in enumerate(board):
        print(" | ".join(row))
        if i < 2:
            print("-" * 9)

def check_win(board, player):
    # Check rows and columns
    for i in range(3):
        if all(board[i][j] == player for j in range(3)):
            return True
        if all(board[j][i] == player for j in range(3)):
            return True
    # Check both diagonals
    if all(board[i][i] == player for i in range(3)):
        return True
    if all(board[i][2 - i] == player for i in range(3)):
        return True
    return False

def is_board_full(board):
    return all(cell != ' ' for row in board for cell in row)

def evaluate(board):
    if check_win(board, 'X'):
        return 10
    elif check_win(board, 'O'):
        return -10
    else:
        return 0

def minimax(board, depth, is_maximizing):
    score = evaluate(board)
    if score == 10:
        return score - depth      # prefer faster wins
    if score == -10:
        return score + depth      # prefer slower losses
    elif is_board_full(board):
        return 0
    if is_maximizing:
        best = -float('inf')
        for i in range(3):
            for j in range(3):
                if board[i][j] == ' ':
                    board[i][j] = 'X'
                    best = max(best, minimax(board, depth + 1, False))
                    board[i][j] = ' '      # undo the move
        return best
    else:
        best = float('inf')
        for i in range(3):
            for j in range(3):
                if board[i][j] == ' ':
                    board[i][j] = 'O'
                    best = min(best, minimax(board, depth + 1, True))
                    board[i][j] = ' '      # undo the move
        return best

def find_best_move(board):
    best_val = -float('inf')
    best_move = (-1, -1)
    for i in range(3):
        for j in range(3):
            if board[i][j] == ' ':
                board[i][j] = 'X'
                move_val = minimax(board, 0, False)
                board[i][j] = ' '
                if move_val > best_val:
                    best_val = move_val
                    best_move = (i, j)
    return best_move

def get_valid_move(board):
    while True:
        try:
            row, col = map(int, input("Enter your move (row and column, separated by space): ").split())
            if 0 <= row < 3 and 0 <= col < 3 and board[row][col] == ' ':
                return row, col
            print("Invalid move, try again.")
        except ValueError:
            print("Please enter two numbers separated by a space.")

def main():
    board = [[' '] * 3 for _ in range(3)]
    print_board(board)
    while True:
        # Player's move
        row, col = get_valid_move(board)
        board[row][col] = 'O'
        print_board(board)
        if check_win(board, 'O'):
            print("You win!")
            break
        if is_board_full(board):
            print("It's a draw!")
            break
        # AI's turn
        print("AI's turn...")
        ai_row, ai_col = find_best_move(board)
        board[ai_row][ai_col] = 'X'
        print_board(board)
        if check_win(board, 'X'):
            print("AI wins!")
            break
        if is_board_full(board):
            print("It's a draw!")
            break

if __name__ == "__main__":
    main()
Output:
B.2 Observations and learning:
(Students are expected to comment on the output obtained with clear observations and learning for each task/ sub part assigned)
While implementing the Minimax algorithm for adversarial searching, it was observed that the AI evaluates all possible game states to determine the best
possible move. The algorithm follows a recursive approach, where the maximizer aims for the highest possible score, and the minimizer tries to reduce it. The
evaluation function played a crucial role in deciding optimal moves, ensuring a faster victory or a delayed defeat. Additionally, adjusting the score based on
depth improved decision-making efficiency. The key learning outcome was understanding how adversarial search strategies aid decision-making by simulating optimal play from both the maximizer and the minimizer.
B.3 Conclusion:
(Students must write the conclusion as per the attainment of individual outcome listed above and learning/observation noted in section B.2)
The implementation of the Minimax algorithm demonstrated the effectiveness of adversarial search in game-playing scenarios. By simulating intelligent
decision-making, the AI was able to evaluate possible moves, anticipate the opponent’s strategy, and choose the optimal path to victory. The experiment
reinforced the importance of evaluation functions, depth-based scoring adjustments, and recursive search techniques in achieving optimal performance.
Overall, this experiment provided valuable insights into how AI can be designed to play strategic games like Tic-Tac-Toe efficiently using adversarial
searching methods.
1. Why are adversarial searching techniques useful for game playing? Justify your answer.
Adversarial searching techniques are useful for game playing because they allow an AI agent to make optimal decisions by anticipating and
countering an opponent’s moves. These techniques, like the Minimax algorithm, analyze all possible game states and evaluate the best
possible moves, ensuring that the AI maximizes its chances of winning while minimizing the opponent’s advantage. Adversarial search is
particularly effective in two-player, zero-sum games (such as Chess and Tic-Tac-Toe), where one player’s gain is the other’s loss. By
considering all possible game progressions, adversarial searching helps AI agents strategize optimally, making them competitive against
human or AI opponents.
Normal Search:
- Finds the optimal path to a goal state in problem-solving scenarios.
- No opponent; focuses on solving a search problem.
- Uses heuristics to estimate the best path to the goal.
- Used in single-agent problems like pathfinding and puzzles.

Adversarial Search:
- Determines the best move considering an intelligent opponent.
- Involves an opponent who tries to counteract the AI's moves.
- Considers both maximizing and minimizing strategies to make decisions.
- Used in two-player or multi-agent competitive games.