
Experiment 5 - Adversarial Searching

The document outlines a lab manual for an MCA program focusing on an experiment to implement the MIN-MAX algorithm for adversarial search in a Tic-Tac-Toe game. It includes prerequisites, expected outcomes, and a detailed explanation of the algorithm, along with code implementation and tasks for students. The document emphasizes the importance of evaluation functions and decision-making strategies in AI-driven game scenarios.


SVKM’s NMIMS

MPSTME, Mumbai Campus


Computer Engineering Department
Program: MCA, Semester - II
Subject: Artificial Intelligence

LAB Manual

PART A
(PART A: TO BE REFERRED BY STUDENTS)

Experiment No. 5

A.1 AIM: Write a program to study the adversarial search strategy using the MIN-MAX algorithm.

A.2 Prerequisite

Overview of programming language constructs; Tic-Tac-Toe game-playing strategy.

A.3 Outcome

After successful completion of this experiment, students will be able to build an AI agent that plays a perfect game of Tic-Tac-Toe using the MIN-MAX adversarial search algorithm.

A.4 Theory:

Minimax (sometimes MinMax, MM or saddle point) is a decision rule used in artificial intelligence, decision theory, game theory, statistics and philosophy for minimizing the possible loss for a worst-case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin": to maximize the minimum gain. Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
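As a quick illustration (a minimal sketch, not part of the manual's Tic-Tac-Toe code), minimax can be applied to any explicit game tree. Here a tree is represented as nested lists whose leaves are payoffs for the maximizing player; the function name and representation are chosen for this example only.

```python
def minimax(node, maximizing):
    # A leaf holds the payoff for the maximizing player.
    if isinstance(node, (int, float)):
        return node
    # Recurse into children, alternating between maximizer and minimizer.
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# The maximizer moves first and picks a subtree; the minimizer then
# picks the worst leaf for the maximizer within that subtree.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # prints 3: the payoff the maximizer can guarantee
```

Note that the maximizer avoids the [2, 9] branch: although it contains the largest leaf (9), an optimal minimizer would reply with 2.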

Let us combine what we have learnt so far about minimax and evaluation functions to write a proper Tic-Tac-Toe AI (Artificial Intelligence) that plays a perfect game. This AI will consider all possible scenarios and make the most optimal move.

Finding the Best Move:

We shall be introducing a new function called findBestMove(). This function evaluates all the available moves using minimax() and then returns the best move the maximizer can make.

Minimax:

To check whether or not the current move is better than the best move, we take the help of the minimax() function, which considers all the possible ways the game can go and returns the best value for that move, assuming the opponent also plays optimally.

The code for the maximizer and minimizer in the minimax() function is similar to findBestMove(); the only difference is that instead of returning a move, it returns a value.

Checking for the game-over state:

To check whether the game is over and to make sure there are no moves left, we use the isMovesLeft() function. It is a simple, straightforward function which checks whether a move is available or not and returns true or false respectively.
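Such a check can be sketched in one line (a sketch only; it assumes a 3×3 list-of-lists board with ' ' marking empty cells, matching the Part B code below, and uses a snake_case name rather than the isMovesLeft name from the text):

```python
def is_moves_left(board):
    # True if at least one cell on the board is still empty
    return any(cell == ' ' for row in board for cell in row)

board = [[' ', 'X', 'O'],
         ['X', 'O', 'X'],
         ['O', 'X', 'O']]
print(is_moves_left(board))  # prints True: the top-left cell is empty
```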


Making our AI smarter:

One final step is to make our AI a little bit smarter. Even though the AI above plays perfectly, it might choose a move that results in a slower victory or a faster loss. Let's take an example.

Assume that there are two possible ways for X to win the game from a given board state:

● Move A: X can win in 2 moves

● Move B: X can win in 4 moves

Our evaluation function will return a value of +10 for both moves A and B. Even though move A is better because it ensures a faster victory, our AI may sometimes choose B. To overcome this problem we subtract the depth value from the evaluated score. This means that in case of a victory the AI will choose the victory that takes the least number of moves, and in case of a loss it will try to prolong the game and play as many moves as possible. So the new evaluated values will be:

● Move A will have a value of +10 – 2 = 8

● Move B will have a value of +10 – 4 = 6

Since move A now has a higher score than move B, our AI will choose move A over move B. The same idea must be applied to the minimizer, except that instead of subtracting the depth we add it, because the minimizer always tries to get as negative a value as possible. We can subtract the depth either inside the evaluation function or outside it; anywhere is fine. I have chosen to do it outside the function.
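The depth adjustment above can be checked with a few lines of arithmetic (a sketch using the example's raw score of +10; the helper name adjusted is hypothetical, not part of the manual's code):

```python
def adjusted(raw_score, depth):
    # Maximizer win (+10): subtract depth, so faster wins score higher.
    # Minimizer win (-10): add depth, so slower losses score higher
    # (less negative), prolonging the game.
    return raw_score - depth if raw_score > 0 else raw_score + depth

move_a = adjusted(10, 2)  # X wins in 2 moves -> 8
move_b = adjusted(10, 4)  # X wins in 4 moves -> 6
print(move_a, move_b)     # prints 8 6: the AI now prefers the faster win A
```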
PART B
(PART B : TO BE COMPLETED BY STUDENTS)

(Students must submit the soft copy as per the following segments within two hours of the practical. The soft copy must be uploaded on MS Teams.)

Roll No. A057 Student Name : Tarun Purohit

Program : MCA Semester : 2

Batch : 3

Date of Experiment : 08/02/2025 Date of Submission : 16/02/2025

B.1 Answers of Task to be written by student:


(Paste your answers completed during the 2 hours of practical in the lab here)

Instruction for Students:

1. Clearly understand the functioning of intelligent agents.

2. Choose a suitable programming language.

3. Write the program to simulate the behavior of the specified intelligent agents.

4. Conclude the learning done through the tasks that are implemented.

Tasks to be implemented:

Implement / simulate the behavior of the following adversarial searching method.

1. MinMax searching algorithm

Code:

def print_board(board):
    for row in board:
        print(" | ".join(row))
        print("-" * 9)


def check_win(board, player):
    # Check all rows and all columns for three in a row
    for i in range(3):
        if all([cell == player for cell in board[i]]) or all([board[j][i] == player for j in range(3)]):
            return True
    # Check both diagonals
    if all([board[i][i] == player for i in range(3)]) or all([board[i][2 - i] == player for i in range(3)]):
        return True
    return False


def is_board_full(board):
    return all([cell != ' ' for row in board for cell in row])


def evaluate(board):
    # +10 if the AI ('X') has won, -10 if the human ('O') has won, else 0
    if check_win(board, 'X'):
        return 10
    elif check_win(board, 'O'):
        return -10
    else:
        return 0


def minimax(board, depth, is_maximizing):
    score = evaluate(board)
    # Adjust by depth: prefer faster wins and slower losses
    if score == 10:
        return score - depth
    elif score == -10:
        return score + depth
    elif is_board_full(board):
        return 0
    if is_maximizing:
        best = -float('inf')
        for i in range(3):
            for j in range(3):
                if board[i][j] == ' ':
                    board[i][j] = 'X'
                    best = max(best, minimax(board, depth + 1, False))
                    board[i][j] = ' '  # undo the move
        return best
    else:
        best = float('inf')
        for i in range(3):
            for j in range(3):
                if board[i][j] == ' ':
                    board[i][j] = 'O'
                    best = min(best, minimax(board, depth + 1, True))
                    board[i][j] = ' '  # undo the move
        return best


def find_best_move(board):
    best_val = -float('inf')
    best_move = (-1, -1)
    # Try every empty cell and keep the one with the highest minimax value
    for i in range(3):
        for j in range(3):
            if board[i][j] == ' ':
                board[i][j] = 'X'
                move_val = minimax(board, 0, False)
                board[i][j] = ' '
                if move_val > best_val:
                    best_val = move_val
                    best_move = (i, j)
    return best_move


def get_valid_move(board):
    while True:
        try:
            row, col = map(int, input("Enter your move (row and column, separated by space): ").split())
            if 0 <= row < 3 and 0 <= col < 3 and board[row][col] == ' ':
                return row, col
            print("Invalid move! Try again.")
        except ValueError:
            print("Invalid input! Please enter two numbers separated by space.")


def main():
    board = [[' ' for _ in range(3)] for _ in range(3)]
    print("Welcome to Tic-Tac-Toe with Minimax AI!")
    print_board(board)
    while True:
        # Player's move
        row, col = get_valid_move(board)
        board[row][col] = 'O'
        print_board(board)
        if check_win(board, 'O'):
            print("Congratulations! You won!")
            break
        if is_board_full(board):
            print("It's a draw!")
            break
        # AI's turn
        print("AI's turn...")
        ai_row, ai_col = find_best_move(board)
        board[ai_row][ai_col] = 'X'
        print_board(board)
        if check_win(board, 'X'):
            print("AI wins! Better luck next time.")
            break
        if is_board_full(board):
            print("It's a draw!")
            break


if __name__ == "__main__":
    main()

Output:
B.2 Observations and learning:
(Students are expected to comment on the output obtained with clear observations and learning for each task/ sub part assigned)

While implementing the Minimax algorithm for adversarial searching, it was observed that the AI evaluates all possible game states to determine the best possible move. The algorithm follows a recursive approach, where the maximizer aims for the highest possible score and the minimizer tries to reduce it. The evaluation function played a crucial role in deciding optimal moves, ensuring a faster victory or a delayed defeat. Additionally, adjusting the score based on depth improved decision-making efficiency. The key learning outcome was understanding how adversarial search strategies help in decision-making by anticipating opponent moves, making them effective for AI-driven game-playing scenarios.

B.3 Conclusion:
(Students must write the conclusion as per the attainment of the individual outcome listed above and the learning/observations noted in section B.2)

The implementation of the Minimax algorithm demonstrated the effectiveness of adversarial search in game-playing scenarios. By simulating intelligent decision-making, the AI was able to evaluate possible moves, anticipate the opponent's strategy, and choose the optimal path to victory. The experiment reinforced the importance of evaluation functions, depth-based scoring adjustments, and recursive search techniques in achieving optimal performance. Overall, this experiment provided valuable insights into how AI can be designed to play strategic games like Tic-Tac-Toe efficiently using adversarial searching methods.

B.4 Question of Curiosity


(To be answered by student based on the practical performed and learning/observations)

1. Why are adversarial searching techniques useful for game playing? Justify your answer.

Adversarial searching techniques are useful for game playing because they allow an AI agent to make optimal decisions by anticipating and countering an opponent's moves. These techniques, like the Minimax algorithm, analyze all possible game states and evaluate the best possible moves, ensuring that the AI maximizes its chances of winning while minimizing the opponent's advantage. Adversarial search is particularly effective in two-player, zero-sum games (such as Chess and Tic-Tac-Toe), where one player's gain is the other's loss. By considering all possible game progressions, adversarial searching helps AI agents strategize optimally, making them competitive against human or AI opponents.

2. Differentiate between Informed Searching and Adversarial Searching.

● Goal: Informed searching finds the optimal path to a goal state in problem-solving scenarios; adversarial searching determines the best move considering an intelligent opponent.

● Opponent: Informed searching involves no opponent and focuses on solving a search problem; adversarial searching involves an opponent who tries to counteract the AI's moves.

● Strategy: Informed searching uses heuristics to estimate the best path to the goal; adversarial searching considers both maximizing and minimizing strategies to make decisions.

● Applications: Informed searching is used in single-agent problems like pathfinding and puzzles; adversarial searching is used in two-player or multi-agent competitive games.
