
Exam Seat No:

Satish Pradhan Dnyanasadhana College, Thane

Certificate
This is to certify that Mr./Miss Mayuresh Kasar of the TYBSc Computer Science
(Semester V) class has successfully completed all the practical work in the subject
Artificial Intelligence, under the guidance of Asst. Prof. Dnyaneshwar Deore
(subject in charge) during the year 2023-24, in partial fulfilment of the Computer
Science Practical Examination conducted by the University of Mumbai.

Subject in charge Head of the Department

Date
Satish Pradhan Dnyanasadhana College, Thane [A.Y. 2023-2024]
Name: Mayuresh Kasar    Roll No: 16
Program: B.Sc. CS (Sem V)    Subject: Artificial Intelligence (PR)

Sr. No.  Title  Date  Sign

1. Breadth First Search & Iterative Depth First Search
   • Implement the Breadth First Search algorithm to solve a given problem.
   • Implement the Iterative Depth First Search algorithm to solve the same problem.
   • Compare the performance and efficiency of both algorithms.

2. A* Search and Recursive Best-First Search
   • Implement the A* Search algorithm for solving a pathfinding problem.
   • Implement the Recursive Best-First Search algorithm for the same problem.
   • Compare the performance and effectiveness of both algorithms.

3. Decision Tree Learning
   • Implement the Decision Tree Learning algorithm to build a decision tree for a given dataset.
   • Evaluate the accuracy and effectiveness of the decision tree on test data.
   • Visualize and interpret the generated decision tree.

4. Feed Forward Backpropagation Neural Network
   • Implement the Feed Forward Backpropagation algorithm to train a neural network.
   • Use a given dataset to train the neural network for a specific task.
   • Evaluate the performance of the trained network on test data.

5. Support Vector Machines (SVM)
   • Implement the SVM algorithm for binary classification.
   • Train an SVM model using a given dataset and optimize its parameters.
   • Evaluate the performance of the SVM model on test data and analyse the results.

6. Adaboost Ensemble Learning
   • Implement the Adaboost algorithm to create an ensemble of weak classifiers.
   • Train the ensemble model on a given dataset and evaluate its performance.
   • Compare the results with individual weak classifiers.

7. Naive Bayes' Classifier
   • Implement the Naive Bayes' algorithm for classification.
   • Train a Naive Bayes' model using a given dataset and calculate class probabilities.
   • Evaluate the accuracy of the model on test data and analyse the results.

8. K-Nearest Neighbours (K-NN)
   • Implement the K-NN algorithm for classification or regression.
   • Apply the K-NN algorithm to a given dataset and predict the class or value for test data.
   • Evaluate the accuracy or error of the predictions and analyse the results.

9. Association Rule Mining
   • Implement the Association Rule Mining algorithm (e.g., Apriori) to find frequent itemsets.
   • Generate association rules from the frequent itemsets and calculate their support and confidence.
   • Interpret and analyse the discovered association rules.

10. Demo of OpenAI/TensorFlow Tools
    • Explore and experiment with OpenAI or TensorFlow tools and libraries.
    • Perform a demonstration or mini-project showcasing the capabilities of the tools.
    • Discuss and present the findings and potential applications.


Practical 3
Aim: Breadth First Search & Iterative Depth First Search
• Implement the Breadth First Search algorithm to solve a given problem.
• Implement the Iterative Depth First Search algorithm to solve the same problem.
• Compare the performance and efficiency of both algorithms.
Code:
from collections import deque
import time

class Maze:
    def __init__(self, maze):
        self.maze = maze
        self.rows = len(maze)
        self.cols = len(maze[0])

    def is_valid(self, row, col):
        # A cell is traversable if it lies inside the grid and is not a wall (1).
        return 0 <= row < self.rows and 0 <= col < self.cols and self.maze[row][col] == 0

    def bfs(self, start, end):
        # Breadth First Search: explores cells level by level from a FIFO queue,
        # so the first path that reaches the goal is also a shortest path.
        queue = deque([(start, [])])
        visited = set()
        while queue:
            current, path = queue.popleft()
            if current == end:
                return path + [end]
            if current not in visited:
                visited.add(current)
                neighbors = [(current[0] + dr, current[1] + dc)
                             for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
                valid_neighbors = [(r, c) for r, c in neighbors if self.is_valid(r, c)]
                queue.extend((neighbor, path + [current]) for neighbor in valid_neighbors)
        return None

    def dfs(self, start, end):
        # Iterative Depth First Search: the same traversal with a LIFO stack in
        # place of the queue, so the search dives deep before backtracking;
        # the path found is not necessarily the shortest.
        stack = [(start, [])]
        visited = set()
        while stack:
            current, path = stack.pop()
            if current == end:
                return path + [end]
            if current not in visited:
                visited.add(current)
                neighbors = [(current[0] + dr, current[1] + dc)
                             for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
                valid_neighbors = [(r, c) for r, c in neighbors if self.is_valid(r, c)]
                stack.extend((neighbor, path + [current]) for neighbor in valid_neighbors)
        return None

maze_data = [[0, 1, 0, 0, 0],
             [0, 1, 0, 1, 0],
             [0, 0, 0, 1, 0],
             [0, 1, 0, 0, 0],
             [0, 0, 0, 1, 0]]
maze = Maze(maze_data)
start = (0, 0)
end = (4, 4)

start_time = time.time()
bfs_path = maze.bfs(start, end)
bfs_time = time.time() - start_time
print("BFS Path:", bfs_path)
print("BFS Time:", bfs_time)

start_time = time.time()
dfs_path = maze.dfs(start, end)
dfs_time = time.time() - start_time
print("DFS Path:", dfs_path)
print("DFS Time:", dfs_time)
Output:
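Note: the dfs above is the stack-based (iterative) form of depth-first search. If "Iterative Depth First Search" is read instead as iterative deepening DFS, a minimal sketch under that reading, reusing the Maze instance above, might look like this (iddfs and dls are illustrative names, not part of the original code):

def iddfs(maze, start, end, max_depth=50):
    # Iterative deepening: run depth-limited DFS with growing depth caps, which
    # keeps DFS's small memory footprint while recovering BFS's shortest-path
    # guarantee on unweighted graphs.
    def dls(current, path, limit):
        if current == end:
            return path + [current]
        if limit == 0:
            return None
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            neighbor = (current[0] + dr, current[1] + dc)
            # Skip walls and avoid revisiting ancestors on the current path.
            if maze.is_valid(*neighbor) and neighbor not in path:
                result = dls(neighbor, path + [current], limit - 1)
                if result is not None:
                    return result
        return None
    for depth in range(max_depth + 1):
        result = dls(start, [], depth)
        if result is not None:
            return result
    return None

print("IDDFS Path:", iddfs(maze, start, end))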


Practical 4
Aim: A* Search and Recursive Best-First Search
• Implement the A* Search algorithm for solving a pathfinding problem.
• Implement the Recursive Best-First Search algorithm for the same problem.
• Compare the performance and effectiveness of both algorithms.
Code:
import heapq, math, time

class Maze:
    def __init__(self, maze):
        self.maze, self.rows, self.cols = maze, len(maze), len(maze[0])

    def is_valid(self, row, col):
        return 0 <= row < self.rows and 0 <= col < self.cols and self.maze[row][col] == 0

    def heuristic(self, current, goal):
        # Straight-line (Euclidean) distance to the goal: it never overestimates
        # the true remaining cost, so A* stays optimal.
        return math.sqrt((current[0] - goal[0]) ** 2 + (current[1] - goal[1]) ** 2)

    def astar(self, start, end):
        # A* orders the frontier by f(n) = g(n) + h(n): cost so far plus the
        # heuristic estimate of the cost still to come.
        heap = [(0, start, [])]
        visited = set()
        while heap:
            f, current, path = heapq.heappop(heap)
            row, col = current
            if current == end:
                return path + [end]
            if current not in visited:
                visited.add(current)
                for neighbor in [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]:
                    if self.is_valid(*neighbor):
                        g = len(path) + 1
                        h = self.heuristic(neighbor, end)
                        heapq.heappush(heap, (g + h, neighbor, path + [current]))
        return None

    def rbfs(self, start, end, f_limit=float('inf')):
        # Simplified Recursive Best-First Search: always recurse into the
        # successor with the lowest heuristic value, tightening the f-limit as
        # the search goes deeper and backtracking when every successor exceeds it.
        if start == end:
            return [start]
        successors = [(neighbor, self.heuristic(neighbor, end)) for neighbor in self.get_neighbors(start)]
        successors.sort(key=lambda x: x[1])
        while successors:
            best, h = successors[0]
            if h > f_limit:
                return None
            result = self.rbfs(best, end, min(f_limit, h))
            if result is not None:
                return [start] + result
            successors.pop(0)
        return None

    def get_neighbors(self, current):
        row, col = current
        neighbors = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
        return [neighbor for neighbor in neighbors if self.is_valid(*neighbor)]

maze_data = [[0, 1, 0, 0, 0],
             [0, 1, 0, 1, 0],
             [0, 0, 0, 1, 0],
             [0, 1, 0, 0, 0],
             [0, 0, 0, 1, 0]]
maze, start, end = Maze(maze_data), (0, 0), (4, 4)

start_time = time.time()
astar_path = maze.astar(start, end)
astar_time = time.time() - start_time
print("A* Search Path:", astar_path)
print("A* Search Time:", astar_time)

start_time = time.time()
rbfs_path = maze.rbfs(start, end)
rbfs_time = time.time() - start_time
print("RBFS Path:", rbfs_path)
print("RBFS Time:", rbfs_time)
Output:
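Since movement in this maze is restricted to four directions, Manhattan distance is another admissible heuristic and a tighter bound than the Euclidean one; a drop-in sketch for the heuristic method above (the name manhattan is illustrative, not from the original code):

    def manhattan(self, current, goal):
        # On a 4-connected grid every move changes one coordinate by exactly 1,
        # so Manhattan distance never overestimates the remaining cost.
        return abs(current[0] - goal[0]) + abs(current[1] - goal[1])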


Practical 5
Aim: Decision Tree Learning
• Implement the Decision Tree Learning algorithm to build a decision tree for a given dataset.
• Evaluate the accuracy and effectiveness of the decision tree on test data.
• Visualize and interpret the generated decision tree.
Code:
# pip install scikit-learn matplotlib
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)

# Fit a decision tree and measure accuracy on the held-out 20%.
clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))

# Visualize the learned tree: each node shows its split, impurity and class counts.
plt.figure(figsize=(12, 8))
plot_tree(clf, feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True)
plt.show()
Output:
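For interpreting the tree without a plot window, scikit-learn can also render it as indented text rules; a small sketch using the classifier trained above:

from sklearn.tree import export_text
# One line per split; leaf lines show the predicted class.
print(export_text(clf, feature_names=iris.feature_names))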


Practical 6
Aim: Feed Forward Backpropagation Neural Network
• Implement the Feed Forward Backpropagation algorithm to train a neural network.
• Use a given dataset to train the neural network for a specific task.
• Evaluate the performance of the trained network on test data.
Code:
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split

# Load MNIST, scale pixels to [0, 1], flatten 28x28 images into 784-vectors
# and one-hot encode the digit labels.
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0
train_images = train_images.reshape((60000, 28 * 28))
test_images = test_images.reshape((10000, 28 * 28))
train_labels = tf.keras.utils.to_categorical(train_labels)
test_labels = tf.keras.utils.to_categorical(test_labels)

# Hold out 20% of the training data as a separate evaluation split.
X_train, X_test, y_train, y_test = train_test_split(train_images, train_labels, test_size=0.2, random_state=42)

# Feed-forward network with one hidden layer; fit() applies backpropagation.
model = models.Sequential([
    layers.Dense(128, activation='relu', input_shape=(28 * 28,)),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.1)

test_loss, test_acc = model.evaluate(X_test, y_test)
print("Test Accuracy:", test_acc)
Output:
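One point worth noting: the official MNIST test split loaded at the top is prepared but never scored; the evaluation above uses a slice carved out of the training data. Scoring the untouched official split as well gives a cleaner generalization estimate:

# The 10,000-image official test split played no role in training,
# so accuracy here is an unbiased estimate of generalization.
orig_loss, orig_acc = model.evaluate(test_images, test_labels)
print("Official MNIST test accuracy:", orig_acc)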


Practical 7
Aim: Support Vector Machines (SVM)
• Implement the SVM algorithm for binary classification.
• Train an SVM model using a given dataset and optimize its parameters.
• Evaluate the performance of the SVM model on test data and analyse the results.
Code:
# pip install scikit-learn
from sklearn import datasets
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report

# Reduce iris to a binary task: 1 for Setosa, 0 for non-Setosa.
X, y = datasets.load_iris(return_X_y=True)
y = (y == 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 5-fold grid search over the regularization strength C and the kernel.
param_grid = {'C': [0.1, 1, 10, 100], 'kernel': ['linear', 'rbf', 'poly']}
grid_search = GridSearchCV(SVC(), param_grid, cv=5)
grid_search.fit(X_train, y_train)
best_params = grid_search.best_params_
print("Best Parameters:", best_params)

# Retrain with the winning parameters and evaluate on the held-out test split.
best_svm_model = SVC(**best_params).fit(X_train, y_train)
y_pred = best_svm_model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Classification Report:\n", classification_report(y_test, y_pred))
Output:
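A small simplification: with its default refit=True, GridSearchCV already retrains the winning configuration on the full training split, so the explicit re-fit above can be replaced by:

# best_estimator_ is the best model, already refit on all of X_train.
y_pred = grid_search.best_estimator_.predict(X_test)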


Practical 8
Aim: Adaboost Ensemble Learning
• Implement the Adaboost algorithm to create an ensemble of weak classifiers.
• Train the ensemble model on a given dataset and evaluate its performance.
• Compare the results with individual weak classifiers.
Code:
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Boost 50 decision stumps (depth-1 trees). Note: the keyword was renamed from
# base_estimator to estimator in scikit-learn 1.2.
adaboost_model = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                                    n_estimators=50, random_state=42)
adaboost_model.fit(X_train, y_train)
y_pred_ensemble = adaboost_model.predict(X_test)
print("AdaBoost Accuracy:", accuracy_score(y_test, y_pred_ensemble))
print("AdaBoost Classification Report:\n", classification_report(y_test, y_pred_ensemble))

# Baseline: a single weak classifier (one decision stump) for comparison.
weak_classifier = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)
y_pred_weak = weak_classifier.predict(X_test)
print("\nWeak Classifier Accuracy:", accuracy_score(y_test, y_pred_weak))
print("Weak Classifier Classification Report:\n", classification_report(y_test, y_pred_weak))
Output:
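To watch the ensemble improve as stumps are added, AdaBoostClassifier's staged_predict yields predictions after each boosting round; a short sketch:

# Print test accuracy every 10 boosting rounds.
for i, staged_pred in enumerate(adaboost_model.staged_predict(X_test), start=1):
    if i % 10 == 0:
        print(f"After {i} stumps: accuracy = {accuracy_score(y_test, staged_pred):.3f}")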


Practical 9
Aim: Naive Bayes' Classifier
• Implement the Naive Bayes' algorithm for classification.
• Train a Naive Bayes' model using a given dataset and calculate class probabilities.
• Evaluate the accuracy of the model on test data and analyse the results.
Code:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, classification_report

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Gaussian Naive Bayes: assumes features are conditionally independent and
# normally distributed within each class.
nb_model = GaussianNB().fit(X_train, y_train)
y_pred = nb_model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Classification Report:\n", classification_report(y_test, y_pred))
Output:
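The aim asks for class probabilities, which the fitted model exposes directly through predict_proba; for example:

# Posterior probability of each of the three iris classes for the
# first three test samples (each row sums to 1).
for row in nb_model.predict_proba(X_test[:3]):
    print([round(p, 3) for p in row])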


Practical 10
Aim: K-Nearest Neighbours (K-NN)
• Implement the K-NN algorithm for classification or regression.
• Apply the K-NN algorithm to a given dataset and predict the class or value for test data.
• Evaluate the accuracy or error of the predictions and analyse the results.
Code:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, classification_report

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Classify each test sample by majority vote among its 3 nearest training samples.
knn_model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
y_pred = knn_model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Classification Report:\n", classification_report(y_test, y_pred))
Output:
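The choice k=3 above is arbitrary; cross-validation on the training split is one way to pick k instead. A minimal sketch:

from sklearn.model_selection import cross_val_score
# Try a few k values and report mean 5-fold cross-validation accuracy for each.
for k in [1, 3, 5, 7, 9]:
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X_train, y_train, cv=5)
    print(f"k={k}: mean CV accuracy = {scores.mean():.3f}")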


Practical 11
Aim: Association Rule Mining
• Implement the Association Rule Mining algorithm (e.g., Apriori) to find frequent itemsets.
• Generate association rules from the frequent itemsets and calculate their support and confidence.
• Interpret and analyse the discovered association rules.
Code:
# pip install mlxtend pandas
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [['bread', 'milk', 'eggs'],
                ['bread', 'butter', 'jam'],
                ['milk', 'tea', 'butter'],
                ['bread', 'tea', 'butter'],
                ['bread', 'milk', 'jam']]

# One-hot encode the transactions into a boolean item matrix.
te = TransactionEncoder()
df = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)

# Apriori keeps itemsets appearing in at least 20% of transactions;
# rules are then filtered to confidence >= 0.7.
frequent_itemsets = apriori(df, min_support=0.2, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
print("Frequent Itemsets:")
print(frequent_itemsets)
print("\nAssociation Rules:")
print(rules[['antecedents', 'consequents', 'support', 'confidence']])
Output:
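As a hand check of the reported metrics: support(X) is the fraction of transactions containing X, and confidence(X→Y) = support(X ∪ Y) / support(X). For the rule {bread} → {milk} in the five transactions above:

n = len(transactions)                                                        # 5
support_bread = sum('bread' in t for t in transactions) / n                  # 4/5 = 0.8
support_both = sum('bread' in t and 'milk' in t for t in transactions) / n   # 2/5 = 0.4
confidence = support_both / support_bread                                    # 0.4 / 0.8 = 0.5
print(support_bread, support_both, confidence)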


Practical 12
Aim: Demo of OpenAI/TensorFlow Tools
• Explore and experiment with OpenAI or TensorFlow tools and libraries.
• Perform a demonstration or mini-project showcasing the capabilities of the tools.
• Discuss and present the findings and potential applications.
Code:
# pip install tensorflow matplotlib
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, datasets
import matplotlib.pyplot as plt

# CIFAR-10: 60,000 32x32 colour images across 10 classes.
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0
train_labels = tf.keras.utils.to_categorical(train_labels)
test_labels = tf.keras.utils.to_categorical(test_labels)

# A small convolutional network: one conv/pool stage plus a dense classifier.
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# The test split doubles as validation data here, as in the original listing.
model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels))

test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test Accuracy: {test_acc}')

# Show the first nine test images with true and predicted class indices.
predictions = model.predict(test_images)
plt.figure(figsize=(10, 10))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(test_images[i])
    plt.title(f'Label: {int(tf.argmax(test_labels[i]))}\nPredicted: {int(tf.argmax(predictions[i]))}')
    plt.axis('off')
plt.show()
Output:
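For more readable titles, the integer class indices can be mapped to the standard CIFAR-10 class names (listed here in their fixed index order):

# CIFAR-10 class names, index 0 through 9.
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']
print(class_names[int(tf.argmax(predictions[0]))])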

