AIML Manual - Merged
Ex No. 1a Implementation of Uninformed Search Algorithms – BFS
Aim :
To implement the Breadth-First Search (BFS) uninformed search algorithm using Python.
Algorithm:
1. Place the starting vertex of the graph in the queue and mark it as visited.
2. Remove the vertex at the front of the queue and process (print) it.
3. For every neighbour of that vertex that is not yet in the visited list, mark it as visited and add it to the rear of the queue.
4. Repeat steps 2 and 3 until the queue is empty.
Program:
graph = {
    '1': ['2', '5', '3'],
    '2': ['6', '4'],
    '5': ['4'],
    '3': ['4', '7'],
    '6': [],
    '4': [],
    '7': []
}
visited = []   # nodes already visited
queue = []     # FIFO queue of nodes waiting to be explored
def bfs(visited, graph, node):
    visited.append(node)      # mark the start node as visited
    queue.append(node)        # and place it in the queue
    while queue:
        m = queue.pop(0)      # remove the node at the front of the queue
        print(m)
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)   # mark the neighbour as visited
                queue.append(neighbour)     # and add it to the rear of the queue
print("Following is the Breadth-First Search")
bfs(visited, graph, '1')
Output:
Result:
Thus, the program for the BFS algorithm using python was implemented, executed and
verified successfully.
Ex No. 1b Implementation of Uninformed Search Algorithms – DFS
Aim :
To implement the Depth-First Search (DFS) uninformed search algorithm using Python.
Algorithm:
1. Start from the given vertex of the graph.
2. If the vertex is not in the visited set, process (print) it and add it to the visited set.
3. For every neighbour of the vertex, repeat step 2 recursively.
4. The recursion stops once every vertex reachable from the start vertex has been visited.
Program:
graph = {
    '1': ['2', '5', '3'],
    '2': ['6', '4'],
    '5': ['4'],
    '3': ['4', '7'],
    '6': [],
    '4': [],
    '7': []
}
visited = set()   # set of nodes already visited
def dfs(visited, graph, node):
    if node not in visited:
        print(node)           # process the node
        visited.add(node)     # mark it as visited
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)   # explore each neighbour recursively
print("Following is the Depth-First Search")
dfs(visited, graph, '1')
Output:
Result:
Thus, the program for the DFS algorithm using python was implemented, executed and
verified successfully.
Ex No. 2a Implementation of Informed Search Algorithm – A*
Aim :
To implement the A* informed search algorithm using Python.
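Since A* expands nodes in increasing order of f(n) = g(n) + h(n), a compact way to realise it is with a priority queue. The following is a minimal sketch only, assuming an illustrative weighted graph and heuristic table (all names and values below are made up for demonstration):
import heapq
# Hypothetical weighted graph: node -> list of (successor, edge cost)
graph = {
    'A': [('B', 4), ('C', 3)],
    'B': [('E', 12), ('F', 5)],
    'C': [('D', 7), ('E', 10)],
    'D': [('E', 2)],
    'E': [('G', 5)],
    'F': [('G', 16)],
    'G': []
}
# Hypothetical heuristic estimates of the distance to the goal G
h = {'A': 14, 'B': 12, 'C': 11, 'D': 6, 'E': 4, 'F': 11, 'G': 0}
def a_star(start, goal):
    # Priority queue ordered by f = g + h; each entry is (f, g, node, path so far)
    open_list = [(h[start], 0, start, [start])]
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for succ, edge_cost in graph[node]:
            if succ not in closed:
                new_g = g + edge_cost
                heapq.heappush(open_list, (new_g + h[succ], new_g, succ, path + [succ]))
    return None, float('inf')
path, cost = a_star('A', 'G')
print("Path:", path, "Cost:", cost)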
Output:
Result:
Thus, the program for the A* algorithm using python was implemented, executed and
verified successfully.
Ex No. 2b Implementation of Informed Search Algorithm – Memory Bounded A*
Aim :
To implement the memory-bounded A* (Iterative Deepening A*) search algorithm using Python.
Algorithm:
1. Set the initial threshold to be the heuristic estimate of the distance from the start node to the goal node.
2. While the goal has not been found and the threshold has not reached infinity:
2.1. Call the recursive search routine from the start node with the current threshold value.
2.2. If it reports that the goal was found, reconstruct and print the solution path.
2.3. Else update the threshold to the smallest f-value (g + h) that exceeded the current threshold.
3. If the recursive search returns infinity, no solution exists; return "NOT_FOUND".
4. Define the recursive search routine: compute f = g + h(node); if f exceeds the threshold, return f; if the node is the goal, report that it was found; otherwise recurse on every successor.
5. Return the minimum f-value obtained over the successors.
Program:
# Heuristic value of each node (estimated distance to the goal G)
V = {'A': 7, 'B': 9, 'C': 6, 'D': 5, 'E': 6, 'F': 4.5, 'H': 4, 'I': 2, 'J': 3, 'K': 3.5, 'G': 0}
# Cost of each directed edge
E = {('B', 'D'): 2, ('A', 'B'): 4, ('A', 'C'): 4, ('A', 'D'): 7, ('D', 'E'): 6, ('E', 'F'): 5,
     ('D', 'F'): 8, ('D', 'H'): 5, ('H', 'I'): 3, ('I', 'J'): 3, ('J', 'K'): 3, ('K', 'H'): 3,
     ('F', 'G'): 5}
INFINITY = 10000000
cameFrom = {}
def h(node):
    return V[node]
def cost(node, succ):
    return E[node, succ]
def successors(node):
    neighbours = []
    for item in E:
        if node == item[0]:
            neighbours.append(item[1])
    return neighbours
def reconstruct_path(cameFrom, current):
    total_path = [current]
    while current in cameFrom:
        current = cameFrom[current]
        total_path.append(current)
    return total_path
def ida_star(root, goal):
    global cameFrom
    def search(node, g, bound):
        f = g + h(node)        # f = cost so far + heuristic estimate
        if f > bound:
            return f
        if node == goal:
            return "FOUND"
        minn = INFINITY
        for succ in successors(node):
            t = search(succ, g + cost(node, succ), bound)
            if t == "FOUND":
                cameFrom[succ] = node    # record the parent along the solution path
                return "FOUND"
            if t < minn:
                minn = t
        return minn
    bound = h(root)
    count = 1
    while 1:
        print("iteration " + str(count))
        count += 1
        t = search(root, 0, bound)
        if t == "FOUND":
            print(reconstruct_path(cameFrom, goal))
            return bound
        if t == INFINITY:
            return "NOT_FOUND"
        bound = t   # deepen the threshold to the smallest f-value that exceeded it
print(ida_star('A', 'G'))
Output:
Result:
Thus, the program for the Memory Bounded A* algorithm using python was
implemented, executed and verified successfully.
Ex No. 3 Implementation of Naïve Bayes Model
Aim :
To build a Naïve Bayes model using the Gaussian Naïve Bayes formula in Python.
Algorithm:
Program:
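A minimal sketch of such a model, assuming scikit-learn's GaussianNB and its built-in Iris dataset (illustrative choices, not fixed by the exercise): every feature is modelled with a per-class normal distribution, and Bayes' rule combines the likelihoods with the class priors at prediction time.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
# Load a sample dataset and split it into train and test portions
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# Fit the Gaussian Naive Bayes model on the training data
model = GaussianNB()
model.fit(X_train, y_train)
# Evaluate on held-out data
y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))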
Result:
Thus, the program for the Naive Bayes algorithm using python was implemented,
executed and verified successfully.
Ex No. 4 Implementation of Bayesian Network
Aim :
To construct a Bayesian network and define its structure using Python.
Algorithm:
1. Import the Bayesian network class from the library being used.
2. Create an empty Bayesian network model.
3. Add the nodes (random variables) of the network.
4. Add directed edges describing the conditional dependency of each variable on its parents.
Program:
from pgmpy.models import BayesianModel   # BayesianModel is the pgmpy class (newer versions name it BayesianNetwork)
# Define the network structure: nodes and directed edges
bayesNet = BayesianModel()
bayesNet.add_node("M")
bayesNet.add_node("U")
bayesNet.add_node("R")
bayesNet.add_node("B")
bayesNet.add_node("S")
bayesNet.add_edge("M", "R")
bayesNet.add_edge("U", "R")
bayesNet.add_edge("B", "R")
bayesNet.add_edge("B", "S")
bayesNet.add_edge("R", "S")
Output:
Result:
Thus, the program for the Bayes Network using python was implemented, executed and
verified successfully.
Ex No. 5 BUILD REGRESSION MODELS
Aim :
To build a simple linear regression model for a given set of data points using Python.
Algorithm:
1. Read the x and y data points.
2. Compute the sum of x*y, the sum of x squared, and the means of x and y.
3. Compute the slope b = (sum(xy) - n*mean(x)*mean(y)) / (sum(x^2) - n*mean(x)^2).
4. Compute the intercept a = mean(y) - b*mean(x).
5. Print the fitted regression line Y = b(X) + a.
Program:
x = [14, 16, 27, 42, 39, 50, 83]
y = [2, 5, 7, 9, 10, 13, 20]
xy_all = 0   # sum of x*y
x2_all = 0   # sum of x squared
x_all = 0    # sum of x
y_all = 0    # sum of y
for i in range(7):
    xy_all = xy_all + x[i] * y[i]
    x2_all = x2_all + x[i] * x[i]
    x_all = x_all + x[i]
    y_all = y_all + y[i]
x_bar = x_all / 7
y_bar = y_all / 7
# Least-squares estimates of the slope (b) and intercept (a)
b = (xy_all - 7 * x_bar * y_bar) / (x2_all - 7 * x_bar * x_bar)
a = y_bar - b * x_bar
print(b)
print(a)
print("Y=", b, "(X)+", a)
Output:
Result:
Thus, the program for the Regression Model using python was implemented, executed
and verified successfully.
Ex No. 6a BUILD DECISION TREES
Aim :
To build a decision tree classifier using Python.
Algorithm:
Program:
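A minimal sketch of a decision tree classifier, assuming scikit-learn's DecisionTreeClassifier and its built-in Iris dataset (illustrative choices):
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score
# Load a sample dataset and split it
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# Fit a decision tree; max_depth keeps the learned tree small and readable
model = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=1)
model.fit(X_train, y_train)
# Print the learned rules and the test accuracy
print(export_text(model))
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))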
Result:
Thus, the program for the Decision Tree using python was implemented, executed and
verified successfully.
Ex No. 6b BUILD RANDOM FOREST
Aim :
To build a random forest model using Python.
Algorithm:
1. In the Random forest model, a subset of data points and a subset of features is
selected for constructing each decision tree.
2. Individual decision trees are constructed for each sample.
3. Each decision tree will generate an output.
4. The final output is decided by majority voting (for classification) or averaging (for regression) over the individual tree outputs.
Program:
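A minimal sketch of such a model, assuming scikit-learn's RandomForestClassifier and its built-in Iris dataset (illustrative choices):
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# Load a sample dataset and split it
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# Each of the 100 trees is trained on a bootstrap sample with a random subset of
# features considered at every split; predictions are combined by majority voting
model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))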
Result:
Thus, the program for the Random Forest using python was implemented, executed and
verified successfully.
Ex No. 7 BUILD SVM MODEL
Aim :
To build a Support Vector Machine (SVM) classification model using Python.
Algorithm:
Program:
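A minimal sketch of an SVM classifier, assuming scikit-learn's SVC with an RBF kernel and the built-in Iris dataset (illustrative choices):
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
# Load a sample dataset and split it
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
# Fit a support vector classifier; C trades off margin width against
# misclassification of training points
model = SVC(kernel="rbf", C=1.0)
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))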
Result:
Thus, the program for the SVM Model using python was implemented, executed and
verified successfully.
Ex No. 8 IMPLEMENT ENSEMBLING TECHNIQUES
Aim :
To implement the ensembling techniques like Bagging, Boosting and Stacking using
Python.
BAGGING:
Algorithm:
1. Create multiple datasets from the train dataset by selecting observations with replacement.
2. Run a base model on each of the created datasets independently.
3. Combine the predictions of all the base models to reach the final output.
4. Bagging normally uses only one type of base model.
Program:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import xgboost as xgb
from sklearn.ensemble import BaggingRegressor
df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)   # keep only the feature columns
X_train, X_test, y_train, y_test = train_test_split(train, target, test_size=0.20)
# Bagging: several XGBoost regressors are trained on bootstrap samples and averaged
# (newer scikit-learn versions name the parameter 'estimator' instead of 'base_estimator')
model = BaggingRegressor(base_estimator=xgb.XGBRegressor())
model.fit(X_train, y_train)
pred_final = model.predict(X_test)
print(mean_squared_error(y_test, pred_final))
Output:
4666
BOOSTING:
Algorithm:
1. Take a subset of the train dataset.
2. Initialize all data points with the same weight.
3. Train a base model on that subset.
4. Use this model to make predictions on the whole dataset.
5. Calculate errors using the predicted values and actual values.
6. Assign higher weights to incorrectly predicted data points.
7. Build another model and make predictions with it in such a way that the errors made by the previous model are mitigated/corrected.
8. Similarly, create multiple models, each successive model correcting the errors of the previous one.
9. The final model (strong learner) is the weighted mean of all the previous models (weak learners).
Program:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor
df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)   # keep only the feature columns
X_train, X_test, y_train, y_test = train_test_split(train, target, test_size=0.20)
# Gradient boosting: trees are added sequentially, each one fitted to the
# residual errors of the ensemble built so far
model = GradientBoostingRegressor()
model.fit(X_train, y_train)
pred_final = model.predict(X_test)
print(mean_squared_error(y_test, pred_final))
Output:
4789
STACKING:
Algorithm:
1. Split the dataset into training and test parts.
2. Train several different base models (here linear regression, XGBoost and random forest) on the training part.
3. Use the out-of-fold predictions of the base models on the training data, and their predictions on the test data, as new stacked feature sets.
4. Train a meta-model on the stacked training features.
5. Use the meta-model on the stacked test features to produce the final output.
Program:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.linear_model import LinearRegression
from vecstack import stacking
df = pd.read_csv("train_data.csv")
target = df["target"]
train = df.drop("target", axis=1)   # keep only the feature columns
X_train, X_test, y_train, y_test = train_test_split(train, target, test_size=0.20)
# Base models whose predictions will be stacked
model_1 = LinearRegression()
model_2 = xgb.XGBRegressor()
model_3 = RandomForestRegressor()
all_models = [model_1, model_2, model_3]
# Out-of-fold predictions of the base models become the new (stacked) feature sets
s_train, s_test = stacking(all_models, X_train, y_train, X_test,
                           regression=True, n_folds=4)
# Meta-model trained on the stacked features and used for the final prediction
final_model = model_1
final_model = final_model.fit(s_train, y_train)
pred_final = final_model.predict(s_test)
print(mean_squared_error(y_test, pred_final))
Output:
4510
Result:
Thus the above Python programs for implementing the ensembling techniques Bagging, Boosting and Stacking were executed and verified successfully.
Ex No. 9 IMPLEMENT CLUSTERING ALGORITHMS
Aim :
To implement the clustering algorithms K-Means and Partitioning Around Medoids (PAM) using Python for a given set of data points.
K-MEANS:
Algorithm:
1. Initialize the k means (centroids) with chosen or random values.
2. For a given number of iterations:
2.1. Iterate through the items and find the mean closest to each item.
2.2. Assign the item to the cluster of that mean.
2.3. After all items are assigned, update each mean to the centroid of its cluster.
Program:
import math
x = [1, 1, 2, 2, 3, 5]
y = [1.5, 4.5, 1.5, 3.5, 2.5, 6]
# Initial centroids
c1 = [1, 1.5]
c2 = [2, 1.5]
dist_c1 = []
dist_c2 = []
clust_1 = []
clust_2 = []
k = 0
while k < 2:   # run two iterations
    clust_1 = []
    clust_2 = []
    dist_c1 = []
    dist_c2 = []
    cx = cy = 0
    for i in range(6):
        # Euclidean distance of point i from each centroid
        dist_c1 = dist_c1 + [math.sqrt(((x[i] - c1[0])**2) + ((y[i] - c1[1])**2))]
        dist_c2 = dist_c2 + [math.sqrt(((x[i] - c2[0])**2) + ((y[i] - c2[1])**2))]
        # Assign the point to the nearer centroid's cluster
        if dist_c1[i] < dist_c2[i]:
            clust_1 = clust_1 + [i]
        else:
            clust_2 = clust_2 + [i]
    print(" Cluster_1: ", end="")
    for i in range(len(clust_1)):
        print(clust_1[i], end="")
        cx = cx + x[clust_1[i]]
        cy = cy + y[clust_1[i]]
    # Update centroid 1 to the mean of its cluster
    c1 = [cx/len(clust_1), cy/len(clust_1)]
    print()
    cx = cy = 0
    print(" Cluster_2: ", end="")
    for i in range(len(clust_2)):
        print(clust_2[i], end="")
        cx = cx + x[clust_2[i]]
        cy = cy + y[clust_2[i]]
    # Update centroid 2 to the mean of its cluster
    c2 = [cx/len(clust_2), cy/len(clust_2)]
    print()
    print("Centroid C1= (", round(c1[0], 3), ", ", round(c1[1], 3), ")")
    print("Centroid C2= (", round(c2[0], 3), ", ", round(c2[1], 3), ")")
    k = k + 1
Output:
PAM (PARTITIONING AROUND MEDOIDS):
Algorithm:
1. Choose k data points as the initial medoids (representative objects).
2. Assign every point to the nearest medoid, here using Manhattan distance.
3. Compute the total cost Q, the sum of the distances of all points to their medoids.
4. Swap a medoid with a non-medoid point, repeat the assignment, and recompute the cost.
5. Keep the configuration with the lower cost.
Program:
x = [2, 3, 3, 4, 6, 6, 7, 7, 8, 7]
y = [6, 4, 8, 7, 2, 4, 3, 4, 5, 6]
print("Enter the representative objects for 1st iteration:")
i = int(input())
j = int(input())
dist_c1 = []
dist_c2 = []
clust_c1 = []
clust_c2 = []
for k in range(10):
    # Manhattan distance of point k from each medoid
    dist_c1 = dist_c1 + [abs(x[k] - x[i]) + abs(y[k] - y[i])]
    dist_c2 = dist_c2 + [abs(x[k] - x[j]) + abs(y[k] - y[j])]
    # Assign the point to the nearer medoid's cluster
    if dist_c1[k] < dist_c2[k]:
        clust_c1 = clust_c1 + [k]
    else:
        clust_c2 = clust_c2 + [k]
q = 0   # total cost: sum of distances of points to their medoids
print("Cluster 1: ", end="")
for i in range(len(clust_c1)):
    print(clust_c1[i], " ", end="")
    q = q + dist_c1[clust_c1[i]]
print()
print("Cluster 2: ", end="")
for i in range(len(clust_c2)):
    print(clust_c2[i], " ", end="")
    q = q + dist_c2[clust_c2[i]]
print()
print("Q = ", q)
print("Enter the second representative points for second iteration:")
i = int(input())
j = int(input())
dist_c1 = []
dist_c2 = []
clust_c1 = []
clust_c2 = []
for k in range(10):
    dist_c1 = dist_c1 + [abs(x[k] - x[i]) + abs(y[k] - y[i])]
    dist_c2 = dist_c2 + [abs(x[k] - x[j]) + abs(y[k] - y[j])]
    if dist_c1[k] < dist_c2[k]:
        clust_c1 = clust_c1 + [k]
    else:
        clust_c2 = clust_c2 + [k]
q = 0
print("Cluster 1: ", end="")
for i in range(len(clust_c1)):
    print(clust_c1[i], " ", end="")
    q = q + dist_c1[clust_c1[i]]
print()
print("Cluster 2: ", end="")
for i in range(len(clust_c2)):
    print(clust_c2[i], " ", end="")
    q = q + dist_c2[clust_c2[i]]
print()
print("Q = ", q)
Output:
Result:
Thus the clustering algorithms K-Means and Partitioning Around Medoids (PAM) were implemented and verified using Python successfully.
Ex No. 10 IMPLEMENT EM FOR BAYESIAN NETWORKS
Aim :
To implement the Expectation-Maximization (EM) algorithm for learning the parameters of a Bayesian network using Python.
Algorithm:
1. Initialize the Bayesian network model with some initial structure and parameters.
The structure defines the conditional dependencies between variables, and the
parameters represent the conditional probability distributions for each variable given
its parents.
2. Initialize the missing values in the dataset (if any) using some heuristic or
random imputation method.
3. Repeat the following steps until convergence or a maximum number of iterations is reached:
3.1. E-step: using the current parameters, compute the expected values (posterior probabilities) of the hidden or missing variables for every record in the dataset.
3.2. M-step: re-estimate the parameters so that they maximize the expected log-likelihood obtained in the E-step.
4. Return the final model with updated structure and parameters.
Program:
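As an illustration of the E-step/M-step cycle described above, the following from-scratch sketch runs EM on the smallest Bayesian network with a hidden variable, Z -> X, where Z is never observed and X is a binary observation; the data and starting parameters below are made up purely for demonstration.
import numpy as np
# Observed values of X (0/1); the parent variable Z is hidden
X = np.array([1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
# Initial parameters of the network Z -> X (arbitrary starting guesses)
pi = 0.5    # P(Z = 1)
p1 = 0.6    # P(X = 1 | Z = 1)
p0 = 0.4    # P(X = 1 | Z = 0)
for iteration in range(50):
    # E-step: posterior responsibility P(Z = 1 | X) for each record,
    # computed with Bayes' rule under the current parameters
    like_z1 = pi * np.where(X == 1, p1, 1 - p1)
    like_z0 = (1 - pi) * np.where(X == 1, p0, 1 - p0)
    resp = like_z1 / (like_z1 + like_z0)
    # M-step: re-estimate the parameters from the expected counts
    pi = resp.mean()
    p1 = (resp * X).sum() / resp.sum()
    p0 = ((1 - resp) * X).sum() / (1 - resp).sum()
print("P(Z=1) =", round(pi, 3))
print("P(X=1|Z=1) =", round(p1, 3))
print("P(X=1|Z=0) =", round(p0, 3))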
Result:
Thus the above Python program for implementing EM for Bayesian networks was executed and verified successfully.
Ex No. 11 BUILD SIMPLE NN MODELS
Aim :
To build a simple neural network model using Python.
Algorithm:
1. Take the inputs from the training dataset, adjust them by their weights, and pass them through the method that computes the output of the ANN.
2. Compute the back-propagated error, i.e. the difference between the neuron's predicted output and the expected output from the training dataset.
3. Based on the extent of the error, make small weight adjustments using the Error Weighted Derivative formula.
4. Iterate this process an arbitrary 15,000 times; in every iteration the whole training set is processed simultaneously.
Program:
import numpy as np
class NeuralNetwork():
    def __init__(self):
        np.random.seed(1)
        # Start with random weights for a single neuron with 3 inputs and 1 output
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1
    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))
    def sigmoid_derivative(self, x):
        # computing derivative to the Sigmoid function
        return x * (1 - x)
    def train(self, training_inputs, training_outputs, training_iterations):
        for iteration in range(training_iterations):
            output = self.think(training_inputs)
            error = training_outputs - output
            # Error Weighted Derivative adjustment
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments
    def think(self, inputs):
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output
if __name__ == "__main__":
    neural_network = NeuralNetwork()
    print("Beginning Randomly Generated Weights: ")
    print(neural_network.synaptic_weights)
    training_inputs = np.array([[0, 0, 1],
                                [1, 1, 1],
                                [1, 0, 1],
                                [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T
    neural_network.train(training_inputs, training_outputs, 15000)
    print("Ending Weights After Training: ")
    print(neural_network.synaptic_weights)
    user_input_one = str(input("User Input One: "))
    user_input_two = str(input("User Input Two: "))
    user_input_three = str(input("User Input Three: "))
    print("Considering New Situation: ", user_input_one, user_input_two, user_input_three)
    print("New Output data: ")
    print(neural_network.think(np.array([user_input_one, user_input_two, user_input_three])))
    print("Success")
Output:
Result:
Thus we have successfully built and implemented a Simple NN Model using python.
Ex No. 12 BUILD DEEP LEARNING SIMPLE NN MODELS
Aim :
To build a deep learning neural network model using the Keras API (TensorFlow framework) in Python.
Algorithm:
1. Load the data.
2. Define the Keras model.
3. Compile the Keras model.
4. Start training (fit the model).
5. Evaluate the model.
6. Make predictions.
Program:
import pandas as pd
data = pd.read_csv('diabetes.csv')
x = data.drop("Outcome", axis=1)
y = data["Outcome"]
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(12, input_dim=8, activation="relu"))
model.add(Dense(12, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=10)
_, accuracy = model.evaluate(x, y)
print("Model accuracy: %.2f" % (accuracy * 100))
predictions = model.predict(x)
print([round(p[0]) for p in predictions])
model = Sequential()                                       # define model
model.add(Dense(12, input_dim=8, activation="relu"))
model.add(Dense(8, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])                        # compile model
model.fit(x, y, epochs=5, batch_size=10)                   # training
_, accuracy = model.evaluate(x, y)                         # testing
print("Model accuracy: %.2f" % (accuracy * 100))
predictions = model.predict(x)                             # make predictions
# round the predictions
rounded = [round(p[0]) for p in predictions]
Output:
Result:
Thus we have successfully built a Deep Learning NN Model using the Keras API (TensorFlow framework).
Ex No. 13 IMPLEMENTATION OF APPLICATIONS OF BACK PROPAGATION ALGORITHM
Aim :
To implement an application of the back propagation algorithm by training a neural network to learn the XOR function using Python.
Algorithm:
1. Initialize data and parameters
Define XOR inputs/outputs and initialize weights/biases randomly.
2. Set activation functions
Use sigmoid and its derivative for forward and backward passes.
3. Forward propagation
Compute outputs of hidden and final layers using sigmoid.
4. Compute error
Calculate difference between predicted and actual output.
5. Backpropagation
Update weights and biases using gradients from error.
6. Repeat and display result
Train for many epochs, print loss periodically, and show final output.
Program:
import numpy as np
# Sigmoid activation and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
def sigmoid_derivative(x):
    return x * (1 - x)
# Input data for XOR
X = np.array([[0, 0],
              [0, 1],
              [1, 0],
              [1, 1]])
# Output labels
y = np.array([[0],
              [1],
              [1],
              [0]])
# Seed for reproducibility
np.random.seed(42)
# Network size
input_neurons = 2
hidden_neurons = 2
output_neurons = 1
# Weights and biases, initialized randomly
weights_input_hidden = np.random.uniform(size=(input_neurons, hidden_neurons))
weights_hidden_output = np.random.uniform(size=(hidden_neurons, output_neurons))
bias_hidden = np.random.uniform(size=(1, hidden_neurons))
bias_output = np.random.uniform(size=(1, output_neurons))
# Training parameters
epochs = 10000
learning_rate = 0.1
# Training loop
for epoch in range(epochs):
    # Forward Propagation
    hidden_input = np.dot(X, weights_input_hidden) + bias_hidden
    hidden_output = sigmoid(hidden_input)
    final_input = np.dot(hidden_output, weights_hidden_output) + bias_output
    final_output = sigmoid(final_input)
    # Backpropagation
    error = y - final_output
    d_output = error * sigmoid_derivative(final_output)
    error_hidden = d_output.dot(weights_hidden_output.T)
    d_hidden = error_hidden * sigmoid_derivative(hidden_output)
    # Updating weights and biases
    weights_hidden_output += hidden_output.T.dot(d_output) * learning_rate
    weights_input_hidden += X.T.dot(d_hidden) * learning_rate
    bias_output += np.sum(d_output, axis=0, keepdims=True) * learning_rate
    bias_hidden += np.sum(d_hidden, axis=0, keepdims=True) * learning_rate
    # Optional: Print error every 1000 epochs
    if epoch % 1000 == 0:
        loss = np.mean(np.square(error))
        print(f"Epoch {epoch}, Loss: {loss:.4f}")
# Final output after training
print("\nFinal Output After Training:")
print(final_output)
Output
Epoch 0, Loss: 0.2880
Epoch 1000, Loss: 0.2494
Epoch 2000, Loss: 0.2457
Epoch 3000, Loss: 0.2200
Epoch 4000, Loss: 0.1622
Epoch 5000, Loss: 0.0527
Epoch 6000, Loss: 0.0169
Epoch 7000, Loss: 0.0089
Epoch 8000, Loss: 0.0058
Epoch 9000, Loss: 0.0043
Result:
Thus we have successfully implemented an application of the back propagation algorithm.