Aiml Lab Manual Final
Ex.No. 1 IMPLEMENT BREADTH FIRST SEARCH
DATE:
Aim:
To write a Python program to implement Breadth First Search (BFS).
Algorithm:
Step 1. Start
Step 2. Put any one of the graph's vertices at the back of the queue.
Step 3. Take the front item of the queue and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Add those which are not in the visited list to the rear of the queue.
Step 5. Repeat steps 3 and 4 until the queue is empty.
Step 6. Stop
Graph:
Program:
graph = {
  '5' : ['3','7'],
  '3' : ['2', '4'],
  '7' : ['8'],
  '2' : [],
  '4' : ['8'],
  '8' : []
}

visited = []   # List to keep track of visited nodes
queue = []     # Queue of nodes waiting to be explored

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)          # dequeue the front node
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')
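Note: queue.pop(0) shifts every remaining element, so each dequeue costs O(n). For larger graphs, collections.deque from the standard library gives O(1) dequeues; a minimal variant of the same loop over the graph above:

from collections import deque

queue = deque(['5'])              # start node, as above
visited = ['5']
while queue:
    m = queue.popleft()           # O(1) dequeue instead of list.pop(0)
    print(m, end=" ")
    for neighbour in graph[m]:
        if neighbour not in visited:
            visited.append(neighbour)
            queue.append(neighbour)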
Result:
Thus the Python program to implement Breadth First Search (BFS) was
developed successfully.
Ex.No. 2 IMPLEMENT DEPTH FIRST SEARCH
DATE:
Aim:
To write a Python program to implement Depth First Search (DFS).
Algorithm:
Step 1. Start
Step 2. Put any one of the graph's vertices on top of the stack.
Step 3. Take the top item of the stack and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the top of the stack.
Step 5. Repeat steps 3 and 4 until the stack is empty.
Step 6. Stop
Graph:
Program:
graph = {
  '5' : ['3','7'],
  '3' : ['2', '4'],
  '7' : ['8'],
  '2' : [],
  '4' : ['8'],
  '8' : []
}

visited = set()   # Set to keep track of visited nodes

def dfs(visited, graph, node):
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')
Result:
Thus the Python program to implement Depth First Search (DFS) was developed successfully.
Ex.No. 3 IMPLEMENT AND COMPARE GREEDY & A* ALGORITHMS.
DATE:
Aim:
To write a Python program to implement the Greedy and A* search algorithms.
Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Repeat the following steps until the queue is empty:
3.1: Pop the node with the lowest cost + heuristic from the queue.
3.2: If the current node is the goal, return the path to the goal.
3.3: If the current node has already been visited, skip it.
3.4: Mark the current node as visited.
3.5: Expand the current node and add its neighbors to the queue.
Step 4: If the queue is empty and the goal has not been found, return None (no path found).
Step 5: Stop
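Both searches share this loop and differ only in the priority used in Step 3.1: greedy best-first search orders the queue by the heuristic alone, f(n) = h(n), while A* orders it by the cost accumulated so far plus the heuristic, f(n) = g(n) + h(n). With an admissible heuristic, A* is guaranteed to return the cheapest path, while greedy search is not.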
Program:
from queue import PriorityQueue

def build_path(came_from, start, goal):
    # Walk back from the goal to the start through the recorded parents
    path, current = [], goal
    while current != start:
        path.append(current)
        current = came_from[current]
    path.append(start)
    path.reverse()
    return path

def greedy(graph, start, goal, heuristic):
    # Greedy best-first search: the frontier is ordered by heuristic alone
    frontier = PriorityQueue()
    frontier.put((heuristic[start], start))
    came_from = {start: None}
    while not frontier.empty():
        _, current = frontier.get()
        if current == goal:
            return build_path(came_from, start, goal)
        for neighbor in graph[current]:
            if neighbor not in came_from:
                came_from[neighbor] = current
                frontier.put((heuristic[neighbor], neighbor))
    return None

def astar(graph, start, goal, heuristic):
    # A* search: the frontier is ordered by cost so far + heuristic
    frontier = PriorityQueue()
    frontier.put((0, start))
    came_from = {}
    cost_so_far = {}
    came_from[start] = None
    cost_so_far[start] = 0
    while not frontier.empty():
        _, current = frontier.get()
        if current == goal:
            return build_path(came_from, start, goal)
        for neighbor, cost in graph[current].items():
            new_cost = cost_so_far[current] + cost
            if neighbor not in cost_so_far or new_cost < cost_so_far[neighbor]:
                cost_so_far[neighbor] = new_cost
                frontier.put((new_cost + heuristic[neighbor], neighbor))
                came_from[neighbor] = current
    return None
# Example usage
graph = {
'A': {'B': 1, 'C': 2},
'B': {'A': 1, 'D': 3},
'C': {'A': 2, 'D': 1},
'D': {'B': 3, 'C': 1}
}
heuristic = {'A': 2, 'B': 1, 'C': 1, 'D': 0}
start, goal = 'A', 'D'
print("Greedy Path:", greedy(graph, start, goal, heuristic))
output:
Greedy Path: ['A', 'B', 'D']
A* Path: ['A', 'C', 'D']
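The two algorithms differ here exactly as expected: greedy follows the heuristic alone (h(B) = h(C) = 1, and B wins the tie), reaching the goal via A-B-D at a true cost of 1 + 3 = 4, while A* accounts for edge costs and finds the cheaper A-C-D at cost 2 + 1 = 3. A small check of the two path costs against the graph above:

# Compare the true edge-cost of the two returned paths
def path_cost(graph, path):
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

print(path_cost(graph, ['A', 'B', 'D']))   # 4 -- greedy's path
print(path_cost(graph, ['A', 'C', 'D']))   # 3 -- A*'s cheaper path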
Result:
Thus the Python program to implement the Greedy and A* algorithms was developed successfully.
Ex.No. 4 Implement the non-parametric locally weighted regression
algorithm in order to fit data points. Select appropriate data set for your
experiment and draw graphs
DATE:
Aim:
To write a Python program to implement the non-parametric locally weighted regression algorithm to fit data points.
Algorithm:
Step 1. Import the numpy, pandas, sklearn and matplotlib libraries.
Step 2. Create a numpy array for waist and weight values and store them in separate variables.
Step 3. Create a pandas DataFrame with waist and weight columns using the numpy arrays.
Step 4. Extract input (X) and output (y) variables from the DataFrame.
Step 5. Create a LinearRegression model.
Step 6. Fit the model to X and y using the fit() method.
Step 7. Create a DataFrame containing the new waist value to be predicted.
Step 8. Use the predict() method of the LinearRegression model to predict the weight for the new waist value.
Step 9. Calculate the mean squared error and R-squared values using the mean_squared_error() and r2_score() functions respectively.
Step 10. Plot the actual and predicted values using matplotlib.pyplot.scatter() and matplotlib.pyplot.plot() functions.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

waist = np.array([70, 71, 72, 73, 74, 75, 76, 77, 78, 79])
weight = np.array([55, 57, 59, 61, 63, 65, 67, 69, 71, 73])
data = pd.DataFrame({'waist': waist, 'weight': weight})

X = data[['waist']]
y = data['weight']
model = LinearRegression()
model.fit(X, y)

new_data = pd.DataFrame({'waist': [80]})   # new waist value (assumed for illustration)
predicted_weight = model.predict(new_data[['waist']])
print('Predicted weight:', predicted_weight)

y_pred = model.predict(X)
mse = mean_squared_error(y, y_pred)
r2 = r2_score(y, y_pred)
print('Mean squared error:', mse)
print('R-squared:', r2)

plt.scatter(X, y, label='Actual')
plt.plot(X, y_pred, color='red', label='Predicted')
plt.xlabel('Waist (cm)')
plt.ylabel('Weight (kg)')
plt.legend()
plt.show()
OUTPUT:
R-squared: 1.0
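The R-squared of 1.0 reflects that the sample data is exactly linear. Note that the model above is an ordinary (global) linear fit; locally weighted regression, as named in the exercise title, instead solves a weighted least-squares problem around each query point, giving nearby training points more influence. A minimal sketch on the same waist/weight arrays (the Gaussian-kernel bandwidth tau is an assumed illustration value):

def lwr_predict(x_query, X, y, tau=2.0):
    # Weight each training point by its distance to the query point
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))
    Xb = np.c_[np.ones(len(X)), X]              # add a bias column
    W = np.diag(w)
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ (Xb.T @ W @ y)
    return np.array([1.0, x_query]) @ theta

print(lwr_predict(75.5, waist.astype(float), weight.astype(float)))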
Result:
Thus the Python program to implement the locally weighted regression algorithm to fit data points was developed successfully.
Ex.No. 5 Write a program to demonstrate the working of the decision tree based
algorithm.
DATE:
Aim:
To write a Python program to demonstrate the working of the decision tree based algorithm.
Algorithm:
1. Load the data – using read_csv from pandas
2. Split the data into training and testing sets – using train_test_split from sklearn
3. Apply the decision tree classifier – using DecisionTreeClassifier from sklearn
4. Predict the target for the test set
5. Evaluate the accuracy of the model – using accuracy_score from sklearn.metrics
6. Visualize the trained tree – using plot_tree from sklearn.tree
Program:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn import metrics

pima = pd.read_csv(r"D:\diabetes_dataset.csv")
print(pima.head())

X = pima.drop('Outcome', axis=1)   # feature columns (split assumed from the Outcome column)
y = pima['Outcome']                # target column
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)  # 70/30 split (assumed)

clf = DecisionTreeClassifier(max_depth=5)
clf1 = clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))

fig = plt.figure()
tree.plot_tree(clf1)
plt.show()
fig.savefig("decision_tree.png")
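sklearn's DecisionTreeClassifier splits on the Gini index by default; to mirror the entropy-based ID3-style splitting suggested by the script name id3.py, the criterion parameter can be set explicitly, e.g.:

# Entropy-based splitting, closer to classical ID3
clf_entropy = DecisionTreeClassifier(criterion="entropy", max_depth=5)
clf_entropy.fit(X_train, y_train)
print("Entropy-criterion accuracy:",
      metrics.accuracy_score(y_test, clf_entropy.predict(X_test)))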
output:
================================================================
RESTART: C:\Users\admin\Desktop\id3.py
===============================================================
Pregnancies Glucose BloodPressure ... DiabetesPedigreeFunction Age Outcome
0 6 148 72 ... 0.627 50 1
1 1 85 66 ... 0.351 31 0
2 8 183 64 ... 0.672 32 1
3 1 89 66 ... 0.167 21 0
4 0 137 40 ... 2.288 33 1
[5 rows x 9 columns]
Accuracy: 1.0
>>>
Result:
Thus the Python program to demonstrate the working of the decision tree based algorithm was developed successfully.
Ex.No. 6 Build an artificial neural network by implementing the
back propagation algorithm and test the same using appropriate
data sets.
DATE:
Aim:
To build an artificial neural network by implementing the backpropagation algorithm and test it using an appropriate data set.
Algorithm:
Step 1. Normalize the input (X) and output (y) training arrays.
Step 2. Initialize the weights and biases of the hidden and output layers with random values.
Step 3. For each epoch, perform forward propagation through the hidden and output layers using the sigmoid activation function.
Step 4. Compute the error at the output layer and propagate it backwards to obtain the gradients at each layer.
Step 5. Update the weights using the gradients scaled by the learning rate.
Step 6. Repeat for the chosen number of epochs and print the predicted output.
Program:
import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X/np.amax(X, axis=0)   # normalize input features column-wise
y = y/100                  # normalize output to the 0-1 range

#Sigmoid Function
def sigmoid(x):
    return 1/(1 + np.exp(-x))

# Derivative of sigmoid, expressed in terms of the activation value
def derivatives_sigmoid(x):
    return x * (1 - x)

#Variable initialization
epoch = 5    # number of training iterations
lr = 0.1     # learning rate
inputlayer_neurons = 2
hiddenlayer_neurons = 3
output_neurons = 1
wh=np.random.uniform(size=(inputlayer_neurons,hiddenlayer_neurons))
bh=np.random.uniform(size=(1,hiddenlayer_neurons))
wout=np.random.uniform(size=(hiddenlayer_neurons,output_neurons))
bout=np.random.uniform(size=(1,output_neurons))

for i in range(epoch):
    #Forward Propagation
    hinp1=np.dot(X,wh)
    hinp=hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1=np.dot(hlayer_act,wout)
    outinp= outinp1+bout
    output = sigmoid(outinp)
    #Backpropagation
    EO = y-output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    hiddengrad = derivatives_sigmoid(hlayer_act)
    d_hiddenlayer = EH * hiddengrad
    wout += hlayer_act.T.dot(d_output) *lr  # dot product of next-layer error and current-layer output
    wh += X.T.dot(d_hiddenlayer) *lr
    print("-----------Epoch-", i+1, "Starts----------")
    print("Input:\n" + str(X))
    print("Actual Output:\n" + str(y))
    print("Predicted Output:\n", output)
    print("-----------Epoch-", i+1, "Ends----------\n")

print("Input:\n" + str(X))
print("Actual Output:\n" + str(y))
print("Predicted Output:\n", output)
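Note that derivatives_sigmoid(x) = x * (1 - x) expects the activation value, not the raw input: for s = sigmoid(z), the derivative ds/dz equals s * (1 - s), which is why the program passes output and hlayer_act (already sigmoid outputs) to it. A quick numeric check:

z = 0.3
s = 1 / (1 + np.exp(-z))                      # sigmoid(z)
print(derivatives_sigmoid(s) == s * (1 - s))  # True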
output:
-----------Epoch- 1 Starts----------
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.73994859]
[0.72235028]
[0.74349565]]
-----------Epoch- 1 Ends----------
-----------Epoch- 2 Starts----------
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.74302162]
[0.7252485 ]
[0.74658857]]
-----------Epoch- 2 Ends----------
-----------Epoch- 3 Starts----------
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.74599163]
[0.72805257]
[0.74957693]]
-----------Epoch- 3 Ends----------
-----------Epoch- 4 Starts----------
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.74886344]
[0.73076681]
[0.75246569]]
-----------Epoch- 4 Ends----------
-----------Epoch- 5 Starts----------
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.75164162]
[0.73339529]
[0.75525949]]
-----------Epoch- 5 Ends----------
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.75164162]
[0.73339529]
[0.75525949]]
>>>
Result:
Thus the Python program to build an artificial neural network implementing the backpropagation algorithm was developed successfully.
Ex.No. 7 IMPLEMENT THE NAÏVE BAYESIAN CLASSIFIER
DATE:
AIM:
To write a Python program to implement the naïve Bayesian classifier.
ALGORITHM:
STEP 1: Load the training data set from the CSV file into a list of dictionaries, where each dictionary represents a single instance (row) in the data set, the keys represent the attribute names (columns), and the values represent the corresponding attribute values for that instance.
STEP 2: Determine the class variable for each instance in the training data set and add it as a new key-value pair to the corresponding dictionary.
STEP 3: Create a dictionary to store the prior probabilities for each class variable in the training data set. The key-value pairs should be of the form {class_variable: prior_probability}.
STEP 4: For each attribute in the training data set, create a dictionary to store the conditional probabilities for each attribute value given each class variable. The key-value pairs should be of the form {(attribute, attribute_value, class_variable): conditional_probability}.
STEP 5: Compute the prior probabilities for each class variable by counting the number of instances in the training data set that belong to each class variable and dividing by the total number of instances.
STEP 6: For each attribute in the training data set, compute the conditional probabilities for each attribute value given each class variable by counting the number of instances in the training data set that have that attribute value and belong to each class variable, and dividing by the number of instances that belong to that class variable.
STEP 7: Load the test data sets from CSV files into lists of dictionaries, following the same format as the training data set.
STEP 8: For each instance in each test data set, compute the posterior probability for each class variable given the attribute values in that instance, using the naive Bayesian formula: P(class_variable | attribute_values) = P(class_variable) * product(P(attribute_value | class_variable) for attribute_value in attribute_values).
STEP 9: Determine the predicted class variable for each instance in each test data set as the class variable with the highest posterior probability.
STEP 10: Compare the predicted class variables to the actual class variables in each test data set to compute the accuracy of the classifier.
STEP 11: Output the accuracy for each test data set.
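The steps above describe a from-scratch classifier, while the program below delegates the probability estimates to sklearn's GaussianNB. A minimal sketch of Steps 5, 6, 8 and 9 on a small hypothetical two-attribute data set:

from collections import Counter

rows = [{'Outlook': 'Sunny', 'Windy': 'True',  'Play': 'No'},
        {'Outlook': 'Sunny', 'Windy': 'False', 'Play': 'No'},
        {'Outlook': 'Rainy', 'Windy': 'False', 'Play': 'Yes'},
        {'Outlook': 'Rainy', 'Windy': 'True',  'Play': 'Yes'}]
counts = Counter(r['Play'] for r in rows)
prior = {c: n / len(rows) for c, n in counts.items()}           # Step 5

def cond(attr, value, c):                                       # Step 6
    in_class = [r for r in rows if r['Play'] == c]
    return sum(r[attr] == value for r in in_class) / len(in_class)

test = {'Outlook': 'Sunny', 'Windy': 'True'}                    # Step 8
post = {c: prior[c] * cond('Outlook', test['Outlook'], c)
              * cond('Windy', test['Windy'], c) for c in prior}
print(max(post, key=post.get))                                  # Step 9 -> No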
Program:
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = pd.read_csv('tennisdata.csv')
X = data.iloc[:,:-1]
y = data.iloc[:,-1]
print(y.head())

# Encode each categorical attribute as integers
le_outlook = LabelEncoder()
X.Outlook = le_outlook.fit_transform(X.Outlook)
le_Temperature = LabelEncoder()
X.Temperature = le_Temperature.fit_transform(X.Temperature)
le_Humidity = LabelEncoder()
X.Humidity = le_Humidity.fit_transform(X.Humidity)
le_Windy = LabelEncoder()
X.Windy = le_Windy.fit_transform(X.Windy)
print(X.head())

le_PlayTennis = LabelEncoder()
y = le_PlayTennis.fit_transform(y)
print(y)

# Train/test split (ratio assumed) and Gaussian naive Bayes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
classifier = GaussianNB()
classifier.fit(X_train,y_train)
print("Accuracy:", accuracy_score(y_test, classifier.predict(X_test)))
output:
0 No
1 No
2 Yes
3 Yes
4 Yes
   Outlook  Temperature  Humidity  Windy
0        2            1         0      0
1        2            1         0      1
2        0            1         0      0
3        1            2         0      0
4        1            0         1      0
[0 0 1 1 1 0 1 0 1 1 1 1 1 0]
Result:
Thus the Python program to implement the naïve Bayesian classifier was developed successfully.
Ex.No. 8 IMPLEMENT NEURAL NETWORKS USING SELF-ORGANIZING MAPS
DATE:
AIM:
To write a Python program to implement neural networks using self-organizing maps (SOM).
ALGORITHM:
Step 1: Initialize the weights wij with random values and initialize the learning rate α.
Step 2: For each training sample, calculate the squared Euclidean distance D(j) between the sample and the weight vector of each cluster unit j.
Step 3: Find the index J for which D(j) is minimum; that unit is the winner.
Step 4: For each unit j within a specific neighborhood of J and for all i, calculate the new weight:
wij(new) = wij(old) + α(xi − wij(old))
Step 5: Update the learning rate: α(t+1) = 0.5 * α(t)
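As a quick numeric check of the update rule in Step 4 (with assumed values w = 0.2, x = 1, α = 0.5):

w_old, x, alpha = 0.2, 1.0, 0.5
w_new = w_old + alpha * (x - w_old)
print(w_new)   # 0.6 -- the winning weight moves halfway toward the input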
Program:
import math

class SOM:
    # Compute the winning cluster for a sample by squared Euclidean distance
    def winner(self, weights, sample):
        D0 = 0
        D1 = 0
        for i in range(len(sample)):
            D0 = D0 + math.pow((sample[i] - weights[0][i]), 2)
            D1 = D1 + math.pow((sample[i] - weights[1][i]), 2)
        if D0 < D1:
            return 0
        else:
            return 1

    # Update the weights of the winning cluster towards the sample
    def update(self, weights, sample, J, alpha):
        # Here iterating over the weights of winning cluster and modifying them
        for i in range(len(weights[0])):
            weights[J][i] = weights[J][i] + alpha * (sample[i] - weights[J][i])
        return weights

# Driver code
def main():
    # Training Examples ( m, n ) -- sample vectors assumed for illustration
    T = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
    m, n = len(T), len(T[0])
    # weight initialization ( n, C ) -- two clusters, starting weights assumed
    weights = [[0.2, 0.6, 0.5, 0.9], [0.8, 0.4, 0.7, 0.3]]
    # training
    ob = SOM()
    epochs = 3
    alpha = 0.5
    for i in range(epochs):
        for j in range(m):
            # training sample
            sample = T[j]
            # Compute the winning vector
            J = ob.winner(weights, sample)
            # Update winning vector
            weights = ob.update(weights, sample, J, alpha)
    # classify a test sample
    s = [0, 0, 0, 1]
    J = ob.winner(weights, s)
    print("Test sample s belongs to cluster:", J)
    print("Trained weights:", weights)

if __name__ == "__main__":
    main()
OUTPUT:
Result:
Thus the Python program to implement neural networks using self-organizing maps was developed successfully.
Ex.No. 9 IMPLEMENT THE k-MEANS ALGORITHM TO CLUSTER A SET OF DATA
DATE:
AIM:
To write a Python program to implement the k-Means algorithm to cluster a set of data.
ALGORITHM:
Step 1: Select the number K to decide the number of clusters to be formed.
Step 2: Select random K points that will act as cluster centroids (cluster_centers).
Step 3: Assign each data point, based on their distance from the randomly selected points (Centroid), to the nearest/closest centroid, which will form the predefined clusters.
Step 4: Calculate the variance and place a new centroid of each cluster.
Step 5: Repeat step no.3, which reassigns each datapoint to the new closest centroid of each cluster.
Step 6: If any reassignment occurs, then go to step 4; else proceed to finish.
Step 7: Finish
Program:
import numpy as np
from sklearn.cluster import KMeans

# Sample data
data = np.array([[1, 2], [5, 8], [1.5, 1.8], [8, 8], [1, 0.6], [9, 11]])
kmeans = KMeans(n_clusters=2)
kmeans.fit(data)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
print("Centroids:")
print(centroids)
print("Labels:")
print(labels)
OUTPUT:
Centroids:
[[1.16666667 1.46666667]
[7.33333333 9. ]]
Labels:
[0 1 0 1 0 1]
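Once fitted, the same model can assign a new point to the nearest learned centroid (the sample point below is a hypothetical illustration):

print(kmeans.predict(np.array([[2, 2]])))   # [0] -- closest to centroid [1.17, 1.47]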
Example 2:
import matplotlib.pyplot as plt
import sklearn.metrics as sm
import pandas as pd
import numpy as np
from sklearn import datasets, preprocessing
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

iris = datasets.load_iris()
X = pd.DataFrame(iris.data)
X.columns = ['Sepal_Length','Sepal_Width','Petal_Length','Petal_Width']
y = pd.DataFrame(iris.target)
y.columns = ['Targets']

model = KMeans(n_clusters=3)
model.fit(X)

plt.figure(figsize=(14,7))
colormap = np.array(['red', 'lime', 'black'])   # one colour per class/cluster

# Real classes versus K-Means clusters, side by side
plt.subplot(1, 2, 1)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[y.Targets], s=40)
plt.title('Real Classification')
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')

plt.subplot(1, 2, 2)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[model.labels_], s=40)
plt.title('K-Means Classification')
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
plt.show()

# Standardize the features, then fit a Gaussian mixture model
scaler = preprocessing.StandardScaler()
scaler.fit(X)
xsa = scaler.transform(X)
xs = pd.DataFrame(xsa, columns = X.columns)
#xs.sample(5)
gmm = GaussianMixture(n_components=3)
gmm.fit(xs)
y_gmm = gmm.predict(xs)
#y_cluster_gmm

plt.subplot(2, 2, 3)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[y_gmm], s=40)
plt.title('GMM Classification')
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
plt.show()

print(sm.confusion_matrix(y.Targets, model.labels_))
print(sm.confusion_matrix(y.Targets, y_gmm))
output:
[ 0 2 48]
[ 0 36 14]]
[45 5 0]
[ 0 50 0]]
Result:
Thus the Python program to implement the k-Means algorithm to cluster a set of data was developed successfully.
Ex.No. 10 IMPLEMENT HIERARCHICAL CLUSTERING ALGORITHM
DATE:
AIM:
To write a Python program to implement the hierarchical clustering algorithm.
ALGORITHM:
Step 1: Import the libraries and load the Mall_Customers dataset.
Step 2: Extract the feature columns to be clustered into x.
Step 3: Plot the dendrogram using scipy to choose the number of clusters.
Step 4: Fit an AgglomerativeClustering model on x and obtain the cluster predictions.
Step 5: Visualize the resulting clusters.
Program:
import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd
import scipy.cluster.hierarchy as shc
from sklearn.cluster import AgglomerativeClustering

dataset = pd.read_csv('Mall_Customers.csv')
x = dataset.iloc[:, [3, 4]].values   # assumed: Annual Income and Spending Score columns
dendro = shc.dendrogram(shc.linkage(x, method="ward"))   # dendrogram to pick cluster count
mtp.title("Dendrogram Plot")
mtp.ylabel("Euclidean Distances")
mtp.xlabel("Customers")
mtp.show()

hc = AgglomerativeClustering(n_clusters=5, linkage='ward')
y_pred= hc.fit_predict(x)
for c in range(5):   # one colour per cluster
    mtp.scatter(x[y_pred == c, 0], x[y_pred == c, 1], s=100, label='Cluster %d' % (c + 1))
mtp.title('Clusters of customers')
mtp.xlabel('Annual Income (k$)')
mtp.ylabel('Spending Score (1-100)')
mtp.legend()
mtp.show()
OUTPUT:
Result:
Thus the Python program to implement the hierarchical clustering algorithm was developed successfully.
Ex.No. 11 COMPARE BREADTH FIRST AND DEPTH FIRST SEARCH IN TERMS OF TIME AND SPACE
DATE:
Aim:
To implement the breadth first and depth first search and compare them in terms of time and space.
Algorithm:
Step 1: Create a queue for BFS and initialize it with the starting node.
Step 2: Create a set to mark visited nodes and mark the starting node as visited.
Step 3: While the queue is not empty, repeat steps 4 to 6.
Step 4: Dequeue the current node from the front of the queue.
Step 5: Process the current node (in this example, print it).
Step 6: Enqueue every unvisited neighbour of the current node and mark it as visited.
PROGRAM:
from collections import deque

def bfs(graph, start):
    # Create a queue for BFS and initialize it with the starting node
    queue = deque()
    queue.append(start)
    # Create a set to mark visited nodes and mark the starting node as visited
    visited = set()
    visited.add(start)
    while queue:
        current_node = queue.popleft()
        # Process the current node (in this example, print it)
        print(current_node, end="")
        # Enqueue unvisited neighbours and mark them as visited
        for neighbor in graph[current_node]:
            if neighbor not in visited:
                queue.append(neighbor)
                visited.add(neighbor)

# Main function
if __name__ == "__main__":
    # Edges for A, B, C and E are reconstructed assumptions consistent
    # with the traversal ABCDEFGH shown in the output
    graph = {
        'A': ['B', 'C'],
        'B': ['D', 'E'],
        'C': ['F', 'G'],
        'E': ['H'],
        'D': ['B'],
        'F': ['C'],
        'G': ['C'],
        'H': ['E']
    }
    print("BFS traversal starting from node 'A':")
    bfs(graph, 'A')
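The aim asks for a comparison in terms of time and space, which the traversal above does not measure. A minimal sketch that times the run and records the peak frontier size (for BFS the frontier is the queue; the analogous DFS version would track the stack instead):

import time

def bfs_measured(graph, start):
    t0 = time.perf_counter()
    queue, visited, peak = deque([start]), {start}, 1
    while queue:
        peak = max(peak, len(queue))          # track the largest frontier
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return time.perf_counter() - t0, peak

elapsed, peak = bfs_measured(graph, 'A')
print("time: %.6f s, peak queue size: %d" % (elapsed, peak))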
Output:
ABCDEFGH
Result:
Thus the Python program to implement and compare breadth first and depth first search in terms of time and space was developed successfully.