AI Manual
Laboratory Manual
Artificial Intelligence
VISION
To become a leading department committed to nurturing student-centric learning through
outcome- and skill-based transformative IT education, creating technocrats and leaders
for the service of society.
MISSION
M1: To shape ourselves into a learning community that fosters leadership, team spirit,
and ethics, and in which we listen to and respect each other.
M2: To provide a computer education experience that transforms students through
rigorous coursework and by providing an understanding of the needs of society
and industry.
M3: To educate students to be professionally competent in IT for industry and research
programs by providing industry-institute interaction.
M4: To strive for excellence among students by infusing a sense of excitement in
Computer innovation, invention, design, creation and entrepreneurship.
M5: To contribute to the service of society through the participation of faculty, staff,
and students in solving real-world problems.
PO.2 Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
PO.3 Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and
environmental considerations.
PO.4 Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.
PO.5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.
PO.6 The engineer and society: Apply reasoning informed by the contextual knowledge
to assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
PO.7 Environment and sustainability: Understand the impact of the professional
engineering solutions in societal and environmental contexts, and demonstrate the knowledge
of, and need for, sustainable development.
PO.8 Ethics: Apply ethical principles and commit to professional ethics and
responsibilities and norms of the engineering practice.
PO.9 Individual and team work: Function effectively as an individual, and as a member
or leader in diverse teams, and in multidisciplinary settings.
PO.10 Communication: Communicate effectively on complex engineering activities with
the engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give and
receive clear instructions.
PO.11 Project management and finance: Demonstrate knowledge and understanding of
the engineering and management principles and apply these to one's own work, as a member
and leader in a team, to manage projects and in multidisciplinary environments.
PO.12 Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological change.
PEO 3. To equip the Learner with the broad education necessary to understand the impact
of engineering solutions in a global, economic, environmental, and societal context.
PEO 4. To encourage, motivate and prepare the Learners for life-long learning.
PEO 5. To inculcate a professional and ethical attitude, good leadership qualities and
commitment to social responsibilities in the Learner's thought process.
COURSE OUTCOMES:
intelligent agents.
representation technique.
4. Ability to design models for reasoning with uncertainty as well as the use of
unreliable information.
LIST OF EXPERIMENTS
EXPERIMENT NUMBER: 1
Name:-
Div & Roll No.:-
Date:-
THEORY:
1. Title of Technical paper
2. Literature Review Summary
3. System Architecture
CONCLUSION:
EXPERIMENT NUMBER: 2
Name:-
Div & Roll No.:-
Date:-
AIM: Assignments on state space formulation and PEAS representation for various AI
Applications.
3. An Essay Evaluator
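For application 3, an essay evaluator, the PEAS description could be sketched as below. The entries are illustrative assumptions, not a prescribed answer, and the same pattern can be repeated for the other applications.
# Hedged PEAS sketch for an essay evaluator (illustrative entries only)
peas_essay_evaluator = {
    "Performance measure": "scoring accuracy, agreement with human graders, quality of feedback",
    "Environment": "submitted essays, grading rubric, subject domain",
    "Actuators": "display of scores, annotated feedback, grade reports",
    "Sensors": "text of the essay (keyboard input or document parser)",
}
for component, description in peas_essay_evaluator.items():
    print(component + ": " + description)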
Conclusion:
EXPERIMENT NUMBER: 3
Theory:
Depth First Search (DFS)
Depth-first search (DFS) is an algorithm for traversing or searching a tree or graph. The
algorithm starts at the root node (selecting some arbitrary node as the root in the case of a
graph) and explores as far as possible along each branch before backtracking. The basic idea
is to start from the root or any arbitrary node, mark the node, and move to an adjacent
unmarked node, continuing this loop until there is no unmarked adjacent node. Then backtrack,
check for other unmarked nodes, and traverse them. Finally, print the nodes in the path.
In the example tree shown in the figure, the nodes are visited in depth-first order, with each branch explored fully before backtracking.
Program:
graph = {
    'A': ['B', 'C', 'D'],
    'B': ['E', 'F'],
    'C': ['G', 'I'],
    'D': ['I'],
    'E': [],
    'F': [],
    'G': [],
    'I': []
}

# Iterative depth-first traversal using an explicit stack
visited = set()
stack = ['A']              # start the traversal from the root node 'A'
while stack:
    s = stack.pop()        # take the most recently added node (LIFO order)
    if s not in visited:
        print(s)
        visited.add(s)
        # push neighbours in reverse so they are visited in the listed order
        for neighbour in reversed(graph[s]):
            if neighbour not in visited:
                stack.append(neighbour)
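With the graph defined above and node A taken as the start, this sketch prints the nodes in the order A, B, E, F, C, G, I, D: each branch is explored fully before backtracking.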
Output:
Conclusion:
EXPERIMENT NUMBER: 4
Name:-
Div & Roll No.:-
Date:-
Theory:
1. f(n) = h(n), where h(n) is the estimated cost from node n to the goal.
Advantages:
o Best-first search can switch between BFS and DFS, thereby gaining the advantages of both
algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
o It can behave as an unguided depth-first search in the worst-case scenario.
o It can get stuck in a loop, like DFS.
o This algorithm is not optimal.
Example:
Consider the search problem below, which we will traverse using greedy best-first search. At each iteration,
each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.
In this search example, we use two lists, the OPEN and CLOSED lists. Following are the
iterations for traversing the above example.
Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where
m is the maximum depth of the search space.
Complete: Greedy best-first search is also incomplete, even if the given state space is finite.
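As a minimal illustration of the greedy best-first procedure and the OPEN/CLOSED lists described above, the following Python sketch orders the OPEN list by h(n) using a priority queue. The graph, heuristic values, start node, and goal node are hypothetical illustrations, not the manual's figure.
import heapq

def greedy_best_first_search(graph, heuristic, start, goal):
    # Expand the node with the smallest h(n) first; keep expanded nodes in a CLOSED set
    open_list = [(heuristic[start], start)]
    closed = set()
    parent = {start: None}
    while open_list:
        h, node = heapq.heappop(open_list)
        if node == goal:
            path = []                      # reconstruct the path from goal back to start
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        closed.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in closed and neighbour not in parent:
                parent[neighbour] = node
                heapq.heappush(open_list, (heuristic[neighbour], neighbour))
    return None

# Hypothetical graph and heuristic values, for illustration only
graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'],
         'E': ['G'], 'C': [], 'D': [], 'F': [], 'G': []}
heuristic = {'S': 13, 'A': 12, 'B': 4, 'C': 7, 'D': 3, 'E': 8, 'F': 2, 'G': 0}
print(greedy_best_first_search(graph, heuristic, 'S', 'G'))   # e.g. ['S', 'B', 'E', 'G']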
Program:
Output:
Conclusion:
EXPERIMENT NUMBER: 5
Name:-
Div & Roll No.:-
Date:-
Theory:
A* search is the most commonly known form of best-first search. It uses the heuristic function
h(n) and the cost to reach node n from the start state, g(n). It combines features of UCS and
greedy best-first search, by which it solves the problem efficiently. The A* search algorithm
finds the shortest path through the search space using the heuristic function. This search
algorithm expands a smaller search tree and provides an optimal result faster. The A* algorithm
is similar to UCS except that it uses g(n) + h(n) instead of g(n).
In the A* search algorithm, we use the search heuristic as well as the cost to reach the node.
Hence we can combine both costs as f(n) = g(n) + h(n), and this sum is called the fitness number.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function
(g + h). If node n is the goal node then return success and stop, otherwise go to the next step.
Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each
successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the
evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back
pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Advantages:
o The A* search algorithm performs better than other search algorithms.
o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.
Disadvantages:
o It does not always produce the shortest path, as it relies mostly on heuristics and approximation.
o The A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement: it keeps all generated nodes in memory, so
it is not practical for many large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is
given in the table below, so we will calculate f(n) of each state using the formula f(n) = g(n) + h(n), where
g(n) is the cost to reach a node from the start state.
Here we will use the OPEN and CLOSED lists.
Solution:
Iteration 3: {(S --> A --> C --> G, 6), (S --> A --> C --> D, 11), (S --> A --> B, 7), (S --> G, 10)}
Iteration 4 will give the final result: S --> A --> C --> G, which provides the optimal path with cost 6.
Points to remember:
o The A* algorithm returns the path which occurs first, and it does not search all remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of
the optimal solution.
o Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic
for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: the second required condition is consistency, which applies only to A* graph search.
If the heuristic function is admissible, then A* tree search will always find the least-cost path.
Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and
the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is
O(b^d), where b is the branching factor.
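The following Python sketch illustrates the A* procedure described above. The edge costs and heuristic values are assumptions chosen to be consistent with the iteration values quoted in the example; they should be replaced with the values from the actual figure when solving the experiment.
import heapq

def a_star_search(graph, heuristic, start, goal):
    # Expand the node with the smallest f(n) = g(n) + h(n); graph maps a node to (neighbour, cost) pairs
    open_list = [(heuristic[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in closed:
                g_new = g + step_cost
                heapq.heappush(open_list,
                               (g_new + heuristic[neighbour], g_new, neighbour, path + [neighbour]))
    return None, float('inf')

# Assumed edge costs and heuristic values, chosen to match the iteration values above
graph = {'S': [('A', 1), ('G', 10)],
         'A': [('B', 2), ('C', 1)],
         'C': [('D', 3), ('G', 4)],
         'B': [], 'D': [], 'G': []}
heuristic = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
path, cost = a_star_search(graph, heuristic, 'S', 'G')
print(path, cost)   # ['S', 'A', 'C', 'G'] with cost 6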
Program:
Output:
Conclusion:
EXPERIMENT NUMBER: 6
Name:-
Div & Roll No.:-
Date:-
Theory:
Hill climbing algorithm is a local search algorithm which continuously moves in the direction of
increasing elevation/value to find the peak of the mountain or best solution to the problem. It
terminates when it reaches a peak value where no neighbor has a higher value.
Hill climbing is a technique used for optimizing mathematical problems.
One of the widely discussed examples of the hill climbing algorithm is the Traveling Salesman
Problem, in which we need to minimize the distance traveled by the salesman.
It is also called greedy local search, as it only looks at its immediate good neighbor state and
not beyond that.
A node of hill climbing algorithm has two components which are state and value.
Hill Climbing is mostly used when a good heuristic is available.
In this algorithm, we don't need to maintain and handle the search tree or graph as it only keeps a
single current state.
o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The
Generate and Test method produces feedback which helps to decide which direction to move in the
search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not remember the previous
states.
On the Y-axis we have the function, which can be an objective function or a cost function, and
the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search
is to find the global minimum and local minimum. If the function on the Y-axis is an objective
function, then the goal of the search is to find the global maximum and local maximum.
Local Maximum: Local maximum is a state which is better than its neighbor states, but there is
also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It has the
highest value of objective function.
Flat local maximum: It is a flat space in the landscape where all the neighbor states of current
states have the same value.
o Steepest-Ascent hill climbing: examines all the neighbours of the current state and moves to the
neighbour with the highest value (the steepest uphill move).
o Stochastic hill climbing: selects one neighbour at random and decides whether to move to it based
on the amount of improvement it offers.
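A minimal hill-climbing sketch in Python, using the steepest-ascent strategy on a simple one-dimensional objective function. The objective, neighbour definition, and step limit are illustrative assumptions only.
import random

def hill_climbing(objective, neighbours, start, max_steps=1000):
    # Steepest ascent: repeatedly move to the best neighbour until no neighbour improves the state
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=objective)
        if objective(best) <= objective(current):
            break                      # reached a peak (possibly only a local maximum)
        current = best
    return current

# Hypothetical objective: maximise f(x) = -(x - 3)^2 + 9 over the integers
objective = lambda x: -(x - 3) ** 2 + 9
neighbours = lambda x: [x - 1, x + 1]
start = random.randint(-10, 10)
peak = hill_climbing(objective, neighbours, start)
print("start =", start, "peak =", peak, "value =", objective(peak))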
Program:
Output:
Conclusion:
EXPERIMENT NUMBER: 7
Name:-
Div & Roll No.:-
Date:-
Theory:
Program:
Output
Conclusion:
EXPERIMENT NUMBER: 8
Name:-
Div & Roll No.:-
Date:-
Theory:
Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization
technique for the minimax algorithm.
o As we have seen in the minimax search algorithm, the number of game states it has
to examine is exponential in the depth of the tree. We cannot eliminate the exponent,
but we can cut it roughly in half. Hence there is a technique by which, without checking each
node of the game tree, we can compute the correct minimax decision, and this technique is
called pruning. It involves two threshold parameters, alpha and beta, for future
expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only
the tree leaves but also entire sub-trees.
o The two parameters can be defined as:
Alpha: The best (highest-value) choice we have found so far at any point
along the path of Maximizer. The initial value of alpha is -∞.
Beta: The best (lowest-value) choice we have found so far at any point
along the path of Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the
standard algorithm does, but it removes all the nodes which do not really affect the
final decision and only make the algorithm slow. Hence, by pruning these nodes, it makes the
algorithm fast.
The main condition required for alpha-beta pruning is:
α >= β
o While backtracking the tree, the node values will be passed to the upper nodes instead of the
values of alpha and beta.
o We will only pass the alpha and beta values to the child nodes.
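A minimal Python sketch of minimax with alpha-beta pruning on a small, hypothetical game tree; the tree structure and leaf values are illustrative assumptions only.
import math

def alphabeta(node, alpha, beta, maximizing, tree):
    # Leaves are numeric values; internal nodes map to their children in the tree dictionary
    children = tree.get(node)
    if children is None:
        return node
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alphabeta(child, alpha, beta, False, tree))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cut-off: the remaining children are pruned
                break
        return value
    else:
        value = math.inf
        for child in children:
            value = min(value, alphabeta(child, alpha, beta, True, tree))
            beta = min(beta, value)
            if alpha >= beta:          # alpha cut-off
                break
        return value

# Hypothetical game tree: MAX at the root, MIN at the next level, numeric leaves
tree = {'A': ['B', 'C'], 'B': [3, 5], 'C': [2, 9]}
print(alphabeta('A', -math.inf, math.inf, True, tree))   # prints 3; the leaf 9 is never examined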
Program:
Output:
Conclusion:
EXPERIMENT NUMBER: 9
Name:-
Div & Roll No.:-
Date:-
Theory:
Genetic algorithms (GAs) are a class of search algorithms based on the natural evolution
process. Genetic algorithms are based on the principle of survival of the fittest.
The Genetic Algorithm is a method inspired by biology, in particular Charles Darwin's theory
of evolution, which is taken as the basis of its working. John Holland introduced the Genetic
Algorithm in 1975. Genetic algorithms are used to tackle optimization problems by mimicking
the evolutionary behavior of species. Starting from an initial random population of solutions,
this population is advanced through selection, mutation, and crossover operators inspired by
natural evolution. By applying this set of operators, the population goes through an iterative
procedure in which it passes through various states, each of which is called a generation. As a
result of this procedure, the population is expected to reach a generation that contains a
good solution to the problem. In a Genetic Algorithm, the solution of the problem is coded
as a string of bits or real numbers.
GAs have been shown in practice to be very efficient at function optimization and are used for
searching huge and complex spaces. Genetic algorithms are used for optimization and machine
learning, drawing on various features of biological evolution.
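As a minimal illustration of the selection, crossover, and mutation operators described above, the following Python sketch maximizes the fitness function f(x) = x^2 for x encoded as a 5-bit string. The population size, mutation rate, and fitness function are illustrative assumptions only.
import random

def fitness(chromosome):
    return int(chromosome, 2) ** 2                  # decode the bit string and square it

def select(population):
    # Tournament selection: keep the fitter of two randomly chosen individuals
    a, b = random.sample(population, 2)
    return a if fitness(a) > fitness(b) else b

def crossover(parent1, parent2):
    point = random.randint(1, len(parent1) - 1)     # single-point crossover
    return parent1[:point] + parent2[point:]

def mutate(chromosome, rate=0.1):
    # Flip each bit with a small probability
    return ''.join(bit if random.random() > rate else ('1' if bit == '0' else '0')
                   for bit in chromosome)

population = [''.join(random.choice('01') for _ in range(5)) for _ in range(6)]
for generation in range(20):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(len(population))]

best = max(population, key=fitness)
print("best chromosome:", best, "decoded value:", int(best, 2))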
Program:
Output:
Conclusion: