
Artificial Intelligence Notes Unit 1

The document provides an overview of Artificial Intelligence (AI), detailing its definition, goals, approaches, and historical development. It discusses various AI techniques, including problem-solving methods, search algorithms, and applications in fields like robotics and natural language processing. Additionally, it contrasts human intelligence with AI, highlighting the limitations and current capabilities of AI systems.

Uploaded by Amruta More

UNIT 1
GENERAL ISSUES AND OVERVIEW OF AI
Syllabus
• Introduction to AI
• Problem solving, state space search
• Blind search: Depth first search, Breadth first search
• Informed search: Heuristic function, Hill climbing search, Best first search, A* & AO* search
• Constraint satisfaction
• Game tree, Evaluation function, Mini-Max search, Alpha-beta pruning, Games of chance
Introduction to AI
What is AI?
• Artificial Intelligence is the science and engineering of making intelligent machines.
• Artificial Intelligence is the study of how to make computers do things which, at the moment, people do better.
• Artificial Intelligence is the branch of computer science that is concerned with the automation of intelligent behavior.
• Artificial Intelligence is the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chance of success.
What is AI?
• Artificial Intelligence is concerned with the design of intelligence in an artificial device.
• The term was coined by McCarthy in 1956.
• There are two ideas in the definition: Artificial and Intelligence.
• The term artificial is easy to understand, but it is very difficult to define intelligence.
What is intelligence?
• Intelligence is what we use when we don't know what to do.
• Intelligence relates to tasks involving higher mental processes.
Examples: creativity, solving problems, pattern recognition, classification, learning, induction, deduction, building analogies, optimization, language processing, knowledge and many more.
Approaches to AI
• Hard or Strong AI
• Soft or Weak AI
• Applied AI
• Cognitive AI
Hard or Strong AI
• Strong AI refers to a machine that approaches or supersedes human intelligence:
— if it can do typically human tasks,
— if it can apply a wide range of background knowledge, and
— if it has some degree of self-consciousness.
• Strong AI aims to build machines whose overall intellectual ability is indistinguishable from that of a human being.
Soft or Weak AI
• Weak AI refers to the use of software to study or accomplish specific problem solving or reasoning tasks that do not encompass the full range of human cognitive abilities.
Example: a chess program such as Deep Blue.
• Weak AI does not achieve self-awareness; it does not demonstrate the wide range of human-level cognitive abilities; it is merely an intelligent, specific problem-solver.
Applied AI
• Aims to produce commercially viable "smart" systems such as, for example, a security system that is able to recognize the faces of people who are permitted to enter a particular building.
• Applied AI has already enjoyed considerable success.
Cognitive AI
• Computers are used to test theories about how the human mind works: for example, theories about how we recognize faces and other objects, or about how we solve abstract problems.
Cognitive science
• Aims to develop, explore and evaluate theories of how the mind works through the use of computational models.
• What is important is not what is done but how it is done; intelligent behavior is not enough, the program must operate in an intelligent manner.
• Example: the chess programs are successful, but say little about the ways humans play chess.
Cybernetics
• "Cybernetics" comes from a Greek word meaning "the art of steering".
• Cybernetics is about having a goal and taking action to achieve that goal. Knowing whether you have reached your goal (or at least are getting closer to it) requires "feedback", a concept that comes from cybernetics.
• Cybernetics grew from a desire to understand and build systems that can achieve goals, whether complex human goals or just goals like maintaining the temperature of a room under changing conditions.
Goals of AI
• The definition of AI gives four possible goals to pursue:
1. Systems that think like humans
2. Systems that think rationally
3. Systems that act like humans
4. Systems that act rationally
• Traditionally, all four goals have been followed.
• Most of AI's work falls into categories 2 and 4.
Systems that think like humans
• Most of the time the human mind is a black box: we are not clear about our own thought process.
• One has to know the functioning of the brain and its mechanism for processing information.
• It is an area of cognitive science.
— The stimuli are converted into mental representations.
— Cognitive processes manipulate these representations to build new representations that are used to generate actions.
• A neural network is a computing model for processing information in a way similar to the brain.
Systems that act like humans
• The overall behaviour of the system should be human-like.
• It could be achieved by observation.
Systems that think rationally
• Such systems rely on logic rather than human performance to measure correctness.
• For thinking rationally or logically, logic formulas and theories are used for synthesizing outcomes.
• For example, given that John is a human and all humans are mortal, one can conclude logically that John is mortal.
Systems that act rationally
• Not all intelligent behavior is mediated by logical deliberation.
• Rational behavior means doing the right thing.
• The goal is to develop systems that are rational and sufficient.
General AI Goals
• Replicate human intelligence
• Solve knowledge-intensive tasks
• Make an intelligent connection between perception and action
• Enhance human-human, human-computer and computer-computer interaction/communication
Goals
Engineering based AI Goal
• Develop concepts, theory and practice of building intelligent machines
• Emphasis is on system building

Science based AI Goal
• Develop concepts, mechanisms and vocabulary to understand biological intelligent behavior
• Emphasis is on understanding intelligent behavior
Major components of an AI system
• Knowledge Representation
• Heuristic Search
• AI Programming languages and tools
• AI Hardware
• The quality of the result depends on how much knowledge the system possesses. The available knowledge must be represented in a very efficient way. Hence, knowledge representation is a vital component of the system.
• It is not merely enough that knowledge is represented efficiently. The inference process should also be equally good for satisfactory results. The inference process is broadly divided into brute-force and heuristic search procedures.
• Today, just as we have specialized languages and programs for data processing and scientific applications, we encounter specialized languages and tools for AI programming. AI languages provide the basic functions for AI programming, and tools provide the right environment.
• Today, most of the AI programs in India are implemented on Von Neumann machines only. However, dedicated workstations have emerged for AI programming.
Application areas of AI
• Perception
— Machine vision
— Speech understanding
— Touch (tactile or haptic) sensation
• Robotics
• Natural Language Processing
— Natural Language Understanding
— Speech Understanding
— Language Generation
— Machine Translation
• Planning
• Expert Systems
• Machine Learning
i. Gestation of AI (1943-1956)
ii. Early enthusiasm, great expectations (1952-1969)
iii. A dose of reality (1966-1974)
iv. Knowledge-based systems (1969-1979)
v. AI becomes an industry (1980-1988)
vi. Evolving systems (1986-present)
vii. Recent events (1987-present)
Gestation of AI
• First work
— McCulloch and Pitts (1943): built a Boolean circuit model of the brain
• First neural network computer
— Minsky and Edmonds (1951) built the "SNARC" neural network computer
• First chess programs
— Shannon (1950) and Turing (1953)
• Official birthplace of AI
— The 1956 Dartmouth workshop organized by John McCarthy
Early enthusiasm, great expectations
• Newell and Simon's early work
— Logic Theorist (1956)
— General Problem Solver
• Samuel's checkers programs (1952-1956)
• The MIT connection
— McCarthy's LISP (1958)
— Minsky's microworlds (1963)
• Things were not so easy

AI discovers computational complexity (1966-1974)
— Only syntactic manipulation and little knowledge
— A lot of AI problems are intractable
— Basic structures had fundamental limitations

Can AI only be used for toy problems?

Knowledge-based systems (1969-1979)
• Expert systems
— Use knowledge to suit larger reasoning steps
— Inferring molecular structures: DENDRAL (1969)
— Medical diagnosis: MYCIN (1974)
— Geological system: PROSPECTOR
• Knowledge representation schemes
— Production System (Newell)
— Frame theory (Minsky)
— Conceptual Dependency (Schank)
— PROLOG (Colmerauer)
AI becomes an industry
• The first commercial expert system
— DEC's R1 (1982)
• The "Fifth Generation" project
— A Japanese initiative to take the lead in AI research
— Parallel reasoning machine
— Running Prolog like machine code
• Machine learning
— An attempt to solve the knowledge acquisition bottleneck
— Reinvention of the back-propagation learning algorithm
• Expert system deficiencies
— Can machine learning fix that problem?

Evolving systems
• Intelligent tutoring
• Case-based reasoning
• Multi-agent planning
• Scheduling
• Natural language
• Virtual reality, games
• Genetic algorithms
— Evolutionary programming
Recent events
• Based on existing theories instead of proposing new ones
— Speech recognition based on hidden Markov models
— Planning based from the start on a simple framework
— Belief networks for reasoning about the combination of uncertain evidence
• This has led to robust methods and workable research agendas
• Is the "whole agent" problem within reach?
The Turing Test
Turing proposed an operational test for intelligent behavior in 1950.

[Figure: a human interrogator communicates by text with a hidden human and a hidden AI system]

Alan M. Turing (1912-1954)
• Major contributor to the code-breaking work at Bletchley Park (Enigma) during World War II.
• Major contributor to the early development of computers.
• Foresaw Artificial Intelligence and devised the Turing Test.
• To conduct the Turing test, we need two people and the machine to be evaluated. One person plays the role of the interrogator, who is in a separate room from the computer and the other person.
• The interrogator can ask questions of either the person or the computer by typing questions and receiving typed responses. However, the interrogator knows them only as A and B and aims to determine which is the person and which is the machine.
• The goal of the machine is to fool the interrogator into believing that it is the person. If the machine succeeds at this, then we will conclude that the machine can think. The machine is allowed to do whatever it can to fool the interrogator.
Information, knowledge, intelligence
• Information is a message that contains relevant meaning, implication, or input for decision and/or action. Information comes from both current (communication) and historical (processed data or 'reconstructed picture') sources. In essence, the purpose of information is to aid in making decisions and/or solving problems or realizing an opportunity.
• Knowledge is the cognition or recognition (know-
what), capacity to act (know-how), and understanding
(know-why) that resides or is contained within the mind or
in the brain. The purpose of knowledge is to better our lives.
• Intelligence (also called intellect) is an umbrella term used
to describe a property of the mind that encompasses
many related abilities, such as the capacities to reason, to
plan, to solve problems, to think abstractly, to comprehend
ideas, to use language, and to learn.
Human intelligence vs. Artificial intelligence
• Human intelligence revolves around adapting to the environment using a combination of several cognitive processes; the field of Artificial Intelligence focuses on designing machines that can mimic human behavior.
• Human intelligence is organically based; Artificial Intelligence is silicon based.
• Human intelligence takes in information through the five senses (five access points); Artificial Intelligence takes in information as binary code (just one access point).
• Artificial Intelligence works along pre-set formulas and ways and is rather straightforward; the human mind, on the other hand, works by association, which is not always logical in a technical sense.
Intelligent Computing Vs. conventional
computing
The AI Problem
• Much of the early work in the field focused on formal tasks, such as game playing and theorem proving.
• Another early foray into AI focused on the sort of problem solving that we do every day, called mundane tasks, such as commonsense reasoning.
• The problem areas where AI is now flourishing most as a practical discipline are primarily the domains that require only specialized expertise without the assistance of commonsense knowledge, called expert systems.
Formal Tasks
• Games
— Chess
— Backgammon
— Checkers
— Go
• Mathematics
— Geometry
— Logic
— Integral calculus
— Proving properties of programs
Mundane Tasks
• Perception
— Vision
— Speech
• Natural language
— Understanding
— Generation
— Translation
• Commonsense reasoning
• Robot control
Expert Tasks
• Engineering
— Design
— Fault finding
— Manufacturing planning
• Scientific analysis
• Medical diagnosis
• Financial analysis
Limits of AI Today
Today's AI systems have been able to achieve limited success in some of these tasks.
• In Computer vision, the systems are capable of face recognition.
• In Robotics, we have been able to make vehicles that are mostly autonomous.
• In Natural language processing, we have systems that are capable of simple machine translation.
• Today's Expert systems can carry out medical diagnosis in a narrow domain.
• Speech understanding systems are capable of recognizing several thousand words of continuous speech.
• Planning and scheduling systems have been employed in scheduling experiments with the Hubble Telescope.
• Learning systems are capable of doing text categorization into about 1000 topics.
• In Games, AI systems can play at the Grand Master level in chess (world champion), checkers, etc.
What can AI systems NOT do yet?
• Understand natural language robustly (e.g.,
read and understand articles in a newspaper)
• Surf the web
• Interpret an arbitrary visual scene
• Learn a natural language
• Construct plans in dynamic real-time
domains
• Exhibit true autonomy and intelligence
What is an AI technique?
An AI technique is a method that exploits knowledge, which should be represented in such a way that:
i. The knowledge captures generalization; that is, situations that share important properties are grouped together rather than each individual situation being represented separately.
ii. It can be understood by people who provide it. In many AI domains the thrust of the knowledge a program has must ultimately be provided by people in terms they understand.
iii. It can easily be modified to correct errors and to reflect changes in the world and in our world view.
iv. It can be used in a great many situations even if it is not totally accurate or complete.
v. It can be used to help overcome its own sheer bulk by narrowing the range of possibilities that must usually be considered.
Examples of AI problems
1. Tic-Tac-Toe
2. Water jug problem
3. 8 puzzle problem
4. 8-queen problem
5. Chess problem
6. Missionaries and cannibals problem
7. Tower of Hanoi problem
8. Traveling salesman problem
9. Magic square
10. Language understanding problems
11. Monkey and Banana problem
12. Cryptarithmetic puzzle
13. Block World problem
Problem solving & Problem representation
• Problem solving is a process of generating solutions from observed data.
• Key elements of problem solving:
— State: a state is a representation of the problem at a given moment.
— State space: contains all the possible states for a given problem.
— Operators: the available actions performed are called operators.
— Initial state: the position from which the problem-solving process may start.
— Goal state: the solution to the problem.
General Problem solving
• To build a system to solve a particular problem, there are four things to do:
1. Define the problem precisely (apply the State Space representation).
2. Analyze the problem.
3. Isolate and represent the task knowledge that is necessary to solve the problem.
4. Choose the best problem solving technique(s) and apply it to the particular problem.
To choose an appropriate method for a particular problem, ask:
1. Is the problem decomposable?
2. Can solution steps be ignored or undone?
3. Is the universe predictable?
4. Is a good solution absolute or relative?
5. Is the solution a state or a path?
6. What is the role of knowledge?
7. Does the task require human interaction?
Search and Control strategies
Search Strategies
1. Uninformed search (blind search) (exhaustive search) (brute force)
— Having no information about the number of steps from the current state to the goal.
2. Informed search (heuristic search)
— More efficient than uninformed search.

Brute Force or Uninformed Search
• These are commonly used search procedures which explore all the alternatives during the search process.
• They do not have any domain-specific knowledge.
• They need the initial state, the goal state and a set of legal operators.
• The strategy gives the order in which the search space is searched.
• The following are examples of uninformed search:
— Depth First Search (DFS)
— Breadth First Search (BFS)
Search Strategies: Blind Search
• Breadth-first search
— Expand all the nodes of one level first.
• Depth-first search
— Expand one of the nodes at the deepest level.
Depth First Search
• The search begins by expanding the initial node: generate all successors of the initial node and test them.
• Depth-first search always expands the deepest node in the current frontier of the search tree.
• Depth-first search uses a LIFO approach.
Depth-first search
• Expand the deepest unexpanded node.
• Implementation: the fringe is a LIFO queue, i.e., put successors at the front.
[Figure: step-by-step expansion of a depth-first search tree]
Algorithm for Depth First Search
1. If the initial state is a goal state, quit and return success.
2. Otherwise, do the following until success or failure is signaled:
a) Generate a successor, E, of the initial state. If there are no more successors, signal failure.
b) Call Depth-First Search with E as the initial state.
c) If success is returned, signal success. Otherwise continue in this loop.
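The recursive algorithm above can be sketched as follows. This is an illustrative sketch, not the notes' own code: `successors` and `goal_test` are placeholder callables for a concrete problem, and a cut-off depth is added because the disadvantages below depend on it.

```python
# Depth-first search: recursively expand successors of the current state,
# backtracking when a branch fails. `limit` is the cut-off depth discussed
# in the disadvantages below.

def dfs(state, successors, goal_test, limit=50, visited=None):
    """Return a path from `state` to a goal state, or None on failure."""
    visited = visited if visited is not None else set()
    if goal_test(state):
        return [state]          # success: path of length 1
    if limit == 0:
        return None             # cut-off depth reached
    visited.add(state)
    for succ in successors(state):
        if succ not in visited:
            rest = dfs(succ, successors, goal_test, limit - 1, visited)
            if rest is not None:
                return [state] + rest   # success propagates upward
    return None                 # no more successors: signal failure
```

Only the nodes on the current path (plus the visited set) are kept, which is why DFS needs so little memory.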
Time and space complexity
Time complexity: 1 + b + b^2 + b^3 + ... + b^d
Hence time complexity = O(b^d)
where b = branching factor, d = depth of the tree.
Space complexity: only the nodes on the current path are stored,
hence space complexity = O(d).
Advantages of Depth-First Search
i. It requires less memory since only the nodes of the current path are stored.
ii. By chance, it may find a solution without examining much of the search space at all.

Disadvantages of Depth-First Search
i. Determination of the depth until which the search has to proceed. This depth is called the cut-off depth.
ii. If the cut-off depth is smaller, the solution may not be found.
iii. If the cut-off depth is large, time complexity will be more.
iv. And there is no guarantee of finding a minimal solution, if more than one solution exists.
Breadth First Search
• Searching proceeds level by level, unlike depth-first search which goes deep into the tree.
• An operator is employed to generate all possible children of a node.
Breadth-first search
• Expand the shallowest unexpanded node.
• Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.
[Figure: step-by-step expansion of a breadth-first search tree]
Breadth-first search
• Guaranteed to find the shortest solution first.
• In the example tree, breadth-first finds solution a-c-f while depth-first finds a-b-e-j.
Algorithm of Breadth First Search
1. Create a variable called Node-LIST and set it to the initial state.
2. Until a goal state is found or Node-LIST is empty:
a) Remove the first element from Node-LIST and call it E. If Node-LIST was empty, quit.
b) For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
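The Node-LIST algorithm above can be sketched with a FIFO queue. This is an illustrative sketch under assumed names (`successors`, `goal_test` are placeholders), and it tests the goal when a node is removed rather than when it is generated, which does not change which solution is found first.

```python
from collections import deque

# Breadth-first search: Node-LIST is a FIFO queue of paths, so states are
# expanded level by level and the first goal found lies on a shortest path.

def bfs(initial, successors, goal_test):
    node_list = deque([[initial]])      # queue of paths, FIFO
    seen = {initial}
    while node_list:
        path = node_list.popleft()      # remove the first element, E
        state = path[-1]
        if goal_test(state):
            return path
        for succ in successors(state):  # each way a rule matches E
            if succ not in seen:
                seen.add(succ)
                node_list.append(path + [succ])
    return None                         # Node-LIST empty: quit
```

Because whole levels are kept in the queue, the memory cost grows with the width of the tree, which is the disadvantage discussed below.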
Time and space complexity
Time complexity: 1 + b + b^2 + b^3 + ... + b^d
Hence time complexity = O(b^d)
Space complexity: 1 + b + b^2 + b^3 + ... + b^d
Hence space complexity = O(b^d)
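The geometric sum above can be checked numerically; the helper name here is illustrative, not from the notes.

```python
# Total nodes 1 + b + b^2 + ... + b^d generated in the worst case,
# showing why both BFS time and space grow as O(b^d).

def nodes_generated(b, d):
    return sum(b**i for i in range(d + 1))

# With branching factor 10 and depth 5 this is already 111,111 nodes.
```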
Advantages of Breadth-First Search
i. Breadth-first search will never get trapped exploring a useless path forever.
ii. If there is a solution, BFS will definitely find it.
iii. If there is more than one solution, then BFS can find the minimal one that requires the least number of steps.

Disadvantages of Breadth-First Search
i. It requires more memory.
ii. The searching process remembers all unwanted nodes, which are of no practical use for the search.
DFS vs BFS
• DFS requires less memory because only the nodes on the current path are stored; BFS requires more memory because all of the tree that has so far been generated must be stored.
• In DFS, by luck, a solution can be found without examining much of the search space at all; in BFS all parts of the tree must be examined to level n before any nodes on level n+1 can be examined.
• DFS does not give an optimal solution; BFS gives an optimal solution.
• DFS may find a long path to a solution in one part of the tree when a shorter path exists in some other, unexplored part of the tree; BFS guarantees to find a solution if it exists, and if there are multiple solutions, a minimal solution will be found.
• DFS: time complexity O(b^d), space complexity O(d); BFS: time complexity O(b^d), space complexity O(b^d), where b = branching factor, d = depth.
Informed Search
• Informed search tries to reduce the amount of search that must be done by making intelligent choices for the nodes that are selected for expansion.
• In general this is done using a heuristic function.
Heuristic Function
• A heuristic function is a function that ranks alternatives in various search algorithms at each branching step based on the available information (heuristically) in order to make a decision about which branch to follow during a search.
• Well designed heuristic functions can play an important part in efficiently guiding a search process toward a solution. Sometimes very simple heuristic functions can provide a fairly good estimate of whether a path is any good or not. In other situations, more complex heuristic functions should be employed.
Example: the first picture shows the current state of the 8-puzzle and the second picture the goal state. The heuristic h(n) is the number of tiles out of place: h(n) = 5, because the tiles 2, 8, 1, 6 and 7 are out of place.
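The "tiles out of place" heuristic above is easy to state in code. This is an illustrative sketch; the board layout used in the usage note is a hypothetical configuration chosen to be consistent with the h(n) = 5 example, since the original pictures are not reproduced here.

```python
# "Tiles out of place" heuristic for the 8-puzzle. A state is a 3x3
# tuple of tuples; 0 marks the blank, which is not counted as a tile.

def misplaced_tiles(state, goal):
    return sum(
        1
        for row, goal_row in zip(state, goal)
        for tile, goal_tile in zip(row, goal_row)
        if tile != 0 and tile != goal_tile
    )
```

For example, with goal ((1,2,3),(8,0,4),(7,6,5)) and current state ((2,8,3),(1,6,4),(0,7,5)), exactly the tiles 2, 8, 1, 6 and 7 are misplaced, giving h(n) = 5 as in the example above.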
Heuristic Search Algorithms
— Hill Climbing
— Best First Search
— A*
— AO*
Hill Climbing
• This algorithm is also called a discrete optimization algorithm.
• It utilizes a simple heuristic function.
• Hill Climbing = Depth First Search + Heuristic Function
• There is practically no difference between hill climbing and depth-first search except that the children of the node that has been expanded are sorted by the remaining distance.
• There are two ways to implement hill climbing:
— Simple hill climbing
— Steepest-ascent hill climbing (gradient search)
Simple hill climbing
1. Evaluate the initial state. If it is a goal state, then return success. Else continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators to apply to the current node:
a) Select a new operator and apply it to the current state to produce a new state.
b) Evaluate the new state:
i. If it is a goal state, then return success.
ii. If it is not a goal state but better than the current state, then make it the current state.
iii. If it is not better than the current state, then continue in the loop.
Steepest-Ascent hill climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state:
a) Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
b) For each operator that applies to the current state do:
i) Apply the operator and generate a new state.
ii) Evaluate the new state. If it is a goal state, return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, then leave SUCC alone.
iii) If SUCC is better than the current state, then set the current state to SUCC.
Difference between simple & steepest-ascent hill climbing
• Steepest-ascent hill climbing or gradient search considers all the moves from the current state and selects the best one as the next state.
• In simple hill climbing, the first state that is better than the current state is selected.
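Both variants above can be sketched in one function. This is an illustrative sketch under assumed names (`successors` and `value` are placeholders for a concrete problem); it stops at the first state no successor improves on, which is exactly the local-maximum behaviour discussed below.

```python
# Hill climbing. With steepest=True, evaluate all successors and move to
# the best one (steepest-ascent / gradient search); with steepest=False,
# move to the first successor that is better (simple hill climbing).

def hill_climb(state, successors, value, steepest=True):
    while True:
        succs = successors(state)
        if not succs:
            return state
        if steepest:
            best = max(succs, key=value)        # consider ALL moves
            if value(best) <= value(state):
                return state                    # local maximum or plateau
            state = best
        else:
            for s in succs:
                if value(s) > value(state):     # FIRST better state wins
                    state = s
                    break
            else:
                return state                    # no better successor
```

On a single-peaked value function both variants reach the same maximum; they differ only in how many evaluations each step costs and which path is taken.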
Search Tree for Hill Climbing
[Figure: example search tree with heuristic values at nodes B, C, D, E and F]
Problems in hill climbing
• Local Maximum: a state that is better than all its neighbours but not so when compared to states that are farther away.
• Plateau: a flat area of the search space in which all neighbouring states have the same value.
• Ridge: the orientation of the high region, compared to the set of available moves, makes it impossible to climb up. However, two moves executed serially may increase the height.
Methods to overcome these problems
• Backtracking for local maximum: backtracking helps in undoing what has been done so far and permits trying a totally different path to attain the global peak.
• A big jump is the solution to escape from the plateau. A huge jump is recommended because in a plateau all neighboring points have the same value.
• Trying different paths at the same time is the solution for circumventing ridges.
Best First Search
• It is a way of combining the advantages of both depth-first search and breadth-first search into a single method.
• One way of combining DFS and BFS is to follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one does.
• At each step of the best-first search process, we select the most promising of the nodes we have generated so far. This is done by applying an appropriate heuristic function to each of them. We then expand the chosen node by using the rules to generate its successors. If one of them is a solution, we can quit. If not, all those new nodes are added to the set of nodes generated so far. Again the most promising node is selected and the process continues.
Best First Search Example
[Figure: search tree with heuristic values at the nodes]
Lists to maintain in Best-First Search
• OPEN: nodes that have been generated but have not been examined. This is organized as a priority queue.
• CLOSED: nodes that have already been examined. Whenever a new node is generated, check whether it has been generated before.
Algorithm of Best First Search
1. Start with OPEN containing just the initial state.
2. Loop until a goal is found or there are no nodes left in OPEN:
a. Pick the best node in OPEN.
b. Generate its successors.
c. For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
ii. If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
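The OPEN/CLOSED scheme above maps naturally onto a priority queue. This is a simplified illustrative sketch: the re-parenting of already-generated nodes (step ii) is omitted, and `successors`, `h` and `goal_test` are placeholder callables.

```python
import heapq

# Greedy best-first search: OPEN is a priority queue ordered by the
# heuristic h, CLOSED is the set of already-examined states.

def best_first(initial, successors, h, goal_test):
    open_list = [(h(initial), initial, [initial])]   # (h, state, path)
    closed = set()
    while open_list:
        _, state, path = heapq.heappop(open_list)    # best node in OPEN
        if goal_test(state):
            return path
        if state in closed:
            continue                                 # already examined
        closed.add(state)
        for succ in successors(state):               # generate successors
            if succ not in closed:
                heapq.heappush(open_list, (h(succ), succ, path + [succ]))
    return None
```

Note that only h (the estimate) orders the queue; unlike A* below, the cost already spent on the path plays no role, so best-first is not guaranteed to find the cheapest path.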
[Figure: search graph from the labelled start node to the goal node]
Search process of best first search
Step 1: examine S; children (A:3) (B:6) (C:5); OPEN = (A:3) (B:6) (C:5); CLOSED = S
Step 2: examine A; children (D:9) (E:8); OPEN = (B:6) (C:5) (D:9) (E:8); CLOSED = S, A
Step 3: examine C; children (H:7); OPEN = (B:6) (D:9) (E:8) (H:7); CLOSED = S, A, C
Step 4: examine B; children (F:12) (G:14); OPEN = (D:9) (E:8) (H:7) (F:12) (G:14); CLOSED = S, A, C, B
Step 5: examine H; children (I:5) (J:6); OPEN = (D:9) (E:8) (F:12) (G:14) (I:5) (J:6); CLOSED = S, A, C, B, H
Step 6: examine I; children (K:1) (L:0) (M:2); search stops as the goal is reached
A*
• The A* algorithm was given by Hart, Nilsson and Raphael in 1968.
• A* is a best first search algorithm with
f(x) = g(x) + h(x)
where
g(x) = sum of edge costs from the start node to x
h(x) = estimate of the lowest-cost path from x to the goal
Example
Obtain the fitness number for node K:
f(n) = g(n) + h(n)
= (cost of the path from start node S to node K) + (heuristic estimate for K)
A* Algorithm
1. Initialize: set OPEN = {s}, CLOSED = {}, g(s) = 0, f(s) = h(s).
2. Fail: if OPEN = {}, terminate and fail.
3. Select: select the minimum-cost state, n, from OPEN. Save n in CLOSED.
4. Terminate: if n ∈ G, terminate with success, and return f(n).
5. Expand: for each successor, m, of n:
If m ∉ [OPEN ∪ CLOSED]:
set g(m) = g(n) + C(n,m)
set f(m) = g(m) + h(m)
insert m in OPEN.
If m ∈ [OPEN ∪ CLOSED]:
set g(m) = min(g(m), g(n) + C(n,m))
set f(m) = g(m) + h(m)
if f(m) has decreased and m ∈ CLOSED, move m to OPEN.
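The algorithm above can be sketched with a priority queue ordered by f = g + h. This is an illustrative sketch, not the notes' own code: instead of explicitly moving states from CLOSED back to OPEN, it keeps the best known g per state and skips stale queue entries, which has the same effect; edges are assumed to be (neighbour, cost) pairs.

```python
import heapq

# A* search: g = cost of the path so far, h = heuristic estimate to the
# goal, and the queue is ordered by f = g + h.

def a_star(start, successors, h, goal_test):
    open_list = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if goal_test(state):
            return g, path                        # optimal if h is admissible
        if g > best_g.get(state, float("inf")):
            continue                              # stale entry: a cheaper path was found
        for succ, cost in successors(state):
            g2 = g + cost                         # g(m) = g(n) + C(n, m)
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(open_list,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None
```

With an admissible h (one that never overestimates), the first goal popped carries the optimal cost, which is the completeness-plus-admissibility merit noted below.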
Merits and demerits of the A* Algorithm
Merits
• A* is both complete and admissible. Thus A* always finds an optimal path, if one exists.
Demerits
• It is costly if the computation cost is high.
Problem Reduction
• Sometimes problems only seem hard to solve. A hard problem may be one that can be reduced to a number of simple problems; when each of the simple problems is solved, the hard problem has been solved.
• Problem reduction may be defined as planning how best to solve a problem that can be recursively decomposed into subproblems in multiple ways.
AND/OR relationships
• Between the complex problem and its sub-problems, there exist two kinds of relationships:
— AND relationship
— OR relationship
• In an AND relationship, the solution for the problem is obtained by solving all the sub-problems.
• In an OR relationship, the solution for the problem is obtained by solving any of the sub-problems.
• An arc connecting different branches is called AND.
AND/OR graphs
• Real-life situations do not exactly decompose into either an AND tree or an OR tree but are always a combination of both.
• An AND/OR graph is useful for representing the solutions of problems that can be solved by decomposing them into a set of smaller problems.
• The A* algorithm is not adequate for AND/OR graphs.
• The AO* algorithm is used for AND/OR graphs.
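The AND/OR cost idea — sum the costs of all children at an AND node, take the cheapest child at an OR node — can be illustrated with a small recursive sketch. The tree shape, heuristic values, and uniform edge cost of 1 are invented for the example.

```python
def and_or_cost(node, tree, h, edge_cost=1):
    """Cost of solving `node` in an AND/OR tree.
    tree maps node -> ('AND' | 'OR', [children]); leaves carry heuristic h."""
    if node not in tree:
        return h[node]                     # terminal/leaf node: heuristic estimate
    kind, children = tree[node]
    costs = [edge_cost + and_or_cost(c, tree, h) for c in children]
    # AND node: all sub-problems must be solved, so costs add up.
    # OR node: any one sub-problem suffices, so take the cheapest.
    return sum(costs) if kind == 'AND' else min(costs)
```

For example, with A = OR(B, C), B = AND(D, E) and h(C) = 8, h(D) = 2, h(E) = 3, node B costs (1+2) + (1+3) = 7, so A costs min(1+7, 1+8) = 8.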

AND/OR tree
(figure: an example AND/OR tree)
AO* Algorithm
1. Initialize: set G* = {s}, f(s) = h(s)
   if s ∈ T, label s as SOLVED
2. Terminate: if s is SOLVED, then terminate.
3. Select: select a non-terminal leaf node n from the marked sub-tree.
4. Expand: make explicit the successors of n
   For each new successor, m:
     set f(m) = h(m)
     if m is terminal, label m SOLVED
5. Cost Revision: call cost-revise(n)

Cost-revise(n)
1. Create Z = {n}
2. If Z = { }, return
3. Select a node m from Z such that m has no descendants in Z.
4. If m is an AND node with successors r1, r2, ..., rk:
   Set f(m) = Σ [f(ri) + C(m, ri)]
   Mark the edge to each successor of m.
   If each successor is labeled SOLVED, then label m as SOLVED.
5. If m is an OR node with successors r1, r2, ..., rk:
   Set f(m) = min [f(ri) + C(m, ri)]
   Mark the edge to the best successor of m.
   If that successor is labeled SOLVED, then label m as SOLVED.
Example
Illustrate the operation of AO* search upon the following search space.
Constraint Satisfaction
• Many AI problems can be viewed as problems of constraint satisfaction.
Examples
— Scheduling
— Timetabling
— Supply Chain Management
— Graph colouring
— Puzzles
Constraint Satisfaction Problem (CSP)
• A CSP consists of
  — A set of variables, X
  — For each variable xi in X, a domain Di
  — Di is a finite set of possible values
• A solution is an assignment of a value in Di to each variable xi such that every constraint is satisfied.
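A minimal backtracking solver for such a CSP might look like the sketch below. The representation of constraints as (scope, predicate) pairs is one of several possible choices, and the map-colouring example at the end is invented for illustration.

```python
def backtrack(variables, domains, constraints, assignment=None):
    """Generic CSP backtracking search.
    constraints: list of (scope_tuple, predicate) pairs; a constraint is
    checked once all variables in its scope have been assigned."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                  # every variable assigned: solution found
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check every constraint whose scope is now fully assigned.
        ok = all(pred(*(assignment[v] for v in scope))
                 for scope, pred in constraints
                 if all(v in assignment for v in scope))
        if ok:
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                # undo and try the next value
    return None                            # no value works: backtrack

# Usage: colour three mutually adjacent regions with three colours.
neq = lambda a, b: a != b
solution = backtrack(['WA', 'NT', 'SA'],
                     {v: ['r', 'g', 'b'] for v in ['WA', 'NT', 'SA']},
                     [(('WA', 'NT'), neq), (('WA', 'SA'), neq), (('NT', 'SA'), neq)])
```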
Crypt-arithmetic puzzle
    SEND
  + MORE
  ------
   MONEY
• We have every letter standing for a digit, and every letter stands for a different digit.
• We have to find an assignment of letters to digits such that a given arithmetic formula is correct.
• Variables are D, E, M, N, O, R, S, Y
• Domains are
  — {0,1,2,3,4,5,6,7,8,9} for D, E, N, O, R, Y
  — {1,2,3,4,5,6,7,8,9} for S, M
Constraint 1:
We can write one long constraint for the sum:
    1000*S + 100*E + 10*N + D
  + 1000*M + 100*O + 10*R + E
  = 10000*M + 1000*O + 100*N + 10*E + Y
Constraint 2:
  alldifferent(D, E, M, N, O, R, S, Y)
These two constraints express the puzzle.
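These two constraints are small enough to check by brute force over all assignments of distinct digits — an illustrative sketch, far less efficient than constraint propagation, but it confirms the puzzle has exactly the solution derived in the notes.

```python
from itertools import permutations

def solve_send_more_money():
    """Brute-force SEND + MORE = MONEY: try every assignment of distinct
    digits to the eight letters, rejecting leading zeros for S and M."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue                       # leading digits cannot be zero
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:           # the single long sum constraint
            return dict(zip('SENDMORY', (s, e, n, d, m, o, r, y)))
```

The `permutations` call enforces the alldifferent constraint for free, since it never repeats a digit within one candidate assignment.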
Solution
• Rules for propagating constraints generate the following constraints:
1. M = 1, since two single-digit numbers plus a carry cannot total more than 19.
2. S = 8 or 9, since S + M + C3 > 9 (to generate the carry) and M = 1, S + 1 + C3 > 9, so S + C3 > 8 and C3 is at most 1.
3. O = 0, since S + M(1) + C3(<=1) must be at least 10 to generate a carry and it can be at most 11. But M is already 1, so O must be 0.
4. N = E or N = E + 1, depending on the value of C2. But N cannot have the same value as E. So N = E + 1 and C2 is 1.
5. In order for C2 to be 1, the sum of N + R + C1 must be greater than 9, so N + R must be greater than 8.
6. N + R cannot be greater than 18, even with a carry in, so E cannot be 9.
Solution...
• Suppose E is assigned the value 2.
• The constraint propagator now observes that:
• N = 3, since N = E + 1.
• R = 8 or 9, since R + N(3) + C1(1 or 0) = 2 or 12. But since N is already 3, the sum of these nonnegative numbers cannot be less than 3. Thus R + 3 + (0 or 1) = 12 and R = 8 or 9.
• 2 + D = Y or 2 + D = 10 + Y, from the sum in the rightmost column.
(figures: two slides showing the constraint-propagation search tree for SEND + MORE = MONEY. From the start state — no two letters have the same value; the sum of the digits must be as shown — propagation fixes M = 1, S = 8 or 9, O = 0, N = E + 1, C2 = 1. Branches such as E = 2 lead to conflicts, while the branch E = 5 gives N = 6, N + R = 10 + E so R = 8, S = 9, and 5 + D = 10 + Y yields D = 7, Y = 2.)
Final solution
• M = 1
• R = 8
• S = 9
• E = 5
• N = 6
• O = 0
• D = 7
• Y = 2
• C1 = 1
• C2 = 1
• C3 = 0
• C4 = 1
SOLVE
• US + AS = ALL
• SHE + THE = BEST
• CROSS + ROADS = DANGER
• DAYS + TOO = SHORT
Solve
  CROSS
+ ROADS
-------
 DANGER
Rule 1: Well, you can see that DANGER has one more letter than CROSS and ROADS, and the extra letter is D. That means that C + R equals something more than 10, which also means D is 1.
Rule 2: Oh look, S + S = R. That means that R must be even. We have a choice of 4, 6 and 8, because if R was 2, S would have to be 1, and D is already 1. Let's try 6 for the value of R, because we need high numbers if we want C + R to equal something more than 10. Oh look, if R is 6 and S is R divided by 2, then S must be 3!
Rule 3: S + D = E, 3 + 1 = 4. So, E = 4.
Rule 4: And since we now only have 4 spots in the key left, we choose the highest number for C, which is 9. Again, we need high numbers to make C + R equal something more than 10.
Rule 5: In the equation, O + A = G. We have 2, 5, 7 and 8 vacant. Let's play around with these letters. Let's see if we can find an equation in there. Yes! There is an equation there: 5 + 2 = 7! So G must equal 7. We know that 9 + 6 = 15, but it's missing the 5! So, A must equal 5. In turn, this leaves O = 2, and the remaining letter N must be 8.
This gives...
   96233
 + 62513
 -------
  158746
And the following key...
• C = 9
• R = 6
• S = 3
• A = 5
• D = 1
• E = 4
• N = 8
• G = 7
• O = 2
Answers
    US      85
  + AS    + 15
  ----    ----
   ALL     100

   SHE     634
 + THE   + 834
 -----   -----
  BEST    1468

  CROSS    96233
+ ROADS  + 62513
-------  -------
 DANGER   158746
Why has game playing been a focus of AI?
• Games have well-defined rules, which can be implemented in programs.
• The interfaces required are usually simple.
• Many human experts exist to assist in developing the programs.
• Games provide a structured task wherein success or failure can be measured with least effort.
• John von Neumann is acknowledged as the father of game theory.
• The term Game means a sort of conflict in which n individuals or groups (known as players) participate.
• Game theory denotes a strategy for the game.
• Grow a search tree.
• Only one player moves at each turn.
• At the leaf positions, when the game is finished, assign the utility to the player.
Usual conditions:
• Each player has a global view of the board.
• Zero-sum game: any gain for one player is a loss for the other.
Major components of a game playing program
Two major components
• Plausible move generator: the plausible move generator is used to generate the set of possible successor positions.
• Static evaluation function generator (utility function): based on heuristics, this generates the static evaluation function value for each and every move that is being made. The static evaluation function gives a snapshot of a particular position.
Game Tree
(figure: a game tree whose levels alternate between the computer's turn and the opponent's turn. The computer is Max; the opponent is Min. At the leaf nodes, the static evaluation function is employed: a big value means good, a small value means bad.)
Game playing strategies
• Minimax strategy
• Alpha-Beta Pruning
Minimax Strategy
• It is a simple look-ahead strategy for two-person game playing.
• One player, the "maximizer", tries to maximize the utility function.
• The other player, the "minimizer", tries to minimize the utility function.
• The plausible move generator generates the necessary states for further evaluation, and the static evaluation function "ranks" each of the positions.
• To decide one move, it explores the possibilities of winning by looking ahead more than one step. This is called a ply. To decide the current move, the game tree would be explored two levels farther.
MiniMax Example
(figures: a series of minimax examples on small trees, alternating max and min levels)
MiniMax Algorithm Illustrated
(figure: backed-up minimax values on a small tree; a max level over the leaves 2 and 7 backs up 7, while a min level over them backs up 2)
Minimax Algorithm
function MINIMAX(N) is
begin
  if N is a leaf then
    return the estimated score of this leaf
  else
    Let N1, N2, .., Nm be the successors of N;
    if N is a Min node then
      return min{MINIMAX(N1), .., MINIMAX(Nm)}
    else
      return max{MINIMAX(N1), .., MINIMAX(Nm)}
end
Example 1: Consider the following game tree search space.
Example 2: Consider the following game tree search space.
— If the first player is a maximizing player, what move should be chosen under the mini-max strategy?
Example 3: Consider the following game tree search space.
— Find out the optimal path and value of the following example with the help of the min-max algorithm.
Alpha-Beta Pruning
• The problem with the Mini-Max algorithm is that the number of game states it has to examine is exponential in the number of moves.
• Alpha-Beta Pruning helps to arrive at the correct Mini-Max decision without looking at every node of the game tree.
• Applying an alpha cutoff means we stop search of a particular branch because we see that we already have a better opportunity elsewhere.
• Applying a beta cutoff means we stop search of a particular branch because we see that the opponent already has a better opportunity elsewhere.
• Applying both forms is alpha-beta pruning.
Alpha-Beta Procedure
• Depth-first search of the game tree, keeping track of:
  — Alpha: highest value seen so far on a maximizing level
  — Beta: lowest value seen so far on a minimizing level
Pruning
When maximizing,
• do not expand any more sibling nodes once a node has been seen whose evaluation is smaller than Alpha.
When minimizing,
• do not expand any more sibling nodes once a node has been seen whose evaluation is greater than Beta.
Pruning examples
(figures: alpha-cutoff and beta-cutoff examples on small MIN/MAX game trees)
Alpha-Beta Algorithm
function MAX-VALUE (state, game, alpha, beta)
  ;; alpha = best MAX so far; beta = best MIN
  if CUTOFF-TEST (state) then return EVAL (state)
  for each s in SUCCESSORS (state) do
    alpha := MAX (alpha, MIN-VALUE (s, game, alpha, beta))
    if alpha >= beta then return beta
  end
  return alpha

function MIN-VALUE (state, game, alpha, beta)
  if CUTOFF-TEST (state) then return EVAL (state)
  for each s in SUCCESSORS (state) do
    beta := MIN (beta, MAX-VALUE (s, game, alpha, beta))
    if beta <= alpha then return alpha
  end
  return beta
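A Python rendering of MAX-VALUE/MIN-VALUE over nested-list game trees (the same encoding used for plain minimax). It also records which leaves were actually evaluated, to show that pruning skips branches; the example tree is invented.

```python
def alphabeta(node, alpha=float('-inf'), beta=float('inf'),
              maximizing=True, visited=None):
    """Alpha-beta value of a nested-list game tree; `visited`, if given,
    collects the leaves actually evaluated, revealing pruned branches."""
    if isinstance(node, (int, float)):
        if visited is not None:
            visited.append(node)
        return node
    if maximizing:
        for child in node:
            alpha = max(alpha, alphabeta(child, alpha, beta, False, visited))
            if alpha >= beta:
                break                  # beta cutoff: MIN will never allow this line
        return alpha
    for child in node:
        beta = min(beta, alphabeta(child, alpha, beta, True, visited))
        if beta <= alpha:
            break                      # alpha cutoff: MAX has a better option elsewhere
    return beta

# Usage: same value as minimax, but fewer leaves are examined.
seen = []
value = alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], visited=seen)
```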
Pruning examples
Example 1: Prune this tree.
Example 2: Consider the following game tree search space.
— What nodes would not need to be examined using the alpha-beta pruning technique?
Example 3: Consider the following game tree search space.
— What nodes would not need to be examined using the alpha-beta pruning technique?
STOCHASTIC GAMES
• In real life, many unpredictable external events can put us into unforeseen situations.
• Many games mirror this unpredictability by including a random element, such as the throwing of dice. We call these stochastic games or games of chance.
• Backgammon is a typical game that combines luck and skill. Dice are rolled at the beginning of a player's turn to determine the legal moves in the backgammon position.
• A game tree in backgammon must include chance nodes in addition to MAX and MIN nodes.
Games of chance
• The next step is to understand how to make correct decisions. Obviously, we still want to pick the move that leads to the best position. However, positions do not have definite minimax values. Instead, we can only calculate the expected value of a position: the average over all possible outcomes of the chance nodes.
• This leads us to generalize the minimax value for deterministic games to an expectiminimax value for games with chance nodes. Terminal nodes and MAX and MIN nodes (for which the dice roll is known) work exactly the same way as before. For chance nodes we compute the expected value, which is the sum of the value over all outcomes, weighted by the probability of each chance action.
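The expectiminimax recursion can be sketched by adding one node type to the minimax tree: a chance node holding (probability, child) pairs. The tree shape and probabilities below are invented for illustration.

```python
def expectiminimax(node):
    """Expectiminimax over typed nodes: ('max' | 'min', children),
    ('chance', [(prob, child), ...]), or a bare number for a terminal."""
    if isinstance(node, (int, float)):
        return node                        # terminal node: its utility
    kind, children = node
    if kind == 'max':
        return max(expectiminimax(c) for c in children)
    if kind == 'min':
        return min(expectiminimax(c) for c in children)
    # chance node: expected value, weighted by each outcome's probability
    return sum(p * expectiminimax(c) for p, c in children)

# Usage: MAX chooses between two gambles; the second has the higher
# expected value (0.9*1 + 0.1*100 = 10.9) even though it usually pays 1.
tree = ('max', [('chance', [(0.5, 2), (0.5, 8)]),
                ('chance', [(0.9, 1), (0.1, 100)])])
```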
