Artificial Intelligence Notes Unit 1
GENERAL ISSUES AND OVERVIEW OF AI
Syllabus
• Introduction to AI,
• Problem Solving, State space search,
• Blind search:
• Depth first search,
• Breadth first search,
• Informed search:
• Heuristic function,
• Hill climbing search,
• Best first search,
• A* & AO* Search,
• Constraint satisfaction.
• Game tree
• Evaluation function,
• Mini-Max search,
• Alpha-beta pruning,
• Games of chance.
Introduction to AI
What is AI?
• Artificial Intelligence (AI) is the science and engineering of making intelligent machines.
[Diagram: AI rests on AI programming languages and AI hardware.]
• The quality of the result depends on how much knowledge the system possesses. The available knowledge must be represented in a very efficient way. Hence, knowledge representation is a vital component of the system.
• It is not merely enough that knowledge is represented efficiently. The inference process should also be equally good for satisfactory results. The inference process is broadly divided into brute-force and heuristic search procedures.
• Today, just like we have specialized languages and programs for data processing and scientific applications, we encounter specialized languages and tools for AI programming. AI languages provide the basic functions for AI programming, and tools provide the right environment.
• Today, most of the AI programs in India are implemented on Von Neumann machines only. However, dedicated workstations have emerged for AI programming.
Applications area of AI
• Perception
— Machine vision
— Speech understanding
— Touch (tactile or haptic) sensation
• Robotics
• Natural Language Processing
— Natural Language Understanding
— Speech Understanding
— Language Generation
— Machine Translation
• Planning
• Expert Systems
• Machine Learning
History of AI
i. Gestation of AI (1943-1956)
ii. Early enthusiasm, great expectations (1952-1969)
iii. A dose of reality (1966-1974)
iv. Knowledge-based systems (1969-1979)
v. AI becomes an industry (1980-1988)
vi. Evolving systems (1986-present)
vii. Recent events (1987-present)
Gestation of AI
• First work
— McCulloch and Pitts (1943): built a Boolean circuit model of the brain
• First neural network computer
— Minsky and Edmonds (1951) built the “SNARC” neural network computer
• First chess programs
— Shannon (1950) and Turing (1953)
• Official birthplace of AI
— The 1956 Dartmouth workshop organized by John McCarthy
Early enthusiasm, great expectations
• Newell and Simon's early work
— Logic Theorist (1956)
— General Problem Solver
• Samuel's checkers programs (1952-1956)
• The MIT connection
— McCarthy's LISP (1958)
— Minsky's microworlds (1963)
Things were not so easy
• AI discovers computational complexity (1966-1974)
— Only syntactic manipulation and little knowledge
— A lot of AI problems are intractable
— Basic structures had fundamental limitations
The Turing Test
[Diagram: a human interrogator exchanges typed messages with a hidden human and a hidden AI system.]
Alan M. Turing (1912-1954)
• Major contributor to the code-breaking work at Bletchley Park (Enigma) during World War II.
• Major contributor to the early development of computers.
• Foresaw Artificial Intelligence and devised the Turing Test.
• To conduct the Turing test, we need two people and the machine to be evaluated. One person plays the role of the interrogator, who is in a separate room from the computer and the other person.
• The interrogator can ask questions of either the person or the computer by typing questions and receiving typed responses. However, the interrogator knows them only as A and B and aims to determine which is the person and which is the machine.
• The goal of the machine is to fool the interrogator into believing that it is the person. If the machine succeeds at this, then we will conclude that the machine can think. The machine is allowed to do whatever it can to fool the interrogator.
Information, Knowledge, Intelligence
• Information is a message that contains relevant meaning, implication, or input for decision and/or action. Information comes from both current (communication) and historical (processed data or ‘reconstructed picture’) sources. In essence, the purpose of information is to aid in making decisions and/or solving problems or realizing an opportunity.
• Knowledge is the cognition or recognition (know-
what), capacity to act (know-how), and understanding
(know-why) that resides or is contained within the mind or
in the brain. The purpose of knowledge is to better our lives.
• Intelligence (also called intellect) is an umbrella term used
to describe a property of the mind that encompasses
many related abilities, such as the capacities to reason, to
plan, to solve problems, to think abstractly, to comprehend
ideas, to use language, and to learn.
Human intelligence Vs. Artificial intelligence
• Human intelligence revolves around adapting to the environment using a combination of several cognitive processes.
• The field of Artificial intelligence focuses on designing machines that can mimic human behavior.
• Natural Language
— Understanding
— Generation
— Translation
• Commonsense reasoning
• Robot control
• Engineering
— Design
— Fault finding
— Manufacturing planning
• Scientific analysis
• Medical diagnosis
• Financial analysis
Limits of AI Today
Today's AI systems have been able to achieve limited success in some of these tasks.
• In Computer vision, the systems are capable of face recognition.
• In Robotics, we have been able to make vehicles that are mostly autonomous.
• In Natural language processing, we have systems that are capable of simple machine translation.
• Today's Expert systems can carry out medical diagnosis in a narrow domain.
• Speech understanding systems are capable of recognizing several thousand words of continuous speech.
• Planning and scheduling systems have been employed in scheduling experiments with the Hubble Telescope.
• The Learning systems are capable of doing text categorization into about 1000 topics.
• In Games, AI systems can play at the Grand Master level in chess (world champion), checkers, etc.
What can AI systems NOT do yet?
• Understand natural language robustly (e.g.,
read and understand articles in a newspaper)
• Surf the web
• Interpret an arbitrary visual scene
• Learn a natural language
• Construct plans in dynamic real-time
domains
• Exhibit true autonomy and intelligence
What is an AI technique?
An AI technique is a method that exploits knowledge, which should be represented in such a way that:
i. The knowledge captures generalization; that is, situations that share important properties are grouped together rather than representing each individual situation separately.
ii. It can be understood by people who provide it. In many AI domains, the thrust of the knowledge a program has must ultimately be provided by people in terms they understand.
iii. It can easily be modified to correct errors and to reflect changes in the world and in our world view.
iv. It can be used in a great many situations even if it is not totally accurate or complete.
v. It can be used to help overcome its own sheer bulk by helping to narrow the range of possibilities that must usually be considered.
Depth First Search
• Depth-first search expands one of the nodes at the deepest level.
• The search begins by expanding the initial node, i.e., generating all successors of the initial node and testing them.
• Depth-first search always expands the deepest node in the current frontier of the search tree.
• Depth-first search uses a LIFO approach.
Depth-first search
• Expand the deepest unexpanded node.
• Implementation: fringe = LIFO queue, i.e., put successors at the front.
Algorithm for Depth First Search
1. If the initial state is a goal state, quit and return success.
2. Otherwise, do the following until success or failure is signaled:
a) Generate a successor, E, of the initial state. If there are no more successors, signal failure.
b) Call Depth-First Search with E as the initial state.
c) If success is returned, signal success. Otherwise continue in this loop.
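The recursive procedure above can be sketched in Python. The graph, the node names, and the goal test below are illustrative assumptions, not taken from the notes:

```python
def depth_first_search(state, goal, successors, visited=None):
    """Recursive DFS: expand the deepest node first (LIFO via the call stack)."""
    if visited is None:
        visited = set()
    if state == goal:                       # step 1: goal test
        return [state]                      # success: path ending at the goal
    visited.add(state)
    for succ in successors.get(state, []):  # step 2a: generate successors in order
        if succ in visited:
            continue
        path = depth_first_search(succ, goal, successors, visited)  # step 2b: recurse
        if path is not None:                # step 2c: propagate success upward
            return [state] + path
    return None                             # no more successors: failure

# Hypothetical search space for illustration
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
print(depth_first_search("A", "G", graph))  # ['A', 'B', 'E', 'G']
```

Note that DFS commits to the first branch it enters, so the path it returns is not necessarily the shortest one.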
Breadth-first search
• Expand the shallowest unexpanded node.
• Implementation: fringe is a FIFO queue, i.e., new successors go at the end.
Depth-first search vs. Breadth-first search
DFS:
• By luck, a solution can be found without examining much of the search space at all.
• It does not give an optimal solution.
• DFS may find a long path to a solution in one part of the tree, when a shorter path exists in some other, unexplored part of the tree.
• Time complexity: O(b^d), where b is the branching factor and d the depth.
• Space complexity: O(d).
BFS:
• All parts of the tree must be examined to level n before any nodes on level n+1 can be examined.
• It gives an optimal solution.
• BFS guarantees to find a solution if it exists. Furthermore, if there are multiple solutions, then a minimal solution will be found.
• Time complexity: O(b^d), where b is the branching factor and d the depth.
• Space complexity: O(b^d).
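A minimal BFS sketch in Python, using a FIFO queue of paths as described above; the example graph is a made-up illustration:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS: expand the shallowest node first; the frontier is a FIFO queue."""
    frontier = deque([[start]])                 # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()               # take the oldest (shallowest) path
        node = path[-1]
        if node == goal:
            return path                         # first hit is a minimal solution
        for succ in successors.get(node, []):
            if succ not in visited:             # new successors go at the end
                visited.add(succ)
                frontier.append(path + [succ])
    return None

# Hypothetical graph: two routes to G, one shorter than the other
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "G"], "D": ["G"]}
print(breadth_first_search("A", "G", graph))  # ['A', 'C', 'G']
```

Because whole levels are examined in order, the first path that reaches the goal has the fewest edges, which is the optimality claim in the table above.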
Informed Search
• Informed search tries to reduce the amount of search that must be done by making intelligent choices for the nodes that are selected for expansion.
• In general this is done using a heuristic function.
Heuristic Function
• A heuristic function is a function that ranks alternatives in various search algorithms at each branching step, based on the available information (heuristically), in order to make a decision about which branch to follow during a search.
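As a concrete illustration (not from the notes), the Manhattan distance on a grid is a commonly used heuristic; here it ranks hypothetical candidate moves by their estimated distance to the goal:

```python
def manhattan_distance(state, goal):
    """Heuristic h(x): estimated cost from state to goal on a grid."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

# Rank alternative moves from (2, 2) toward goal (5, 4) by heuristic value;
# the move with the lowest estimate is the most promising branch.
moves = [(3, 2), (1, 2), (2, 3)]
ranked = sorted(moves, key=lambda m: manhattan_distance(m, (5, 4)))
print(ranked)  # [(3, 2), (2, 3), (1, 2)]
```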
Informed search methods:
— Hill Climbing
— Best First Search
— A*
— AO*
Hill Climbing
• This algorithm is also called the discrete optimization algorithm.
• It utilizes a simple heuristic function.
• Hill Climbing = Depth First Search + Heuristic Function
• There is practically no difference between hill climbing and depth first search, except that the children of the node that has been expanded are sorted by the remaining distance.
• There are two ways to implement hill climbing
— Simple hill climbing
— Steepest-Ascent hill climbing or gradient
search
Simple hill climbing
1. Evaluate the initial state. If it is a goal state, then return success. Else continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to apply to the current state:
a) Select a new operator and apply it to the current state to produce a new state.
b) Evaluate the new state:
i. If it is a goal state, then return success.
ii. If it is not a goal state but is better than the current state, then make it the current state.
iii. If it is not better than the current state, then continue in the loop.
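The loop above can be sketched as follows; the one-dimensional landscape f and the neighbor function are illustrative assumptions:

```python
def simple_hill_climbing(state, value, neighbors, goal_value):
    """Take the FIRST neighbor that improves on the current state (steps 2a-2b)."""
    while True:
        if value(state) >= goal_value:      # goal test (steps 1 and 2b-i)
            return state
        for succ in neighbors(state):       # step 2a: apply operators in order
            if value(succ) > value(state):  # step 2b-ii: first better successor wins
                state = succ
                break
        else:                               # step 2b-iii: no neighbor is better
            return state                    # stuck, possibly at a local maximum

# Hypothetical landscape: maximize f(x) = -(x - 7)^2, peak at x = 7
f = lambda x: -(x - 7) ** 2
result = simple_hill_climbing(0, f, lambda x: [x - 1, x + 1], goal_value=0)
print(result)  # 7
```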
Steepest-Ascent hill
climbing
1. Evaluate the initial state . If it is also a goal state , then return
it and quit . Otherwise , continue with the initial state as
current state .
2. Loop until a solution is found or until a complete iteration
produces no change to current state
a) Let SUCC be a state such that any possible successor of
the current state will be better than SUCC
b) For each Operator that applies to the current state do :
i) Apply the operator and generate a new state .
ii)Evaluate the new state . if it is a goal state , return it
and quit . lf not, compare it to SUCC . If it is better ,
then set SUCC to this state . if it is not better , then
leave SUCC alone .
iii) If SUCC is better than the current state, then set the current state to SUCC.
Difference between simple & steepest-ascent hill climbing
• Steepest-ascent hill climbing or gradient search considers all the moves from the current state and selects the best one as the next state.
• In simple hill climbing, the first state that is better than the current state is selected.
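The steepest-ascent variant can be sketched the same way, under the same illustrative 1-D landscape assumption; note that it examines ALL neighbors before moving, instead of taking the first improvement:

```python
def steepest_ascent(state, value, neighbors):
    """Examine all moves from the current state and take the best one (SUCC)."""
    while True:
        succ = max(neighbors(state), key=value)  # best successor of current state
        if value(succ) <= value(state):          # no successor improves: stop
            return state
        state = succ

# Hypothetical landscape: maximize f(x) = -(x - 3)^2, peak at x = 3
f = lambda x: -(x - 3) ** 2
print(steepest_ascent(10, f, lambda x: [x - 1, x + 1]))  # 3
```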
Search Tree for Hill Climbing: Example
[Figure: a search tree whose nodes B, C, D, E, F are annotated with heuristic values such as (2), (4) and (3); the child with the best value is expanded next.]
Problems in hill climbing
• Local Maximum: A state that is better than all its neighbours, but not so when compared to states that are farther away.
[Figure: a curve with a local maximum below the global peak.]
• Plateau: A flat area of the search space in which all neighbouring states have the same value.
• Ridge: The orientation of the high region, compared to the set of available moves, makes it impossible to climb up. However, two moves executed serially may increase the height.
Methods to overcome these problems
• Backtracking for local maximum. Backtracking helps in undoing what has been done so far and permits trying a totally different path to attain the global peak.
• A big jump is the solution to escape from
the plateau. A huge jump is recommended
because in a plateau all neighboring points have
the same value.
• Trying different paths at the same time is the
solution for circumventing ridges.
Best First Search
• It is a way of combining the advantages of both depth-first search and breadth-first search into a single method.
• One way of combining DFS and BFS is to follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one does.
• At each step of the best-first search process, we select the most promising of the nodes we have generated so far. This is done by applying an appropriate heuristic function to each of them. We then expand the chosen node by using the rules to generate its successors. If one of them is a solution, we can quit. If not, all those new nodes are added to the set of nodes generated so far. Again the most promising node is selected, and the process continues.
Best First Search Example
[Figure: a search tree with nodes labelled by heuristic values such as 2, 4, 5 and 6; the open node with the best value is expanded at each step.]
List to maintain in Best-First Search
• OPEN: nodes iliat have bccii but have
gcncratcd, not
examined. This is organized as a priority queuc.
• CLOSED: nodes that have already been examined.
Whenever a new node is generated, check ivhcther it has
been generatcd before.
Algorithm of Best First Search
1. OPEN = {initial state}.
2. Loop until a goal is found or there are no nodes left in OPEN, do:
a. Pick the best node in OPEN.
b. Generate its successors.
c. For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
ii. If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may have.
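A sketch of this algorithm using Python's heapq as the OPEN priority queue; the graph and the heuristic values are hypothetical, and the simplified version below re-expands nothing rather than re-parenting (step c-ii is omitted):

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Expand the most promising node (lowest h) from OPEN, a priority queue."""
    open_list = [(h(start), start, [start])]      # OPEN: (heuristic, node, path)
    closed = set()                                # CLOSED: already examined
    while open_list:
        _, node, path = heapq.heappop(open_list)  # a. pick the best node in OPEN
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in successors.get(node, []):     # b. generate its successors
            if succ not in closed:                # c-i. evaluate and add to OPEN
                heapq.heappush(open_list, (h(succ), succ, path + [succ]))
    return None

# Hypothetical graph and heuristic values
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 1, "B": 3, "G": 0}.get
print(best_first_search("S", "G", graph, h))  # ['S', 'A', 'G']
```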
Search process of best first search
[Figure: a graph with a start node and a goal node, and a table tracing the search step by step (step, children, OPEN list, CLOSED list).]
g(x) = the sum of edge costs from the start node to x
h(x) = estimate of the lowest-cost path from x to the goal
Example
Obtain the fitness number for node K:
f(n) = g(n) + h(n)
= (cost of the path from start node S to node K) + (evaluation function for K)
= 6 + 5 + 7 + 1 + 1
A* Algorithm
1. Initialize: set OPEN = {s}, CLOSED = {}, g(s) = 0, f(s) = h(s).
2. Fail: if OPEN = {}, terminate and fail.
3. Select: select the minimum-cost state, n, from OPEN. Save n in CLOSED.
4. Terminate: if n ∈ G, terminate with success, and return f(n).
5. Expand: for each successor, m, of n:
If m ∉ OPEN ∪ CLOSED:
Set g(m) = g(n) + C(n, m), f(m) = g(m) + h(m), and insert m in OPEN.
If m ∈ OPEN ∪ CLOSED:
Set g(m) = min(g(m), g(n) + C(n, m)) and f(m) = g(m) + h(m).
If f(m) has decreased and m ∈ CLOSED, move m to OPEN.
6. Loop: go to step 2.
Merits and demerits of A* Algorithm
Merits
• A* is both complete and admissible. Thus A* always finds an optimal path, if one exists.
Demerits
• It is costly if the computation cost is high.
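A compact A* sketch following the numbered steps above, keeping only the cheapest known g(m) per node; the weighted graph and the (admissible) heuristic values are invented for illustration:

```python
import heapq

def a_star(s, goal, cost, h):
    """A*: f(n) = g(n) + h(n); always expand the minimum-f node from OPEN."""
    open_list = [(h(s), 0, s, [s])]            # step 1: OPEN holds (f, g, node, path)
    best_g = {s: 0}
    while open_list:
        f, g, n, path = heapq.heappop(open_list)  # step 3: minimum-cost state
        if n == goal:                             # step 4: terminate with success
            return g, path
        for m, c in cost.get(n, []):              # step 5: expand successors
            g_m = g + c                           # g(m) = g(n) + C(n, m)
            if g_m < best_g.get(m, float("inf")):
                best_g[m] = g_m                   # keep only the cheapest g(m)
                heapq.heappush(open_list, (g_m + h(m), g_m, m, path + [m]))
    return None                                   # step 2: OPEN empty, fail

# Hypothetical weighted graph (node -> [(successor, edge cost)]) and heuristic
cost = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 3)]}
h = {"S": 5, "A": 4, "B": 2, "G": 0}.get
print(a_star("S", "G", cost, h))  # (6, ['S', 'A', 'B', 'G'])
```

With an admissible h (it never overestimates), the first time the goal is popped its g value is the optimal path cost, which is the completeness/admissibility claim above.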
AO* Algorithm
1. Initialize: set G* = {s}, f(s) = h(s). If s ∈ T, label s as SOLVED.
2. Terminate: if s is SOLVED, then terminate.
3. Select: select a non-terminal leaf node n from the marked sub-tree.
4. Expand: make explicit the successors of n. For each new successor m: set f(m) = h(m); if m is terminal, label m SOLVED.
5. Cost Revision: call cost-revise(n).
6. Loop: go to step 2.
Cost-revise(n)
1. Create Z = {n}.
2. If Z = {}, return.
3. Select a node m from Z such that m has no descendants in Z.
4. If m is an AND node with successors r1, r2, ..., rk:
Set f(m) = Σ [f(ri) + C(m, ri)]
Mark the edge to each successor of m.
If each successor is labeled SOLVED, then label m as SOLVED.
5. If m is an OR node with successors r1, r2, ..., rk:
Set f(m) = min { f(ri) + C(m, ri) }
Mark the edge to the best successor of m.
If the marked successor is labeled SOLVED, then label m as SOLVED.
In either case, if the cost or label of m has changed, insert into Z those parents of m for which m is a marked successor.
Constraint 1: We can write one long constraint for the sum:
1000*S + 100*E + 10*N + D + 1000*M + 100*O + 10*R + E = 10000*M + 1000*O + 100*N + 10*E + Y
[Figure: constraint-satisfaction search trace for SEND + MORE = MONEY. Initial constraints: no two letters have the same digit, and the sum of the digits must be as shown. Propagation forces M = 1 and O = 0, leaves S = 8 or 9, and prunes conflicting branches until the consistent assignment S = 9, E = 5, N = 6, R = 8, D = 7, Y = 2 remains.]
Final solution
• M = 1
• S = 9
• E = 5
• N = 6
• O = 0
• R = 8
• D = 7
• Y = 2
• Carries: C1 = 1, C2 = 1, C3 = 0, C4 = 1
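The assignment above can be checked with a small brute-force solver that tries permutations of distinct digits. This is a sketch, not the constraint-propagation procedure traced above, and it is deliberately naive:

```python
from itertools import permutations

def solve_send_more_money():
    """Find distinct digits (no leading zeros) satisfying SEND + MORE = MONEY."""
    letters = "MSENDORY"                        # 8 distinct letters, M first
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["S"] == 0 or a["M"] == 0:          # leading digits cannot be zero
            continue
        send = 1000*a["S"] + 100*a["E"] + 10*a["N"] + a["D"]
        more = 1000*a["M"] + 100*a["O"] + 10*a["R"] + a["E"]
        money = 10000*a["M"] + 1000*a["O"] + 100*a["N"] + 10*a["E"] + a["Y"]
        if send + more == money:                # the one long constraint
            return a

print(solve_send_more_money())
# {'M': 1, 'S': 9, 'E': 5, 'N': 6, 'D': 7, 'O': 0, 'R': 8, 'Y': 2}
```

The unique solution is 9567 + 1085 = 10652, matching the final values above; constraint propagation reaches it far faster than this exhaustive scan.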
SOLVE
• US + AS = ALL
• SHE + THE = BEST
• CROSS + ROADS = DANGER
• DAYS + TOO = SHORT
Solve: CROSS + ROADS = DANGER
Rule 1: Well, you can see that DANGER has one more letter than CROSS and ROADS, and the extra letter is D. That means that C + R equals something more than 10, which also means D is 1.
Rule 2: Oh look, S + S = R. That means that R must be even. We have a choice of 4, 6 and 8, because if R was 2, S would have to be 1, and D is already 1. Let's try 6 for the value of R, because we need high numbers if we want C + R to equal something more than 10. Oh look, if R is 6 and S is R divided by 2, then S must be 3!
Rule 3: S + D = E, and 3 + 1 = 4, so E = 4.
Rule 4: And since we now only have 4 spots in the key left, we choose the highest number for C, which is 9. Again, we need high numbers to make C + R equal something more than 10.
Rule 5: In the equation, O + A = G. We have 2, 5, 7 and 8 vacant. Let's play around with these letters and see if we can find an equation in there. Yes! There is an equation there: 5 + 2 = 7! So G must equal 7. We know that 9 + 6 = 15, but it's missing the 5! So, A must equal 5. In turn, this means O = 2, and the remaining letter N takes 8.
96233 + 62513 = 158746
And the following key:
• C = 9
• S = 3
• A = 5
• D = 1
• E = 4
• N = 8
• G = 7
• O = 2
• R = 6
Worked solutions:
US (85) + AS (15) = ALL (100)
SHE (634) + THE (834) = BEST (1468)
CROSS (96233) + ROADS (62513) = DANGER (158746)
Why has game playing been a focus of AI?
• Games have well-defined rules, which can be implemented in programs.
• The interfaces required are usually simple.
• Many human experts exist to assist in developing the programs.
• Games provide a structured task wherein success or failure can be measured with the least effort.
• John von Neumann is acknowledged as the father of game theory.
• The term Game means a sort of conflict in which individuals or groups (known as players) participate.
• Game theory denotes a strategy for the game.
Game tree search
• Grow a search tree.
• Only one player moves at each turn.
• At the leaf positions, when the game is finished, assign the utility to the player.
Usual conditions:
• Each player has a global view of the board.
• Zero-sum game: any gain for one player is a loss for the other.
Major components of a game playing program
Two major components:
• Plausible move generator: used to generate the set of possible successor positions.
• Static evaluation function generator (utility function): based on heuristics, this generates the static evaluation function value for each and every move that is being made. The static evaluation function gives a snapshot of a particular move.
Game Tree
[Figure: a game tree with alternating levels: the computer's turn (Max) and the opponent's turn (Min), down to leaf evaluations.]
The computer is Max; the opponent is Min.
MiniMax Example
[Figures: worked minimax trees. Each Min node backs up the smaller of its children's values (e.g., 2 from {2, 7}), and Max then picks the larger backed-up value at the root.]
Minimax Algorithm
function MINIMAX(N) is
begin
  if N is a leaf then
    return the estimated score of this leaf
  else
    Let N1, N2, .., Nm be the successors of N;
    if N is a Min node then
      return min{MINIMAX(N1), .., MINIMAX(Nm)}
    else
      return max{MINIMAX(N1), .., MINIMAX(Nm)}
end MINIMAX;
Example 1: Consider the following game tree search space.
Example 2: Consider the following game tree search space.
— If the first player is a maximizing player, what move should be chosen under the mini-max strategy?
Example 3: Consider the following game tree search space.
Alpha-Beta Pruning
When Maximizing,
• do not expand any more sibling nodes once a node has been seen whose evaluation is smaller than Alpha.
When Minimizing,
• do not expand any more sibling nodes once a node has been seen whose evaluation is greater than Beta.
[Figures: alpha-beta pruning examples showing an alpha cutoff and a beta cutoff at MIN nodes.]
Alpha-beta algorithm
function MAX-VALUE (state, game, alpha, beta)
;; alpha = best MAX so far; beta = best MIN
if CUTOFF-TEST (state) then return EVAL (state)
for each s in SUCCESSORS (state) do
  alpha := MAX (alpha, MIN-VALUE (s, game, alpha, beta))
  if alpha >= beta then return beta
end
return alpha
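A Python sketch of the same idea, handling both MAX and MIN levels in one function; the example tree is hypothetical, and leaf b2 is never examined once b1 = 2 makes the b branch worthless to Max:

```python
def alpha_beta(node, is_max, children, score, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta cutoffs: stop expanding siblings once alpha >= beta."""
    kids = children.get(node, [])
    if not kids:                   # leaf: static evaluation (CUTOFF-TEST / EVAL)
        return score[node]
    if is_max:
        for k in kids:
            alpha = max(alpha, alpha_beta(k, False, children, score, alpha, beta))
            if alpha >= beta:      # beta cutoff: Min will never allow this branch
                break
        return alpha
    else:
        for k in kids:
            beta = min(beta, alpha_beta(k, True, children, score, alpha, beta))
            if alpha >= beta:      # alpha cutoff: Max already has something better
                break
        return beta

# Hypothetical tree: after seeing b1 = 2 < alpha = 3, sibling b2 is pruned
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
print(alpha_beta("root", True, tree, leaves))  # 3
```

Pruning never changes the root value; it only avoids evaluating branches that cannot influence the final choice.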