
Alvin Tjondrowiguno

Artificial Intelligence
7: Local Search
Previously: Goal-Based Agents
A goal-based agent:
● Has knowledge about the environment state AND the agent’s goal.
● Combines the goal and the environment model to choose actions.
Previously: Goal-Based Agents
A goal-based agent:
● Uses states as the model of the world
● Conceptually uses a search tree to perform searching among states
● Uses search algorithms to find solutions:
○ Depth-first search, breadth-first search, backtracking search (uninformed search)
○ Greedy best-first search, A* search (informed search)
Challenges of Goal-Based Agents
● Informed search:
○ There are only a handful of problems where a good heuristic is available
● Uninformed search:
○ Search trees are HUGE (high time complexity, high space complexity)
○ In the worst-case scenario, not much better than brute-force search
● Both require the environments to be (at least):
○ Fully observable
○ Deterministic
● Real-world environments are (most of the time) more complex
Utility-Based Agents
A utility-based agent:
● Has information about how happy it is with the world at any time.
● The agent’s goal is implicitly embedded into the utility: To maximize it.
Local Search
Local Search
● The previously discussed search algorithms are basically systematic searches of the search space
● When a goal is found, the path to the goal is also an important part of the solution
○ In many problems, the path is irrelevant! (Example: 8-Queen problem)
● Local search algorithms do not worry about paths
● Local search algorithms operate using one current state and generally “move” to the neighbors of that state
Local Search
Advantages:
● Keeps only a single state in memory instead of visited/unvisited states and paths (very low space complexity)
● In large (or even infinite / continuous) state spaces where systematic algorithms are not feasible, local search can still find reasonable solutions

Disadvantages:
● Local search algorithms are generally incomplete
● They sometimes find sub-optimal solutions
Local Search Problem Formulation
For utility-based agents:
● States: How the world is represented in the agent’s point of view
● Initial State: The state where the agent starts
● Actions: A set of actions the agent can take (based on the state the agent is in)
● Transition Model: A description of how actions taken by the agent change the states
● Goal Test: Determines whether a state is a goal state
● Objective Function (Utility Function), replacing Path Cost: A function that calculates the value of a state
○ Represents “how happy” the agent is with the particular state
○ The goal of the agent is to find the state that maximizes (or minimizes) the value of that objective function
Example: 8-Queen Problem
● State: A vector of length 8 containing integers in the range 1-8
○ The number in the ith position represents the row number of the queen placed in the ith column
● Initial state: [1 1 1 1 1 1 1 1]
● Actions: Change one number in the vector
● Transition model: Obvious
● Objective function: The number of queen pairs that attack each other (to be minimized)
○ Example: f([1 1 1 1 1 1 1 1]) = 28
○ Example: f([8 2 4 1 7 5 3 6]) = 0
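
A minimal Python sketch of this objective function, assuming a state is a tuple of 8 row numbers as described above (the name conflicts is just for illustration):

from itertools import combinations

def conflicts(state):
    """Count queen pairs that attack each other: same row or same diagonal."""
    count = 0
    for (i, ri), (j, rj) in combinations(enumerate(state), 2):
        if ri == rj or abs(ri - rj) == abs(i - j):
            count += 1
    return count

print(conflicts((1, 1, 1, 1, 1, 1, 1, 1)))  # 28: every pair attacks
print(conflicts((8, 2, 4, 1, 7, 5, 3, 6)))  # 0: a solution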
Hill Climbing Search
Hill Climbing Search
● Hill Climbing Search is simply a loop that continually moves in the direction of increasing value (“uphill”):
1. Start with a random state
2. Move to the successor state that has the best value based on the objective function
3. Repeat step 2 until there is no neighbor with a better objective function value
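
A minimal Python sketch of this loop, assuming hypothetical helpers random_state() and successors(state) (returning a nonempty list of neighbors) and an objective function value() to be maximized:

def hill_climbing(random_state, successors, value):
    """Steepest-ascent hill climbing, following steps 1-3 above."""
    current = random_state()                        # 1. start with a random state
    while True:
        best = max(successors(current), key=value)  # 2. best-valued successor
        if value(best) <= value(current):           # 3. no better neighbor:
            return current                          #    a local optimum is reached
        current = best

For the 8-queen problem above, value could be lambda s: -conflicts(s), since that objective is to be minimized.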
State-Space Landscape
Hill Climbing Search
● Hill climbing search is the most basic local search algorithm
● The idea is simply to iteratively find the locally best improvement, without looking ahead beyond the immediate neighbors of the current state
○ It is also called greedy local search
○ Analogous to trying to find the top of Mount Everest in a thick fog while suffering from amnesia
● No search tree!
○ The data structure involves only the current state, its neighbors, and their objective function values
Example: 8-Queen Problem
● An example of a random state with an objective function value h = 17
● Numbers in other squares represent the objective function values h of each successor state
○ The best successor state has a value h = 12
Example: 8-Queen Problem
● After 5 iterations, this state is reached
○ h = 1
● However, no successor has a better objective function value
● Hill Climbing terminates!
● This state is called a local optimum (as opposed to the global optimum)
Local Optima
● A local optimum is a peak state that is better than any of its successor states, but worse than the global optimum
● In general, local search algorithms are prone to getting stuck in local optima
Hill Climbing Analysis
● Starting from randomly generated 8-queen states, hill climbing gets stuck in a local optimum 86% of the time
○ It finds the global optimum only 14% of the time
● However, it is fast: it takes only 4 steps on average to find the global optimum
○ 3 steps on average when it gets stuck in a local optimum
○ Recall that the 8-queen problem has 8^8 ≈ 17 million states
Hill Climbing Variants
● The basic hill climbing algorithm terminates when there is no better successor
● Sometimes, it is beneficial to also allow sideways moves instead of only uphill moves (see the sketch after this list)
○ A sideways move is a hill climbing iteration where the chosen successor has exactly the same objective function value as the current state
● However, beware of infinite loops among same-value states!
○ This can be solved by putting a limit on consecutive sideways moves
● Allowing 100 consecutive sideways moves on the 8-queen problem:
○ The percentage of problem instances solved by hill climbing improves from 14% to 94%
○ The average becomes 21 steps for successes, 64 for failures
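
A sketch of the sideways-move variant, reusing the helpers assumed in the earlier hill climbing sketch; the default limit of 100 consecutive sideways moves matches the figure quoted above:

def hill_climbing_sideways(random_state, successors, value, max_sideways=100):
    """Hill climbing that allows a limited number of consecutive sideways moves."""
    current = random_state()
    sideways_left = max_sideways
    while True:
        best = max(successors(current), key=value)
        if value(best) < value(current):
            return current                   # strictly downhill only: stop
        if value(best) == value(current):
            if sideways_left == 0:
                return current               # sideways budget exhausted
            sideways_left -= 1               # take a sideways move
        else:
            sideways_left = max_sideways     # an uphill move resets the budget
        current = best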
Hill Climbing Variants
● Stochastic Hill Climbing:
○ Chooses the next state randomly among all uphill moves instead of choosing the best one
○ Probabilities are proportional to the objective values
○ Usually slower, but finds better solutions
● First-Choice Hill Climbing:
○ Instead of generating all successors, randomly generates successors until a better state is generated
○ A good strategy when each state has many (e.g. thousands of) successors
● Random-Restart Hill Climbing (sketched below):
○ Generates several initial states at random, and runs hill climbing separately on each of them
○ Chooses the best result among all hill climbing runs
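
Random-restart hill climbing can be sketched as a thin wrapper around the earlier hill_climbing function; the number of restarts here is an arbitrary illustrative choice:

def random_restart_hill_climbing(random_state, successors, value, restarts=25):
    """Run hill climbing from several random initial states; keep the best result."""
    results = [hill_climbing(random_state, successors, value)
               for _ in range(restarts)]
    return max(results, key=value)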
Hill Climbing Variants
● Local Beam Search (sketched below):
○ Hill climbing search with k current states instead of one
○ Initially, generate k random initial states
○ At each iteration, generate all successors of the k current states
○ Among all successors, keep the k best successors as the next states
● Stochastic Beam Search:
○ Instead of choosing the k best successors, choose k successors randomly
○ The probability of each successor being chosen increases proportionally to its value
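
A sketch of local beam search under the same assumptions as before; the fixed iteration budget is an illustrative stopping condition, since the slide does not specify one:

def local_beam_search(random_state, successors, value, k=10, iterations=100):
    """Keep k current states; pool all their successors and keep the k best."""
    states = [random_state() for _ in range(k)]
    for _ in range(iterations):
        pool = [s for state in states for s in successors(state)]
        if not pool:
            break
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)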
Hill Climbing Variants
● Tabu Search (sketched below):
○ Hill climbing search with a tabu list
○ The tabu list represents a short-term memory of recently visited states
○ Any states (or other user-defined concepts) in the tabu list are not considered as successors
○ Sometimes, “downhill” moves are allowed if there are no uphill moves available
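
A sketch of tabu search under the same assumptions; the tabu list size and iteration budget are illustrative, and states are assumed to be comparable values such as tuples:

from collections import deque

def tabu_search(random_state, successors, value, tabu_size=50, iterations=200):
    """Hill climbing with a short-term memory of recently visited states."""
    current = best = random_state()
    tabu = deque([current], maxlen=tabu_size)    # the tabu list
    for _ in range(iterations):
        candidates = [s for s in successors(current) if s not in tabu]
        if not candidates:
            break
        current = max(candidates, key=value)     # may be a "downhill" move
        tabu.append(current)
        if value(current) > value(best):
            best = current                       # remember the best state seen
    return best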
Simulated Annealing
Hill Climbing vs Random Walk
● A hill-climbing algorithm that never makes “downhill” moves toward states with lower value is guaranteed to be incomplete
○ Because it gets stuck on local optima
● In contrast, a purely random walk (moving to a successor chosen uniformly at random from the set of successors) is complete but extremely inefficient
● It is reasonable to combine them in some way that aims for both efficiency and completeness
Annealing
● Annealing (metallurgy) is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a low-energy crystalline state
Simulated Annealing
● Simulated annealing is a local search algorithm inspired by the annealing process:
○ At the beginning, start at a high temperature, where randomization is high
○ Gradually reduce the temperature; randomization slows down and solutions gradually converge
● Adapted into a local search idea (see the sketch below):
○ Instead of picking the best move like Hill Climbing, Simulated Annealing picks a random move
○ If the move improves the situation, it is always accepted
○ Otherwise, the algorithm accepts the move with some probability
■ The probability decreases with the “badness” of the move
■ The probability also decreases along with the decreasing temperature (or time)
● “Bad” moves are more likely to be allowed at the start, when the temperature is high, and they become more unlikely as T decreases
Simulated Annealing
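
A minimal Python sketch of the acceptance rule described above, assuming the same helpers as before; the geometric cooling schedule and its parameters are illustrative assumptions, since the slides do not fix a schedule:

import math
import random

def simulated_annealing(random_state, successors, value,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    """Pick random moves; accept worse ones with probability exp(delta / T)."""
    current = random_state()
    t = t0                                         # start at a high temperature
    while t > t_min:
        nxt = random.choice(successors(current))   # pick a random move
        delta = value(nxt) - value(current)        # negative for a "bad" move
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt    # good moves always accepted; bad moves accepted
                             # with a probability that shrinks with badness and T
        t *= cooling                               # gradually reduce temperature
    return current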
Thank you for listening
