
UNIT I PROBLEM SOLVING

Introduction to AI - AI Applications - Problem solving agents – search algorithms –


uninformed search strategies – Heuristic search strategies – Local search and optimization
problems – adversarial search – constraint satisfaction problems (CSP)

Topic 1: Introduction to AI
AI is one of the newest fields in science and engineering. The term Artificial Intelligence comprises two words, "Artificial" and "Intelligence": artificial refers to something made by humans rather than occurring naturally, and intelligence means the ability to understand or think.

AI is the study of how to train computers to do things that, at present, humans do better. It is the ability of a computer to act like a human being. Therefore, AI aims to give machines the capabilities that humans possess.

Four approaches to AI
• Systems that think like humans.
• Systems that act like humans
• Systems that think rationally
• Systems that act rationally.

A human-centred approach must be in part an empirical science, involving observations and hypotheses
about human behaviour. A rationalist approach involves a combination of mathematics and engineering.
1. Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.

The computer needs the following capabilities:


• Natural language processing to enable it to communicate successfully in English;
• Knowledge representation to store what it knows or hears;
• Automated reasoning to use the stored information to answer questions and to draw new conclusions;
• Machine learning to adapt to new circumstances and to detect and extrapolate patterns;
• Computer vision to perceive objects; and
• Robotics to manipulate objects and move.

2. Thinking humanly: The cognitive modelling approach


There are three ways to determine how humans think:
1. Through introspection—trying to catch our own thoughts as they go by;
2. Through psychological experiments—observing a person in action;
3. Through brain imaging—observing the brain in action.
Cognitive science brings together computer models from AI and experimental techniques from psychology
to construct theories of the human mind.

3. Thinking rationally: The “laws of thought” approach


The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking.” His study initiated the field called logic.

Example:
“Socrates is a man;
all men are mortal;
therefore, Socrates is mortal.”

There are two main obstacles to this approach.


First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation.
Second, there is a big difference between solving a problem “in principle” and solving it in practice.
4. Acting rationally: The rational agent approach
An agent is just something that acts. All computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, and adapt to change.
A rational agent is one that acts to achieve the best outcome. In the “laws of thought” approach to AI, the emphasis was on correct inferences.

The rational-agent approach has two advantages:


1. It is more general than the “laws of thought” approach because correct inference is just one of several
possible mechanisms for achieving rationality.
2. It is more amenable to scientific development.

TOPIC 2: AI Applications

1. Robotic vehicles: A driverless robotic car


2. Speech recognition
3. Autonomous planning and scheduling
4. Game playing
5. Logistics planning
6. Robotics
7. Machine Translation

AGENTS:
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
The term percept refers to the agent’s perceptual inputs at any given instant.
An agent’s behaviour is described by the agent function that maps any given percept sequence to an action. The agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct.
The agent function is an abstract mathematical description;
The agent program is a concrete implementation, running within some physical system.

EXAMPLE. Vacuum Cleaner world


This particular world has just two locations: squares A and B. The vacuum agent perceives which
square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the
dirt, or do nothing.
Simple agent function:
if the current square is dirty, then suck;
otherwise, move to the other square.
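This agent function can be written directly as a short program; encoding a percept as a (location, status) pair is an assumption of this sketch:

```python
def vacuum_agent(percept):
    """Agent function for the two-square vacuum world.

    A percept is assumed to be a (location, status) pair,
    e.g. ('A', 'Dirty'): suck if dirty, else move to the other square.
    """
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'
```

For example, `vacuum_agent(('A', 'Dirty'))` returns `'Suck'`, while `vacuum_agent(('A', 'Clean'))` moves the agent toward square B.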

The notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents. AI operates at the most interesting end of the spectrum, where the artifacts have significant computational resources and the task environment requires nontrivial decision making.

THE STRUCTURE OF AGENTS


The job of AI is to design an agent program that implements the agent function: the mapping from percepts to actions. This program will run on some sort of computing device with physical sensors and actuators, called the architecture:

agent = architecture + program


AGENT PROGRAM:
They take the current percept as input from the sensors and return an action to the actuators.

Five types of AI agents:

1. Simple reflex agents


The simplest kind of agent is the simple reflex agent. These agents select actions on the
basis of the current percept, ignoring the rest of the percept history, using condition-action rules.
Example: if car-in-front-is-braking then initiate-braking.
2. Model-based reflex agents
The agent should maintain some sort of internal state that depends on the percept history
and thereby reflects at least some of the unobserved aspects of the current state.
Example: “how the world works”—whether implemented in simple Boolean circuits or in
complete scientific theories—is called a model of the world. An agent that uses such a model is
called a model-based agent.
3. Goal-based agents
The agent needs some sort of goal information that describes situations that are desirable.
Goal-based action selection is straightforward.
4. Utility-based agents.
It uses a utility function to measure the value of each possible next action for achieving
the goal. The agent chooses the action that maximizes the expected utility.
5. Learning Agent
A learning agent is an agent that is capable of learning from its experience. All agents
can improve their performance through learning.

Critic: Determines the outcome of the action and gives feedback.


Learning Element: It is responsible for making improvements by learning from the environment
Performance Element: Chooses what action to take.
Problem Generator: This component is responsible for suggesting new or alternative actions
which will lead to new and informative experiences.
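The condition-action idea behind the simple reflex agent (type 1 above) can be sketched as a rule-matching program; the rule and percept encodings below are illustrative assumptions, not a prescribed API:

```python
def simple_reflex_agent(rules):
    """Build an agent program from a list of (condition, action) rules.

    Only the current percept is consulted; percept history is ignored,
    exactly as the simple reflex agent requires.
    """
    def program(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return 'NoOp'  # no rule matched
    return program

# The braking rule from the text, with an assumed percept encoding (a dict).
braking_agent = simple_reflex_agent(
    [(lambda p: p.get('car_in_front_is_braking'), 'initiate-braking')])
```

Given the percept `{'car_in_front_is_braking': True}`, the agent returns `'initiate-braking'`; otherwise it does nothing.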

TOPIC 3: Problem solving agents:


Goal-based agents are called problem-solving agents. A problem-solving agent adapts to the task
environment and understands the goal it must achieve. It determines a sequence of actions that
leads to a successful (goal) state.

Fig. Approach of problem solving Agent

STEPS IN PROBLEM SOLVING:

Step 1: Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving.
Step 2: Problem formulation is the process of deciding what actions and states to consider, to achieve a
goal.
Step 3: Search The process of looking for a sequence of actions that reaches the goal is called search.
Step 4: Execution A search algorithm takes a problem as input and returns a solution in the form of an
action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the
execution phase.

A problem can be defined formally by five components:

1. The initial state that the agent starts in.


2. The actions available to the agent.
Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s.
3. The transition model, specified by a function RESULT(s, a) that returns the state that results from
doing action a in state s.
4. The goal test, which determines whether a given state is a goal state.
5. A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses
a cost function that reflects its own performance measure.

The initial state, actions, and transition model implicitly define the state space of the problem, i.e., the set
of all states reachable from the initial state by any sequence of actions.
A path in the state space is a sequence of states connected by a sequence of actions. A solution to a problem
is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the
path cost function, and an optimal solution has the lowest path cost among all solutions.
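These five components can be captured as a minimal class skeleton; the method and attribute names below are illustrative assumptions, not fixed by the text:

```python
class Problem:
    """Skeleton for the five-component problem definition."""
    def __init__(self, initial, goal=None):
        self.initial = initial             # 1. initial state
        self.goal = goal
    def actions(self, s):                  # 2. ACTIONS(s)
        raise NotImplementedError
    def result(self, s, a):                # 3. transition model RESULT(s, a)
        raise NotImplementedError
    def goal_test(self, s):                # 4. goal test
        return s == self.goal
    def step_cost(self, s, a, s2):         # 5. per-step contribution to path cost
        return 1

# A trivial concrete problem: count upward from 0 to a goal number.
class CountingProblem(Problem):
    def actions(self, s):
        return ['increment']
    def result(self, s, a):
        return s + 1
```

A subclass only needs to fill in the domain-specific pieces; the path cost of a solution is the sum of the step costs along it.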

Example Problems:
1. Toy problem

The first example we examine is the vacuum world


a) States: The state is determined by both the agent location and the dirt locations. The agent is in one
of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2² = 8 possible
world states. A larger environment with n locations has n · 2ⁿ states.
b) Initial state: Any state can be designated as the initial state.
c) Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger
environments might also include Up and Down.
d) Transition model: The actions have their expected effects, except that moving Left in the leftmost
square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
e) Goal test: This checks whether all the squares are clean.
f) Path cost: Each step costs 1, so the path cost is the number of steps in the path.
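This formulation can be written out directly; encoding the state as a (location, set-of-dirty-squares) pair is an assumption of this sketch:

```python
# State: (agent_location, frozenset_of_dirty_squares); locations are 'A', 'B'.
STATES = [(loc, frozenset(dirt))
          for loc in 'AB'
          for dirt in ([], ['A'], ['B'], ['A', 'B'])]  # 2 x 2^2 = 8 states

def actions(state):
    """Every state has the same three actions."""
    return ['Left', 'Right', 'Suck']

def result(state, action):
    """Transition model: moving Left in square A, Right in square B,
    or Sucking in a clean square leaves the state unchanged."""
    loc, dirt = state
    if action == 'Suck':
        return (loc, dirt - {loc})
    return ('A' if action == 'Left' else 'B', dirt)

def goal_test(state):
    return not state[1]  # goal: no dirty squares remain
```

With unit step costs, the path cost of a solution is simply the number of actions taken.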
1. The 8-puzzle
It consists of a 3×3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space
can slide into the space.

a) States: A state description specifies the location of each of the eight tiles and the blank in one of
the nine squares.
b) Initial state: Any state can be designated as the initial state.
c) Actions: The simplest formulation defines the actions as movements of the blank space Left, Right,
Up, or Down.
d) Transition model: Given a state and action, this returns the resulting state.
e) Goal test: This checks whether the state matches the goal configuration
f) Path cost: Each step costs 1, so the path cost is the number of steps in the path.

The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for
new search algorithms in AI. This family is known to be NP-complete.
• The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved.
• The 15-puzzle (on a 4×4 board) has around 1.3 trillion states (a few milliseconds to solve).
• The 24-puzzle (on a 5×5 board) has around 10²⁵ states (hours to solve).
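The 9!/2 figure for the 8-puzzle is small enough to verify by brute-force enumeration of the state space; representing a state as a 9-tuple with the blank encoded as 0 is an assumption of this sketch:

```python
from collections import deque

def neighbors(state):
    """8-puzzle states reachable by sliding one tile into the blank (0).

    A state is a 9-tuple read row by row on the 3x3 board.
    """
    i = state.index(0)
    r, c = divmod(i, 3)
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]  # swap blank with the adjacent tile
            out.append(tuple(s))
    return out

def count_reachable(start):
    """Breadth-first enumeration of every state reachable from start."""
    seen, frontier = {start}, deque([start])
    while frontier:
        for nxt in neighbors(frontier.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)
```

Starting from the goal configuration, this counts 181,440 states: exactly half of the 9! = 362,880 tile permutations, since only even permutations are reachable.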

2. The 8-queens problem


The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen attacks
any other. (A queen attacks any piece in the same row, column or diagonal.) There are two main kinds of
formulation.
i. An incremental formulation involves operators that augment the state description, starting
with an empty state; for the 8-queens problem, this means that each action adds a queen to the
state.
ii. A complete-state formulation starts with all 8 queens on the board and moves them around.

a) States: Any arrangement of 0 to 8 queens on the board is a state.


b) Initial state: No queens on the board.
c) Actions: Add a queen to any empty square.
d) Transition model: Returns the board with a queen added to the specified square.
e) Goal test: 8 queens are on the board, none attacked.
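The goal test for the incremental formulation can be sketched as follows, representing each queen as an assumed (row, column) pair:

```python
def attacks(q1, q2):
    """True if two queens, given as (row, column) pairs, attack each other:
    same row, same column, or same diagonal."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(queens):
    """8 queens are on the board and no pair attacks each other."""
    return len(queens) == 8 and not any(
        attacks(a, b) for i, a in enumerate(queens) for b in queens[i + 1:])
```

For example, placing the queens in columns 0, 4, 7, 5, 2, 6, 1, 3 of rows 0 through 7 passes the test, while any arrangement with two queens on a shared diagonal fails it.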

REAL WORLD PROBLEMS

1. Route-finding problems are defined in terms of specified locations and transitions along links between
them. Route-finding algorithms are used in a variety of applications. Consider, for example, an airline
travel problem (getting from one airport to another):
a) States: Each state obviously includes a location and the current time.
b) Initial state: This is specified by the user’s query.
c) Actions: Take any flight from the current location, in any seat class, leaving after the current time,
leaving enough time for within-airport transfer if needed
d) Transition model: The state resulting from taking a flight will have the flight’s destination as the
current location and the flight’s arrival time as the current time.
e) Goal test: Are we at the final destination specified by the user?
f) Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration
procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.

2. The Traveling Salesperson Problem (TSP) is a touring problem in which each city must be visited
exactly once. The aim is to find the shortest tour.

Topic 4: SEARCH ALGORITHMS

The solution to a search problem is a sequence of actions, called the plan, that transforms the start state
into the goal state. This plan is found by a search algorithm. Search algorithms are classified into two types:

Uninformed Search

Uninformed search algorithms are given no information about the problem other than its definition.
The basic algorithms are as follows:

a) Breadth-first search expands the shallowest nodes first; it is complete, optimal for unit step costs,
but has exponential space complexity.
b) Uniform-cost search expands the node with lowest path cost, g(n), and is optimal for general step
costs.
c) Depth-first search expands the deepest unexpanded node first. It is neither complete nor optimal,
but has linear space complexity.
d) Depth-limited search adds a depth bound.

Informed Search

Informed search methods may have access to a heuristic function h(n) that estimates the cost of a solution
from n.

a) The generic best-first search algorithm selects a node for expansion according to an evaluation
function.
b) Greedy best-first search expands nodes with minimal h(n). It is not optimal but is often efficient.
c) A∗ search expands nodes with minimal f(n) = g(n) + h(n). A∗ is complete and optimal, provided
that h(n) is admissible (for TREE-SEARCH) or consistent (for GRAPH-SEARCH). The space complexity
of A∗ is still prohibitive.
d) SMA∗ (simplified memory-bounded A∗) is a robust, optimal search algorithm that uses limited
amounts of memory; given enough time, it can solve problems that A∗ cannot solve because A∗
runs out of memory.
The performance of heuristic search algorithms depends on the quality of the heuristic function.
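The evaluation function f(n) = g(n) + h(n) used by A∗ can be sketched with a priority-queue frontier; the toy graph and heuristic table below are illustrative assumptions, with h admissible (it never overestimates the remaining cost):

```python
import heapq

def astar_search(start, goal_test, successors, h):
    """Expand the node with minimal f(n) = g(n) + h(n).

    successors(state) returns (next_state, step_cost) pairs (an assumption
    of this sketch). Returns (path, cost) or (None, infinity).
    """
    frontier = [(h(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):  # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# Toy state space and an admissible heuristic (assumed for illustration).
GRAPH = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)],
         'B': [('G', 1)], 'G': []}
H = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
```

On this graph A∗ prefers the route S-A-B-G of cost 4 over the direct but costlier S-A-G and S-B-G routes; setting h to zero everywhere reduces the same code to uniform-cost search.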
